A deeper dive into some aspects of my other post about making URLs less important.
This isn’t quite a ‘Part 2’, but more of an expansion of some technological nuances that exist today when it comes to individual services and mechanisms to deal with domains and URLs.
This post pulls out pointed examples of where some of the technologies currently used to protect users from bad things on the internet run into difficulty.
URLs are bad for humans… so we should be tackling as many root issues as we can to help them, instead of suggesting technical solutions.
Troy Hunt posted recently about humans being bad at URLs, which came about as a result of a bit of to-ing and fro-ing on Twitter.
I decided to write this because there was disagreement over what we (the technology and technology security folks) should do about it, and over the recommendations those professions make to general internet users.
Long and short of it, I agree that humans (including Troy and myself) are bad at…
This is the second post of a two-part musing by yours truly.
The first post discusses how UK government technology interoperability is far from easy — some background, problem statement and caveats.
This second post discusses what the future could look like. It is certainly not exhaustive.
You can absolutely achieve all the tenets of workstream collaboration using different technical systems. Doing so might be a bit messy to set up, and certainly so from a user experience perspective (y’know, that really important cornerstone of technology decision making), but one could argue this helps reduce vendor lock-in.
If you decide you want…
April 2020 in the United Kingdom is a strange time: a global pandemic where, beyond the public health crisis and national security issues, we see a mass movement away from offices and a reduction of over 95% in public transport use.
Working from home, inaccessible ‘high side’ systems (surprise surprise, terminals used for TOP SECRET are not something you take home) and a priority response to COVID-19 has led to a penny dropping moment: UK government organisations (even if we take just the Whitehall departments) use different IT systems and this can make collaboration at pace… difficult.
We currently live in an exceptional time: a global pandemic where beyond the public health crisis and national security issues we see a mass movement to working from home.
Various technological challenges are born from seismic changes to worker patterns/behaviours: geographical diversity is definitely one of them.
This post was triggered by a discussion that came up in a UK cross-government security forum: should home users, on a personal device doing personal things, be encouraged to use a VPN or not? The VPN would not be provided by the organisation, given the personal use.
I haven’t posted (on Medium) for a little while but I was encouraged to write a post about how I handle my personal finances as someone who runs their own business and spends a bit of time researching such things.
This is sort of like when I wrote ‘being safe on hostile WiFi/mobile networks’ — how I live as opposed to just about what I think.
I am mainly writing this so I can point friends etc. to it. If you happen to stumble across it and it turns out to be useful to you… excellent!
A relatively obvious disclaimer: This…
The ‘exceptional access’ debate is an important one: how do authorised organisations (law enforcement, intelligence agencies and so on) legally and proportionately access encrypted conversations (and so on) without creating broken encryption models, inadvertent mass collection, or unintended use or abuse, and without creating an ‘Achilles heel’ for exploitation by those with less well-meaning intentions?
The UK National Cyber Security Centre (NCSC)’s Ian Levy and Government Communications Headquarters (GCHQ)’s Crispin Robinson have written some principles for the exceptional access debate as part of a (well worth reading) series of essays in the ‘Crypto 2018 Workshop on Encryption and Surveillance’…
‘Offshoring’ is a subject that conjures fear and confusion in the hearts and minds of everyone from data protection, privacy and cybersecurity professionals through to board-level executives.
I spend my consulting time split between central UK government, media and fintech clients who are in very different places ranging between “I’m best friends with the Information Commissioner!” and “we’ve never realised Data Protection was a thing — but we’re not really going to change what we do or how we do it because what you’re saying sounds like a lot of work and/or has a negative sales/marketing consequence in our eyes.”
IP addresses are like flags — poor indicators of trust.
In and of themselves they do not provide strong authentication, but we often use them as if they do.
Having spent quite some time (read: arguably too long) in the cybersecurity arena, I often find myself having the same conversations over and over again (not always bad). When it comes to external IP addresses as a trust indicator, however, the conversations tend to be the same (so, kinda bad), but no one can point to a thing (post/article/guidance) that states a rational opinion, so here I go (hopefully good).
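To make the “IP addresses are not authentication” point concrete, here is a minimal sketch (all names, addresses and headers are hypothetical, and the addresses come from documentation ranges) of a pattern seen in the wild: a service that allowlists a “trusted” IP but believes a client-supplied header when deciding which IP to check.

```python
# Sketch: why an IP allowlist is a weak trust indicator.
# Many clients share one address behind NAT, proxies or cloud egress,
# and any header claiming to carry the "real" client IP can be forged.

TRUSTED_IPS = {"203.0.113.10"}  # hypothetical allowlisted address

def is_trusted(socket_ip, forwarded_for=None):
    """Naive check: prefer an X-Forwarded-For style header if present."""
    claimed_ip = forwarded_for or socket_ip
    return claimed_ip in TRUSTED_IPS

# A connection from an unknown address is rejected...
print(is_trusted("198.51.100.99"))                  # False
# ...until the attacker simply sends the header themselves.
print(is_trusted("198.51.100.99", "203.0.113.10"))  # True
```

The fix is not a cleverer IP check: forwarded-for headers should only be honoured from infrastructure you control, and actual authentication (credentials, mutual TLS, signed tokens) should carry the trust decision, with IP restrictions at most as defence in depth.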
I posted recently about Santa Claus & GDPR and this prompted some debate over my analysis and some encouragement to maintain the theme.
The discussion eliminated the Easter Bunny as an option: collectively we felt the Easter Bunny presents a reverse legal issue (the Easter Bunny does indeed trespass to leave eggs on your property, but more to the point we take and eat them, which may not be the Bunny’s intention, so that is just theft on our part).
Have you ever thought about the Tooth Fairy and her* General Data Protection Regulation (EU) 2016/679 (“GDPR”) compliance? …
The thin blue line between technology and everything else.