A deeper dive into some aspects of my other post about making URLs less important.
This isn’t quite a ‘Part 2’, but more of an expansion of some technological nuances that exist today when it comes to individual services and mechanisms to deal with domains and URLs.
This post extracts pointed examples of where some of the technologies used as part of the current defences protecting users from bad things on the internet encounter difficulties.
This post also lets me scratch my technologist itch by talking about some of the things I would like to do, which I kept out of the main post for brevity, but also to avoid them being a distraction.
The main post is here — https://medium.com/@joelgsamuel/we-should-make-urls-less-important-f85bf09ceeb0.
All technology involved has nuance
I’m going to handle some caveats now, which is a little earlier in the post than I would usually do.
Things that you have to seek out and install are not suitable for the general internet user. Not only will they simply not know, but you would also have a communication issue: you would be encouraging users to seek out add-ons… and again, people make malicious add-ons.
I’ll use HTTPS Everywhere as an example (again, I love it, I use it).
Can you see this page being OK multiple times a day to a general internet user? No way.
Also, when users want to get to a site, it is highly likely they will simply click through using one of those buttons. I usually click the bottom one myself. The reason I have HTTPS Everywhere installed is to give me a decision point, but more often than not the decision I make is to go ahead.
If I (probably) know what I am doing and still click through in most cases, what would a general internet user do when they just want to get to the thing?
Services ‘with CyberSec’ (personally I use an encrypted DNS client to connect to a different DNS service but this is far too complicated for most people) are abstracted from the technology the user is ultimately using, such as a web browser.
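As an aside for the curious, the ‘encrypted DNS client’ part is less exotic than it sounds. Here is a minimal sketch of building an RFC 8484 DNS-over-HTTPS GET request in Python; the resolver hostname is a placeholder, and this only constructs the request rather than sending it:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (RFC 1035 wire format).
    qtype 1 = A record; the header sets only the RD (recursion desired) flag."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID=0 is DoH-cache-friendly
    qname = b"".join(len(label).to_bytes(1, "big") + label.encode()
                     for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

def doh_get_url(resolver: str, name: str) -> str:
    """Encode the query for an RFC 8484 GET request (base64url, padding stripped)."""
    msg = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={msg}"
```

The point is that ‘a different DNS service’ is just an HTTPS endpoint the client posts wire-format queries to, entirely invisible to the browser on top.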
DNS RPZ (response policy zone) is effectively how filtering DNS services work. The client asks for a domain to be resolved, and the RPZ is checked to see whether it knows about the query. If it does, the resolver responds in line with the RPZ, as opposed to whatever the global DNS system actually says. This effectively overrides the DNS response with the desired action from the RPZ.
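To make the mechanism concrete, here is a deliberately toy sketch of that check-the-policy-zone-first logic. The domain names and actions are made up, and the addresses come from documentation/TEST-NET ranges:

```python
# Toy model of RPZ-style filtering: consult the policy zone before the
# global DNS. Real resolvers do this at the protocol level, but the
# decision flow is the same.
RPZ = {
    "tracker.example": "NXDOMAIN",    # pretend the domain does not exist
    "phish.example": "192.0.2.10",    # redirect to a block page (TEST-NET address)
}

def resolve(name: str, upstream: dict) -> str:
    action = RPZ.get(name)
    if action == "NXDOMAIN":
        return "NXDOMAIN"             # browser just sees a resolution failure
    if action is not None:
        return action                 # browser connects to the block page instead
    return upstream.get(name, "NXDOMAIN")  # policy zone silent: use the real answer
```

Note that the browser never learns *why* an answer changed, which is exactly the disjointedness described next.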
It’s great, but… this is disjointed from the web browser. Funky things start to happen.
On this occasion, I was simply blocked from going to the site as my DNS service is configured to stop ad/tracking networks. The web browser doesn’t know how or why, so it simply throws up an error. At face value, the goal is achieved (it stopped a visit to a bad thing) but the error and information presented by the browser are… less than useful.
So my chosen DNS service lets me turn on a block page. The web browser still doesn’t actually know what is going on (DNS RPZ is doing its thang) but it does have something to show me.
This is definitely better as a user experience. The DNS service has a relatively generic block page (the service is available to businesses as well as individuals), so ‘whoever is in charge of the network’ could be read as my ISP, but as the technologist who set up NextDNS… I know that’s me.
However, think of the user’s effort to arrive here. Now realise this is all caveated by encryption: I only see the above page for plain-text (HTTP) connections.
We know HTTPS is (wonderfully!) on the rise, which means this page is increasingly unlikely to be shown to me.
My DNS provider does offer a way of showing informational block pages for HTTPS sites, but I have to install their Root Certificate Authority.
I have chosen not to do this; as much as I mostly trust my DNS provider, I don’t trust them that much! Installing new Root CAs is absolutely, 420% not something we should ever be instructing or teaching general internet users to do.
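The reason a Root CA is needed at all comes down to how browsers validate TLS: the served certificate must chain to a CA in the trust store *and* match the hostname the user asked for, and a resolver-operated block-page server can satisfy neither without its own root being installed. A deliberately simplified model, with invented trust-store contents and hostnames:

```python
# Simplified model of browser certificate acceptance. Real validation
# involves full chain building, expiry, revocation, etc.; the two checks
# below are the ones a DNS-level block page cannot pass.
TRUSTED_CAS = {"GlobalTrust Root"}  # stand-in for the OS/browser trust store

def browser_accepts(cert_hostname: str, cert_issuer: str, requested_host: str) -> bool:
    """Accept only a cert from a trusted CA that matches the requested name."""
    return cert_issuer in TRUSTED_CAS and cert_hostname == requested_host
```

A block page served in place of `bank.example` fails the hostname check; a cert forged *for* `bank.example` by the provider’s own CA fails the trust check, unless you install that CA, which is the step I refuse.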
Because DNS is not integrated with the browser, the user may be faced with some information (though mostly not), but ultimately it’s binary: here is an error (maybe a helpful one), go away. There is no choice. I sit on both sides of the fence on whether a user should be offered a choice, as much of the time they click through… but removing the choice is only right when you’re 100% confident the user should never click through, and the bar to arrive at that point is high without it being considered censorship, invasive, and so on.
OK, so DNS sounds like a painful way to help users?
Yes. Yes it is.
But let’s say I want ISPs to filter out known-bad domains — because I still do.
Implementation and operation costs money and ISPs will want to know exactly what needs to be done.
ISPs will also need the DNS RPZ sources. You probably want to solve this problem once, and there are a lot of threat feeds out there. Do you tell them which feeds to use (then you probably have a consistency problem, plus they will want money if those feeds cost money) or give them a feed? If you give them a feed, you have to make sure it is operated well and is populated (with high-confidence information) very quickly.
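If a central feed were the route taken, distributing it in RPZ form is straightforward. A sketch of turning a (made-up) domain list into RPZ-style records, where `CNAME .` is the standard RPZ convention for answering NXDOMAIN, and a CNAME to a walled-garden name redirects to a block page:

```python
def feed_to_rpz(domains, block_page=None):
    """Emit RPZ records covering each domain and its subdomains.
    CNAME "." answers NXDOMAIN; a CNAME to block_page serves a block page."""
    target = f"{block_page}." if block_page else "."
    lines = []
    for d in sorted(domains):
        lines.append(f"{d} CNAME {target}")
        lines.append(f"*.{d} CNAME {target}")  # catch subdomains too
    return "\n".join(lines)
```

An ISP’s resolver would load the resulting zone and apply it exactly as in the resolve sketch earlier; the hard part is the feed’s accuracy and speed, not the format.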
You probably also want to standardise (as much as possible) the labels, wording and styling of how block pages are presented (if presented at all). This includes the DNS responses themselves (because in time, we could get browsers to recognise them… *wink*).
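There is already a standards hook for that *wink*: RFC 8914 (Extended DNS Errors) defines machine-readable info-codes a resolver can attach to a filtered response, which is exactly the kind of signal a browser could one day render as a meaningful page. A small sketch of the filtering-relevant codes:

```python
# Extended DNS Error (EDE) info-codes from RFC 8914 relevant to filtering.
# A browser receiving one of these could show a purposeful message instead
# of a generic resolution error.
EDE_FILTERING_CODES = {
    15: "Blocked",   # blocked by the operator's own policy
    16: "Censored",  # blocked due to an external requirement (e.g. legal)
    17: "Filtered",  # blocked by a policy the client opted in to
}

def describe(code: int) -> str:
    return EDE_FILTERING_CODES.get(code, "Unrecognised or non-filtering EDE code")
```

Standardising on responses like these would let the DNS side say *why*, instead of leaving the browser to guess.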
The outcome would be all of those ISP customers would be unable to connect to known-bad domains by default — #winning!
ISPs like to be compensated for this kind of stuff, so even if they agree it’s a good idea, you ultimately need to make them — and that means regulation or legislation.
Safe Browsing is a pretty good way of doing what it does. It isn’t an add-on installed by the user, but is integrated into the software they are already using (a web browser). It’s there by default, all of the time, for the general internet user — #winning!
Here I am offered more information and a default choice (stay safe) or I go power on through and go ahead (hopefully with more information than I had before, so this is now my choice to do so).
By reasonably connecting the systems together, the user-facing errors and information are just a whole lot better. This also works whether using HTTPS or not… hooray for integration!
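For a sense of what the client side of this integration looks like, here is a sketch of the request body for Google’s Safe Browsing v4 `threatMatches:find` lookup. The field names follow the published API; the client name and URL below are made up, and this builds the payload without sending it:

```python
import json

def lookup_request(urls):
    """Build a Safe Browsing v4 threatMatches:find request body.
    The real call is a POST to
    https://safebrowsing.googleapis.com/v4/threatMatches:find?key=API_KEY."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},  # hypothetical client
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = json.dumps(lookup_request(["http://suspicious.example/login"]))
```

Because the browser itself makes (and interprets) this kind of check, it can show the rich interstitial above rather than a dead-end DNS error.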
How would Joel tackle this problem with technology?
As a technologist I’ll share some things I might focus on — surprise! It is what I have been talking about above.
Harness Safe Browsing (make sure it’s in all the consumer browsers), and feed it with more information so it can catch things sooner. I would also link Safe Browsing into products that aren’t just a web browser (but we’re back to my scope statement).
Consumer/ISP DNS Services
Most consumer ISP services don’t filter out malware/phishing (as we’ve seen above, doing so with DNS means the user gets limited information and no choice, so you have to be quite confident when the DNS RPZ kicks in), but they should.
Quad9, 1.1.1.1 and Google DNS could use this as well. Blocking known-bad domains (with very high confidence in the intel sources, etc.) would be a huge win. A lot of systems know to talk to Google DNS, and even more offer Quad9 and 1.1.1.1 (Cloudflare) as drop-down options. I’d still focus on ISPs, but this is a good surface area.
These two ideas alone should lead to better protections for everyone, by default, without the need to opt-in, buy or install something.
This doesn’t go into fixing fonts, but it would expand known-bad databases and feed systems that matter.
DNS is still in my mind for this one, mainly because DNS underpins so much (even how on-device malware phones home) — you can’t just solve Safe Browsing, you need to keep going up to the common denominator solutions further upstream: DNS is one of those.