We should make URLs less important

Joel Samuel
11 min read · Nov 11, 2020

URLs are bad for humans… so we should be tackling as many root issues as we can to help them, instead of suggesting technical solutions.

Troy Hunt posted recently about humans being bad at URLs, which came about as a result of a bit of toing and froing on Twitter.

I decided to write this because there was disagreement over what we (the technology and technology security folks) should do about it, and over the recommendations those professions make to general internet users.

The long and short of it: I agree that humans (including Troy and myself) are bad at reading URLs, especially when trying to make a trust decision (is this impersonation? phishing? etc.).

(Image: the composition of a URL)

Scope

For the purpose of this post, I am focusing on URLs ‘seen’ by a human interacting with a web browser — URLs that have been typed in, or those that can be seen in the magic bar (after clicking between websites, or following a link from an email client that opened in the browser).

Out of scope are URLs that are (mostly) opaque to the user by design — computer interactions with URLs, such as API calls by software or underlying asset loading (images, scripts, and style sheets).

In terms of people, we’re talking about everyone. Everyone, worldwide, who uses a web browser to get online and do general things (social media, buying stuff, selling things on eBay, and moving money using consumer financial services, whether their bank or PayPal).

Problem statement

Between misspellings (gooogle.com instead of google.com), font display (googIe.com versus google.com — which often comes down to just a few pixels), IDN homographs and subdomain lookalikes (accounts.google.com.securelogin.84791.net instead of accounts.google.com), it is… extraordinarily difficult for technologists and security professionals (let alone general citizens) to determine trust based on the visuals of URLs. Troy used a great example. Which one of these is a trustworthy domain from a Google-powered blog?

blog.google.com
blog.google.cn
blogpost.com
googleblog.appspot.com
app.google.bl0gsite.com

Domains by themselves are almost impossible to build trust around without some sort of prior context or knowledge. Google attempts to measure trust through a variety of methods, including site age, how many other sites link to it, and so on.
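To make that concrete, here is a minimal Python sketch (my illustration for this post, not any real product’s logic) of two checks a machine can do that eyes cannot: flagging non-ASCII characters that may be IDN homographs, and naively working out which domain actually holds the registration. The ‘last two labels’ shortcut deliberately ignores the Public Suffix List, so treat it purely as a teaching aid.

```python
# Illustrative sketch only: two machine checks that hint at why visual URL
# inspection is so hard for humans. The "registrable domain" logic is
# deliberately naive (it ignores the Public Suffix List).
from urllib.parse import urlsplit

def hostname(url: str) -> str:
    return urlsplit(url).hostname or ""

def has_non_ascii(host: str) -> bool:
    """Flag hosts containing non-ASCII characters (possible IDN homographs)."""
    return any(ord(ch) > 127 for ch in host)

def naive_registrable_domain(host: str) -> str:
    """Last two labels only: enough to show where the trust actually sits."""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

for url in [
    "https://accounts.google.com",
    "https://accounts.google.com.securelogin.84791.net",  # lookalike subdomain
    "https://аccounts.google.com",  # first character is Cyrillic 'а', not Latin 'a'
]:
    host = hostname(url)
    print(host, "->", naive_registrable_domain(host), "| non-ASCII:", has_non_ascii(host))
```

Run against the lookalike above, it reports that accounts.google.com.securelogin.84791.net really belongs to 84791.net, which is exactly the context a human eyeballing the magic bar doesn’t have.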

TL;DR

If you think you can simply build technical solutions to fix URLs and their relationships with humans — you’re wrong.

We haven’t fixed this because we’ve never actually tackled the issue holistically — excellent teams, limited by scope, time and money.

Ciaran Martin talks about this.

A secondary post dives into a bit more of how I would go about some technology solutions.

Shots fired

Technologists love technology, so it’s our go-to response

I fall foul of this as well, and if you’re reading this you’re probably a technologist in some shape or form.

I am not saying addressing some of these problems won’t involve technology being changed or built, but if we don’t adequately understand the problem we ultimately end up back in a similar situation.

Training everyone is going to take a while

Scott was talking about training. *nods* Mass training (of all the people) takes a long, long time, and the technology/security messaging has to be as simple as possible (read: it won’t be simple enough) and largely has to be ‘drag along’ — such as when Google Chrome and Firefox stopped showing some encryption certificates as green (which implied ‘good’, of course) and everyone just dealt with it.

Training is part of it, but in this context we would be training people so they can make better choices, and that requires them to conduct an unreasonable number of visual and non-visual checks every time they see a URL. That isn’t going to work out well all of the time.

The immediate consequences of ‘bad’ choices are often zero — there is no feedback mechanism to draw a link between a ‘bad’ choice/site/etc and identity theft or subsequent malware. Sure, that link may be drawn through incident response and forensics, but that isn’t a luxury the general internet user has.

Things can’t stay as they are now, with training as our main mechanism for helping the humans. There aren’t enough humans to help those humans, and humans will still be bad at making security choices.

Add-on solutions only work for a handful of people

From Troy’s post: “So, can we just take the humans out of the picture and instead identify phishing sites with the technology? We can already, and last month I wrote about how NordVPN’s CyberSec can block this sort of thing outright.”

And: “So, what’s the answer? … Turns out we do have solutions and as several people pointed out, using a decent password manager is one of them: ‘Solution: use 1password as your password manager. It won’t match the faked domain, hence no password gets entered. That’s why Troy recommends password managers. Specifically #1password.’”

And later: “Want to make a meaningful difference to phishing attacks? Stop whinging about fonts and instead get people using an up to date browser that flags known phishing sites, running through NordVPN with CyberSec turned on and authenticating to websites using 1Password. Keep educating people, by all means, but expect even the savviest internet users will ultimately be as bad at reading URLs…”

In November 2020, the things that usually save a regular internet user from bad things such as phishing sites are spam filters in mail services (particularly the major providers who can throw analysis at scale, such as Google Mail or Microsoft Outlook) and, if they use a modern browser like Google Chrome or Firefox, Safe Browsing (and other things like it), which warns them about known deceptive or malicious sites.

Neither of these things is particularly quick, but that isn’t anyone’s fault; it just needs more investment. New phishing campaigns can last only a few hours, so it is always a race against the clock.
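For the curious, the core idea behind those ‘known bad site’ warnings is simple even though the real protocols are not: keep a local list of hashes of known phishing hosts and check each visited URL against it. The sketch below is a deliberately simplified toy with a made-up blocklist entry; real Safe Browsing canonicalises URLs, matches hash prefixes and confirms full hashes with a server.

```python
# Deliberately simplified sketch of a browser-style "known bad site" lookup.
# Real Safe Browsing canonicalises URLs, matches hash *prefixes* and confirms
# full hashes with a server; this toy only shows the local-lookup idea.
import hashlib
from urllib.parse import urlsplit

# Hypothetical local cache of SHA-256 hashes of known phishing hosts.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"accounts.google.com.securelogin.84791.net").hexdigest(),
}

def looks_known_bad(url: str) -> bool:
    host = (urlsplit(url).hostname or "").encode()
    return hashlib.sha256(host).hexdigest() in KNOWN_BAD_HASHES

print(looks_known_bad("https://accounts.google.com.securelogin.84791.net/login"))  # True
print(looks_known_bad("https://accounts.google.com/"))                             # False
```

The race against the clock is in keeping that list fresh: a phishing campaign that only lasts a few hours may disappear before its entry ever reaches most users.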

I have no real problem with people using and recommending non-malicious VPNs (oh yes, some VPN services do more harm than good) — even though many folks may not need a VPN. However, if a user cannot easily tell whether a URL is good, then there is no way they can tell whether a VPN is trustworthy either.

Even if you are lucky enough to find a reputable VPN, pay for it and use it, it still won’t fix fonts. It’s difficult to see how any VPN, even ones run by the internet companies (like Firefox’s VPN or Google’s rumoured new VPN), will be able to block illegitimate websites more effectively than the browser itself. The browser has far more context about the user’s actions and activity, and is able to operate at a trust level before the communications are encrypted.

A VPN really only protects you against malicious activity by your Internet Service Provider (ISP). That’s great if you are a criminal and believe that law enforcement is tapping your internet connection, but that’s not the threat model that 99% of the internet population faces (and we could go into a free-speech discussion and coercive governments, but that’s not the problem here). A VPN simply moves your entry point to the internet to a location that the VPN provider can control and filter.

Password managers are great, but like VPNs, the general internet user is not going to become familiar with them, go and pay for one (free ones do exist), install it and then use it consistently (and consistency is key). They remain one of the best tools in the arsenal we currently have, so do I recommend folks use them? Absolutely.

Password managers are probably the best solution currently available to protect against the theft of credentials. Most protect you from being sent to a fake domain that visually looks like the target domain and entering your password there, because they will only fill credentials on the exact domain they were saved for. They also allow you to generate a new, unique password for every website.
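The mechanism is worth spelling out, because it sidesteps exactly the job humans are bad at: the manager never ‘reads’ the URL visually, it simply compares the current origin against the one the credential was saved for. A minimal sketch of that matching idea (my illustration, not how 1Password or any other product is actually implemented):

```python
# Toy illustration of why a password manager won't fill on a lookalike domain:
# it matches the saved origin exactly rather than "eyeballing" the URL.
from urllib.parse import urlsplit

# Hypothetical vault entry, keyed by the origin it was saved on.
saved_credentials = {
    ("https", "accounts.google.com"): ("alice@example.com", "correct horse battery staple"),
}

def credential_for(url: str):
    """Only return a credential when the scheme and host match exactly."""
    parts = urlsplit(url)
    return saved_credentials.get((parts.scheme, parts.hostname))

print(credential_for("https://accounts.google.com/signin"))                        # credential offered
print(credential_for("https://accounts.google.com.securelogin.84791.net/signin"))  # None: nothing to fill
```

Because the lookalike host is simply a different key, nothing is offered, no matter how convincing the page looks.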

However, password managers, while popular with geeks and technical people (and even then not as much as we’d like), are still difficult and painful for users to use. There are also many users who simply don’t trust them.

The security community is divided, with outspoken security “gurus” speaking against the use of password managers. Given this, it’s impossible for a normal user to decide which password manager to use, and difficult for them to get it installed, synced with every device they use to access the internet, and trusted enough.

Finally, the problem with the password manager is that it only protects your passwords. The clue is kind of in the name. If you get tricked into visiting a site and handing over other information about yourself, downloading malware-ridden versions of a browser, or simply being clickjacked, the password manager won’t help you.

If someone has a way of deploying a free password manager with an always-on VPN ‘with CyberSec’ to every internet user — including those in America, those in Iran, those in China, those conducting criminal acts, and your 7-year-old child and your grandfather: add a comment.

There are a plethora of other ‘add-on’ options as well, such as browser plug-ins like HTTPS Everywhere. They work absolutely fantastically, and as an often security-focused technologist I am deeply appreciative of the developers… but they aren’t suitable for mass adoption, let alone mass default adoption where everyone gets them out of the box. The problem with opt-in add-ons, again, is that you are adding more and more trust layers to your computing experience. I might trust the developers of HTTPS Everywhere, but I don’t know who they are, I won’t be told if they hand the project over, and of course, plugins can be malicious themselves.

So, how do we go about solving these old, super-hard problems?

We can’t just build something, particularly in isolation (again). We have iterated DNS solutions over time. We have created third-party password managers. We have rolled out some cool browser add-ons. All of these are add-ons, and so none of them will achieve the mass adoption needed to keep the general internet user informed and safe.

We have seen in-browser password managers (which obviously have a much higher take-up rate than third-party password managers), but compared to the number of general internet users, take-up is at best fractional.

We need to understand that the technology currently available to the general internet user does not offer enough always-on protection, does not always provide enough information, does not provide a choice (in some cases), and the same concepts are presented differently between web browsers.

(As a technologist I also fall foul of what I am about to say) Technologists and security folks are bad at deciding things for other humans. We just want to build something. We solutionise — a lot. We need other professions to balance us out and help us appreciate what we do not understand. Like most technical fields, we think we know more than we do.

We think technology concepts are easy for everyone, and that users should ‘just’ do something — read more, install a VPN, use a password manager, not click a button that is clearly there to be clicked, and so on.

Just keep swimming

The current ecosystem has been this way for some time. We have only really had incremental change (I don’t wish to diminish the work of browser teams changing visual indicators etc, but it’s true).

Despite iterations to some technologies (like DNS), they remain disjointed and can be a pretty awful user experience. We have definitely cost people money by pushing them towards third-party products. We have popularised VPNs (front page of Wired magazine, etc.) but have been unable to deal with malicious VPN services (because internet).

We shouldn’t stop making incremental changes. They are hard won, and many of them have added up to a better internet.

Password managers are great, and so we should keep saying that to users to whom it makes sense.

(We should definitely stop pushing VPNs though.)

Swim upstream

We should consider the problems we have as broad commodity issues. If a problem is a commodity, the thinking changes: we look for places where we can solve it by default, instead of by add-on.

If you have a national problem, you push national regulators or legislative bodies to fix them. If you have an international problem, you use treaties — etc.

Have all browsers used by consumers behave the same way, with protections that exist by default, so users do not need to install or opt in to anything.

I also mean solutions that genuinely do work for most people, and solutions that do not knowingly exclude. (The tech industry has done a great job of excluding those with accessibility needs. I am just not interested in any solution that continues that.)

Where is the substance?! Sorry, we really do need the airy fairy stuff

The above may not be substantive enough for some people.

The first thing we need to do is take a step back. The entirety of this post has been written with a basic premise: OK, so users are not going to install a thing, and the defences they get by default are inadequate. Users are often given complex choices, or simply left to fend for themselves while we rave about third-party solutions instead.

If you approach this through the lens of ‘how can we help?’ instead of ‘users should just…’ or ‘what can we build?’, then you may still end up in a place where you build a thing — but you will have an infinitely better understanding of what, how and why, and the chances of you solving this for everyone increase exponentially.

Thus far, teams (quite excellent teams, mind you, like the Google Chrome folks) have worked in isolation and without standards. Everyone else will generally follow a good idea, but there are cadence issues.

Browser teams don’t influence ISPs. Browser add-on folks (like EFF who make HTTPS Everywhere) also don’t influence ISPs on a global scale.

We need folks to implement things, and we need them to implement things that behave in the same way.

So, the real actual thing we need? Influence.

With influence, we can bring people to the table (the W3C Technical Architecture Group, for example) to extract promises and the ‘when will you do it by?’, set standards and, if need be, draft legislation or update regulatory requirements.

This still might not feel substantial enough for some, but there is a reason why technical solutions have not been forthcoming, and why the ones that exist require third-party opt-in or have not materially changed in years.

Scope and money — the right group of people need to agree that the problem exists and that what we’re doing now isn’t enough, and then those people need to enable other people, which usually means paying salaries.

The former head of the UK’s National Cyber Security Centre (NCSC, a part of GCHQ) leans into this. Ciaran arguably has less influence now, as he doesn’t run NCSC, but he uses his voice to talk about the need to step up defence — to me, that’s solving problems upstream.

In a joint post, I have spun out what I as a technologist would want to influence to make things a little better.

Is anyone working on this kind of stuff?

Yes, in isolation. Yes, on a per-country basis. No, across the world — just browser teams really (again, they do a great job with what they can influence).

What are you doing about it?

I spend most of my professional time as a consultant to various organisations. Some of them are tangentially responsible for new standards so I will be nudging them into action.

Fortunately I do find myself influencing large vendors from time to time by representing large enterprise customers. Over time, I want to challenge vendors to create new defaults that they push laterally into consumer products.

A secondary post dives into a bit more of how I would go about some technology solutions.

What can I do about it?

If you have a platform and/or relationships with regulators, legislators or vendors — have a conversation with them and see what their thinking is. Do this more than once.

You might find other exciting posts in my Medium profile. I’m on Twitter as @JoelGSamuel.
