Can we stop intercepting user traffic (aka Man-in-the-Middle) please?

Joel Samuel
8 min read · Apr 28, 2018


First… clarifying scope

This post focuses on the interception / Man-in-the-Middle (MiTM) of end-user web traffic on a corporate device/network (for example, Janice Bloggs on a work laptop using Chrome to get to Facebook) where the MiTM is overt.

MiTM between systems or infrastructure is definitely a real thing, done for lots of good reasons.

What many of us are doing right now

Whatever you’re doing now can be chalked up to ‘it is what it is’ — hopefully you’re looking to iterate your network and better serve your users.

An (often false) sense of organisational security is usually far outweighed by a user’s need to do their job, maintain privacy and keep data (theirs, and the organisation’s) safe.

Installing our own Root Certificate Authority (CA)

For simplicity, let’s assume an operating system’s Root CA database is well curated.

What you’re probably doing is creating your own Public Key Infrastructure (PKI) and installing its Root CA as explicitly trusted onto your entire device fleet.

Enforcing a web proxy

Whether authenticated or not, and whether using a proxy auto-config (PAC) file or just a plain Group Policy Object (GPO), you’re instructing clients to direct web traffic (or forcefully capturing it) to a proxy of some kind.
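
To make this concrete, here’s a minimal sketch of what an enforced proxy looks like from the client’s point of view, in Python. The proxy hostname and port are made-up placeholders, not a recommendation:

```python
import requests  # third-party: pip install requests

# Hypothetical corporate proxy, as pushed out via PAC file or GPO; the
# hostname and port below are placeholders, not real infrastructure.
proxies = {
    "http": "http://proxy.corp.example:3128",
    "https": "http://proxy.corp.example:3128",
}

# Every request now flows through the proxy. For HTTPS the proxy can only
# inspect content if the device also trusts its Root CA; with requests you
# may need to point REQUESTS_CA_BUNDLE at that internal CA file.
response = requests.get("https://www.example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```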

You might have a commercial solution — hopefully it is more than a Unified Threat Management (UTM) device running Squid. If you’re home cooking this, maybe you’ve rolled Squid or Dante.

You might also be using a web-based gateway like Zscaler which is a different paradigm (so I’ll conveniently not mention it again in this post).

Breaking the HTTPS connection

This is the actual MiTM — your proxy terminates the HTTPS connection itself by issuing a certificate (signed by your internal Root CA) that impersonates the destination domain the user is trying to reach, and it therefore sees all of the traffic.

Your proxy is then creating a new outbound HTTPS connection (hopefully!) to the original destination and relaying the traffic.

It is now sat in the middle viewing unencrypted traffic, and can do whatever you want it to do.
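
You can see this re-signing for yourself from any device behind an intercepting proxy. This Python sketch (standard library only) prints the issuer of the certificate the client actually receives; behind a transparent intercepting proxy that issuer will be your internal Root CA rather than a public CA (an explicit proxy would need a CONNECT tunnel first, which I’ve skipped):

```python
import socket
import ssl

# Print the issuer of the certificate this client actually receives.
# Behind a transparent intercepting proxy, this will be the internal
# Root CA (assuming it is installed in the OS trust store), not the
# public CA the destination really uses.
hostname = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
        print(issuer.get("organizationName"), issuer.get("commonName"))
```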

Blocking websites based on categories

As you’re ‘in the middle’, you’ve configured your proxy to match intended destinations against category lists, and subject to policy show the user different content (for example, a policy violation page).
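
Under the hood this is a lookup against a category database. Here is a toy sketch in Python; the domains, categories and policy are invented for illustration (real products ship licensed category feeds):

```python
# Toy category lookup: the entries and the blocked set are made up.
CATEGORY_DB = {
    "facebook.com": "social-media",
    "example-bet.example": "gambling",
}
BLOCKED_CATEGORIES = {"gambling"}

def decide(host: str) -> str:
    # Walk up the domain so "www.facebook.com" matches "facebook.com".
    parts = host.lower().split(".")
    for i in range(len(parts) - 1):
        category = CATEGORY_DB.get(".".join(parts[i:]))
        if category in BLOCKED_CATEGORIES:
            return "BLOCK: show policy-violation page"
        if category:
            return f"ALLOW ({category})"
    return "ALLOW (uncategorised)"

print(decide("www.facebook.com"))     # ALLOW (social-media)
print(decide("example-bet.example"))  # BLOCK: show policy-violation page
```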

Logging

From ‘the middle’, you can see full URL paths being visited, so you might be logging all of that.

Hopefully you’re not logging POST data such as usernames/passwords and credit card numbers!
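
If you do log, log conservatively. A small sketch of the kind of redaction worth doing, keeping the host and path but deliberately dropping query strings (and never touching request bodies):

```python
from urllib.parse import urlsplit

def safe_log_line(method: str, url: str) -> str:
    # Keep host and path for audit purposes, but deliberately drop the
    # query string, and never log request bodies, so credentials and
    # card numbers submitted via forms don't end up in your logs.
    parts = urlsplit(url)
    return f"{method} {parts.scheme}://{parts.netloc}{parts.path}"

print(safe_log_line("POST", "https://shop.example/checkout?card=4111111111111111"))
# POST https://shop.example/checkout
```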

Monitoring/Alerting

It’s possible you’re monitoring for policy violations and then alerting on those so your IT teams can tell your People Team (HR) how naughty someone is being.

[Fake] Data Leak Protection (DLP)

From ‘the middle’, in theory you can see where files are being exchanged, and you may be choosing to block such exchanges — for example, not allowing .exe files to be downloaded or .zip files to be uploaded.

This will probably be entirely signature-based.

It most definitely works in a terrible, terrible way, with many, many false positives and oh so many false negatives — but some sort of risk assessment makes it seem like it’s all fine.
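
For illustration, here is roughly what that signature-based checking amounts to, and why it is so easy to defeat. The extensions and magic bytes are simplified examples:

```python
# A naive, signature-style check of the kind commodity 'DLP' performs.
# Renaming evil.exe to evil.txt defeats the extension check; checking
# magic bytes helps slightly, but zipping or encrypting defeats both.
BLOCKED_EXTENSIONS = {".exe", ".zip"}
MAGIC_BYTES = {b"MZ": "exe", b"PK": "zip"}

def naive_dlp_verdict(filename: str, first_bytes: bytes) -> str:
    if any(filename.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return "BLOCK (extension match)"
    for magic, kind in MAGIC_BYTES.items():
        if first_bytes.startswith(magic):
            return f"BLOCK (content looks like {kind})"
    return "ALLOW"

print(naive_dlp_verdict("report.txt", b"MZ\x90\x00"))       # BLOCK (looks like exe)
print(naive_dlp_verdict("report.txt", b"encrypted-blob"))   # ALLOW (false negative)
```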

Breaking things

By enforcing an inspecting proxy, you might be stopping some destinations from working, because the traffic isn’t actually HTTPS or because they don’t tolerate MiTM.

An example of this is Google Hangouts/Meet when it falls back to TCP/443 with traffic that is not an HTTPS connection.

Your MiTM’ing system could also be less RFC-tolerant and maintain its own trust requirements, to the point of breaking things that would otherwise have worked without MiTM.

Hiding things

The re-signing using internal CA/PKI structures means the browser (and therefore the user) does not see the original intended CA information and certificate path; it sees the internal CA/PKI information instead.

This may be bad for the user, as they may be instructed to look for specific certificates to identify trust or authenticity.

In a worse scenario, it is possible that, because of the re-signing, the browser confers more trust than it should.

Downgrading security

Many MiTM-capable ‘Unified Threat Management’ or ‘Next-Generation Firewall’ systems are stuck on older versions of TLS (usually v1.0), which may offer less security than the end-user browser would have negotiated directly with the destination (better protocol versions, better cipher suites and so on).
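
You can check what your own connections actually negotiate with a few lines of standard-library Python; behind a downgrading proxy, the version and cipher reported here describe the proxy-to-client leg, not what the destination supports:

```python
import socket
import ssl

# Report what this connection actually negotiated. Behind a proxy stuck
# on TLS v1.0, these values reflect the downgraded leg the client gets.
hostname = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated:", tls.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher())  # (name, protocol, secret bits)
```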

Maintaining a cumbersome whitelist

You may have a list of things you need to bypass inspection for to keep them working, or (hopefully) a list of categories that you won’t inspect because inspection would breach privacy (for example, online banking).

You might also have your network configuration bypass your proxy entirely for certain IP destinations, because the proxy breaks them regardless.

Caching

A proxy caching assets can be useful to speed up user experience and save network bandwidth. You could also be making their experience much worse and just creating more work for yourself.

Caching app distribution is a very different thing (which is largely worthwhile and works better).

Pretending to manage risk & keeping your Chief Information Security Officer (CISO) / Senior Information Risk Owner (SIRO) happy

Knowing more doesn’t mean you’re doing more to stop bad things from happening.

After all, it is impossible for a user to simply print a confidential document, take a picture of the screen using their phone, or email its contents as a non-attachment… right?

Central enforcement solutions (like an intercepting proxy) might be viewed as a catch-all safety net which can lead to neglecting other aspects of the estate.

What we could be doing

Taking a step back and understanding that security includes users (your best, not worst, defence) and that good security requires depth.

We should be taking a holistic view of our systems and of users’ interactions with those systems, designing and implementing proportional defences, and avoiding invasive technical tactics that offer immaterial security value. In this context: don’t look at the middle (your traffic choke point), but at where data originates, is stored and is processed (hint: your users, their end-user devices and your document stores), and work to transparently protect them (filtering at the edge).

Pay attention to our end-user devices

Given the main purpose of intercepting end-user web traffic is usually to defend against malware… increase your defence where that malware operates.

Proportional end-user device controls — I am not directly talking about installing endpoint protection software (which has its pros and cons), but even just coherent configuration of the native tools and settings in the client operating system.

If you do go down the endpoint protection route, premium software license tiers will usually include web filtration components that you can control through central policy. These don’t MiTM traffic as they already work on the device and usually in-browser as an extension.

Heuristic-based detection on end-user devices works within context to understand when something may not be quite right.

Patch your stuff — particularly your end-user devices!

Filtering based on Domain Name System (DNS) results

If you’re in the UK public sector the National Cyber Security Centre (NCSC, a part of GCHQ) have a free DNS filtration service you can enrol your organisation onto. This doesn’t provide organisational policy enforcement (block adult material etc) but focuses on stopping devices from reaching known bad malware sites.

Quad9 acts like the NCSC Public Sector DNS but is open to all; OpenDNS can do the same, along with premium features for organisational policy enforcement.

These solutions are designed for your network’s DNS servers to recurse to, as opposed to directly pointing your end-user devices at them (there are lots of reasons why you don’t want to do that anyway, such as needing to query the .local namespace first, etc).
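
As a quick illustration, here is a Python sketch (using the third-party dnspython package) querying Quad9 directly; a filtered domain typically comes back as NXDOMAIN. In a real estate your internal DNS servers would do this recursion, not individual endpoints:

```python
import dns.resolver  # third-party: pip install dnspython

# Point a resolver at Quad9. For a known-malicious domain, Quad9 returns
# NXDOMAIN, which is how the filtering manifests to the client.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9"]

try:
    answer = resolver.resolve("www.example.com", "A")
    print([rr.address for rr in answer])
except dns.resolver.NXDOMAIN:
    print("Blocked (or genuinely non-existent) domain")
```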

I wouldn’t recommend hand rolling a DNS solution to mimic the above capability (bind9 with Shalla lists etc) — not only does this take effort to create and operate, but you won’t have the same threat intelligence, and these solutions/vendors are already designed for scale.

Getting email security right

Above and beyond a robust externally facing email configuration, nip malware (and spam) before it gets to your users and devices.

Email security is easier than it used to be given the uptake of commodity outsourced solutions like Google’s G-Suite and Microsoft’s Office 365. If you’re still running your own mail services, use good defence modules to catch the known bad and triage the potentially bad (and this should have a slick user experience).

Filtering based on Server Name Indication (SNI)

Putting Transport Layer Security (TLS) v1.3 to the side for the moment:

A long long time ago in a galaxy far away, we used to assign one IPv4 address per website. Now browsers send an SNI value which indicates which domain name they are trying to reach, so the receiving server can respond accordingly (allowing us to serve multiple unique domains/sites from a single IP address).

As SNI is sent in the clear, you can determine whether to intercept only when you need to (SNI value matches a domain you want to filter).

This isn’t a perfect science and I weighed whether to include this, but for now it is OK and still much better than always intercepting.
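
For the curious, the sketch below shows roughly how a filter can peek at the SNI without decrypting anything, by parsing just enough of a raw TLS ClientHello. It is deliberately simplified (no handling for fragmentation, malformed records or non-TLS traffic):

```python
def extract_sni(client_hello: bytes) -> str | None:
    """Parse just enough of a raw TLS ClientHello to pull out the SNI.

    A rough sketch with no error handling for truncated records; a real
    filter must cope with fragmentation, absent SNI and non-TLS traffic.
    """
    if len(client_hello) < 6 or client_hello[0] != 0x16:  # not a TLS handshake record
        return None
    pos = 5 + 1 + 3 + 2 + 32          # record header, msg type, length, version, random
    session_id_len = client_hello[pos]
    pos += 1 + session_id_len
    cipher_suites_len = int.from_bytes(client_hello[pos:pos + 2], "big")
    pos += 2 + cipher_suites_len
    compression_len = client_hello[pos]
    pos += 1 + compression_len
    extensions_end = pos + 2 + int.from_bytes(client_hello[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= extensions_end:
        ext_type = int.from_bytes(client_hello[pos:pos + 2], "big")
        ext_len = int.from_bytes(client_hello[pos + 2:pos + 4], "big")
        if ext_type == 0:  # server_name extension
            # Skip list length (2 bytes) and name type (1 byte);
            # the next 2 bytes are the hostname length.
            name_len = int.from_bytes(client_hello[pos + 7:pos + 9], "big")
            return client_hello[pos + 9:pos + 9 + name_len].decode("ascii")
        pos += 4 + ext_len
    return None
```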

Improving your monitoring

Once your device management systems are configured to ensure end-user devices run modern, up-to-date software, you should monitor that this is actually true.

You should be able to detect the consequence or behaviour of malware such as mass file encryption attempts or a lot of lateral movement on your network. If you have your basic depth in place, the malware would have had to make it past your email filtration and on-device protections to even get this far.
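
As a crude illustration of behaviour-based detection, the sketch below counts recently modified files under a directory; ransomware rewriting files in bulk spikes this number. The path and threshold are illustrative assumptions, and real products correlate this with process and network telemetry:

```python
import os
import time

# Crude behavioural signal: ransomware tends to rewrite many files in a
# short window. The threshold and path are illustrative assumptions.
def recently_modified_count(root: str, window_seconds: int = 60) -> int:
    cutoff = time.time() - window_seconds
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                continue  # file vanished mid-scan
    return count

if recently_modified_count(os.path.expanduser("~/Documents")) > 500:
    print("ALERT: mass file modification, possible encryption in progress")
```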

Enterprise (as opposed to consumer) endpoint software usually provides centralised reporting, so you know when malware was detected or heuristics picked up odd software behaviour.

Document management systems (including basic G-Suite Drive) should be monitored for irregular behaviour (accidental or malicious) such as bulk document downloads which don’t usually happen (this requires context: a mass file copy might be entirely normal if that is what a user does for their job).
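
A toy version of that kind of anomaly check over document-store audit events might look like the following; the event format and per-user baselines are assumptions for illustration:

```python
from collections import Counter

# Toy anomaly check over document-store audit events (made-up format).
events = [
    {"user": "janice", "action": "download"},
    {"user": "janice", "action": "download"},
] + [{"user": "mallory", "action": "download"}] * 200

baseline = {"janice": 50, "mallory": 20}  # typical daily downloads per user

downloads = Counter(e["user"] for e in events if e["action"] == "download")
for user, count in downloads.items():
    if count > 3 * baseline.get(user, 10):  # flag at 3x the user's norm
        print(f"ALERT: {user} downloaded {count} documents (baseline {baseline.get(user)})")
```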

Security monitoring is not a quick conversation so I’ll hit pause on this for the moment.

If you truly need to (hint: most of you won’t) invest in true DLP

I’ll discuss DLP another time, but in short, using a technical step like MiTM to detect malicious data exfiltration means you’ve probably already lost your data.

Commodity DLP through the intercepting web filtration components of UTM appliances is quite boring and usually ineffective — data leak ≈ compressed file being uploaded, etc.

File-level encryption within your file/document storage/management should mean those files can’t be read if the key server is not there to answer a legitimate decryption request. Done right, this is all transparent to the user unless files/documents are in a place they shouldn’t be, or are accessed by a user who isn’t permitted to do so.
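
A minimal sketch of the idea, using the third-party cryptography package; fetch_key_for() stands in for a hypothetical key-server lookup, and the point is that without a served key the stored blob is unreadable:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def fetch_key_for(document_id: str) -> bytes:
    # Placeholder for a hypothetical key server: a real deployment would
    # authenticate the user/device and check policy before releasing the
    # stored per-document key (this demo just generates a fresh one).
    return Fernet.generate_key()

key = fetch_key_for("doc-1234")
f = Fernet(key)

ciphertext = f.encrypt(b"confidential contents")  # what sits on disk
print(f.decrypt(ciphertext))                      # only works while the key is served
```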

Getting ready for TLS v1.3

NCSC’s Chief Technology Officer blogged about TLS v1.3 in March 2018 and it is a good read to get you thinking about the enterprise side of things.

Quickly commenting on the out of scope

Intercepting system-to-system traffic is a different matter, and if you are doing so in order to defend against lateral movement and to understand/enforce traffic between different areas of trust (etc), you should probably continue to do so.

A word of warning: poorly designed interception can degrade security, some examples:

  • the destination domain uses weak ciphers which your client wouldn’t actually accept — by intercepting, your proxy may be masking that, because it offers a good cipher configuration to the client
  • the destination domain has a functional but misconfigured interface (certificates are passed in the wrong order) and your interception fixes this, because the client can no longer see it (on the flip side, your interception might be picky about RFC compliance, and so instead punishes the user by throwing up an error where their browser, communicating directly with the destination domain, would have tolerated the minor RFC misalignment)
  • your client is designed to check for a specific intermediate CA or certificate, but your intercepting proxy issues its own, so the connection either fails or is trusted more than it should be

You might find other exciting posts in my Medium profile. I’m on Twitter as @JoelGSamuel.
