IP address access control lists are not as great as you think they are

Joel Samuel
7 min read · Dec 2, 2018

IP addresses are like flags — poor indicators of trust.
In and of themselves they do not provide strong authentication, but we often use them as though they do.

There ain’t no party like a TCP retry party

Having spent quite some time (read: arguably too long) in the cybersecurity arena, I often find myself having the same conversations over and over again (not always bad). However, when it comes to external IP addresses as a trust indicator, the conversations tend to be the same (so, kinda bad) but no one can point to a thing (post/article/guidance) that states a rational opinion — so here I go (hopefully good).

TL;DR — Public Internet or large WAN (multi-party ecosystem) IP address access control lists (ACLs) for restricting access to ‘things’ (non-production services, Intranets, SSH ports and so on) are really only useful as “just not the public Internet” filters — you must continue to leverage multiple defensive and AAA techniques [in the application layer].

External IP addresses are, at best, mild indicators for trust.

You could also apply this to internal IP spaces between your own networks and systems — network boundaries tend to exist for sensible reasons and a proportional mutual distrust is healthy (for example: payload signing or mTLS).

Scope

Clarity in life is important.

External IP addresses

This post talks about IP addresses on the public Internet (or perhaps a large-scale multi-party WAN like the Public Services Network).

IPv4 86.14.0.0/15, 86.15.241.0/24 etc, not private ranges like 192.168.0.0/16, 10.0.0.0/8 or 172.16.0.0/12

For simplicity, internal network traffic between clients and servers and your own web applications (and so on) is out of scope — many of the points discussed here, however, could still apply.

Purpose

The purpose of IP addresses in this context is to help determine origin, and therefore confer trust and offer privileges as a result.

The problem (what we are currently doing)

We try and identify the IP address ranges

In order to ‘lock down’ a thing (commonly this will be Intranet pages, non-production systems, administrative interfaces like SSH/RDP/VNC and so on) we have to know ‘who’ to let in.

Finding out that definitive list can take time and be unreliable.

If you miss a range you end up locking people out of the thing they need to access. And if/when IP addresses change, how will you know, given that this likely sits outside of your change control governance?

We never truly understand what is behind these ranges because we can’t see behind them

At face value, the wider or narrower the CIDR, the more or fewer possible hosts there are — 86.14.0.0/15 has in theory 131,070 devices/hosts, 86.15.1.0/24 has 254, 86.15.1.100/29 has 6 and 86.15.1.150/32 has one.
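
If you want to check those numbers yourself, here is a minimal sketch using Python’s standard ipaddress module (the /29 is normalised to its containing network; /31 and /32 are special cases, hence the floor of one):

```python
# Usable hosts per prefix: total addresses minus network and broadcast
# (floored at one so the /32 single-host case still reads sensibly).
import ipaddress

for cidr in ["86.14.0.0/15", "86.15.1.0/24", "86.15.1.100/29", "86.15.1.150/32"]:
    net = ipaddress.ip_network(cidr, strict=False)
    usable = max(net.num_addresses - 2, 1)
    print(f"{cidr} -> {net.with_prefixlen}: {usable} usable hosts")
```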

However, the face value is misleading: there could be an entire building or a series of different organisations nested behind a single IP address — and this is almost certain to be opaque to you.

Is this a shared network, so all organisations in the same shared office (think WeWork or Regus) use the same egress IP? Does an organisation operate the /29 themselves, but use it for both corporate WiFi and guest WiFi at the same time? Are these the egress IP addresses for a VPN service with thousands of users?

We assume what is behind them is ‘good’

In an ideal world an organisation you’re working with would tell you they use a /29 (or so) and can define the scope of that use — exclusive, just corporate devices, not BYOD/Guest etc.

Ideally they would also promise to tell you if/when this may change (an office move leading to an IP migration etc).

We assume those devices (and their users) are ‘good’. Not all corporate IT builds are made equal (nor should they be) so your expectation of IT controls, information governance, user security-related training (and so on) more often than not will be very different to what is actually in place unless you go through the effort of assuring this.

Even in a comfortable ‘good’ state you can never be sure that an insider actor (on their side) has not turned bad, or that malware has been installed or a laptop has been left on a train.

Thus, we incorrectly attribute trust

One or more of these problems combined leads us to confer an inappropriate amount of trust onto the traffic coming from that network.

When we incorrectly attribute too much trust we place too high a confidence in the IP address ACL whitelisting method and forsake other controls — leading to a false sense of security.

IP addresses as one indicator in defensive depth (how we should use/trust external IP addresses in ACLs)

Try and identify the IP address ranges

Unfortunately this problem will persist when different organisations need to talk to each other.

The best you can do is convey the importance of advance change notification and the consequence of the ball being dropped.

You should maintain (where appropriate/applicable) a single source of truth (I would recommend formatted change-controlled storage in something like github.com) and automatically apply confirmed (merged to master) changes — but admittedly this level of automation isn’t always easy for tin-based blinky-box appliances like on-premise corporate firewalls.
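
As a purely illustrative sketch of that kind of automation (the file format, security group ID and port here are assumptions, not a recommendation of any particular tooling), a small script could reconcile a change-controlled allowlist against an AWS security group using boto3:

```python
# Hypothetical reconciliation: read an allowlist kept under change control
# in a git repository and make an AWS security group match it.
import boto3

SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # assumption: your security group
PORT = 443                                   # assumption: the port being restricted

def load_allowlist(path="allowlist.txt"):
    # Assumed format: one CIDR per line, '#' starts a comment.
    cidrs = set()
    with open(path) as f:
        for line in f:
            entry = line.strip()
            if entry and not entry.startswith("#"):
                cidrs.add(entry)
    return cidrs

def current_cidrs(ec2):
    group = ec2.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])["SecurityGroups"][0]
    cidrs = set()
    for perm in group.get("IpPermissions", []):
        if perm.get("FromPort") == PORT:
            cidrs.update(r["CidrIp"] for r in perm.get("IpRanges", []))
    return cidrs

def reconcile():
    ec2 = boto3.client("ec2")
    wanted, actual = load_allowlist(), current_cidrs(ec2)
    for cidr in wanted - actual:   # newly approved ranges
        ec2.authorize_security_group_ingress(GroupId=SECURITY_GROUP_ID,
            IpProtocol="tcp", FromPort=PORT, ToPort=PORT, CidrIp=cidr)
    for cidr in actual - wanted:   # ranges no longer approved
        ec2.revoke_security_group_ingress(GroupId=SECURITY_GROUP_ID,
            IpProtocol="tcp", FromPort=PORT, ToPort=PORT, CidrIp=cidr)

if __name__ == "__main__":
    reconcile()
```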

Understand IP addresses are just a mild indicator for potential trust

The complexities and probabilities of IP range hijacking and all that aside: operate on the basis that external IP address ranges are probably who you think they are most of the time, but you will never be sure.

When we apply this level of scepticism we begin to appreciate that IP address based access control lists are helpful but not ultimately reliable — this helps us evaluate how much trust we should place in them and our need (or not) for other controls.

Consider the use-cases

A problem with IP address access control methods is that they are a binary gate — open or closed — and if availability is important (the thing behind the access control is used all of the time) then you must consider the consequence of deviation or outlier scenarios.

Real-world example: an intranet
There is a client organisation’s Intranet I use a fair bit, but it is hidden behind an IP ACL (and nothing else…) so I have to install a full-tunnel VPN on my work (not client-issued) laptop and only enable it when I need to access their Intranet — shutting down my other split-tunnel VPNs, closing tabs/applications (as I do not want that data going through that tunnel) and terminating sessions that won’t survive an IP change (SSH tunnels etc).

In reality their Intranet is one of the life-bloods of their organisation and for internal information all roads lead to this particular Rome — there is content that should not be made public (or at least would be redacted prior to release under FOIA) but their only control is an IP address based ACL, so I would bet there is a significant false sense of privacy. Recommendation? They can keep the IP ACLs if they want to, but they need to add application-level authentication (through O365/G-Suite SSO or magic link).
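
For illustration only, a minimal sketch of what magic link authentication can look like at the application layer (the token store, lifetime and URL here are assumptions, not the client’s implementation):

```python
# Magic link sketch: email the user a single-use, short-lived token;
# visiting the link proves control of the mailbox and starts a session.
import secrets, time

TOKENS = {}                     # assumption: swap for a real datastore
TOKEN_TTL_SECONDS = 15 * 60     # links expire after 15 minutes

def issue_magic_link(email, base_url="https://intranet.example/auth"):
    token = secrets.token_urlsafe(32)
    TOKENS[token] = {"email": email, "expires": time.time() + TOKEN_TTL_SECONDS}
    return f"{base_url}?token={token}"   # assumption: sent to the user by email

def redeem_magic_link(token):
    record = TOKENS.pop(token, None)     # single use: removed on first redemption
    if record and record["expires"] > time.time():
        return record["email"]           # authenticated identity for the session
    return None                          # unknown, expired or reused token
```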

Real-world example: people directory
Another client organisation allows the entire directory of over x,000 staff (names, role, building location, contact details and picture) to be viewed based on source IP address alone. If you login (magic link or G-Suite SSO) you can then edit content (you also happen to be able to edit the page, but that’s a different problem…). Recommendation? Remove the IP ACL that confers privilege and always require authentication.

Real-world example: dynamic infrastructure
Leveraging technologies like Amazon Web Services (AWS) CloudFront or Elastic Load Balancers results in dynamic load shifting, often leading to quickly changing IP address allocations or the inability to insert IP ACLs to begin with — if you find yourself in a position where your technical configuration doesn’t even support IP ACLs, you should think long and hard before breaking the configuration or putting in workarounds so it does.
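
To see how fluid those allocations are, AWS publishes its current IP ranges as JSON at a well-known URL; a quick sketch to count the CloudFront IPv4 prefixes in the current revision:

```python
# Fetch AWS's published IP ranges and count the CloudFront IPv4 prefixes;
# the createDate field shows how recently the allocation changed.
import json, urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

cloudfront = [p["ip_prefix"] for p in data["prefixes"] if p["service"] == "CLOUDFRONT"]
print(f"As of {data['createDate']}: {len(cloudfront)} CloudFront IPv4 prefixes")
```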

Now imagine something far more important like administrative access to the AWS Console during an incident — would you want your team trying to deploy or spin up a VPN at 3am? What happens if they need a VPN to manage other infrastructure at the same time — will they need to keep switching VPNs and disconnecting sessions to do so? What is the probability of this unfortunate scenario compared to regular BAU where a single full tunnel VPN would be OK?

Implement defensive depth

With less trust in IP addresses as a filtration method, we remember we should always do a good bunch of other things (a minimal sketch combining a few of these follows the list):

  • log access/activity
  • monitor access/activity
  • actual authentication (client certificates, magic links, usernames/passwords, single/same sign-on, multi-factor authentication etc)
  • actual authorisation
  • build in defences against denial of service attacks, brute force attempts and credential stuffing
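
Purely as an illustrative sketch (Flask, the ranges and the limits are assumptions, not a drop-in implementation), the IP check is only the coarse first filter and the rest still runs on every request:

```python
# The allowlist is a coarse "just not the public Internet" filter;
# authentication, rate limiting and logging still apply regardless.
import ipaddress, logging, time
from collections import defaultdict
from flask import Flask, request, abort, session

app = Flask(__name__)
app.secret_key = "change-me"                                # assumption: real key management elsewhere
ALLOWED_RANGES = [ipaddress.ip_network("86.15.241.0/24")]   # assumed corporate egress range
ATTEMPTS = defaultdict(list)                                # naive in-memory rate limiting
log = logging.getLogger("access")

@app.before_request
def layered_checks():
    source = ipaddress.ip_address(request.remote_addr)

    # 1. Coarse filter: drop traffic that is not from a known range
    if not any(source in net for net in ALLOWED_RANGES):
        abort(403)

    # 2. Brute force / credential stuffing damping (very naive sketch)
    now = time.time()
    recent = [t for t in ATTEMPTS[request.remote_addr] if now - t < 60]
    if len(recent) > 100:
        abort(429)
    ATTEMPTS[request.remote_addr] = recent + [now]

    # 3. Actual authentication is still required, whatever the source IP
    if request.path != "/login" and "user" not in session:
        abort(401)

    # 4. Log access/activity so it can be monitored
    log.info("path=%s source=%s user=%s", request.path, source, session.get("user"))
```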

Filter out the noise if it still makes sense to do so

IP addresses used as this kind of sensible and well-considered filtration method still provide a handy filter of “not the public Internet”.

While your web, SSH, RDP, VNC services (etc, as you’re so reasonably inclined) should have enough defensive techniques applied that they could withstand the pressure of the big bad world wide web, being able to reduce noise you don’t need by simply stopping that traffic will always remain useful.

So, what are you saying?

External IP address access control lists are useful as part of a wider set of controls.

If you have other defensive and AAA measures in place but would like to filter out tertiary noise after assuring your use-cases are relatively airtight then you can introduce external IP address ACLs in an effort to save yourself from the likes of random port scans or brute force attempts.

Can we have some real-world examples?

Since you asked so nicely :-)

Real-world example: reducing MFA prompts
If your corporate staff WiFi was suitably access controlled (in reality this could still be PSK, even with a little bit of signal bleed into the carpark etc) and had a clear egress range of IP addresses (so you definitively know those IP addresses were just people on your own corporate staff WiFi) you could leverage the proximity probability of those individuals (devices) and reduce the number of times you prompt for MFA.

Real-world example: making sessions longer
Similarly to the above, you could allow sessions/tokens to last for 30 days instead of 7 (and so on) if the session is only ever active from this predictable and ‘known’ location.
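
A minimal sketch of that kind of policy (the range and durations are assumptions): the source IP never replaces authentication, it only nudges how long a session lasts and how often MFA is re-prompted.

```python
# Trusted-network nudges: longer sessions and fewer MFA re-prompts when the
# request comes from the assumed corporate WiFi egress range.
import ipaddress
from datetime import timedelta

CORPORATE_WIFI = [ipaddress.ip_network("86.15.241.0/24")]   # assumed egress range

def session_policy(source_ip: str) -> dict:
    source = ipaddress.ip_address(source_ip)
    on_corporate_wifi = any(source in net for net in CORPORATE_WIFI)
    return {
        # MFA is always required at first sign-in; the source IP only
        # stretches how often we ask again.
        "mfa_reprompt_every": timedelta(days=7 if on_corporate_wifi else 1),
        "session_lifetime": timedelta(days=30 if on_corporate_wifi else 7),
    }

print(session_policy("86.15.241.23"))   # longer session, fewer MFA prompts
print(session_policy("203.0.113.50"))   # tighter defaults for anywhere else
```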

You might find other exciting posts in my Medium profile. I’m on Twitter as @JoelGSamuel.
