Phishing versus Defence-in-Depth

Joel Samuel
10 min read · Jul 11, 2023


This post is brought to you by two sticks and pure rage, because corporate simulated phishing campaigns are nearly always done awfully.

I recently heard Emma Wicks speak about user-centric security and advocate for user-first ways of approaching cybersecurity. I’ve learnt a lot from Emma and others over the years, and credit her with transforming the way I view technology and security-in-technology: I’m far, far more user-centric than I used to be.

I am of the view that the vast majority of phishing attacks are successful as a result of the IT and/or cybersecurity teams failing to implement good IT systems, defences and detections. I’m not a fan of just blaming the intern.

Emma’s talk spurred a broader discussion about where the end-user sits within the end-to-end lifecycle of a cyber attack, in this case, phishing.

Text reads “our employees are the first line of defense”
Nope. They aren’t the first, or last.

What’s wrong with simulated phishing campaigns?

I really struggle with them as they shift the burden to the end-user, and are rarely a test of the IT/cybersecurity capabilities (tools, systems or teams).

The most effective and positive simulated phishing campaigns I have seen are done ‘with’ the organisation, not against or to it — people are brought together to look at previously convincing phishes, and work to make their own. It’s fun, educational and blame-free. They aren’t emails sent out of the blue.

The next best don’t blame the end-user if they ‘fell’ for the phishing test, but I still question what is being measured, and the culture cost of conducting phishing tests at all, because they still probably annoy users.

The worst phishing campaigns look for “teachable moments”, and are sadly often conducted by teams who don’t understand technology or cybersecurity, so the automatic response is to shift responsibility to the end-user instead of doing the hard work of improving how technology is operated and secured. These campaigns are almost always adored by CxOs, risk owners and so on, probably because it’s something to show on a slide deck and it feels like ‘something is happening’.

I have a small sympathy for stretched security teams, who use phishing exercises as something they can do, in lieu of other things they want to do but don’t have the money for (they would love an EDR system but can’t afford one, etc). It’s only a small sympathy, because I get it, but all it does is bug users and feed the cycle: phishing exercises don’t improve security, but security isn’t being improved, so conduct more phishing exercises. The beatings will continue until morale improves.

The culture cost

Hacking off end-users is counter-productive to security in the long term.

Even if phishing exercises are done well, end-users who ‘fall’ for these exercises won’t feel good about it, will loathe any training they are sent on, and may be less likely to engage with security teams in the future.

There are some really personalised attacks that technology will find difficult to detect initially, so they require a human to spot them to begin with. For example: someone in finance receives an invoice from what looks like someone at a supplier they normally receive invoices from, but this time with new bank details. That person is now less likely to spot the attack if you’ve used up their time, patience and mental capacity on inane training and stupid tests. Nor will they bother calling you to say “this looks weird…”

Edit (2023–07–13): Michelle (a security awareness specialist) takes this one step further and says that simulated phishing exercises lead to vanity statistics and boards are less and less tolerant of these.

Swim phishies, swim

Let’s run through two typical malicious email campaigns as technical workflows.

Phishing for credentials

  1. Bad actor sends the phishing email.
  2. Receiving mail server processes it, and allocates it to the right mailbox, resulting in one more unread email.
  3. User opens the email, often loading or pre-loading any external HTML assets referenced in the email (DNS lookups, IP connections etc)
  4. User clicks a link in the phishing email, which might be something benign (part of pretending to be a legitimate sender) or the main call to action the attacker wants the user to click on
    Let’s imagine this is a login or fake password reset button that goes to https://gmаil.com/resetpassword
  5. A browser launches, and does a DNS lookup for the domain in the URL (if a domain). DNS service answers the DNS lookup query.
  6. Browser checks the IP/domain against reputation lists.
  7. Browser connects to the corresponding IP address of the phishing site. Network facilitates these IP connections.
  8. Browser displays the attacker’s phishing page.
  9. User ‘falls for it’, and enters their credentials which might be their work email address, password and maybe MFA/2SV if not phishing resistant.
  10. Sophisticated attacks will act as a proxy, in some cases ‘passing through’ the user’s credentials to the actual service, so the user is also actually being logged in.
  11. Once the user has provided what the attacker wanted, they are redirected to the legitimate service, or told there is an error and quietly sent somewhere else (maybe back to the legitimate service’s login page to try again).
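Steps 5 to 7 are where DNS- and network-level defences can intervene before the page ever renders. A minimal sketch of a blocklist check a protective resolver might perform (the blocklist contents and function names are my own illustration, not any specific product’s behaviour):

```python
# Sketch of a DNS-layer blocklist check: a protective resolver refuses
# to answer queries for known-bad domains or their subdomains.
# The blocklist entries are illustrative, not real threat intelligence.

KNOWN_BAD_DOMAINS = {
    "xn--gmil-63d.com",       # IDN homograph of gmail.com, in punycode form
    "evil-invoices.example",  # made-up example entry
}

def normalise(domain: str) -> str:
    """Lowercase and convert any IDN to its punycode (ASCII) form, so
    lookalike Unicode domains compare correctly against the blocklist."""
    return domain.strip(".").lower().encode("idna").decode("ascii")

def should_resolve(domain: str) -> bool:
    """Return False if the domain, or any parent domain, is blocklisted."""
    labels = normalise(domain).split(".")
    return not any(
        ".".join(labels[i:]) in KNOWN_BAD_DOMAINS
        for i in range(len(labels))
    )
```

Note the normalisation step: comparing the punycode form means the Cyrillic lookalike of gmail.com is caught even though it renders almost identically to the real thing.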

Delivering malware through a remotely hosted file

  1. Bad actor sends the email. They are super sneaky, so the first-stage malware is actually held on a legitimate file hosting service such as Google Drive, Microsoft OneDrive, Dropbox etc.
  2. Receiving mail server processes it, and allocates it to the right mailbox, resulting in one more unread email.
  3. User opens the email, often loading or pre-loading any external HTML assets referenced in the email (DNS lookups, IP connections etc).
  4. User clicks a link in the email, which might be marked as “click here to securely download the invoice”
    Let's imagine the URL is https://drive.google.com/file/d/1obw87p0QGWVAAGhhDS1YKLLckaVW1riX/view?usp=sharing
  5. A browser launches, and does a DNS lookup for the domain in the URL (if a domain). DNS service answers the DNS lookup query.
  6. Browser checks the IP/domain against reputation lists.
  7. Browser connects to the corresponding IP address of the phishing site. Network facilitates these IP connections.
  8. Browser downloads the file to the user’s Downloads folder or temporary cache location.
  9. The user navigates to their Downloads folder or browser download pane, and opens the file.
  10. The malware executes as per its programming, doing all sorts of terrible, terrible things — depending on the operating system, payload and attacker intentions: this could vary from digging through files/information in the user-level space, downloading further malware from the internet, and/or escalating permissions to influence the system-level.
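As a heavily simplified illustration of the signature-matching part of the scanning at steps 8 and 9, endpoint protection can hash the downloaded file and compare it against known-bad signatures. The ‘signature’ below is a placeholder I made up for demonstration; real products layer heuristics, emulation and cloud reputation on top of hash matching:

```python
# Conceptual sketch of signature-based scanning of a downloaded file:
# hash the bytes and compare against known-bad SHA-256 signatures.
# The signature below is a demonstration placeholder, not real intel.
import hashlib

KNOWN_BAD_SHA256 = {
    # placeholder: the hash of the literal bytes b"malware"
    hashlib.sha256(b"malware").hexdigest(),
}

def is_known_bad(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256
```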

Defence-in-Depth

There are some common elements to how the phishing email and malware delivery work. They both:

  • were sent by mail platforms, some of which do detect outbound abuse
  • arrived by email, through a corporate email system
  • resulted in DNS lookups
  • resulted in IP connections
  • resulted in the browser loading websites

So,

  • corporate email system should be filtering out bad emails — this happens at the platform level as well as the tenancy level
  • corporate DNS systems should be blocking known-bad domains and informing detective domain reputation signals
  • corporate networking systems should be blocking known-bad IPs
  • modern browsers should be using domain reputation

Phishing effectiveness

In the phishing example (phishing for the user’s corporate email account credentials), the attacker would replay these credentials. This could readily be defeated with phishing-resistant multi-factor authentication, Microsoft 365 / Azure Active Directory Conditional Access rules and so on.

Even if an attack succeeds because the IT and cybersecurity teams failed to set up MFA and access rules properly, platform intelligence, security configurations and detective monitoring can pick up bad behaviour much faster than the end-user can (if the end-user can even tell it’s happening): end-users don’t read audit logs; SIEMs should ingest and process them against detection use-cases and support threat hunts.

Malware effectiveness

I chose a more sophisticated scenario where a file is being delivered by a reputable service on purpose, so this is much harder to solve at the DNS, IP or browser level — I didn’t want anyone to think I was only picking the simple/easy attack styles.

The reality here is that the first time the corporate environment ‘understands’ the malware file is during download, as the endpoint protection software scans it in memory and again on disk.

Endpoint detection & response (EDR) systems will then understand the file as it tries to run, whether that’s a .app file (macOS), an .exe file (Windows) or a Microsoft Word file with macros and scripts. In the Office file scenario, Office can be configured not to load macros at all.

Good EDR platforms can readily detect a Microsoft Office document spawning a number of child processes, some of which connect to the internet or start grabbing files from the user’s space.
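That kind of parent/child heuristic can be sketched in a few lines. The process names and the rule itself are illustrative assumptions, not any vendor’s actual detection logic:

```python
# Sketch of an EDR-style heuristic: flag Office applications spawning
# shells or script hosts -- classic macro-malware behaviour.
# Process names and the rule are illustrative, not a vendor's real rule.

OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe", "mshta.exe"}

def suspicious_spawns(process_events):
    """Given (parent_name, child_name) tuples from process telemetry,
    return the events where an Office app spawned a shell/script host."""
    return [
        (parent, child)
        for parent, child in process_events
        if parent.lower() in OFFICE_PARENTS
        and child.lower() in SUSPICIOUS_CHILDREN
    ]
```

A Word document spawning PowerShell is flagged; a user launching a shell from Explorer is not, because the parent process gives the behaviour its context.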

Technology understands technology, cybersecurity teams (probably, hopefully) understand cybersecurity

Network firewalls, DNS systems, EDR platforms etc are all better at understanding what technology is doing than the end-user.

How would an end-user reading a document know that in the background the file is opening up connections to command and control and downloading more files?

Cybersecurity teams are meant to be aware of the cybersecurity threats their organisation/colleagues will reasonably face, and ensure systems have good security configurations and conduct security-focused monitoring.

In all of this, the end-user (let’s say an engineer in a factory who uses their computer for 5 emails a month and timesheets, or a knowledge worker who processes invoices so is sent Word files, PDFs and links to Word files and PDFs hundreds of times a month) knows the least. Why do organisations frequently think they are the people to make responsible for ‘being alert’ to cyber threats?

Paraphrasing Emma (quite heinously) from her talk: “Be alert? Be aware? Why? What from? When? All the time?!”

The end-user is not the first, or last, line of defence

If the two examples are anything to go by, there is a lot of technology that could and should be doing the heavy lifting.

Let’s take a look at where the end-user sits in these scenarios, based on these two attacks:

  1. The email service’s global defences/intelligence
  2. The configuration in your tenancy of the email service — such as tagging emails as suspicious (same name as their manager, but the email address is randomaccount@hotmail.com)
  3. The configuration profiles of the corporate mail client(s)
  4. The corporate DNS systems
  5. The browser-based reputation checks
  6. The corporate network-level systems
  7. The endpoint protection systems — including attack surface reduction
  8. The end-user
  9. The endpoint detection & response (EDR) system
  10. The platform’s access rules (for example, Microsoft 365 conditional access and multi-factor authentication)
  11. The corporate DNS/network-level systems for subsequent activity — malware C2, redirections between phishing pages etc
  12. The platform’s audit/log activity (for example, Microsoft 365 SharePoint and Azure AD access logs) — for indications of strange behaviour, such as mass file downloads, new email forwarding rules and so on

Plus, detective security across the corporate platforms, EDR, DNS and networks. Each of these layers should be monitored by a team whose job it is to think about security.
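Detection use-cases like the mass-download example in item 12 can be sketched as a simple sliding-window rule. The event shape, window and threshold here are my own illustrative assumptions, not any platform’s actual audit-log schema:

```python
# Sketch of a detection use-case over platform audit logs: flag users
# who download an unusual number of files in a short window.
# Event shape, window and threshold are illustrative assumptions.
from collections import defaultdict

def mass_download_users(events, window_minutes=10, threshold=50):
    """events: iterable of (user, timestamp_in_minutes, action) tuples.
    Return users with more than `threshold` 'FileDownloaded' events
    inside any rolling window of `window_minutes`."""
    per_user = defaultdict(list)
    for user, ts, action in events:
        if action == "FileDownloaded":
            per_user[user].append(ts)

    flagged = set()
    for user, times in per_user.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # shrink the window until it spans at most window_minutes
            while times[end] - times[start] > window_minutes:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(user)
                break
    return flagged
```

No end-user could spot this happening in their own account; a SIEM running rules like this can.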

This does get complicated with the use of personal IT for work (commonly known as Bring Your Own Device), but that’s an organisational risk problem, not an end-user one (mostly).

What is the end-user responsibility?

I usually get in trouble with security and IT teams when I say this — I think the end-user’s responsibility is just to do their job… click away!

Their vigilance should be reserved for the contextually abnormal, which IT systems can’t detect because it sits within a relationship context rather than a technical metric. For example, being sent an invoice to open when their job doesn’t involve processing invoices.

If technology determines with high confidence that an email or link is ‘bad’, it should be blocked, but it’s not always that simple (hence spam folders). Often emails are shown to the end-user but with additional warnings. End-users should pay attention to the warnings IT/cybersecurity teams have created, for example the big banner at the top of an email warning that it comes from someone with the same name as a person in their address book but a different email address: that might be OK, but it’s worth considering. This often happens if you usually email Joel Samuel on Joel.samuel@domain.com, but now you have an email from Joel Samuel on j.samuel@outlook.com; it may or may not be the same person.
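That banner logic can be sketched as a simple contact comparison; the contact list and addresses are illustrative, reusing the example above:

```python
# Sketch of the warning-banner logic: if a sender's display name matches
# a known contact but the address differs, attach a warning banner.
# The contact list and addresses are illustrative examples.

KNOWN_CONTACTS = {
    "Joel Samuel": "joel.samuel@domain.com",
}

def banner_for(display_name: str, address: str):
    """Return a warning banner string, or None if no warning applies."""
    known = KNOWN_CONTACTS.get(display_name)
    if known and known.lower() != address.lower():
        return (f"Caution: '{display_name}' usually emails you from "
                f"{known}, but this message is from {address}.")
    return None
```

The system cannot know whether the new address is legitimate; it can only surface the mismatch and leave the relationship-context judgement to the human.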

They should know it’s OK to think that’s strange, and how to report the strangeness quickly to the right team: do they contact security@, or hit the ‘report’ button in Outlook, etc.?

Did you see the easter egg?

Did you notice the IDN homograph in the https://gmаil.com link above?

I wrote it, and I can barely see it, to the point where I had to check multiple times that I had done it properly. Copy and paste it into a browser and it will come up as https://xn--gmil-63d.com (fortunately, Google has defensively registered this domain, so no one can use it to pretend to be gmail.com under this IDN permutation).
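If you want to check the conversion yourself, Python’s built-in IDNA codec makes the lookalike visible:

```python
# The Cyrillic "а" (U+0430) looks like the Latin "a", but forces the
# domain into its punycode form, exposing the homograph.
homograph = "gm\u0430il.com"   # renders like gmail.com, but is not

assert homograph != "gmail.com"
assert homograph.encode("idna") == b"xn--gmil-63d.com"
assert "gmail.com".encode("idna") == b"gmail.com"
```

Browsers and mail filters do this same normalisation internally, which is why the DNS/reputation layers can catch what human eyes cannot.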

I wouldn’t fault anyone for not noticing that in an email client or webpage as they went about their busy work day.

If you are reading this, you’re probably in IT/cybersecurity — so if you didn’t notice, what makes you think a busy ‘regular user’ would…?

“But… I would still like to run simulated phishing exercises”

You really have to ask yourself why, if the outcomes will actually be useful, what else you could (or should) be doing, and what cultural impact this may have in your organisation.

“I am a much smaller organisation, I can’t afford enterprise-grade security tools or teams”

This post isn’t really aimed at smaller organisations without dedicated IT/cyber teams.

Small-to-medium organisations still have a burden to protect personal data, and so on, and there are easily accessible tools to help do this. Any organisation can also outsource proportionally, and IT providers who serve SMEs should be baking in good security.

Business-tier Microsoft 365 and Google Workspace licences still come with a lot of security features. You can still provide staff with a browser that performs reputation checks. You can still put a malware-filtering DNS service (such as 1.1.1.2 and 1.0.0.2 from Cloudflare) on the office’s internet router, and think about encrypted DNS (with malware filters) directly on company smartphones, tablets and laptops, such as with Cisco Umbrella or NextDNS.

Credits

  • Emma Wicks for her fantastic talk which sparked the whole discussion (and then reviewing the post)
  • Andrew Cousins for writing up the list of defensive features/systems that sit around the end-user, so I could shamelessly copy it

You might find even more exciting posts in my Medium profile. I am on Twitter as @JoelGSamuel.


Written by Joel Samuel

The thin blue line between technology and everything else. joelgsamuel.com
