Why Simulated Phishing Campaigns Are a Bad Idea

Being in charge of infosec is a little tricky. You get hired with the expectation of ramping up quickly, understanding everything that's going on, and protecting against any practical cybersecurity attack. In the meantime, anything that goes wrong is your responsibility. I'm not saying you will be blamed for it, but you have to deal with the aftermath: incident reporting, and convincing stakeholders which holes need to be plugged fast. You also have to keep learning while doing this work before you are in a good enough position to have at least a clear picture of what's going on. Documentation usually goes stale mere weeks after it is written, so there is only one way to learn about the organisation and its services: information gleaned from colleagues in various positions. You take out your copy of ISO 27002 and find there are a ton of controls to apply in order to improve security. You have neither the authority nor the deep system understanding to apply the controls yourself, so you need help from your colleagues.

This is where it gets tricky:

  1. You have no authority over these colleagues; they do not report to you.
  2. You have no power either; they do not depend on you to do their job.
  3. The only thing you can use is your influence, and you must use it wisely.

In the best-case scenario, which in my experience is the most common one, your colleagues understand your position and your mission, and they are happy to help you out with whatever you need to know. In the worst case you get stonewalled, and you end up escalating to higher-ups as a last resort.

In one of my previous jobs the ramp-up was going exceptionally well even though the tech environment was complex. People were forthcoming, and I even had a few 'good snitches' who would feed me extra information about questionable coding practices, hoping that I would dig into them and make things right. I was always tactful in my approach, and it worked: I liked the guys and they liked me.

One fine day my manager told me he wanted a phishing campaign conducted, to prove a point that we needed to improve. I had never led one before, so I thought it would be cool; for those who fell for it, it would merely be like being fooled by a magic trick, or so I thought. Using one of the popular platforms out there, I carefully created a template made to look like an internal email, launched it, and started gathering the results. I still remember the awkwardness of some people reporting these emails as potentially malicious and being unable to say any more than 'thanks for reporting'. What followed are the lessons I learnt about why this was a bad idea.

1. Resentment and Erosion of Trust

When I came clean that a phishing test had been conducted, it was an absolute disaster. I am not talking about some people disputing that they actually clicked on the emails; I am talking about the attitude change. From being the guy you ask about authentication protocols or bounce ideas off, I suddenly became regarded as someone trying to trick you, and I don't blame them one bit. I could feel the resentment when sitting next to someone to discuss the latest issue, and the months and years of building a collaborative relationship were torn down by a stupid exercise. Rebuilding that trust was no easy task, and I cannot say things ever got back to the same level.

2. Lack of Realism

Most campaigns use a pre-determined template, and the mere act of clicking on an email link automatically puts you in the idiot category. Frankly, this is unfair. In 2023, the single act of clicking on a link is unlikely to compromise your machine. The main damage is that the attacker learns, from the encoded parameters in the URL, that he hit a valid email address. What is dangerous is what you do AFTER you click the link: it's only bad if you agree to download the file the page is offering you, or to install the Chrome extension or whatever other crap it's telling you to.
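To make the "encoded parameters" point concrete, here is a minimal sketch of how a phishing kit (or a simulation platform) might embed the recipient in a tracking link. The domain, parameter name, and encoding are all hypothetical; real platforms vary, but the principle is the same: the click alone confirms the address is live.

```python
import base64
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical sketch: parameter name "rid" and the base64 encoding
# are assumptions for illustration, not any specific product's scheme.
def make_tracking_link(email: str) -> str:
    token = base64.urlsafe_b64encode(email.encode()).decode()
    return "https://landing.example.com/offer?" + urlencode({"rid": token})

def recipient_from_click(url: str) -> str:
    # What the sender learns from the mere act of clicking:
    # decoding the token confirms this mailbox exists and is read.
    token = parse_qs(urlparse(url).query)["rid"][0]
    return base64.urlsafe_b64decode(token.encode()).decode()

link = make_tracking_link("alice@example.com")
print(recipient_from_click(link))  # alice@example.com
```

Note that no payload runs on the victim's machine here; the information leak happens server-side the moment the URL is requested.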

In the 2000s there were nasty vulnerabilities, notably in Internet Explorer, that allowed an attacker to gain access to your machine through the mere clicking of a link. This type of exploit is now far harder to pull off, as most browsers run in sandboxed environments and have much better self-update mechanisms. Reflected XSS is also a possibility, but that points to a vulnerability in the software running on the network rather than in the act of clicking itself.

To prove my point: you know where else is full of garbage links? Search engines. If you are in tech, you have definitely gone down a rabbit hole searching for an issue that seemingly no one else has. You end up on the third page of search results, which is where all the crap shows up, such as this one.


You would be lying if you said you never clicked on such links; if you truly haven't, it means you never looked hard enough for an issue. These links take you through a series of redirects to either pornography sites or something equally scammy like this.

What do you do now? Guess what: close the tab and get on with your life. If clicking on a bad link equalled a compromised machine, then 90% of endpoint machines would be compromised. Again, the only issue is what you do after you end up on such pages.

3. No Awareness of Actual Phishing Attempts

What you don't see in simulated phishing campaigns are the actual phishing attacks happening in the wild: CEO impersonations, payroll staff impersonations, gift card scams, Skype malware, requests to change a creditor's IBAN, and many more.

Phishing simulators have a predefined engine and can only generate a few types of phishing lure. This renders your colleagues desensitised to the actual scams happening out there.

4. Focuses on Blame rather than Education 

Unless it's directly paired with an educational campaign, this is just a gotcha exercise. 

5. Resistance to Reporting 

When individuals get used to receiving these simulated emails, they become less inclined to report them as security threats. In a free country, when the guys running the campaign are so keen to report you as a statistic and send you to re-education camps, you would be less inclined to cooperate. As a result, when a legitimate security threat does emerge, employees may hesitate to report it promptly, fearing repercussions. These exercises erode the spirit of trust and collaboration: rather than feeling comfortable reporting potential threats without fear of punishment, employees come to see the security team as an authority to be feared rather than a partner.