In 1919, a man founded a company to sell dubious investments in international postal reply coupons. The company produced no profit and no tangible product. Despite this, the man sold millions of shares in his company, funneling money from each new investor into payouts for earlier investors.
People fell all over themselves to invest in his ironically named Securities Exchange Company, despite there being no actual securities exchanged.
You know this guy. His name was Charles Ponzi, and his name is now attached to a classic scam, often likened to a pyramid scheme. Ponzi’s great innovation was not scamming people; that is as old as time. Rather, it was enlisting people to sell his fraudulent shares to friends and neighbors. He recruited his friends to sell to their friends, and so on. His innovation was exploiting the trusting nature of people.
These scams are still around today, just more sophisticated. A new Netflix series, Dirty Money, investigates modern scammers such as payday lenders and drug companies. You would think that after 100 years, numerous indictments, and plenty of public outcry, people would know to avoid these scams.
Nope. Ponzi scams are alive and well in 2018. They are extremely successful… for the scammers.
So, what can Ponzi’s scam tell us about cybersecurity?
It Is in Our Nature
A few years back, I was at a conference with a number of CEOs. One of them asked me, “How do we get our users to stop clicking on all this damn malware?” I smiled and said, “Cut off their fingers.” The CEO’s eyes widened in a brief moment of horror that quickly changed to a smile and laughter. Yet as macabre as my answer was, there is a shred of truth to it. You cannot stop people from wanting to connect with other people.
Quite simply, people are gullible. Human beings are hardwired to trust people, particularly friends, family members, and authority figures. This is because human civilization depends on trust.
In his book Annals of Gullibility: Why We Get Duped and How to Avoid It, Stephen Greenspan describes how human communities have evolved and flourished on a foundation of mutual trust. When people trust each other, they can establish a community where each member plays a role. This allows the community to grow because there is an efficient separation of duties. Moreover, trust is how a community comes together to ward off threats.
Trust is the foundation of civilization.
If people cannot trust their leaders or neighbors, then the community collapses. Human history is filled with tales of civilizations crashing when people lost faith in their community and its leaders. Hackers exploit the trusting nature of people. Phishing and social engineering scams are consistently successful not because people are unaware of the threat, but rather because trust is innate.
Success Rates
Lately, I hear a lot of cybersecurity people talk about the need to strengthen the “human firewall.” The idea behind this term is that we must build up the defenses of people to resist clicking on malware or falling for phishing campaigns.
The increased focus on the human firewall is partially a reaction to the big breaches of recent years. In each of them, the real firewalls (as well as other security technologies) failed to stop the attack, and most of these breaches began with a phishing email or social engineering attack.
Our own experience at Anitian has proven this repeatedly. Our average success rate for social engineering tests is well north of 50%. In a recent red team test we performed (blogged about here), it was a social engineering attack that got us in the door.
Hackers use social engineering because it works…consistently. In fact, while I was writing this blog, one of our clients had a big phishing outbreak, which our Sherlock Cloud Security team caught!
Blame the User
When the millions of dollars spent on technology and “kill chain threat intelligence agents” fail to stop an attack, what do you say to a furious Board of Directors or CEO?
“It’s the human firewall!”
This is the reemergence of a bad habit that security professionals routinely fall back on. Rather than accept responsibility for poorly managed systems or an ineffectual security program, they blame people for being too trusting and gullible. They demand “security awareness programs” to make people more suspicious, paranoid, and mistrustful of everybody.
Now that sounds like an episode of Black Mirror (also on Netflix).
Stranger Things
The human firewall is a lie. It is delusional to think “security awareness training” and “anti-phishing” newsletters are going to deprogram millennia of human behavior out of people. People naturally want to trust other people. Trust is healthy.
In his book The Speed of Trust, Stephen M. R. Covey discusses how mistrustful companies and communities are profoundly less productive, capable, and creative. Trust accelerates everything. Trust is also one of the foundations of elite military teams. In his book Leaders Eat Last, Simon Sinek describes in great detail how elite Army and Marine units share a deep bond of trust with each other and with their leaders, and he clearly illustrates how that trust makes these teams so effective.
Mistrust is not healthy. People who mistrust everybody cannot be trusted. As such, everything becomes uncertain and volatile. People cannot function in an atmosphere of persistent uncertainty.
This is why security awareness is ultimately ineffective. It tries to teach people how to mistrust, which is contrary to our need for a secure and healthy environment.
Trust Better
This is not to say security awareness training is useless. To make these efforts more relevant, they must teach people how to trust better. An ideal security awareness program focuses on:
- Validating authenticity: how users can verify people, sources, and information
- Sharing securely: tools for sharing with trusted sources
- Data protection rules: helping users understand which kinds of data need additional protection
Conversely, a security awareness program should not spend any time on:
- Hacking or phishing techniques
- Risk assessment methodologies
- War stories about people getting hacked
- Threatening users in any way
Take Responsibility
If you really want security awareness to work, security professionals and leaders need to take responsibility for building security programs that actually protect the business. This means making all those NGFWs and “kill chain threat agents” do what the vendor promised. Furthermore, it means you must have people managing and monitoring those systems day and night, so when there is an issue, security can step in and resolve it.
In other words, security needs to start doing its job and stop blaming the users.
Security is Essential for Growth, Prosperity, and Innovation
When people feel secure, they trust more. Trust feeds innovation, creativity, and prosperity. This is why we have police departments, security guards, and airport checkpoints. They may not stop all crime, but they provide a layer of protection for our communities.
Information security professionals must do the same. If your security program cannot handle the fundamentals, such as vulnerability management, strict network access controls, and encryption, then you have no business lecturing end users on how they need to be less trusting. Furthermore, no amount of “human firewall” training is going to compensate for weak security controls.
Scams work because we are human, not because we are weak or stupid. It is simply human nature to want to trust other people. Nothing we do as security professionals is going to change this. We need to accept, for better or worse, that the human firewall is a lie.