Seems Legit: How Phishing Scams Hijack Our Systems of Trust
If you are online at all (and if you’re reading this, you obviously are), you are almost certainly savvy enough to identify an email from the proverbial Nigerian prince as a scam. But what about an email from someone whose name sounds vaguely familiar, inviting you to open a document or click a link? What if it’s a text from your boss? Many of us are swimming in treacherous online waters every day, and we’re more liable to get hooked by malicious phishing scams than we may think.
Research by Darden Professors Sean Martin and Bobby Parmar, along with their University of Michigan colleague Julia J. Lee, explores the role trust plays in people’s vulnerability to phishing schemes. The study suggests that the instincts and signals people use, consciously or unconsciously, to establish trust in the offline world are the very same ones that get us into trouble online.
The vast majority of people are terrible at judging the safety of online messages, and phishing scams exploit this vulnerability, with effects ranging from annoying to disastrous. Learning to spot phishing attempts isn’t easy, because our vulnerability isn’t due to ignorance or negligence, but rather to the very nature of how human beings make judgment calls when it comes to trust. But understanding why we’re at risk is the first step.
Plenty of Phish in the Sea
Phishing is a type of social engineering designed to trick the victim into acting on the hacker’s behalf by sharing personal information, transferring money or clicking a link or attachment that can install malicious software on the user’s device. Phishing attacks usually arrive via email or other messaging tools.
These attacks are becoming increasingly sophisticated and difficult to spot. Hundreds of thousands of people in the U.S. alone were hooked by phishing scams in 2021, costing individuals and businesses millions of dollars and compromising millions of people’s personal data.
In order to “hook” victims, phishing schemes are designed to exploit the unique vulnerabilities of human users of technology, especially by hijacking our instinctive systems for determining trustworthiness.
The Other Social Distance
In “Social Distance, Trust and Getting ‘Hooked’: A Phishing Expedition,” which appeared in Organizational Behavior and Human Decision Processes, the researchers tested how vulnerable employees in a workplace are to phishing attempts based on markers of social distance. Social distance in this context refers to psychological signals of similarity to, or difference from, oneself. It is generally accepted that, for better and for worse, if you perceive another person as similar to yourself, you will instinctively trust that person more.
The research suggests that people transfer this tendency to online interactions, even though we cannot verify who the person on the other end of the communication is, as we can in “real” life.
In a randomized field experiment, employees at a Midwestern insurance firm that handles sensitive client data each received one of three fake phishing emails conveying varying degrees of social distance or closeness. All three featured the same subject line, content and attached document. The first email had no identifying information at all. The second, “socially distant” email appeared to be from a fictional person named Jooa Xi Lee (so named because Asian Americans are underrepresented in the region in question), whose signature suggested they worked in a related field. The third, “socially close” email came from the fictional Jacob Anderson, who worked in the same industry. These variations represented categories that people tend to use unconsciously to determine whether someone they don’t know is trustworthy: interpersonal similarity, occupational similarity and locational similarity.
Perhaps it is not so surprising that the socially close phishing email hooked more employees in this firm — yet the high success rate of all three raises concerns. The control email, with no identifying elements at all, still hooked 18 percent of those who received it. The “socially distant” email from Jooa Xi Lee hooked 27 percent of recipients, and the “socially close” email from the fictional Jacob Anderson hooked a whopping 39 percent of recipients.
This is bad news for cybersecurity.
Martin, Parmar and Lee’s study didn’t test phishing attempts that look like they come from someone the user actually knows, a slightly more sophisticated deception that is also common. The research explored only how readily people trust messages from strangers who seem similar to themselves.
Digital Chainsaws
We know that the human element is the biggest vulnerability when it comes to cybersecurity, and we know that people are very bad at determining trustworthiness online. So what do we know about how to counter these risks?
Awareness of the chinks in our armor, education on how to behave online, and contingency plans for when attacks do succeed are all necessary for changing our behavior online and maintaining security. Not to mention a healthy dose of good, old-fashioned skepticism.
“We actually need a different standard for behavior online,” says Martin. “You should probably be suspicious of just about everything that’s sent to you …. We can have awareness and try to change our relationship to electronic communication.”
Martin says it may help to think of the internet as a useful but dangerous tool. “I view it a lot like I view my chainsaw,” he says. “It’s an incredibly useful tool. And if I take my eyes off it while I’m using it, really terrible things are going to happen.”
Sean R. Martin and Bidhan L. Parmar co-authored “Social Distance, Trust and Getting ‘Hooked’: A Phishing Expedition,” which appeared in Organizational Behavior and Human Decision Processes, with Julia J. Lee of the University of Michigan.