Claire Tills

Scared of flying but no breach response plan? Risk perception and information security

It’s commonly known that we (humans) are bad at evaluating risk. We’re consumed by worry about risks that are unlikely to affect us and disregard risks that are far more likely. People are afraid of flying and worried about terrorist attacks but don’t think twice about getting in their cars or putting off annual check-ups. There’s a lot of research and writing on why humans are such poor risk assessors.

Two things that drive our poor risk perception are: 1) being bad with percentages and 2) being bad with ambiguous language. A Psychology Today article by Eric Horowitz gives a nice overview of both of these limitations. To paraphrase one example from that article: a 30% chance of rain in Seattle is perceived differently from a 30% chance of rain in Phoenix. Even though the actual percentages are identical, people interpret them differently based on their own expectations. When I talked to a friend about this, they said they would be more likely to bring an umbrella in Phoenix because, in a place that doesn’t get a lot of rain, 30% feels higher than it does in a place that does (the opposite of what the study found). It doesn’t matter that 30% is 30%. Weather reports aren’t the best example, but the pattern holds with other percentages.

In addition to math troubles, language can be a stumbling block. Terms such as “unlikely,” “improbable,” or “rare” don’t mean the same thing to everyone. They don’t even mean the same thing to one person across situations. Thankfully, health communication has put a lot of effort into techniques for influencing risk perception and motivating action to alleviate threats or mitigate damage. Through empirical research, Vincent Covello and his colleagues identified several factors that influence risk perception, among them voluntariness, personal control, fairness, perceived benefit, trust, and the reversibility of harm.

These factors are what people evaluate to determine whether the risk of a particular behavior is acceptable, whether it’s worth it. We make choices throughout the day that involve risk, some bigger than others. We’re more comfortable with that risk if we enter into it voluntarily, if we have control over the situation, and if we think any adverse effects can be reversed. On the flip side:

“Levels of concern tend to be most intense when the risk is perceived as involuntary, inequitable, not beneficial, not under one’s personal control, associated with untrustworthy individuals or organizations, and associated with dreaded adverse, irreversible outcomes.” (Covello et al., 2001, p.384).

Going from research to real life

The first step in motivating protective or preventative action is getting the target audience to actually accept that they are at risk. The research on health communication and disaster preparedness (think FEMA) mostly focuses on large-scale campaigns. Working on a smaller scale, person to person, can be easier and can draw on the same principles.

Showing irreversible damage, catastrophic potential

For information security concerns, large-scale campaigns are certainly applicable, but most of the anecdotes I’ve seen play out on a smaller, more personal scale: internal teams or consultants trying to convince executives, or some other individual or small group, to take protective action.

While these smaller-scale efforts can be easier because appeals can be more tailored, the specific quirks of information security can complicate things. Two surveys published in June 2017 indicate that companies aren’t internalizing infosec risks.

Help Net Security reported on Deloitte’s finding that three quarters of executives are confident in their ability to respond to infosec incidents, while most companies still have no response plan. Another survey, by Guidance Software (reported on by Kimberly Crawley for InfoSecurity), found that fewer than half of respondents think they’ll need to respond to an incident in the next year, even though more than half said they had responded to one within the previous year. These results show a cognitive dissonance that isn’t unique to infosec but is a major factor in risk communication: we like to think that risks happen to other people and that we handle crises better than others do.

Using communication strategically can help move individuals past these mental barriers. You are trying to counter those misconceptions: your organization is, in fact, likely to face an infosec crisis, it needs to be prepared, and it isn’t magically more resilient than anyone else’s. One way to do this is to highlight similarities between your audience’s organization and ones that have experienced a recent crisis, to bring their vulnerability home. Is the organization running a similar configuration, does it have similar policies, or is it the type of organization being targeted? Refer back to the qualities in the risk perception factors table. If they are taking unnecessary risk by operating as they are, stress the lack of benefit to the risky behavior; stress that the potential damage makes any benefit worthless.

Discussions about security shouldn’t wait until a crisis has broken. That means building risk awareness in advance, and the support for that has to come from above. On the technical side, risk assessment and awareness are complex, and before a technical risk assessment can be done, it needs funding. The sort of risk assessment I’m talking about here can be more qualitative: anecdotal data used as a jumping-off point for more intensive, technical audits. According to CCSI, the complexity of infosec is one of the main reasons small businesses (and probably businesses of all sizes) don’t pay enough attention to security. To get buy-in, simplify infosec and pull back to more basic appeals and language.

Next post, I'm going to dive a lot more into how to put together these appeals. I'll discuss efficacy in more detail and explore the important considerations for making infosec recommendations.


Boss, S. R., Galletta, D. F., Lowry, P. B., Moody, G. D., & Polak, P. (2015). What do users have to fear? Using fear appeals to engender threats and fear that motivate protective security behaviors. MIS Quarterly, 39(4), 837-864.

Covello, V. T., Peters, R. G., Wojtecki, J. G., & Hyde, R. L. (2001). Risk communication, the West Nile virus epidemic, and bioterrorism. Journal of Urban Health, 78(2), 382-391.

Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: An empirical study. MIS Quarterly, 34(3), 549-566.
