Attribution is a point of linguistic overlap between crisis communication theory and information security. Both fields rely on attribution, and while the two uses are similar at the conceptual level (who is responsible?), they differ significantly in practice. The different ways terms like responsibility and attribution are used complicate responses to infosec crises and make productive conversation more difficult. I want to tease out those differences over the next couple of posts. First, I'll present a few cases that highlight how the two types of attribution can interact during an incident.
Attribution as a communication concept is more about perception and public outrage. In a previous post, I introduced Situational Crisis Communication Theory (SCCT), a model for crisis response. SCCT is based on how much responsibility the public attributes to an organization. It doesn't matter whether public perception is accurate; the reputational threat is there either way. Accuracy matters far more in technical attribution, and technical attribution can influence public perception, if communicated effectively.
Case One: SPE
I touched on attribution briefly in my Sony case study, which is a good example of where technical attribution and attribution as a communication concept overlap. Moving forward, I'll say technical attribution and use responsibility or blame for the communication concept, just to keep things simpler. The Sony case was a perfect storm when it came to attribution: both technical attribution and responsibility were all over the place.
In reading media coverage, public statements from involved parties, and the employee lawsuit, I found that only certain aspects of the technical attribution were considered in statements regarding responsibility. The focus was on what Sony didn't do to secure employees' PII rather than on the identity of the perpetrators.
"Cybercriminals were able to perpetrate a breach of this depth and scope because SPE failed to maintain reasonable and adequate security measures to protect the employees’ information from access and disclosure."
"[Sony] failed at numerous opportunities to prevent, detect, end, or limit the scope of the breach."
People preferred to focus on Sony's responsibility because of how outraged they were over Sony's handling of the situation. (It may also have been because perceived experts disagreed about the technical attribution.) Naturally, Sony didn't want this responsibility placed on it and lashed out at those accusing it of wrongdoing.
When it came to technical attribution, experts questioned the FBI's finding that North Korea was responsible for the hack. While media coverage did mention the connection to North Korea, the focus was on Sony's (mis)handling of the case - responsibility over technical attribution.
Case Two: WannaCry
Perhaps because this case came a few years later and the public was more familiar with breaches, or perhaps because it was a ransomware case, technical attribution was closer to the forefront here. Yes, there were conversations about what organizations infected with WannaCry could or should have done to avoid it, but the narrative was much more balanced. It was more consistent in placing responsibility, the blame, on those who launched the ransomware. That caused some snags as well, however.
Patrick Gray is a big name in many circles, but these tweets probably weren't seen by the general public. Fortune, however, is a pretty popular news source.
This piece by Jeff John Roberts lists: Microsoft, infected organizations, the NSA, cybersecurity companies, and finally "the Bad Guys." Other popular sources, like the Los Angeles Times and The New York Times, covered the NSA connection and discussed Microsoft's responsibility.
The technical attribution raised questions of responsibility: who is to blame for this sort of incident? With whom should the public be outraged? The entity that developed or discovered the exploit, the one that released it to the public, the one that deployed the ransomware, the one that failed to patch its systems, or the technology that failed to detect the ransomware? To more technically minded folks, the answer might be simpler than it is for the rest of us. The rest of us are likely to follow our personal biases when selecting a target of outrage, or to follow the news media's lead.
The tangling of technical attribution and public outrage was also a factor in the DNC breach. Public sentiment, not technical attribution, was the driving force behind where the outrage was focused. Questions like who ordered the hack, whose behavior made it possible, who benefited from it, who obstructed the investigation, and whether any of those things actually happened dominated the public discourse. Additionally, the content of the leaked information created outrage that distracted from the technical attribution.
Even if an organization does everything correctly from a technical standpoint, the public may still hold it responsible. While that extreme is unlikely, less extreme versions of it happen all the time. If an organization has a bad reputation (particularly for customer service) or a history of infosec crises, public outrage is more likely to focus on that organization regardless of actual responsibility. When the public discourse turns against you, the results of technical attribution may not be enough to protect you.
I bring up these cases, where technical attribution, responsibility, and blame all landed differently or tangled together, because they illustrate how the same concept, attribution of responsibility, can create confusion when used by different groups with different intent. The ways different groups place responsibility can increase the distance between them and make working together more difficult; separated by a common language, and all that. In the cases I've shown, the technical attribution is often ignored or cherry-picked. How can we refocus the public discourse on the results of technical attribution? Is that always the best option? Increasing public appreciation for technical attribution may benefit preparedness by making the public more aware of how these incidents happen. It may also reduce misplaced public outrage. Both are good things, but could there also be negative repercussions?