Doing information security, and doing it well, is a tough task, and not only because of the variety and complexity of the technologies we rely on. Some of the largest challenges stem from technology's inherent inability to make judgments on its own. The problems our profession will face in the future are complicated and multifaceted, and cannot be solved simply by applying a complex mathematical model.
While colleagues whom I greatly admire hold opposing views, to me the issues facing public and private cybersecurity teams run much deeper than developing more and better automation to create self-healing networks that can detect and mitigate attacks on their own.
The Fallacy of the Computation-Only Approach
Computers, networks and software are morally neutral tools, incapable of perceiving right and wrong without human input.
Even with machine-learning capabilities, there is no algorithm for understanding the user’s consciousness or intent. Since all the actions of computers are driven by human intentions, the actions of code or computers, whether they’re malicious, benign or somewhere in between, are human constructs that can only be fully understood by humans.
This is why I don't agree that information security issues can be handled by technology alone. Human behavior is inherently subjective, and a machine by itself cannot judge human intentions - it lacks the tools to determine right and wrong. Only an approach that combines a machine's computational power with a human's ability to interpret what machine and human did together can solve security problems. Machines are far faster than humans at parsing and making sense of the large volumes of data organizations collect; armed with that information, humans can determine the best course of action.
When programmers write software, they take an abstract idea and translate it into machine actions. In this process, all of the subjective information about the original idea is lost, and what remains are the commands needed for the machine to accomplish the given task. This is the inherent problem with technology - you cannot use it exclusively to understand subjective context, such as the programmer's original intent. However, we can use technology to translate a machine's actions into a language that helps a human being figure out intent.
The Solution: Decomputation - A Machine Translator
In order to understand intent, we need to collect the observed technical activities, put them in context and represent them in a way that helps humans recognize the motive behind the actions. In other words, we need a way to "decomputize" observed technical actions, and use technology to translate events in a way that allows security teams to determine intent.
Here is an example: suppose we observe an application that continuously records audio from a laptop microphone and sends it to a remote server. Is the software being used maliciously, or did the user configure it to record his own activities? We need contextual information to answer that question. If we can determine that the user launched the software and continued to control it, we can reasonably classify the activity as benign. If, however, the program started recording on its own, without user control, we can assume the action is malicious. Even then, ambiguity remains - the user may have configured the recorder to run silently, which is why a human must make the final judgment.
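The triage described above can be sketched in code. This is a minimal, hypothetical illustration - the event fields and the heuristic rules are assumptions for the example, not any real product's schema - showing how observed technical facts might be "decomputized" into a human-readable narrative plus a provisional assessment that an analyst then confirms or overrides:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObservedEvent:
    """A low-level technical observation (fields are hypothetical)."""
    process: str              # e.g. "audio_rec.exe"
    action: str               # e.g. "record_audio"
    started_by_user: bool     # was the process launched interactively?
    user_in_foreground: bool  # did the user keep interacting with it?
    destination: Optional[str]  # remote endpoint the data was sent to, if any

def decomputize(event: ObservedEvent) -> Tuple[str, str]:
    """Translate a machine-level event into (narrative, assessment).

    The assessment is deliberately provisional: the code surfaces context,
    and a human analyst makes the final intent judgment.
    """
    narrative = f"Process '{event.process}' performed '{event.action}'"
    if event.destination:
        narrative += f" and sent data to {event.destination}"

    if event.started_by_user and event.user_in_foreground:
        assessment = "likely benign: launched and controlled by the user"
    elif event.started_by_user:
        assessment = "ambiguous: user-launched but running unattended"
    else:
        assessment = "suspicious: started without user involvement"
    return narrative, assessment

# Usage: the microphone scenario from the text.
story, verdict = decomputize(
    ObservedEvent("audio_rec.exe", "record_audio",
                  started_by_user=False, user_in_foreground=False,
                  destination="203.0.113.7"))
print(story)    # human-readable account of what the machine did
print(verdict)  # provisional flag for the analyst to review
```

Note that the "ambiguous" branch exists precisely because context can be inconclusive (a user-configured silent recorder looks like malware to the machine), which is the article's point: the code narrows the question, the human answers it.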
The ability to decomputize machine behavior and reveal human intention is key to making smart security decisions. While we are still in desperate need of more and better automation, IT security is not a discipline that can or should be completely automated.