Threat Detection: Making the Complicated Simple Again

There are certain immutable things in cybersecurity: the volume of threats will only ever grow, and the acceptable time for businesses to be offline will only get shorter. What is clear is that the longer you are breached, the greater the potential for business disruption and commercial impact; ransomware has moved that scale from days or weeks to hours or minutes. 

Against this, cybersecurity will only get more complex: some things just don’t add up. 

Back in the '90s, when I started at Dr Solomon's antivirus, typical endpoint security solutions were basic signature detection: either a file matched, in which case we knew it was threat X or Y and the decision was simple (BLOCK it), or it didn't, and we would ALLOW. 
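To show how simple that era's logic was, here is a minimal sketch of signature-style detection; the hash and threat name are placeholders, not any real engine's database.

```python
import hashlib

# Hypothetical signature database (placeholder hash): hash -> threat name.
# Real engines matched byte patterns, but the decision logic was this simple.
SIGNATURES = {
    "c0ffee00" * 4: "ThreatX",  # placeholder, not a real sample
}

def scan(contents: bytes) -> str:
    """Return the verdict for a file's contents: BLOCK on a match, else ALLOW."""
    digest = hashlib.md5(contents).hexdigest()
    threat = SIGNATURES.get(digest)
    return f"BLOCK ({threat})" if threat else "ALLOW"

print(scan(b"some file contents"))  # ALLOW - no signature matches
```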

Come the mid '90s, adversaries were starting to evade signature detection with simple code obfuscation techniques or more advanced dynamic code modification such as polymorphism and metamorphism. 

In the late '90s, for my degree paper, I coded what was in effect a very early EDR-style tool. Rather than looking for the threat, it looked for the threat's behaviour. I took control of the operating system (DOS, for those of you old enough to remember it) and gave the user the ability to verify whether each operating system request should go ahead or not. 

The positive side of this approach was the ability to detect all threats, including those not captured in the signature file; however, the big downfall was that it required a human decision based on a single piece of technical information. Don’t get me wrong, it was a very basic proof-of-concept tool, but the point I want to highlight is that the concept of detecting abnormalities is not that hard. 
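The concept is easy enough to sketch even now. Below is a toy Python analogue of the idea, purely illustrative: the original hooked DOS interrupts, whereas this just simulates the verify-every-request prompt.

```python
# A toy reconstruction of the idea, not the original DOS tool: intercept
# "operating system requests" and let a human verify each one.
RISKY_OPERATIONS = {"write_boot_sector", "modify_executable", "hook_interrupt"}

def guarded_request(operation: str, target: str) -> bool:
    """Ask the user whether a sensitive OS request should go ahead."""
    if operation not in RISKY_OPERATIONS:
        return True  # benign requests pass straight through
    answer = input(f"Process wants to {operation} on {target}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

# Detecting the abnormality is the easy part; the hard part is that a human
# must make the BLOCK or ALLOW call on a single piece of technical information.
verdict = "ALLOW" if guarded_request("write_boot_sector", "disk0") else "BLOCK"
print(verdict)
```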

The problem comes at the stage where someone has to make the actual decision: “With this knowledge, should I BLOCK or ALLOW?” Talk to any SOC analyst today and they would love to detect and block faster, as the longer the decision takes, the greater the potential cost; yet the cost of getting the judgement wrong can be just as great. The simple fact is that unknown threat detection and response comes down to a human judgement call based on the evidence they can find. 

Over the years, vendors have built more and more cool detection tools and capabilities to improve our ability to gather suspicious artefacts. It makes sense that more evidence means stronger confidence in the identification of the problem. 

However, in our striving to solve the immediate problem, we have skipped over the complexities of the process for solving it, and that has become the bigger problem. Bigger and bigger datasets add more lag and require more correlation between disparate data sets, which in turn requires integrations and, typically, human analysis. This is why SOCs are getting bigger and slower, and why many are now looking for the next iteration of how a SOC should work.

We need to refocus our priorities: how do we use all the data we gather to find complex threats? Finding an indicator, the first breadcrumb of a potential cyber incident, is the easy bit; how do we digitally construct the process that turns it into a complete package of credible evidence we have the confidence to make a BLOCK or ALLOW decision against? 

By the way, that evidence is not just the “what”, the Indicators of Compromise; it's also the “how”, the attackers' operations or Indicators of Behavior that make those “whats” occur. These operations often stay far more constant between attacks, and being able to correlate them across your business is vital to understanding the actual business impact. 
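To make that concrete, here is a small illustration, with entirely made-up indicators, of why the “how” is a better correlation key than the “what”:

```python
# Entirely made-up indicators: the "what" (IOCs) changes between intrusions,
# but the "how" (the chain of attacker operations) often stays constant.
incident_a = {
    "iocs": {"hash": "c0ffee01", "c2_ip": "203.0.113.10"},
    "behaviours": ["phishing_attachment", "credential_dump",
                   "lateral_movement", "mass_encrypt"],
}
incident_b = {
    "iocs": {"hash": "c0ffee99", "c2_ip": "198.51.100.7"},  # all-new indicators
    "behaviours": ["phishing_attachment", "credential_dump",
                   "lateral_movement", "mass_encrypt"],
}

# Zero IOC overlap, yet the operations match exactly:
shared_iocs = set(incident_a["iocs"].values()) & set(incident_b["iocs"].values())
same_operation = incident_a["behaviours"] == incident_b["behaviours"]
print(shared_iocs)     # set()  - nothing to pivot on
print(same_operation)  # True   - the same operation, seen through its behaviour
```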

We have to focus on the process (the data science) and not just the data acquisition. You could argue the order of some of these, and add in many more, but at a simple level the process has to involve:

  • Scrubbing all the data for initial clues and prioritising which deserve analysis
  • Validating the quality of that evidence - how trustworthy is it? Should you progress the evidence further?
  • Cross-referencing a myriad of data sets to find associated clues. All too often this is done only by a human who understands one clue well enough to know what the likely connecting clues are, then trawls through those data sets for a match; the process is often iterated numerous times before that BLOCK or ALLOW decision can be made (a rough sketch follows this list).
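
At a simple level, that loop might be expressed as a pipeline like the sketch below; the field names, scores, and thresholds are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass, field

@dataclass
class Clue:
    source: str          # which tool produced it
    confidence: float    # 0..1 trustworthiness of the evidence
    entities: set = field(default_factory=set)  # hosts, users, hashes it touches

def prioritise(clues: list[Clue], min_confidence: float = 0.5) -> list[Clue]:
    """Steps 1 and 2: scrub and validate - keep trustworthy clues, highest first."""
    return sorted((c for c in clues if c.confidence >= min_confidence),
                  key=lambda c: c.confidence, reverse=True)

def cross_reference(seed: Clue, clues: list[Clue]) -> list[Clue]:
    """Step 3: find associated clues that share an entity with the seed.
    In practice this is iterated until there is enough evidence to decide."""
    return [c for c in clues if c is not seed and seed.entities & c.entities]

clues = [
    Clue("edr", 0.9, {"host-7", "user-amy"}),
    Clue("proxy", 0.6, {"host-7", "203.0.113.10"}),
    Clue("av", 0.2, {"host-3"}),  # dropped at validation: low confidence
]
seed = prioritise(clues)[0]
linked = cross_reference(seed, prioritise(clues))
decision = "BLOCK" if linked else "ALLOW"  # illustrative decision threshold
print(decision)
```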

So what do we really need to do differently tomorrow? Focus on the process (how you apply data science to the data) and not just the collection of data itself. Consider what is critical to the success of your security team. Every day I see teams huddle and focus on new ways of detection; to be honest, I think at this stage we have enough for now. 

What I virtually never see is a focus on usability. This is not about how the user interface looks and feels; it is about the human effort required to take the outputs from each of these tools and correlate them into a scalable, repeatable process that delivers ALLOW or BLOCK decisions at the pace demanded by the business. 

To compound the issue, the scale and scope of what security teams have to protect and monitor continues to grow, while the time they are expected to take to identify a problem continues to shrink as operational resiliency expectations rise.

As I close, a quick true story to reinforce the point: a number of years ago I was involved with an organisation testing behavioural detection capabilities. They did a lot of rigorous testing and eventually came back suggesting all the capabilities were roughly equal in their outcomes. 

What they didn’t say, however, was that some tools produced more than 100x the events of others, while the processes, the data science, were no better. That meant the mean time to achieve the outcome was very different across the tools: some took far longer than others to get to the ALLOW or BLOCK decision. They tested the capability, not the usability. Having more data, by the way, isn’t itself a bad thing, as long as you have the process (the data science) to leverage it at pace and scale.
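The arithmetic behind that difference is worth making concrete; the figures below are purely illustrative and not drawn from that evaluation.

```python
# Illustrative only: identical detection outcomes, very different time-to-decision.
events_tool_a = 500                    # events surfaced per incident (assumed)
events_tool_b = events_tool_a * 100    # the "100x" tool
triage_rate = 60                       # events one analyst can assess per hour (assumed)

print(f"Tool A: {events_tool_a / triage_rate:.1f} analyst-hours to a decision")
print(f"Tool B: {events_tool_b / triage_rate:.1f} analyst-hours to a decision")
# Tool A: 8.3 analyst-hours; Tool B: 833.3 - same "capability", unusable pace.
```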

Consider how much time your business would allow you for an ALLOW or BLOCK decision in the event of a ransomware attack, and then challenge your team to determine if they have the processes (the data science), the capabilities, and the skills to achieve this. 



Cybereason is dedicated to teaming with Defenders to end cyber attacks from endpoints to the enterprise to everywhere - including modern RansomOps attacks. Learn more about ransomware defense here or schedule a demo today to learn how your organisation can benefit from an operation-centric approach to security.

About the Author

Greg Day

Greg Day is a Vice President and Global Field CISO for Cybereason in EMEA. Prior to joining Cybereason, Greg held CSO and CTO positions with Palo Alto Networks, FireEye and Symantec. A respected thought leader and long-time advocate for stronger, more proactive cybersecurity, Greg has helped many law enforcement agencies improve detection of cybercriminal behavior. In addition, he previously taught malware forensics to agencies around the world and has worked in advisory capacities for the Council of Europe on cybercrime and the UK National Crime Agency. He currently serves on the Europol cyber security industry advisory board.