
Attack attribution: It's complicated

Post by: Sarah Maloney

Attack attribution has traditionally relied on connecting technical indicators found in the victim's environment to known threat actors. However, recent research suggests that anti-forensics techniques, such as overwriting data and metadata, are now commonly used, making attribution far from simple.

Over the past few years, the evolution of commodity malware to include polymorphic designs, excessive packing, dynamic domains and C2 infrastructure, and other techniques has eroded the key tenets of how most of the cybersecurity community conducts threat analysis and attribution. In addition, it was recently confirmed that threat actors study and mimic the indicators of other hacking groups to create misattribution, rendering these indicators almost completely obsolete.

How the security community degraded its capabilities

Cybersecurity in the early to mid-2000s turned into a game between threat actors and defenders in which the only real-world effect was the harm to the victim. Although the defenders lost the fight in most cases, every engagement taught the attackers something. Every detection, action, and success by a network defender informed the attacker about how his operation was exposed. This learning process took a large leap forward in the 2010s, when security companies started being much more open about how they tracked, caught, and eradicated intrusions into client networks. These reports were akin to handing an opposing football team your defensive game plan a month before the game. Nevertheless, in a race to demonstrate prowess and increase street cred, the community has only increased the pace of this reporting. The primary methods used to detect and attribute threat actors are so often written about and so well understood that even low-level actors are creating "advanced" anti-detection techniques.

Death of IOCs

Since the early 2010s, threat actors have slowly been reducing the value of indicators of compromise (IOCs). We've seen malware evolve from using a static, hard-coded IP address for command and control to dynamically leveraging social media applications to send command-and-control messages within legitimate traffic streams. Host-based artifacts are also being rapidly phased out: the increased use of polymorphic malware and fileless intrusion methods makes file hashing and whitelisting increasingly meaningless techniques.
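The weakness of hash-based IOCs is easy to demonstrate. The sketch below uses a stand-in byte string and a trivial one-byte XOR "repack" (both hypothetical simplifications; real polymorphic engines re-encrypt and restructure far more aggressively) to show that any change to the file body invalidates a hash-based indicator while leaving the runtime behavior intact:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

# A stand-in "payload"; real polymorphic malware re-encrypts or
# re-packs its body on each infection, changing every byte.
payload = b"\x4d\x5a\x90\x00" + b"malicious logic " * 16

# Simulate a trivial polymorphic mutation: XOR the body with a fresh
# one-byte key, as a simple packer stub might on each new build.
mutated = bytes(b ^ 0x5A for b in payload)

# The functionality is unchanged (a stub would undo the XOR at
# runtime), but the hash-based IOC no longer matches.
print(sha256_hex(payload) == sha256_hex(mutated))  # False
```

Because a single mutated byte produces an entirely different digest, a defender who blocks on yesterday's hash detects nothing today.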

We've seen advanced actors using commodity malware to blend in with standard SOC noise, making advanced attacks look like garden-variety malware infections. In fact, the floor for readily available tools and capabilities has risen so much that unique, advanced capabilities are often more of a hindrance than an aid in operations. Defenders now face a dilemma: either their IOCs are so perishable that they are useless before they are fully discovered in an incident response, or they are so generic that they are meaningless for further research and attribution.

False Flags: No Longer the Realm of Covert Operations

Cyberspace doesn't permit positive control of cyber tools. Once a tool is used, it is discoverable and can be repurposed. The use of a nation's tactics, techniques, and procedures (TTPs) by "imposters" has generally served to increase tensions between the victim state and the alleged attacker. The efficacy of these attempts has largely depended on 1) how well the false-flag operator understands the tools they are using and 2) how well they understand the country they are trying to incite into action. Recently, researchers discovered an entire program designed not to assign blame to another country, but rather to reduce the risk of attribution of an intelligence operation.

This changes the dynamic significantly, since rather than just avoiding getting caught, the new goal is to mimic as closely as possible the indicators, infrastructure, and tooling of another adversary. Any unit’s ability to do this is predicated upon their ability to collect intelligence about threat actors who they are trying to mimic. From a technical standpoint, gathering the requisite information is difficult but achievable for most advanced actors. This makes attribution more like guessing the identities of people at a masquerade ball than actual science.

Build it better, stronger, faster

Overzealousness on the part of the cybersecurity community combined with some very intelligent threat actors has regressed cyber attribution to the level of the early 2000s. Any actor has more than enough tools at their disposal to avoid being attributed based on current industry standard methods. Unfortunately, there is no technology-based answer to regaining the ground the community has lost.

Faster, more precise technical analysis still does not get around the fact that in most advanced intrusions, TTPs and infrastructure often perish before they are even discovered on a network. Instead, the industry needs to experience a sea change in how we analyze intrusions and threat actor groups.

To regain attribution, we need to focus on the actors, not technical information. Behaviors, rather than IOCs, will produce higher-fidelity results. Most importantly, as we develop a new, more robust methodology, we need to be significantly more careful about how we share information on the specific techniques we're using. In the cat-and-mouse game of cyber, every piece of information is valuable. Freely divulging how we orchestrate defenses and attribution is tantamount to telling the adversary how to defeat us. Our community must take an evolutionary leap and create communities dedicated to safeguarding this information while disrupting malicious cyber actors if we hope to become peers with the adversary again.
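To make the behaviors-over-IOCs idea concrete, the following is a minimal, hypothetical sketch of behavioral matching: instead of checking hashes or domains, it looks for an ordered chain of actions (here, an Office process spawning a shell that then reaches out to the network). The process names, event schema, and rule format are all illustrative assumptions, not any vendor's real detection engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """One observed endpoint event: which process did what to what."""
    process: str
    action: str
    target: str

# A hypothetical behavioral signature: a Word process spawns a shell,
# and that shell then makes a network connection. No hashes, IPs, or
# domains are involved, so repacking the malware or rotating
# infrastructure does not evade the rule.
SUSPICIOUS_CHAIN = [
    ("winword.exe", "spawn", "powershell.exe"),
    ("powershell.exe", "network", "*"),
]

def matches_chain(events, chain):
    """Return True if the event stream contains the chain in order."""
    i = 0
    for ev in events:
        proc, action, target = chain[i]
        if (ev.process == proc and ev.action == action
                and (target == "*" or ev.target == target)):
            i += 1
            if i == len(chain):
                return True
    return False

observed = [
    Event("explorer.exe", "spawn", "winword.exe"),
    Event("winword.exe", "spawn", "powershell.exe"),
    Event("powershell.exe", "network", "203.0.113.7"),
]

print(matches_chain(observed, SUSPICIOUS_CHAIN))  # True
```

The rule keeps matching even when every file hash and C2 address changes between intrusions, which is exactly the resilience the hash- and infrastructure-based IOCs discussed above lack.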
