January 29, 2020
Experienced senior security executive with a demonstrated history of working in the computer and network security industry, with product, engineering, and security experience. Extensive publications and patents, and a big-company and entrepreneurial track record. Multiple awards from industry, the public sector, and academic institutions. Personal mission: to fulfill the obligation of security to the world.
40+ years in industrial instrumentation, controls, and automation. 20+ years in cybersecurity of industrial control systems. Author of Protecting Industrial Control Systems from Electronic Threats (ISBN 978-1-60650-197-1), the cybersecurity chapter of Electric Power Substations Engineering, and the cybersecurity chapter of Securing Water and Wastewater Systems.
Born in Israel in 1975, Malicious Life podcast host Ran Levi studied electrical engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, he created the popular Israeli podcast Making History. He is the author of three books (all in Hebrew): Perpetuum Mobile, about the history of perpetual motion machines; The Little University of Science, a book about all of science (well, the important bits, anyway) in bite-sized chunks; and Battle of Minds, about the history of computer malware.
Malicious Life by Cybereason exposes the human and financial powers operating under the surface that make cybercrime what it is today. Malicious Life explores the people and the stories behind the cybersecurity industry and its evolution. Host Ran Levi interviews hackers and industry experts, discussing the hacking culture of the 1970s and 80s, the subsequent rise of viruses in the 1990s and today’s advanced cyber threats.
Malicious Life theme music: ‘Circuits’ by TKMusic, licensed under a Creative Commons license. The Malicious Life podcast is sponsored and produced by Cybereason.
Picture a truly depressing building. It’s ten stories high, built of cement, and grayer than the sky is blue. The pretty, wooded areas on either side of the parking lots only serve to enhance just how ugly the building truly is with its water damage, dirty radiators sticking out from the windows, and an extended entryway that looks like it was designed during whatever the worst era of modern art was. And you can see where the mortar connects each face of the building with the other faces–as if the whole thing were glued together like the four slices of a gingerbread house.
Though it is no architectural masterpiece, this actual building I’m describing to you has housed many talented people doing significant work over the years. The organization that calls it home was founded by one of history’s most famous scientists. The chemists, engineers and technicians who’ve passed through its halls made notable progress in weapons and ammunitions technology throughout the 20th and into the 21st century, when a group of men who met behind those ugly, concrete walls managed to create the most dangerous malware in existence today.
PART 1 RECAP
We left part one of this two-part episode with our hackers having breached the Petro Rabigh petrochemical plant, and uploaded custom malware we now know as “Triton” onto its most sensitive safety systems. But the pathway that allowed them to do that was fraught with oversights–some obvious, some less so.
Those of us from IT are used to using effective, even state-of-the-art tools, on computers that are, if not new, probably only a few years old. We can upgrade our tech often because it’s not prohibitively expensive, and it’s mobile. Replacing a laptop, for example, is as easy as buying a new one and dropping your old one in the trash.
You can imagine how difficult it is to do this when your machines are 10,000 pounds, and connected to a system of dozens, even hundreds of pipes, cables and other 10,000 pound machines, all running 24/7 all year – because critical infrastructure isn’t a 9-to-5 business.
Industrial systems can live for decades, because of how difficult they are to replace, and how necessary they are to the functioning of the machines connected to them. For example, a Schneider Safety Instrumented System – SIS – is built to stay on for years and years and years, nonstop. There are SIS’s that have been running longer than your children, maybe you yourself, have been alive.
Because of the costs to remove and replace, much of the industrial tech we’re using in 2020 is leftover from before the millennium. It’s often difficult to apply modern security practices to these old hunks of metal, or industry professionals are so used to working one way that it’s tough to convince them to change. Industrial engineers in the 80s didn’t have to think about whether their machines were connected to the internet!
At Petro Rabigh, the first point of vulnerability was how each layer of the plant connected, digitally, to the others, allowing for a distinct attack path.
[Joe Weiss] I’m Joe Weiss. I’m the Managing Partner of Applied Control Solutions, and I’m also the Managing Director of ISA99, which is automation and control system cybersecurity. That’s kind of my volunteer effort, along with work in other standards organizations. I’m considered to be an expert on control systems and control system cybersecurity.
Joe Weiss isn’t just considered an expert in control systems security, but a legend. Not my words. For his speech at 2019’s S4 conference–one of the industry’s biggest annual gatherings–he was introduced as a, quote, “legend.” Nate Nelson, our senior producer, talked with Joe.
[Joe Weiss] Part of the background of the control systems is they started out being very flat systems with no segmentation. So if you got into a control system, you basically could traverse the entire control system network. One of the big issues that ISA99 came up with was the need to segment that very flat network and be able to have, if you will, some networks to try to prevent, if you will, malware from jumping from point A to point B. You already theoretically had a DMZ, a demilitarized zone, or a proxy between the IT networks and the control system networks. So this was the intent to extend that segmentation into the control system network.
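Joe’s point about flat versus segmented networks can be sketched in a few lines of Python. This is a toy illustration, not any real plant’s topology: the zone names and conduit list are hypothetical, loosely echoing the Purdue-style layering his quote alludes to.

```python
# Hypothetical network zones, top (IT) to bottom (safety).
ZONES = ["it_network", "dmz", "control_network", "safety_network"]

# In a flat network, every zone can reach every other zone directly.
FLAT = {(a, b) for a in ZONES for b in ZONES if a != b}

# In a segmented network, traffic must cross each boundary in turn.
# Conduits here are one-directional, for simplicity.
SEGMENTED = {
    ("it_network", "dmz"),
    ("dmz", "control_network"),
    ("control_network", "safety_network"),
}

def hops(src, dst, conduits):
    """Minimum number of conduit crossings from src to dst
    (breadth-first search), or None if dst is unreachable."""
    frontier, seen, crossings = {src}, {src}, 0
    while frontier:
        if dst in frontier:
            return crossings
        crossings += 1
        nxt = {b for (a, b) in conduits if a in frontier} - seen
        seen |= nxt
        frontier = nxt
    return None
```

In the flat case, malware on the IT network is a single hop from the safety systems. In the segmented case it must cross three boundaries, each one a chokepoint where a firewall or DMZ can inspect or block it.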
Much of the industrial equipment in operation today was designed before something like Triton was even conceivable. It’s why these systems don’t necessarily have effective, built-in separation between network layers, or separation between the machines built for control and those built for safety.
[Joe Weiss] Today, you are allowed to mix what’s called basic process control and safety. So the question you ask is – or another way of saying it, “Are people learning the lessons from Triton?” which says do not mix basic process control with process safety because once you do that, you effectively no longer have process safety. All you have really left is control.
What Triton was demonstrating, you do that, you’ve lost safety. There are some real, real, real important lessons that are coming out of Triton and part of the question is, “Are people listening?”
Petro Rabigh was facing lots of problems in defending its systems. But it did get lucky in one sense: its hackers were unprepared when their plan went awry.
Last episode, I told you how the Triton hackers breached Rabigh safety systems, causing the entire plant to trip twice. Just as important as what they did, however, was the order in which they did it.
To cause any measurable impact on an industrial site, you have to hack the distributed control systems which control all the plant’s machinery. Controlling an engineering workstation at a plant is like gaining administrator privileges over a laptop–it gives you equal authority to the handlers of that machine, to do with it whatever you please.
The Petro Rabigh hackers–who we’re going to refer to as “TEMP.Veles,” the name given to them by investigators at FireEye–focused more on the SIS, safety instrumented system, layer of the plant. The layer where safety systems lie, whose only goal is to prevent the kinds of disasters that could kill people.
It is this that allows us to reasonably infer a motive: that TEMP.Veles weren’t simply trying to cause a ruckus. They didn’t intend only to take over the control systems, or only cause a disruption in the plant’s operation. They intended to initiate a process that would put lives at risk. Of what kind and at what scale we do not know.
Luckily they failed.
On August 4th of 2017, multiple SIS controllers entered a fail-safe state after an internal validation test carried out by three redundant processing modules failed. Essentially, three separate input-output modules, which were supposed to have outputted the same results, did not, because they had been tampered with. An automatic shutdown was triggered, and plant operators were alerted.
[Joe Weiss] Triconex is a triple redundant system. Like you mentioned three separate independent processors or controllers, if you will, comprise Triconex. That’s why the “tri” for Triconex. OK? What that was there for was reliability. This made it a very, very, very highly reliable system, which is precisely what you want if you’re dealing with safety.
What got lost is reliability and security are not the same thing. So the fact that it was highly reliable in a sense helped mitigate the lack of security issues because it tripped the plant. It was fail-safe and tripped.
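The validation step that tripped the plant can be pictured with a tiny sketch. To be clear, this is not Schneider’s actual Triconex logic, just an illustration of the principle: redundant modules compute the same result, and any disagreement is treated as a fault and resolved in the safe direction.

```python
RUN, TRIP = "run", "trip"

def validate(outputs):
    """Compare the results of three redundant modules. They should be
    identical; any mismatch suggests a module is faulty - or, as at
    Petro Rabigh, tampered with - so the controller fails safe."""
    assert len(outputs) == 3
    if len(set(outputs)) == 1:
        return RUN
    return TRIP
```

Real Triconex hardware is more forgiving than this strict comparison: it votes two-out-of-three, so a single failed module raises a diagnostic rather than forcing an immediate shutdown. The point stands either way; when the redundant answers diverge, the system resolves the doubt by tripping, not by carrying on.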
According to postmortem analyses, the shutdown appeared to have been an accident. TEMP.Veles probably meant to stay undercover, but they erred, and alerted their victims to their presence in the process.
Having been spotted, the first thing they tried to do was cover up their tracks by rewriting the code left on the machines. This was, after all, highly secretive, highly valuable code. If analysts got a hold of it, those analysts would be able to decipher and build defenses against it.
The attackers were too late. The analysts on site had already made copies of the Triton files.
This left TEMP.Veles with just one more card left to play. They could try to, you know, blow everything up.
Think about it. They were caught, and the highly valuable malware they’d probably spent months or years writing would never be more valuable than this very moment, when it still couldn’t be stopped. They remained connected to the engineering workstations from which they could launch an attack. Even if some of the plant safety systems remained operational, some did not. Maybe that was enough to cause some damage.
But TEMP.Veles weren’t prepared. There’s no evidence that they actually had a malicious OT payload to deliver, even if they wanted to.
Usually we assume hackers will write malware before attempting to use it. The Triton hackers worked backwards. They breached Petro Rabigh, studied its systems, and built Triton to disable the safety layer. Only after that was all done were they, presumably, going to inject the kind of software that could’ve allowed them to manipulate engineering workstations and, as an extension, sensitive operational equipment. We can only speculate as to why that software was never seen. Maybe they figured it was useless, if the safety systems were still operating. Maybe they hadn’t gotten around to writing it yet.
This gave a window of opportunity to the analysts on the case–who, by this point, included personnel from Saudi Aramco, the U.S. government, and security firms from across the industry. While TEMP.Veles scrambled, these experts were doing their best to kick them out of the system. They began by changing passwords and implementing two-factor authentication on all user accounts in the network.
It might have seemed like a good idea at the time–after all, 2FA and changing passwords are good security practices. But it failed. The hackers already had a foothold in the plant’s IT network, and made easy work of getting around the new security measures. They changed the phone numbers associated with certain accounts, so that every time a plant operator attempting to log into a compromised account requested a 2FA code, the code was routed through a website the hackers controlled.
IT FIXES TO ICS PROBLEMS
The website E&E News described the feeling at that moment after the security response failed. Quote: “Petro Rabigh was living out any large organization’s cyber nightmare: It was squaring off against a highly sophisticated adversary, or perhaps multiple adversaries, that had demonstrated deep knowledge of their target’s systems and the ability to shift tactics on a dime.”
While these hackers were sophisticated, it wasn’t just their talent and skill that allowed them to subvert Petro Rabigh’s security response. Nor was it because the security response was poorly implemented–it wasn’t. The reason why adding two-factor authentication and changing those account passwords failed might be because it was the right solution to the wrong problem.
Try imagining a fly in my podcast studio. It’s annoying me while I’m recording, so I swat at it. It works: the fly goes away. I think to myself “interesting…swatting my hand seems to make annoying problems go away.” Soon I finish narrating, and head out to lunch. I’m sitting with Nate, the Senior Producer of our show. He’s talking my ear off about a pimple he’s got on his nose. It’s annoying. I think back to the fly in the studio, and remember: swatting my hand makes annoying things go away! So I reach out and swat Nate in the face. But he doesn’t go away! Of course he doesn’t–I tried applying a solution that worked for one kind of problem, in a situation entirely different in nature, requiring its own kind of solution. If I’d just politely told Nate to shut up, it probably would’ve worked. But if a second fly comes into my recording studio after lunch and I politely tell it to shut up, it probably won’t leave.
The Petro Rabigh security response team made this kind of mistake. Passwords and 2FA are IT security solutions. Triton was an engineering problem.
I’ll explain. Antivirus, firewalls, network monitoring, layered security–all of these tools are used at the IT, and to some extent DMZ, layers of an industrial plant. And while they are useful, they are not sufficient in protecting actual operational equipment on-site, and the workstations which manage them.
Triton was a hard lesson in how IT solutions fail industrial systems. According to those who were called in after the second outage, Petro Rabigh actually had a pretty secure cyber defense posture, by typical industry standards. They should’ve been alright. Their Achilles’ heel, so the story goes, was misconfigured firewalls at the DMZ layer.
But the real, fundamental weakness was that a misconfigured firewall was the one thing standing in between able hackers and safety-critical systems. If you’ve listened to Malicious Life before you’ll know: all software can be hacked, no matter if you’re dealing with a corporate IT network, industrial security systems, or any other type of computer on the planet.
In fact, the DMZ firewall wasn’t the only firewall that failed in the Triton breach. On top of their analog switches–which, for all intents and purposes, represent the primary means of keeping malware out–Triconex 3008 controllers have built-in firewalls which did nothing to prevent or red-flag the Triton files.
[Joe Weiss] All the way back in August of 2010, they had installed – this was from Invensys, which was bought by Schneider – all the way back in 2010 they had installed what’s now called the Tofino firewall into the Triconex system. So I had a real question – and this is what I’ve found over the past couple of weeks – because the real question is, “Why didn’t the Tofino firewall identify this malware?”
The reason was because the firewall didn’t have context. So the firewall was able to ask, “Were these packets coming from where they were supposed to come from?” The answer was yes. Were they going where they were supposed to go? The answer was yes.
Were the packets “well-formed”? The answer was yes. So the Tofino firewall didn’t identify that there was malware being sent. One of the changes now – and this came from a discussion with Schneider before I put the blog out – is that they’re incorporating at least one of the OT vendors’ products, so that they can get context. Not only is it coming from where it’s supposed to come from, going to where it’s supposed to go, and is the packet the right size – but also, what’s in the packet, and is that what’s supposed to be in the packet.
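The difference Joe describes – header checks versus content awareness – can be sketched like this. The field names, allowed flows, and operation codes below are invented for illustration; no real Tofino rule set looks like this.

```python
# Hypothetical rule set: which endpoints may talk, and what operations
# this flow is expected to carry in normal plant operation.
ALLOWED_FLOWS = {("engineering_ws", "sis_controller")}
EXPECTED_OPS = {"read_status", "write_setpoint"}
MAX_PAYLOAD = 1500

def headers_only(pkt):
    """Roughly what the quote says the original firewall checked:
    right source, right destination, and well-formed (here reduced
    to a payload size limit)."""
    return ((pkt["src"], pkt["dst"]) in ALLOWED_FLOWS
            and len(pkt["payload"]) <= MAX_PAYLOAD)

def context_aware(pkt):
    """The added check: is the *content* of the packet something
    this flow is supposed to carry at all?"""
    return headers_only(pkt) and pkt["op"] in EXPECTED_OPS
```

A Triton-style packet – valid endpoints, well-formed, but carrying something like a program download – sails through the first filter and is stopped only by the second.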
Whether Rabigh’s DMZ firewalls were configured correctly or not, with enough time, TEMP.Veles were probably good enough to get through. So long as the SIS’s were left in Program mode, the Tofino firewalls were functionally useless against never-before-seen malware.
So, if all software can be hacked, how can we hope to protect industrial control systems? Well, one possible solution is to use hardware. Joe Weiss gives us two examples in which using hardware devices instead of software could have saved lives.
[Joe Weiss] In December, I was at a medical device cyber-security conference. You’re asking, “What do medical devices have to do with electric …?” You know, et cetera. Well, two things. One was there have been – there were a number of fatalities from an X-ray imaging device and this has been written up in many, many, many safety books. It occurred because the vendor of this product had a hardware safety system and decided to replace it with a software-based safety system and you could start thinking of – you know, whether it’s firewall or whatever. Bottom line, it was not configured properly and it ended up overly radiating and killing six people.
This is a very big deal. If you want to talk further, you can look at – think about the Boeing 737 MAX and the sensors there. There need to be hardware interlocks. They cannot be replaced by software and still have that same measure of safety, period.
In other words, If the IT network was physically disconnected from the OT network at Petro Rabigh, TEMP.Veles wouldn’t be able to remotely get through. If a piece of hardware was their only path in–and that hardware couldn’t be manipulated except by someone physically on-site–TEMP.Veles would have required an insider to infiltrate the premises.
Instead, there was no single point between the IT and SIS layers at Petro Rabigh where a connection couldn’t be made. The software that was in place at Rabigh simply was not enough in the face of such a sophisticated attacker as TEMP.Veles. We can make these assumptions with relatively high confidence because of the sophistication of the attackers’ methods.
Also because we, basically, know who they were.
We know, from a FireEye investigation, that they did most of their work between 7:00 and 15:00 Greenwich Mean Time, which correlates with typical workday hours in Eastern Europe, Africa, and most of the Middle East. Their code contained linguistic artifacts consistent with Russian, and was probably deliberately rewritten into English in its later stages in order to mask its origins. We also know that multiple versions of the Triton malware were tested from a single IP address–one which often tracked Triton news online–registered to a particularly depressing, ten-story cement building in the Nagatino-Sadovniki district of Moscow.
[Sam Curry] My name is Sam Curry. I’m Chief Security Officer for Cybereason.
So the thing about Russia is it’s well-funded and has an awful lot of agencies. We talk about Russia like it’s one thing. I mean, there are dozens of groups that are very good at the development of tools.
The Central Research Institute of Chemistry and Mechanics may be an academic institution, but from the moment you walk through its front doors, you’ll know this isn’t, you know, NYU. Mobile phones must be left with security, which is why so few photos from inside the facility exist. The bare white, fluorescent-lit hallways are lined with thick, grey metal doors housing big, winding industrial machinery and who knows-what else behind them.
Behind at least one of those doors, in the months and possibly years leading up to summer 2017, the Triton malware was being written and tested. Among its other military engineering and science departments, the Central Research Institute’s Center for Applied Research is dedicated to developing methods for securing critical infrastructure, and its Center for Experimental Mechanical Engineering, in addition to developing military technologies, researches methods of enterprise safety under emergency scenarios. Now, just because it houses industrial systems experts doesn’t mean the Central Research Institute must be the source of Triton. Nor does their affiliation with the Russian government mean that members of the Institute would be interested in carrying out an attack in a foreign country.
What these facts do suggest, however, is that the Central Research Institute in Moscow is, with the exception of industrial plants themselves, one of the world’s very few places capable of getting their hands on exclusive technology from Schneider Electric. You simply can’t create such custom malware as Triton without having your hands on the device that it hacks into, and Triconex 3008 devices are not seen anywhere in the world besides those plants where they’re actually deployed.
As an analogy, imagine trying to get your hands on an iPhone 12 before release. You and I, and most of the world’s population outside of Apple and its manufacturers, would have no earthly way of swiping one. Only a nation-state-level actor might have the ability to leverage an insider at the company, intercept one along its supply chain, or steal one by some other creative means.
Because SIS’s are so sensitive, Schneider Electric only exists so long as its devices are secure, and its devices are secure only so long as they are exclusive.
Triton was dangerous, and pioneering. But the lesson of the Triton story isn’t Triton. It’s what Triton revealed about us, and our systems. It’s not that we’re unable to defend against this malware–we now have the source code; we very much are. It’s that we’re systematically underprepared to defend against this kind of threat–a threat that’s much bigger than just one malicious program.
[Sam Curry] But to some degree, there will always be rogue software that’s going to test and probe our systems, and there’s nothing inherently good or bad about many of the techniques that are made, whether it’s done in a Russian lab or elsewhere, or an Iranian lab or a North Korean lab or an American lab or who knows, right? Or Israeli or French. Don’t know.
When software is made, good or bad, it can be put to good or bad purposes. Simply saying, “If not for the Russians developing this, we would be safe,” is a false security. Yes, it’s terrible that they may produce a lot, and then that stuff disseminates. But if that went away, we wouldn’t really be that much safer, because we’re in a multi-program world, with people for whom hacking means developing assets – by which I mean compromised systems – for use when they need them. But also delivery mechanisms and payloads; that’s going to happen anyway. All that will be different is the rate at which it happens, and even benign software can be used for that purpose by somebody who needs to assemble something bad.
Instead, what I would encourage is: yes, we should try to reduce the geopolitical tension that leads to these things – and there are reasons why people hack. We should also discourage the development and dissemination of these toolkits, absolutely.
But we should also realize that we need to become more secure, with a higher cost of break; more resilient, more anti-fragile, more able to survive and find these things faster in spite of that. This is really the reason why I’m such a big proponent of making tools for red teams and making tools for probing our own defenses. The more we resist the negative software, the exploits, the malware, the more we build up our immune system to it, and the better off we will be.
In May 2015, two years and one month before Triton first tripped the Petro Rabigh petrochemical plant, the Department of Homeland Security published an emergency response report on malware threatening industrial control systems, called BlackEnergy. The report read, in part, quote:
“If You’re Connected, You’re Likely Infected! Some asset owners may have missed the memo about disconnecting control system from the Internet. Our recent experience in responding to organizations compromised during the BlackEnergy malware campaign continues to bring to light this major cybersecurity issue—Internet connected industrial control systems get compromised. All infected victims of the BlackEnergy campaign had their control system directly facing the Internet without properly implemented security measures. The BlackEnergy campaign took advantage of Internet connected ICS by exploiting previously unknown vulnerabilities in those devices in order to download malware directly into the control environment. Once inside the network, the threat actors added remote access tools, along with other capabilities to steal credentials and collect data about the network. With this level of access, the threat actor would have the capability to manipulate the control system.”
Two years before Triton, the DHS openly, if indirectly, indicated that a number of U.S. industrial sites had been infected by a sophisticated adversary deploying the BlackEnergy malware which, six months after the report’s publication, would be used to temporarily, remotely shut off power to over 200,000 Ukrainian citizens.
[Joe Weiss] Six months before the first Ukrainian cyber-attack, DHS, in their bi-monthly magazine, had laid it out and said: if you connect your control systems to the internet, you will be hacked. Then they went on to give, step by step, things that could occur if you did that. Those steps were almost precisely what the Russians did in the December 2015 cyber-attacks there. The Russians followed the DHS guidelines.
More recently, the security firm Dragos–home to some of the first responders to the Petro Rabigh attack–has been tracking activity from TEMP.Veles, which it refers to as Xenotime. According to its monitoring, the Triton hackers have moved past oil and gas to exploring the U.S. power grid–over twenty different targets in the power sector alone, between plants, transmission stations and distribution stations–probing their networks for vulnerabilities and remote login portals.
Dragos’ director of threat intelligence told Cyberscoop how they’ve, largely, stuck to the Triton script: breaching IT networks via phishing or watering hole attacks, stealing admin credentials from plant engineers and then spending months hidden, probing the network for attack paths into the lower layers. Not only have they succeeded in breaching real-life U.S. facilities but, according to Dragos’ director, one of those incidents included a successful breach of safety instrumented systems.
Rabigh may be 6,100 miles away from the U.S. east coast, but the next industrial systems hack may not be. Triton is a lesson for the security industry, but also for the hackers. The question that remains is: who learned their lesson? The answer will determine whether malware designed to kill humans will, ultimately, fulfill its purpose.
[Sam Curry] Yeah, you should have processes to patch, and yes, you should not do things like leave your SIS in Program mode. But that’s not the weakness here. This is about making better and better processes for handling risk over time, and this was an early warning for the industry. I think more will come, and we’re in a brave new world where this sort of thing is going to become more and more common. So get ready now.