Malicious Life Podcast: In Defense of the NSA
The NSA is one of the world's most formidable intelligence operations. We spoke at length with Ira Winkler, CISO, Skyline Technology Solutions, who started his career at the NSA - check it out...
The largest hack in U.S. military history may have been conducted by... the NSA. In 1997, a wargame conducted by the NSA showed just how unprepared we were for a potential cyber strike: in four days, NSA hackers were able to take down entire military networks. It revealed the dire consequences of a possible cyberattack, and even more alarming, it revealed a third actor, quietly hiding in the shadows...
The Malicious Life Podcast by Cybereason examines the human and technical factors behind the scenes that make cybercrime what it is today. Malicious Life explores the people and the stories behind the cybersecurity industry and its evolution, with host Ran Levi interviewing hackers and other security industry experts about hacking culture and the cyber attacks that define today’s threat landscape. The show has a monthly audience of over 200,000 and growing.
Born in Israel in 1975, Malicious Life Podcast host Ran Levi studied Electrical Engineering at the Technion Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.
In 2007, he created the popular Israeli podcast Making History. He is the author of three books (all in Hebrew): Perpetuum Mobile, about the history of perpetual motion machines; The Little University of Science, a book about all of science (well, the important bits, anyway) in bite-sized chunks; and Battle of Minds, about the history of computer malware.
Malicious Life theme music: ‘Circuits’ by TKMusic, licensed under Creative Commons License. Malicious Life podcast is sponsored and produced by Cybereason.
June 9th, 1997. Over the next four days, officials across the United States Department of Defense experienced problems in their computer systems. Streams of scrambled, inexplicable data popped up on screens. Files got deleted and hard drives re-formatted. Emails stopped sending and receiving–or even worse, sending but saying something totally different upon arrival. Phone lines broke. Fax lines became overloaded with requests. Computer monitors simply shut down, without rhyme or reason.
High-ranking members had no clue what was going on, nor of the scope of the problem. Some didn’t even notice it happening. Few did anything to stop it, or knew how. No branch of the U.S. defense complex–from the Joint Chiefs, to the Pacific Command, to the Pentagon and more–was spared. One particularly concerned individual sent an email to his commanding officer: “I don’t trust my command-control.” That email was intercepted.
This all, it turns out, was by far the largest hack of the U.S. military in history.
Luckily, the hackers responsible for initiating such a devastating attack were…the NSA.
Penetration Testing
Hi, my name is Ran Levi, and this is the Malicious Life podcast. In this episode: penetration testing, or the act of intentionally breaching a system in order to test its strengths and weaknesses.
Think of a pen test as a really honest roommate–the type of person to tell you about the spinach caught in your teeth, and that you look fat in your jeans, so you’re all good when you leave the house. One fundamental quality of pen testing is that, sometimes, a really bad grade indicates a really successful test. After all, what would you prefer: the embarrassment of having your roommate point out your shirt is on backwards, or not knowing about it, leaving the house, and walking down the street totally unaware that you look ridiculous?
Eligible Receiver ‘97, the event that opened this episode, was one of those cases that sits somewhere in an ambiguous middle ground: a thoroughly successful exercise, and also evidence of a hugely incompetent military infrastructure, depending on how you look at it. Nowadays, it’s a sort of legend–the first large-scale exercise of its kind, and the wake-up call that truly brought information security to the table for 21st century America.
ER97–as it’s often referred to–is just one example of why penetration testing is among the most common and effective strategies in information security today. More often, though, it’s a practice employed by private companies that want to avoid the losses a successful hack can bring. Think about it: in just these first two seasons of the podcast, I’ve covered a whole list of incidents where major companies experienced huge losses from hacks that, theoretically, could have been prevented. It may be that the biggest problem with penetration testing is that it’s not common enough in businesses today.
But government is a different beast entirely. Corporate databases are one thing, but when you’re talking about hacking and exposing vital systems of government and defense–the very systems which keep nations running and people safe–the waters get muddier. So my question is this: can and should governments be using penetration testing for cybersecurity purposes? If we’re to address the 21st century threat of cyber warfare effectively, is this the way to go? To answer this question, I’ll tell you the story of ER97, the most significant penetration test in U.S. government history.
Eligible Receiver ‘97
Eligible Receiver was an exercise in what’s commonly referred to as “Red Teaming”: you get a group together from within the organization, and assign them to play the role of the enemy. It’s sort of like in the movies, when the detective has to think like the criminal to catch them. The advantage of red teaming is that instead of just waiting for the enemy to surprise you, a team of friendlies can dedicate all their mental energy to trying to beat you, just as the enemy would, so you know what to do when such an attack actually occurs. It’s wargaming parlance for what ultimately amounts to standard penetration testing.
It’s worth acknowledging how forward-thinking it was for NSA Director Lieutenant General Kenneth Minihan to propose an information warfare exercise in 1997. Eligible Receiver was an annual war sim run by the U.S. Joint Staff, meant to shed light on important considerations the DoD might face in the near future, but as Minihan later recalled in a 2016 panel discussion at the University of Maryland University College, there was far from any consensus in 1997 within the Pentagon–even within the field itself–that so-called information warfare should be of concern, especially for warfighters. Minihan realized that to get hardened, old-dog Pentagon and army officials to see things his way, he couldn’t just talk–he had to demonstrate the problem, in a big way. This led to his most radical decision: Eligible Receiver ‘97 would be an NIEX–a ‘No Notice Interoperability Exercise’. The NSA red team would be given legal authorization to conduct their operation without notifying those they’d be targeting. Only a tight circle of individuals directly involved in the project was afforded knowledge of its existence.
To give a sense of how closely those involved kept the secret: during the exercise, Richard Marshall, NSA counsel to the red team, was informed by a superior in his department that he was being put under investigation for potential espionage. It turns out a colleague on the NSA security staff had noticed Marshall frequently using an encrypted mobile phone, and being at work at strange times of the day. Marshall’s higher-up had to inform the staff member of the reason for this odd behavior, but gave only the most limited explanation possible, and ordered the investigation to continue, for fear that stopping it midway might raise more suspicion.
Preparation for the event took place over a number of months, as Director Minihan navigated the bureaucratic and legal considerations involved in trying to authorize, essentially, a major attack on the U.S. government. Eventually, Minihan got Secretary of Defense William Perry to sign off on the project, and it was allowed to commence.
Rules Of The Game
The red team would consist of about 40 members, including ten part-timers and a dozen in the field, mostly from the NSA’s Information Assurance Directorate. Basically, as the infosec leaders of the world’s premier spying agency, these would be 40 or so of the world’s most qualified people for the job. They would be playing the role of a rogue antagonist state, with the goal of compromising vulnerable U.S. command-and-control systems, and disrupting America’s ability to coordinate and respond to the attack with force. Such systems would be attacked for real, in real life. And while the Red Team certainly wouldn’t attempt to fully incapacitate the entire U.S. military complex, they would make it their job to infiltrate every corner of the Defense Information Systems Agency network–the one underlying the information and communications technology for the whole DoD, including the president and vice president. They’d probe how to get in, and then enact some form of minor havoc to demonstrate what powers a real breach might give to potential outside hackers.
The game had a few rules: first, the hackers would have two weeks to carry out their attack, but could only use publicly available, open-source tools–nothing the NSA had that one couldn’t buy from a store or access through the greater internet on a private computer. Second, and most limiting, the Red Team could only go after departments whose websites included a login banner. Third, the Red Team was to make sure that none of their actions would actually permanently harm the Defense Department’s safety structures–the effects of the attack were to be exclusively surface-level, and temporary in nature. Fourth: a sort of “kill switch” was put in place that could halt the exercise, should that be deemed necessary.
Additionally, the game’s organizers held the right to stop or restart the game, or to impose further rules or limitations, should Red Team prove too successful. This is standard operating procedure in red team simulations generally: the game only proceeds once Blue Team (Red’s opponent–America is always Blue) prevails. After all, the goal of the exercise is to help the government patch up any holes it has, not totally embarrass everyone involved. Right?
Well, in reality, it didn’t quite end up working out that way. As soon as the hacking stage of the game began, all Hell broke loose.
Wreaking Havoc
Red Team deployed three squads, or RATs (Red Attack Teams), as they were referred to within the group–one working from an apartment in St. Louis, another from an apartment in Hawaii, and the third aboard a ship at sea meant to simulate a vessel hijacked by North Korean soldiers. The teams at sea and in St. Louis were deliberately set up as sacrificial pawns, to give Blue Team an opportunity to practice shutting them down.
The NSA teams began by conducting reconnaissance–sifting through the U.S. Military Network and Defense Information Infrastructure for access points and administrative access, all the while making their presence appear entirely innocuous within the systems. Once they found the weak points, Red ran wild: monitoring secret messages between defense personnel, planting messages to confuse response teams, and trying out other methods just to see what they could accomplish.
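To make that reconnaissance step a bit more concrete, here is a minimal sketch, in Python, of the kind of probe you could build with nothing but publicly available tooling: it simply checks which common TCP ports on a host will accept a connection. The target name and port list are invented placeholders for illustration, not anything taken from the actual exercise.

    # A hypothetical sketch of open-source reconnaissance: check which
    # common TCP ports on a target host accept a connection.
    # The target name and port list below are placeholders.
    import socket

    TARGET = "host.example.org"                      # hypothetical target
    COMMON_PORTS = [21, 22, 23, 25, 80, 110, 443]    # a few well-known services

    def scan(host, ports):
        """Return the ports that accepted a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)                    # don't hang on filtered ports
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print(scan(TARGET, COMMON_PORTS))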
Once the at-sea squad’s presence was detected by the Navy, a Special Ops team was ordered to attack their ship out in the Pacific Ocean. The execute command was sent out, but the Red Team responded by sending a false order countering the original and halting the mission. The Red Team not only controlled their opponents’ communications, but were actively toying with them. Blue Team became so confused that they lost all confidence in their comms system. They were ultimately made to execute the operation, but only after the Red Team revealed their identity, at which point it was more of a ‘gimme’, since all the drama had drained out of the affair.
In some instances Red performed denial-of-service attacks, and in others, they simply instructed system administrators to look for and open a text file planted on their computers, which would read something along the lines of: “A denial-of-service attack such as this would have been initiated at this time, had this been real.” More often than not, they exfiltrated large amounts of data, mostly to see what was there for the taking. In all, Red’s Chief Targeting Officer Keith Abernethy later calculated that his team used only about 30% of its operational capabilities and that, given more time and malicious intent, they could have wreaked far greater havoc.
In an attack that spanned departments, wings, teams and officials, the Red Team’s presence was detected in just two instances. I already mentioned the first earlier in the episode: an email to a commanding officer that read “I don’t trust my command-control.” In the other, a single network operator at Marine Forces Pacific noticed some unusual behavior occurring on his host system (a major strategic target in Red Team’s strategy). He decided to take action, tightening his firewall to allow only the necessary communications with hosts he trusted. In doing so, he created the only roadblock Red Team couldn’t penetrate–the single system in the whole affair that became totally secure.
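That ‘allow only what you trust’ approach is essentially a default-deny filter. Here is a minimal sketch of the logic in Python; the addresses and ports are invented for illustration and have nothing to do with the operator’s actual 1997 configuration.

    # A hypothetical default-deny filter: drop everything except traffic
    # involving a short list of explicitly trusted peers and services.
    # The addresses and ports below are invented placeholders.
    TRUSTED_PEERS = {"10.0.0.12", "10.0.0.47"}
    ALLOWED_PORTS = {25, 443}

    def allow(peer_addr, dst_port):
        """A connection passes only if its peer and port are both on the allowlist."""
        return peer_addr in TRUSTED_PEERS and dst_port in ALLOWED_PORTS

    print(allow("10.0.0.12", 443))    # True  - trusted peer, allowed service
    print(allow("203.0.113.9", 443))  # False - unknown peer is dropped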
In most cases Red covered their tracks well, and as systems went under one by one, Defense officials had no idea any of it was happening. It all became too much for the acting president of the exercise, General Robert Clark, who shut down the whole operation not long after it started.
Red Team, which had two weeks, completed all their goals within four days. Yes, that’s right: a team of hackers was able to take over the computer network of the entire U.S. Defense organization in just four days. The Pentagon. Pacific Command. All of it. The Military Command Center–no less than the President’s line of communication for giving military orders during wartime–went down on day one. Everyone blinked, and it was over. The game organizers either didn’t have enough time to stop Red Team, or just didn’t know how.
Most of the work didn’t require much heavy lifting at all. The department that gave Red Team the most trouble, if any, was the intelligence directorate of the Joint Staff, otherwise known as J-2. The hackers were having a hard time getting into J-2’s server, so one team member tried something different: he up and called their office, claiming to be an IT worker for the Pentagon. He said he needed to reset all of their passwords, since there was a technical problem with their system. Without a second thought, the person on the other end of the line gave up the current password needed for access, and Red Team broke in.
Emily Williams
ER97 was alarming on many fronts, but remember: in 1997, social networks weren’t even around to further complicate the issue. Nowadays, instant online communication and improved social engineering mean hackers have an even greater ease of access into corporate or government networks. Here’s a story of a more modern U.S. government penetration test–one that, tellingly, looks very different from ER97.
At 5:42 p.m., Emily Williams receives a Facebook message: “Hey do I know you?”
5:45 p.m., “I thought I knew you from Hungry Howie days,” she replies.
5:47, “Dang you’re right that was forever ago hahaha”
6:10, “Sorry, I had to look at a photo haha those were wild days! I hope life has treated you like gold!”
If this sounds like the most boring conversation ever to you, trust me, it wasn’t. What’s going on here is perhaps every social media user’s worst nightmare. This man was not catching up with his old friend Emily–in fact, Emily never existed. Unbeknownst to him, behind the other screen, Emily was a guy named Joey.
Joey Muniz and Aamir Lakhani are two security researchers who in 2011 were asked to test a US government agency’s security systems. The agency in question–its identity classified for purposes of security and, probably, embarrassment–specializes in cyber warfare. As one might imagine, it had fairly sophisticated cyber-defense systems already in place. Joey and Aamir sought to examine how resistant these defenses were not to conventional means of hacking, but to social engineering attacks. For this purpose, they created a fictitious character of a pretty young woman named Emily Williams.
Social Penetration
Muniz and Lakhani created Facebook and LinkedIn accounts for Emily, giving her a false residence address, even a social security number. They gave her a degree from the University of Texas, ten years’ work experience in IT (even though she was only, reportedly, 28 years old), and social media profile pictures provided by a waitress at a nearby Hooters, who agreed to have her face used in the experiment. To complement the illusion, the two also planted false information on sites outside of social networks: for example, they posted on her behalf on MIT forums. This way, a Google search for Emily also produced results that matched, at least on the surface, the false information in her profile.
After completing Emily’s profile, the two researchers cast their net throughout the targeted government agency. They sent friend requests to employees and subcontractors, and within a few hours they–or, rather, Emily–already had about sixty friends on Facebook, and fifty-five on LinkedIn. Very few employees, Joey and Aamir discovered, declined the requests. Some even went a step further and gave her unsolicited LinkedIn endorsements for her skills.
Next, the researchers changed Emily’s LinkedIn status to being a new employee in the organization, in a technical role. Emily’s new job enabled her to accumulate more new connections, including salespeople, HR, development and subcontractors in the agency. As time passed, Emily even began to receive invitations for meetings, job offers, and dinner invitations from male friends.
Now came the opportunity for the researchers to reap the fruits of their labor. They posted Christmas, Thanksgiving and Happy New Year greeting cards on Emily’s profile, and invited her friends to click on them. The link on the greeting card led the victims to a webpage loaded with malicious code that would take over their computers through vulnerabilities in their browsers. Using the malware, the researchers collected passwords and similar information that enabled them to penetrate the agency’s network, open a new account, and grant that account administrative access to all the information they could possibly want to obtain. Game, set, match.
A Hundred Percent Success Rate
In describing the attack after the fact, Joey and Aamir emphasized the power social networks gave them in implementing social engineering attacks. “Every time we include social engineering in our penetration tests we have a hundred percent success rate,” said Aamir Lakhani. “Every time we do social engineering, we get into the systems.” This applied even beyond the Emily Williams experiment, as companies that heard of their work later hired Muniz and Lakhani, and the pair found similar success everywhere they went. They tricked regular people, but also information security officials and employees of antivirus software companies; just about everyone fell for the bait.
Muniz and Lakhani attributed some of their experiment’s results to the imaginary Emily being an attractive woman. This, they believed, influenced employees to grant her special favors, ranging from sooner-than-usual access to network capabilities to, in one instance, a free company laptop.
The ease with which Emily duped her colleagues was almost comical. There was, of course, the Hungry Howie’s conversation I already touched on. In another case, the researchers observed two employees chatting on Facebook about the approaching birthday of no less than the organization’s head of information security. The CISO in question did not have any social media accounts at all, so the researchers sent him a birthday greeting card from an email address which appeared to belong to one of the two employees who had discussed the subject on Facebook. The executive clicked on the link, the malware did its thing, and the two researchers gained almost unlimited access to every corner of the agency’s computer network from there.
Not everyone fell victim to these attacks so easily, though. The president of the organization, to his credit, refused Emily’s Facebook friendship request after failing to find her in the organization’s employee directory. Another individual called out Emily, though not by name, in a Facebook post warning his network about friend requests from shady profiles. The paragraph-long post concluded with: “That’s why I deleted her. Smells like a troll.”
But still, the reach the social networks allowed the researchers in their study was so great that they had no trouble finding the loopholes necessary to crack the agency’s network. Much like Eligible Receiver ‘97, the proposed 90-day experiment was functionally over within a week of deployment. They’d already achieved their initial objectives by then–they only continued on for the full three months to see how far it could go.
Emily Williams’ story is a great and very worrisome example of how intelligence agencies can use social networks to infiltrate organizations, steal classified information, and even plant backdoors for use in a future conflict. Not only were Muniz and Lakhani able to move within the highly guarded government system however they pleased–viewing, altering, and installing whatever software they wanted–but they even reported having found sensitive documents on national political leaders and American state-sponsored attacks.
Without a doubt, social engineering is a powerful tool for those who know how to use it properly, and a very difficult challenge for information security departments. Honestly, even to me, talking about it in retrospect now, Emily almost seems like a real person…
Aftermath of ER97
Back to Eligible Receiver ‘97. In the aftermath of the exercise, observers described the scene as by turns quietly stunned and flurried with activity. The game caught everyone involved off-guard–even the Red Team, who hadn’t expected to have nearly such an easy time of it. It would take months for the NSA to fully detail all the network holes it found during the game, and for the Defense Information Systems Agency to bring its systems back to one hundred percent function. Perhaps the most obvious revelation of them all, though: officials at the Department of Defense realized after ER97 that, even past the event itself, there was nobody within the entire organization whose job it would have been to oversee and address such a cyber attack. The department promptly created a chief information officer position, and set up an Information Operations Response Cell which may have later influenced the formation of the U.S. Cyber Command.
John Hamre was sworn in as Deputy Secretary of Defense the month after Eligible Receiver occurred. In other words, he was the poor sap who had to listen to and address all the negative diagnostic reports. Eligible Receiver, to him, was a sort of red pill moment (a la The Matrix). Recalling the incident years later in an interview with Frontline, Hamre said “I think our consciousness is so different now. It’s just like Sept. 11 changed our consciousness about the vulnerability of airplanes. Eligible Receiver changed a lot of our consciousness about the vulnerability of cyber warfare.” Later, in a 2003 interview for PBS’ ‘Frontline’ series, Hamre recalled that: “over the last five years, there’s been a tremendous increase in awareness of the problem. There’s been a lot of, I think, improvements in the community, probably not to the degree that’s required, but I don’t think it’s like we haven’t thought about this. [. . .] the cyber security awareness today is thousands of times stronger than it was five years ago, when we first conducted Eligible Receiver.”
And yet, in another sense, Eligible Receiver got buried below other pressing matters of the day. With 9/11, and the two conventional wars that followed, the threat of cyber war was forced to take a backseat in the 2000s. To this day, not many details of ER97 are known to the public–or even within the U.S. government and military–outside of those I’ve covered in this episode. Its impact is felt perhaps most in the government agencies that have organized their own similar simulations in the time since. For example, one event inspired by Eligible Receiver was conducted in 2005 by 75 members of the CIA’s Information Operations Center, codenamed ‘Silent Horizon’.
Another instance involved the Department of Homeland Security in 2003, along with over 300 people from 50 other public and private sector entities located in 14 different places across the country. Codenamed ‘Livewire’, its intention was to broaden the scope of, and the response system for, national cyber attacks. The scenario presented was a prolonged attack by sophisticated foreign antagonists on privately held American infrastructure, from utilities to financial markets, and the goal of the exercise expanded past ER97’s government-specific bounds by helping integrate relevant private companies into this theoretical threat to national cybersecurity. In its stated goal, Livewire was successful, with several agencies reporting that they had codified their cyber attack response protocols based on what was learned during the exercise. Livewire, though, reached one major conclusion that ER97 didn’t: perhaps government’s role in addressing cyber attacks isn’t so obvious, and there’s no catch-all way of addressing the issue generally. Perhaps the nature of the attack should determine to what extent it’s the responsibility of government versus private companies to respond, with state-sponsored acts of war requiring fundamentally different response systems than network malfunctions brought on by a mischievous lone hacker sitting on their living room couch.
Should Penetration Testing Be Used in Government?
Ultimately, then, Eligible Receiver was the event that brought penetration testing–generally a practice within the commercial sector–to government. Its sheer effectiveness was felt not only in its results, but in its method: red teaming as a means to find and fix holes in information security.
But…should penetration testing be used in government?
It’s one thing for private, corporate databases to come under scrutiny, and quite another when it comes to vital government networks responsible for the workings of nations and the safety of civilians. For one, even a simulated network takedown could open the door to real-life malicious agents. There’s also the issue we touched on in our episode on the power grid: the government and the army don’t ever close up shop for the night, so any amount of downtime, at any time, is too much for essential systems.
The more recent known U.S. Red Team exercises didn’t occur on nearly the scale of ER97 and, critically, didn’t involve major real-life consequences. Games like the Emily Williams experiment will remain a nice way for the U.S. government to offer teachable moments to its employees, but hardly achieve the scope and effect that ER97 did, as they’re confined to individual organizations and consist of more surface-level activity like social media toying. Livewire, likewise, largely took place in situation rooms and conference spaces–not in the Pentagon itself, or on boats out in the Pacific Ocean–and was conducted against a simulation of the computer systems in question, not the actual thing. This, ultimately, may be evidence of the flip side of ER97’s legacy…
Eligible Receiver ‘97 did everything it set out to do–and everything it should have done–and provided an invaluable service to the U.S. government. It was also dangerous: not only time-consuming for major U.S. army and defense sectors, but damaging to their essential computer systems and, had it gone another way, even life-threatening to some of its participants. After all, it’s amusing to remember now that a Special Ops team was sent to mock-capture a prop ship from NSA friendlies. On the other hand, had Red Team not successfully intercepted and altered their initial orders, that combat unit would never have been apprised of the situation, and would literally have conducted a combat mission against the ship, with no reason to believe it wasn’t under enemy control.
It may well be that Eligible Receiver’s success was also its demise. The exercise went over so well only because of the very reason it was conceived in the first place: the U.S. was thoroughly unprepared, and had no response system to address such an attack, real or simulated. But precisely through the steps taken in response to ER97 (in addition to a whole slew of other factors), the government has become far more prepared, and has the systems in place to respond quickly to a cyber attack. In this sense ER97 may have, by virtue of having achieved its stated goals, prevented any future recurrence of such an exercise on such a scale. After all, it’s hard to imagine what would have happened if Blue Team really were ready for Red–one would hope that Red would have had the ability to stop the game before it was too late…
Before I go, there’s one thing I haven’t mentioned yet about what happened during Eligible Receiver ‘97. While inside the networks, the Red Team ran into some, let’s say, unexpected guests. Like two separate thieves who happen to rob the same house, the NSA spotted other external actors within the system–ones that weren’t part of the game, or of the U.S. government. The threat the game was meant to highlight, it turns out, was not theoretical at all. Real-world foreign adversaries had already invaded.