Malicious Life Podcast: The Morris Worm Pt. 1

We’ve introduced you to some of the seminal malware attacks that have shaped cybersecurity history. Perhaps no other incident, though, has had as great an effect on how we think about computer security today as the Morris worm.

About the Guest

Dr. Eugene Spafford

Full professor in Computer Sciences and Electrical and Computer Engineering at Purdue University

Eugene Howard Spafford (born 1956), commonly known as Spaf, is an American professor of computer science at Purdue University and a leading computer security expert.

A historically significant Internet figure, he is renowned for first analyzing the Morris Worm, one of the earliest computer worms, and his role in the Usenet backbone cabal. Spafford was a member of the President's Information Technology Advisory Committee 2003-2005, has been an advisor to the National Science Foundation (NSF), and serves as an advisor to over a dozen other government agencies and major corporations.

About the Host

Ran Levi

Born in Israel in 1975, Malicious Life Podcast host Ran studied Electrical Engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.

In 2007, he created the popular Israeli podcast Making History. He is the author of three books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.

About The Malicious Life Podcast

Malicious Life by Cybereason exposes the human and financial powers operating under the surface that make cybercrime what it is today. Malicious Life explores the people and the stories behind the cybersecurity industry and its evolution. Host Ran Levi interviews hackers and industry experts, discussing the hacking culture of the 1970s and 80s, the subsequent rise of viruses in the 1990s and today’s advanced cyber threats.

Malicious Life theme music: ‘Circuits’ by TKMusic, licensed under Creative Commons License. Malicious Life podcast is sponsored and produced by Cybereason. Subscribe and listen on your favorite platform:


Malicious Life Podcast: The Morris Worm Pt. 1 Transcript

Eugene Spafford: I found out later that one of my friends was the duty officer at the Pentagon, at the room where the bridge was between the internet and the MILNET and he had been given orders that if anything happened, he was supposed to turn a key, open a box and hit a red switch, which would actually cause an explosion in the chassis and physically separate the networks.

Interviewer: It’s kind of a Hollywood style explosion.

Eugene Spafford: Oh, yeah, very, very. Well, a military thinking at the time. This was positive disconnection. So he was working late at night and the call came in to blow the net.

Hi, I’m Ran Levi and welcome to the Malicious Life podcast. In previous episodes of our show, I’ve introduced you to some of the seminal malware attacks that have shaped cybersecurity history. Perhaps no other incident, though, has affected how we think about computer security today as much as the Morris worm. To get a sense of how big a deal Morris was, consider this: it was responsible for taking down, at its time, ten percent of the entire internet. Ten percent! All from a program only 99 lines long.

For some, the Morris worm was a fascinating thing–a wake-up call to an emerging new field. For others, it was frustrating, or scary–the kind of thing that could lead Pentagon officers to order an explosion in their own building. For the worm’s creator, it was a bit of everything. All that aside, though, Morris has already affected your life in major ways you may not yet have realized. By the end of this episode, you’ll know what I mean.

“Robert Morris has a very unusual quality: he’s never wrong.”

That’s how Paul Graham–co-founder of Viaweb and of Y Combinator–once famously described his longtime friend. Robert Morris has been described in many ways in the past three decades: reserved, controversial, genius, careless, a folk hero. Amidst all the noise, it’s difficult to discern much truth about the man himself. Despite what must have amounted to hundreds of requests for an interview, Morris has never really been one to speak to the public. Instead, others over time–journalists, friends and acquaintances, grand jury members, podcasters–have had to infer for themselves exactly what type of person would be capable of writing and unleashing a piece of code that just about threatened to take down the entire internet.

What we do know is that Morris was destined for a life of computers. His father, Robert Morris Sr., was a well-known computer scientist and cryptographer in his own right: an early contributor to the UNIX operating system, and chief scientist of the NSA’s National Computer Security Center, described by one New York Times reporter as “one of the Government’s most respected computer security experts.” The junior Morris was exposed to computers at a young age–long before they became mainstream commercial items. He’d go on to earn his bachelor’s degree from Harvard, where he’s said to have come up with the idea for his worm while learning about arrays in class. In 1988 you could’ve found him at Cornell’s graduate school, where he mostly kept to himself but held a modest reputation amongst friends and colleagues as something of a computer wizard.

But I’ll tell you what gets me about Robert Morris, and it has nothing to do with the mythos surrounding his life. It’s the fact that, on the day he released his malware onto the internet-at-large, he was still six days away from turning 23 years old. We’re basically talking about a kid here. A kid who managed to earn himself a real big time-out.

Before I go on, it’s worth reinforcing that the internet in 1988 was–as you already know–not nearly what it is today. It was the same underlying network that would grow into the internet we know, but it manifested quite differently.

Eugene Spafford: My name is Eugene Spafford. I’m a Professor of Computer Sciences at Purdue University.

Interviewer: Let’s talk about the networking environment into which the Morris Worm appeared. Back then, there was the internet or what was later to become the internet as we know it today. But it was much more restricted to government organizations and academia than it is today, right?

Eugene Spafford: Effectively. Also some large companies. No commercial use was allowed. There were really two sets of networks that were out there. There was a defense-related set of networks that were used by the government and for government purposes, and then there was a research-oriented aspect of networking that was in use at–as you said–universities, and many companies like Digital Equipment Corporation, IBM and so on.

The number of machines connected to this network was minuscule compared to today.

Eugene Spafford: It was in the tens of thousands. It was probably under 100,000. One estimate that was widely quoted at the time said 60,000 machines were kind of put together.

For reference, about half of the world’s population today–some three and a half billion people–have access to the internet. In 1988, the number of machines on the network would have landed somewhere between the populations of Sheboygan, Wisconsin and Tuscaloosa, Alabama. Suffice it to say, this was not a commercial technology, so nearly every computer was housed in the building of an academic institution, a government agency, or a sizable corporation.

The internet was very much in a primitive stage at this time. Most users didn’t use computers of their own but logged in remotely to mainframe servers. Security was low or nonexistent, networks were a mostly local phenomenon, and functionality was limited to the sorts of things useful only to the university researchers, corporate employees, and government officials using it. In other words, the internet was sort of like a little community: its separate sectors didn’t necessarily interact so much, but you wouldn’t have to go all too far to draw a line from one end of the net to the other. It’s the reason why, say, malware written to jump from one computer to another in such an environment could cause a lot of havoc very quickly.

Eugene Spafford: Well, this was released on the evening of November 2 and it turned out that November 2nd was my wedding anniversary at the time. So I wasn’t online that evening–we had gone out to dinner, had a nice evening. The next morning, I got up early and had some coffee and was trying to log in to read my email and one of the machines that I normally used was unresponsive. It was still there. It was still up. But the load had soared into the hundreds, way beyond what was normal.

So I realized something was off. I asked the staff. I called them on the phone and asked them to reboot the machine, which they did and very shortly thereafter, it began to slow down and slow down and I managed to do a process snapshot and saw a lot of processes running that shouldn’t have been there, that were unfamiliar.

So I went into the office and I think I did spend about the next 18 hours straight there in the office doing de-compilation, communication, writing up results.

On the morning of November 3rd, 1988, thousands of professors, government employees and industry professionals across the United States woke up to find that their computers had developed minds of their own. I got to speak with the one man most responsible for figuring out why.

Eugene Spafford is going to be a sort of spirit guide to this two-part episode. I chose him because, in the few days following his anniversary evening, he managed to produce what’s considered the first definitive analysis of the Morris worm.

Eugene Spafford: I started taking it apart further to see how it worked, how it spread, what the algorithms were. I wrote a long report on this that turned out to – probably have been the most read technical report ever out of Purdue and still available if anybody wants to find it.

That first morning, though, he–along with all of his colleagues–didn’t know left from right when it came to what in the world was happening to their systems. Emails were clogged. Computers were becoming catatonic, to the point of non-functionality. Within mere hours, the worm had infected such a large fraction of the machines on the network that it threatened the very internet as we knew it.



Eugene Spafford: Well, the news got out certainly and it became very newsworthy. So one of the things I had to cope with at a certain point was calls from news media and the university encouraged me to take the calls and fill them in on information.

So that’s one of the things I did. Most of the calls were pretty uninformed. So I ended up putting together a fact sheet that I could fax to them with background. One of the calls, for instance, was asking about whether this virus would jump to the user population, which was a fascinating question.

Interviewer: Ah, you mean kind of turning into a biological virus?

Eugene Spafford: Yes. Yes. They really didn’t have a clue as to how things worked.

Try putting yourself in the shoes of someone witnessing the Morris worm back in 1988. Computers are still the domain of professional circles. The capabilities of microchip technology are only gradually starting to make themselves known. Computer viruses aren’t even really a consideration for those outside of the most knowledgeable few. In fact, viruses were such uncharted territory that, yes, you heard him the first time: Gene Spafford actually got a call from a legitimate reporter asking if the computer virus could spread to infect human bodies. There were even some fun conspiracy theories floated:

Eugene Spafford: Well, conspiracy theories are always going and people will find connections even if there aren’t any there. The one that I heard that was sort of varied and frequently spoken is that this was actually something that had been developed internally to the agency and that the younger Morris had gotten a copy of it and set it loose out on the wide world and they were trying to set him up as the fall guy.

And lest you think the media or the general public was disproportionately uninformed, listen to how the United States military reacted to the same news:

Eugene Spafford: On the government side, there was a lot of concern because they didn’t know what this was or who it was from and one of the things that I raised at one point with some of them, they had thought about, which was there were thousands of copies of this. But they didn’t know that all those thousands of copies were the same and it’s hard to tell until you take them apart or compare them somehow.

So it might have been used as camouflage for a more targeted attack. They didn’t know whether this was exploratory, whether it was something real. At that time, the military side of the early internet was still connected. I found out later that one of my friends was the duty officer at the Pentagon, at the room where the bridge was between the internet and the MILNET and he had been given orders that if anything happened, he was supposed to turn a key, open a box and hit a red switch, which would actually cause an explosion in the chassis and physically separate the networks.

Interviewer: It’s kind of a Hollywood style explosion.

Eugene Spafford: Oh, yeah, very, very. Well, a military thinking at the time. This was positive disconnection. So he was working late at night and the call came in to blow the net. Well, he’s in a small room and he knew if the explosives went off, not only would that damage his hearing, but it would be weeks before they got everything replaced. So he just went over and pulled the plug from the wall.

Interviewer: Smart guy.

Eugene Spafford: He was never disciplined for that. Yes, he was a computer scientist. So he understood how it worked. But –

Interviewer: But did it really interrupt military operations anyway? No? It was purely on the academic commercial side of the internet.

Eugene Spafford: Yes, pretty much. Taking that offline just cut down – cut off some of the communication for email. But it wasn’t widely used at the time and within the military system, they could still use the connectivity.

So this is where we were at: press questions about biological viruses, conspiracies about government cover-ups, and military orders to literally set off explosives inside the Pentagon. You’d think at this point someone would jump in and shout: “It’s okay everyone, don’t worry!” The problem, of course, is that even the most expert computer scientists really were worried, and didn’t yet know that everything would end up okay. Not one person in the whole world yet knew how to stop the worm from spreading–not even, to some extent, the one man you’d imagine would.

Amidst all the chaos, Robert Morris was holed up in his room, more worried than anyone.

The Morris worm was so destructive that it even surprised Robert Morris himself. All because of a single error of judgment programmed into its code. The “bug” wasn’t just one part of the code–it was inherent to the very nature of the worm itself.

While often confused for one another, there is an important distinction between a computer virus and a computer worm. Both are malware that replicates itself across multiple–often very large numbers of–computers. A worm, however, is arguably the more dangerous of the two. Where viruses require an active agent in order to spread–users sending emails, or, back in the day, swapping floppy disks–worms do not. Worms have the power to propagate on their own, through networks. The Morris worm, often considered the first mass-scale worm ever created, sort of wrote the rulebook on this one. Robert Morris injected his malware into the internet from MIT–in order to disguise the fact that he was actually at Cornell–and from there it was hands-off.

Nobody even had to touch their computers that evening of November 2nd to be infected. At the moment it was put online, the Morris worm spread outward through MIT’s network, to those who were somehow connected to those at MIT, to those connected to those connected to MIT, then those connected to those connected to those connected to MIT. You get the point. At no stage did anybody have to visit a website, open a drive, or even turn on a monitor.

Instead, Morris targeted Unix systems. From an already-infected machine, the worm would begin its work by searching for other machines and internet hosts within a network. In order to make a new breach, the Morris worm exploited vulnerabilities in three specific network services: sendmail, finger and remote shell. These services handle, respectively, the transfer of email, the exchange of user information, and the execution of commands between computers within a network. Of course, the specifics of each of these channels aren’t as important as the fact that they’re all means for one computer to communicate with another. Whether through shell commands or any other method, the Morris worm simply needed a way to copy itself from one computer to another. Morris knew of all these vulnerabilities from his previous work with Unix systems.
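To make that propagation pattern a bit more concrete, here is a minimal sketch in Python. It is not the worm’s actual code (which was written in C); the toy network map, the host names, and the attack() helper are all invented for illustration. The point is the shape of the loop: from each infected machine, probe every reachable peer over any of the three channels, copy over, and repeat.

```python
# Toy simulation of the propagation pattern described above -- NOT the worm's
# actual code. The network map, host names, and attack() are all hypothetical.

network = {
    "mit-gateway": ["cornell-cs", "utah-cs"],
    "cornell-cs":  ["utah-cs", "berkeley-ee"],
    "utah-cs":     ["berkeley-ee"],
    "berkeley-ee": [],
}

CHANNELS = ("sendmail", "finger", "rsh")   # the three avenues named above

def attack(host, channel):
    """Stand-in for an exploit attempt; in this toy model every try succeeds."""
    return True

infected = {"mit-gateway"}                 # the initial injection point
frontier = ["mit-gateway"]

while frontier:
    source = frontier.pop()
    for target in network[source]:         # peers reachable from this machine
        if target in infected:
            continue
        if any(attack(target, channel) for channel in CHANNELS):
            infected.add(target)           # the worm copies itself over...
            frontier.append(target)        # ...and the new host starts probing too

print(sorted(infected))                    # every reachable host ends up infected
```

Even in this toy version you can see why the real thing spread so quickly: every newly infected machine immediately becomes another launching point.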

But there was one fatal flaw in his work. As part of its self-replication, Morris wrote a routine to check whether a computer visited by his worm was already infected by it. If the computer replied with a positive, the worm would leave and move on to the next host. This way, the program wouldn’t just replicate itself endlessly on any given machine it encountered. However, Morris also anticipated that system administrators might try to cheat his code by simply programming their machines to automatically reply with a false positive. To counter that, Morris appended one more component to his creation: roughly one time in seven, when a machine replied “yes”, the worm would go ahead and park itself in the system anyway. Clever, right?
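Here is a hedged sketch of that decision rule, again in Python rather than the worm’s actual C, with hypothetical names. The short demo underneath shows the side effect Morris apparently underestimated: a machine that honestly answers “yes, I’m already infected” still collects a new copy roughly one visit in seven.

```python
import random

# Illustrative sketch of the "one in seven" rule described above -- NOT the
# worm's actual code; should_infect() is a hypothetical name.

def should_infect(host_says_already_infected: bool) -> bool:
    if not host_says_already_infected:
        return True                    # no resident copy claimed: move in
    # Even when the host claims it is infected, install another copy about
    # one time in seven, so a faked "yes" can't reliably turn the worm away.
    return random.randrange(7) == 0

# Demo: a busy host visited 700 times, answering truthfully every time.
copies = 1
for _ in range(700):
    if should_infect(host_says_already_infected=True):
        copies += 1

print(copies)   # on the order of 100 extra copies, each one adding load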

To get a sense of what the 1-in-7 rule did for the Morris worm, consider an analogy. You sit down at a restaurant, and your waiter hands you a fork and knife. Then he goes around and serves others as well. Your waiter is nothing if not attentive, but he has one strange quality. It’s a busy day, so he’s running around back and forth, back and forth from the kitchen. You notice that every time he walks by your table, he checks that you’ve still got your silverware. One in every seven times he does so, he just decides to give you a new fork and knife anyway. After a while, your table is so cluttered with silverware that forks are falling on the floor, napkins are going everywhere, and there’s no room left for your actual food.

For a computer, too many forks and knives mean slowdowns and crashes. Because the worm moved so fast on the internet, computers often got visited not once but many, many times. Each new instance of the worm added more processing load, creating a bottleneck until the machine froze up. To give you a sense of this process in action, researchers documented the trajectory of the Morris worm on systems at the University of Utah. Here’s how it went:

6:00 p.m. – At approximately this time in the day, Robert Morris unleashes his worm onto the internet.

8:49 p.m. – The worm infects a computer at the University of Utah.

9:09 – The worm begins to attack other computers in Utah’s network.

9:21 – The load average on the system reaches 5. For reference: load average describes how hard a computer is working. A computer such as this–at the University, in the evening–might be expected to be around 1. Any load above 5 causes notable delays in data processing. Moving on…

9:41 – The load average reaches level 7.

10:01 – Load average 16.

10:06 – Users lose the ability to use their computers, as so many copies of the Morris worm are running that new processes cannot be started.

10:20 – The system administrator kills off all of the worm copies.

10:41 – The system becomes reinfected. Now the load average reaches 27. Remember: normally the load average would be just 1.

10:49 – The administrator shuts down the entire system, then restarts it back up.

11:21 – Another reinfection. Load average hits 37.

You wouldn’t have guessed that, in his original design, Robert Morris intended for his worm to go unnoticed. In fact, he’d attempted to include mechanisms to hide the code from system operators.

Eugene Spafford: It’s known that the code had characteristics in it to try to hide itself, to try to keep itself established, even if it was eradicated from some machines.

So it was intended to be stealthy and to maintain persistence. So if he was trying to demonstrate something, it’s not clear what he was trying to demonstrate with that in terms of motive.

Morris’ worm included a few primary defense mechanisms. The first involved sending a random string of digits through to a new host machine, in order to gauge the quality of a network connection prior to infiltration–sort of like sending your friend into a haunted house before you, so that if anything pops out you don’t have to take the brunt of it. Once the breach was accomplished, the worm would encrypt and rename the files it used, then delete any reference to them in the computer’s file system. Finally–perhaps best of all–in order to not make noise in a machine’s runtime data, the Morris worm would periodically die and respawn itself. Doing so would make it much less obvious to administrators that the program was always running in the background, always taking up space.
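Of those three tricks, the periodic die-and-respawn is the easiest to illustrate. Below is a minimal sketch of the general idea, assuming a Unix-like system where os.fork() is available; do_work() is a hypothetical placeholder, and this is not the worm’s actual mechanism, just the pattern of handing off to a fresh process so that no single process accumulates a suspicious age or CPU total.

```python
import os
import sys
import time

# Minimal sketch of the "die and respawn" idea described above -- NOT the
# worm's actual code. Requires a Unix-like OS (os.fork); do_work() is a
# hypothetical stand-in for whatever the program spends its time doing.
# Runs until interrupted (Ctrl-C).

def do_work():
    time.sleep(1)                        # placeholder activity

def run(lifetime_seconds=60):
    born = time.time()
    while True:
        do_work()
        if time.time() - born > lifetime_seconds:
            if os.fork() == 0:           # child: a brand-new process ID,
                born = time.time()       # zero accumulated CPU time, fresh age
            else:
                sys.exit(0)              # parent: quietly disappears, so nothing
                                         # long-lived shows up in process listings

if __name__ == "__main__":
    run(lifetime_seconds=5)
```

To an administrator glancing at the process table, nothing ever looks like it has been running for hours, which is exactly the impression the worm’s author seems to have been after.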

When Robert Morris became aware of the destruction his program was wreaking, he called up an old friend.

Andrew Sudduth was a world-class rower. In 1984, he was part of an American team that took home silver at the Summer Olympics. He also happened to be a talented hacker who’d made friends with Morris while they were at Harvard.

It was about 11:00 p.m. on the night of November 2nd when Sudduth received a call from his friend. Sudduth happened to be sitting and chatting with another friend of his: Paul Graham.

According to reporting from the Washington Post, Graham was the one to pick up that call. Morris told him what was happening, Graham hung up and told Sudduth. Half an hour later the phone rang again–this time Sudduth answered–and Morris offered some suggestions on how they might protect their systems. In later legal testimony, Sudduth recalled that his friend “seemed preoccupied and appeared to believe that he had made a ‘colossal’ mistake.”

More and more anxious as the night wore on, Morris called Sudduth again at 2:30 a.m.–this time with a request.

At 3:34 a.m. on November 3rd, Andy Sudduth posted an anonymous bulletin to the Usenet newsgroup system on behalf of his friend. The message gave directions on how to kill and defend against the Morris worm. “There may be a virus loose on the Internet,” the post began. “Here is the gist of a message I got: I’m sorry.”

But there was a problem. Do you want to guess what it was?

Well, I’ll tell you what it was…in our next episode of Malicious Life.