Malicious Life Podcast: The Propaganda

Governments around the world have been making devious use of the internet as a platform to spread, not malware, but propaganda. As in all wars, propaganda is a huge part of the modern cyber war. Join us as we explore the roots, and the most creative uses of the internet to spread information and disinformation in an attempt to affect entire countries.

About the Guest

Graham Cluley

Blogger, Podcaster, Researcher

Graham Cluley is an award-winning security blogger, researcher and public speaker. He has been working in the computer security industry since the early 1990s, having been employed by companies such as Sophos, McAfee and Dr Solomon’s. He has given talks about computer security for some of the world’s largest companies, worked with law enforcement agencies on investigations into hacking groups, and regularly appears on TV and radio explaining computer security threats.

About the Guest

Samanth Subramanian

Writer and Journalist

Samanth Subramanian is a writer and journalist based in India. He studied journalism at Penn State University and international relations at Columbia University.

Samanth's first book Following Fish: Travels Around the Indian Coast was praised by critics such as William Dalrymple and Ramachandra Guha, and won the Shakti Bhatt First Book Prize. It was also nominated for the Andre Simon Book Award.

His second book This Divided Island: Stories from the Sri Lankan Civil War was nominated for the Samuel Johnson Prize and the Royal Society of Literature's Ondaatje Prize. He became only the second Indian writer after Suketu Mehta to be nominated for this prestigious award for literary non-fiction. Dalrymple writing in The Guardian considered it a remarkable and moving portrayal of the agonies of the conflict that "will stand as a fine literary monument against the government’s attempt at imposed forgetfulness".

About the Host

Ran Levi

Born in Israel in 1975, Malicious Life Podcast host Ran studied Electrical Engineering at the Technion – Israel Institute of Technology, and worked as an electronics engineer and programmer for several high-tech companies in Israel.

In 2007, Ran created the popular Israeli podcast Making History. He is the author of three books (all in Hebrew): Perpetuum Mobile: About the history of Perpetual Motion Machines; The Little University of Science: A book about all of Science (well, the important bits, anyway) in bite-sized chunks; Battle of Minds: About the history of computer malware.

About The Malicious Life Podcast

Malicious Life by Cybereason exposes the human and financial powers operating under the surface that make cybercrime what it is today. Malicious Life explores the people and the stories behind the cybersecurity industry and its evolution. Host Ran Levi interviews hackers and industry experts, discussing the hacking culture of the 1970s and 80s, the subsequent rise of viruses in the 1990s and today’s advanced cyber threats.

Malicious Life theme music: ‘Circuits’ by TKMusic, licensed under Creative Commons License. Malicious Life podcast is sponsored and produced by Cybereason. Subscribe and listen on your favorite platform.


Malicious Life Podcast: The Propaganda Transcript

Information Operations: Cyber War on the Social Front

Hello, and welcome to Malicious Life. I am Ran Levi.

YouTube, 2015. An African-American soldier stands in front of the camera; he wears a military uniform and helmet, and holds an automatic weapon in his hands. Nearby, a black book – the Koran, the Muslim holy book – leans against a rock. The soldier shouts a series of obscenities, aims his weapon and fires ten bullets into the book.

Cut. The next frame is a close-up on the holy book, perforated with bullet holes.

As you can imagine, this video provoked angry reactions from Muslims around the world, furious at the humiliating treatment of their holy book. I don’t know of any violent demonstrations or terrorist attacks that took place following this specific video, but experience has shown that such sparks have high combustion potential, especially in high-friction areas such as Iraq and Afghanistan. The obvious question is: who is the soldier in the video? The answer will surprise you: a bartender from St. Petersburg.

In March 2016, the BBC published a detailed investigation of the disturbing video. It revealed that almost every detail of the video was fabricated in one way or another: the uniform worn by the soldier is an outdated American uniform that went out of regular use about a decade ago. The military helmet, well, is not a military helmet at all – it’s a model used mainly for training and base jumping. Both items can be easily purchased on civilian websites. The book in the video is not a Koran at all, and the bullet holes are just… stickers. It became obvious that someone had invested considerable effort in creating this falsehood, with the intention of causing massive chaos…

A Different Kind of War

In the first season of our program, I talked about the history of malware: reviewing the trends, events and characters that shaped the world of information security over the past forty years. In this upcoming season, we will get into an even juicier topic that is becoming increasingly pertinent to the future of infosec, and of humanity as a whole: cyber warfare. Stuxnet, the worm that wreaked havoc on Iran’s uranium enrichment plant in 2010, was the first cyber weapon to receive broad public exposure, but it was neither the first such weapon nor, certainly, the last. You may not know it, but cyber wars between major powers have been taking place under our noses for over two decades now, with the vast majority of them still unknown to the general public. This is why we gave this season its name: The Invisible War. In the following episodes of “Malicious Life” we will dive into the various characteristics of this new and unique form of warfare: from assault and defense to wargaming, propaganda and espionage.

The first thing to understand about cyber warfare is that it’s very different from any form of warfare we have known to date – a fact well illustrated by the video with which I opened the episode. The BBC researcher investigating the story was able to locate the main star of the video: a man of African roots, living and working in St. Petersburg. The man’s girlfriend was employed by a Russian company called the “Internet Research Agency” – an organization better known in Russia as “The Trolls of Olgino”. The Internet Research Agency is an unofficial arm – or perhaps only one of many arms – of the Russian government in a protracted war that it is waging against its enemies at home and abroad. In this episode we will discuss the Russian government’s use of the Internet in its wars, and what can be learned from its efforts about the role and character of psychological warfare in the 21st century. The term now used to describe this type of warfare is “Information Operations”. A report published by Facebook in 2017 defines “Information Operations” as:

“Actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”

The Internet Research Agency

Adrian Chen is an American journalist who has written for Gawker and The New York Times, and is now at The New Yorker. Chen is well known for tracking and exposing online trolls: people who disseminate hateful and inciting content, poisoning the online environment for the rest of us.

In 2014, Chen first heard about the Internet Research Agency, and the fake and fabricated content it regularly distributes. Nowadays, after the 2016 US presidential election and the widespread exposure of the Russian attempts at media manipulation, we know much more about ‘fake news’ – but back then, in 2014, it was far less well known.

Chen’s curiosity led him to two interesting news stories from 2014. The first was about a hazardous materials leak at a chemical plant in Louisiana, and the other an Ebola outbreak in Atlanta. Both reports were hoaxes – but sophisticated, carefully planned hoaxes, with amazing attention to detail. Videos of fake newscasts were uploaded to YouTube, fabricated articles appeared on fake news sites, and dozens of Twitter accounts tweeted reports, posing as witnesses to the ongoing events. This was not the work of a bored troll of the kind Chen had known in the past – it was a meticulously planned operation designed to maximize fear and panic among the public.

Chen investigated the news sites, YouTube accounts and Twitter profiles involved in these hoaxes, and the trail led him to the Internet Research Agency, the same organization that is now known to be responsible for pro-Trump ads on Facebook in 2016.

At the end of 2014 Chen visited St. Petersburg to investigate the organization’s activities, and interviewed local journalists and former employees of the company. Fake news, Chen learned, is a booming industry in Russia. Hundreds or even thousands of companies, commonly referred to as troll farms, produce huge amounts of fabricated content every day: blog posts and news stories praising Putin’s government, angry comments against Putin’s political rivals, and tweets and Facebook posts against Ukraine and the US. It is trolling on an industrial scale. Many of these companies receive their budgets from the Kremlin, or from businessmen connected to the Russian government.

#Triumfalnaya

Why is Putin’s regime so determined to poison the Russian Internet? Apparently, it is because the Internet in Russia has traditionally attracted intellectuals and liberals who oppose the current regime. A good example of this online struggle is a civil protest that erupted in 2011 following elections to the Russian parliament, which several prominent bloggers and journalists accused of being rigged. The protesters used social networks to gather crowds in central squares in about 70 cities across Russia, including Triumfalnaya Square in Moscow. On Twitter, “#Triumfalnaya” was the hashtag around which the protests rallied.

President Putin’s “intelligence” services went into action, and a few thousand Twitter handles – which until then had hardly been active – suddenly came to life. The previously dormant accounts flooded Twitter with millions of tweets – most of them meaningless gibberish – all carrying the #Triumfalnaya hashtag. These types of social media accounts are called False Amplifiers: networks of fake profiles designed to increase the distribution and resonance of messages through shares and retweets. The practical significance of this flooding was that important messages from the protest leaders were buried under piles of spam, preventing the demonstrators from using Twitter to organize effectively.
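
To make the idea of a False Amplifier a little more concrete, here is a minimal, purely illustrative sketch in Python. The data structure, field names and thresholds are all invented for this example (nothing here reflects Twitter’s actual anti-spam systems); it simply encodes the pattern described above: an account that sat dormant for months and then suddenly produced a burst of tweets all carrying the same hashtag.

from datetime import timedelta

class Account:
    """Hypothetical, simplified record of one account's activity (fields invented for illustration)."""
    def __init__(self, handle, tweets):
        self.handle = handle
        self.tweets = tweets  # list of (timestamp, text) tuples, timestamps as datetime objects

def looks_like_false_amplifier(account, hashtag,
                               dormancy_days=180, burst_window_hours=24, burst_threshold=50):
    """Rough heuristic: flag a long-dormant account that suddenly floods a single hashtag."""
    tagged = [(ts, txt) for ts, txt in account.tweets if hashtag.lower() in txt.lower()]
    if len(tagged) < burst_threshold:
        return False
    first_burst = min(ts for ts, _ in tagged)
    last_burst = max(ts for ts, _ in tagged)
    # All of the hashtag activity is crammed into a short window...
    burst_is_short = (last_burst - first_burst) <= timedelta(hours=burst_window_hours)
    # ...and the account was essentially silent for months before that window.
    earlier = [ts for ts, _ in account.tweets if ts < first_burst]
    was_dormant = not earlier or (first_burst - max(earlier)) >= timedelta(days=dormancy_days)
    return burst_is_short and was_dormant

Real platforms combine many more signals (IP addresses, client fingerprints, text similarity across accounts), but the core logic is the same: sudden, coordinated bursts from previously silent profiles.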

Since 2011 there has been a clear increase in Putin’s use of Information Operations for two main purposes: sowing confusion and chaos in the population of a rival country, and maintaining the stability of his regime. These two goals are separate, and organizations such as the Internet Research Agency employ different strategies to achieve each.

Within Russia, the main purpose of Information Operations is, as Chen put it, to muddy the waters of the internet. For example, bloggers who work for troll farms usually write about everyday topics – culture, shopping, fashion – and only occasionally publish posts with political content, praising the Putin administration (or opposing the Ukrainian government, with which Russia is in a protracted conflict). Because most of the content being shared is innocent – videos, memes and so on – it is very difficult for the ordinary citizen in Russia to distinguish between authentic and false voices. This, in turn, erodes the trust one can reasonably place in anything one sees on the Internet – including, of course, the messages of the political opposition.

Russian Propaganda

Russia’s strategy for Information Operations in other countries is different. Their military doctrine views cyber war as an ongoing effort that is not limited to times of direct conflict. Chen says the information operations outside of Russia are “more aggressive and more akin to information warfare than propaganda.”

For example, for some time the Internet Research Agency operated a Facebook page called Secured Borders, posing as an American page and sharing pictures, posters and patriotic videos. In actuality, this page was what’s known as an “Astroturf group”. These groups are designed to promote a certain idea or agenda, and initially include only “fake” profiles. Over time, these groups attract real users who agree with the agenda of the group and organically disseminate the messages planted by its operators. The choice of users to whom these messages are directed is not random, though: smart algorithms can segment the social media population into subgroups by identifying characteristics such as place of residence, socioeconomic status, and political opinions, not unlike the way advertisers segment potential customers into subgroups. The agent can then push their misinformation to targeted demographics. This tactic worked well in the case of Secured Borders, and the patriotic page attracted over one hundred and thirty thousand likes.
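
The segmentation described here is, mechanically, the same thing advertisers do every day. Purely as an illustration, here is a toy Python sketch of how an operator might slice a population into a target audience before pushing a planted message to it; the attribute names, values and the tiny “user database” are hypothetical stand-ins, not any real platform’s data or API.

# Illustrative only: a toy audience-segmentation pass over made-up user profiles.
# The fields (location, interests, leaning) stand in for the demographic signals
# that platforms infer about their users; none of this reflects a real API.
users = [
    {"id": 1, "location": "Idaho",  "interests": {"border security", "hunting"}, "leaning": "right"},
    {"id": 2, "location": "Oregon", "interests": {"gardening"},                  "leaning": "center"},
    {"id": 3, "location": "Idaho",  "interests": {"immigration", "guns"},        "leaning": "right"},
]

def segment(population, location=None, any_interest=None, leaning=None):
    """Return the subset of users matching the requested demographic slice."""
    matches = []
    for user in population:
        if location and user["location"] != location:
            continue
        if any_interest and not (user["interests"] & any_interest):
            continue
        if leaning and user["leaning"] != leaning:
            continue
        matches.append(user)
    return matches

# An operator pushing an anti-immigration message might target a slice like this:
audience = segment(users, location="Idaho", any_interest={"immigration", "border security"})
print([user["id"] for user in audience])  # prints [1, 3]

The point is not the code itself but how cheap the capability is: the same handful of filters that sells sneakers can deliver a divisive meme to exactly the people most likely to share it.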

The establishment of such Astroturf groups is not a simple matter, since all social networks employ a variety of techniques to identify fake profiles. Unlike the operation of simple, script-based bots, operating Astroturf groups requires patience and heavy human involvement. But if successful, they allow agents to spread false information in a manner that appears authentic, which can be used to plant doubt and confusion among the citizens of a rival state (one of the well-known aims of classical psychological warfare throughout history).

On Secured Borders, the messages the operators planted were deliberately divisive and hateful. For example, an image of Hillary Clinton wearing a hijab with the false quote: “I think Sharia law will be a powerful new direction of freedom and democracy,” or “Up to 5.7 Million illegals may have voted in 2008 election – this is unacceptable! Only U.S citizens should be allowed to vote”.

The Russian operators of the page were not content with messages alone; they actively tried to ignite unrest and even riots. For example, they created a Facebook event for an anti-immigrant rally in the small town of Twin Falls, Idaho, under the title “Citizens Before Refugees”, and invited the page’s followers to join it. The event’s description read:

“Due to the town of Twin Falls, Idaho, becoming a center of refugee resettlement, which led to the huge upsurge of violence towards American citizens, it is crucial to draw society’s attention to this problem.”

In this particular case, luckily, it seems the provocation attempt failed. Few people came to the demonstration, if any. (Maybe because, well, it’s Idaho.)

Another example of the Internet Research Agency’s activity is a Twitter account with the handle @tpartynews, whose profile picture was a teapot in the colors of the American flag. The account retweeted posts from conservative American media outlets such as Fox News, and occasionally contributed messages of its own – again, mostly political and divisive messages on topics known to be sensitive for the American public, such as LGBT issues or gun rights. An example of such a tweet is –

“Illegal Immigrants today.. Democrat on welfare tomorrow!”

The @tpartynews account had about twenty-two thousand followers, and its tweets were viewed 1.5 million times before it was blocked by Twitter.

Assisting these sorts of pages is the fact that social media provides an “echo chamber” that plays to our natural tendency toward confirmation bias. For those with particularly extreme views, the way Twitter or Facebook funnels content that already aligns with what a user is likely to believe, read and share allows contagious and potentially dangerous ideas to spread like memes. So while it is possible, for example, that some fake news websites were created by Russian intelligence to disrupt the American electorate, there was probably no real need for their intervention at all. For every wacko conspiracy theory, there is someone, or some group, ready to exploit the dynamics of social networks in the global era – either to push an ideological agenda, or simply to make money off others’ gullibility. Nate Nelson, our producer, brings us a story that exemplifies this dynamic well.

Veles, Macedonia

Since Donald Trump was elected president of the United States, a whole lot of punditry has gone into deciding exactly what swung the election his way. By now we’ve all heard the theories. There were swing states from Wisconsin to Florida, swing districts like Maine’s 2nd, even swing counties. All of these places pale in comparison, though, to one small city that arguably had more impact on this election than any other. It’s got a population of about 50,000, it’s low-income, and it tends to be pretty gray…

[SAMANTH] “…it’s bleak. It looks fairly decrepit. There’s snow all over the streets and there doesn’t seem to be much life in the town.”

It also happens to be located some four and a half thousand miles from the American east coast. I spoke with a journalist who got to travel to this town and meet some of its inhabitants.

[S] “My name is Samanth Subramanian. I’m a writer and a journalist and I live in Dublin. I wrote a piece for Wired Magazine earlier this year about fake news bloggers who are based out of Veles in Macedonia.”

“Fake news”, in the Trump era, has become an essential phrase in the global English lexicon. Although the term is now often misapplied, there is such a thing as real fake news (excuse the oxymoron). Late last year, independent investigations by The Guardian and BuzzFeed found that a lot of it – over 100 different fake news web addresses – originated in this one down-and-out town in the Former Yugoslav Republic of Macedonia.

Veles is the kind of place that doesn’t make it onto most maps, and never used to receive much attention. It’s usually pretty boring – depressing weather, little entertainment, not much for young people to do besides sit around and smoke cigarettes at coffee shops. You couldn’t have predicted beforehand that these otherwise unassuming factors would help make Veles such an ideal hot spot for a fake news blowout.

Then again, Macedonia’s as wired-up as any other small country. Children who go to school learn English, allowing them access to the world of American culture and politics. And Google Ads money–which may not amount to much for most American businesses–translates to quite a bit in lowly Veles.

What’s really surprising? The perpetrators are, for the most part, just teenagers. Really, it’s the adults of Veles who tend to be on the outside looking in, without that particular amalgam of skills and motivations that makes young men so well-suited to the game. While on assignment in Veles, Samanth spent time with a group of kids, most notably one particularly successful fake news perpetrator who calls himself Boris (and whose real name, it should be noted, is not actually Boris, for privacy reasons).

[S] “So Boris was 18. He was sort of this moody kid with really close-shaved hair. He smoked a lot. He had dropped out of school already by the time I met him because this was sort of – clearly the way to make money. He watched a lot of sort of American movies. He listened to gangster rap.

Well, these were teenagers who sort of were online almost all the time. I guess they were sort of more proficient in English compared to their parents who had sort of never for example watched American TV or watched American cinema. So they knew a little bit of English. They knew sort of the cultural context in which the American election was happening.

They were sort of really plugged into social media. They knew how to sort of operate Facebook in a way that their parents possibly didn’t or didn’t care to know.”

So what of the websites themselves? Much of their content is inflammatory by design. Keywords like “Wow!”, “Shocking!” and “Breaking!”, in all caps, are proven click-bait. The web domains themselves are made to look official; USAPolitics.co, for example, was one of the sites operated by Boris. Headlines from the USAPolitics homepage range from the pseudo-normal, meant to sound legitimate, like “As Trump tweets, legal community turns eyes to John Roberts”, to the flat-out inflammatory, like “END FOR HILLARY!? WIKILEAKS Releases Candid Photos of Hillary Clinton”. A middle-ground example might look something like “POLL: Do you support Trump if he supports an arrest of Soros?” Spelling and grammar in these cases generally range from the bad to the ugly.
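
As a rough illustration of how formulaic these headlines are, here is a small, hypothetical scoring function in Python. The keyword list and weights are invented for this sketch (they are not anything the Veles sites or any real classifier used); it simply counts the clickbait markers described above.

import re

# Hypothetical clickbait markers drawn from the patterns described above;
# the list and the weights are illustrative, not taken from any real classifier.
BAIT_WORDS = {"wow", "shocking", "breaking", "end for", "you won't believe"}

def clickbait_score(headline):
    """Crude score: bait keywords, ALL-CAPS words, and '!'/'?' punctuation each add points."""
    lowered = headline.lower()
    score = sum(2 for word in BAIT_WORDS if word in lowered)
    caps_words = [w for w in re.findall(r"[A-Za-z]{3,}", headline) if w.isupper()]
    score += len(caps_words)
    score += headline.count("!") + headline.count("?")
    return score

for headline in ["As Trump tweets, legal community turns eyes to John Roberts",
                 "END FOR HILLARY!? WIKILEAKS Releases Candid Photos of Hillary Clinton"]:
    print(clickbait_score(headline), headline)

On the two example headlines above, the pseudo-normal one scores 0 and the inflammatory one scores 8, which is roughly the gap a human reader senses immediately.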

If there’s one thing you need to understand about these websites, though, it’s how little their creators actually care about what they’re writing about. This isn’t to say that no fake news out there is written for political gain. In Veles, though, it’s all about one thing: money.

This, ultimately, may be the oddest part of the whole mess: the perpetrators here fundamentally had no stake in what they were causing, and gave very little thought to the consequences of their actions. Yes, what Boris was doing was directly aimed at the presidential election, but his motivations were totally unrelated. It’s a bit like how Albert Einstein’s discoveries in physics cleared the way for the invention of the atom bomb: his intention was not to help create the most dangerous weapon in human history, but his work assuredly contributed to its creation.

More so than these specific European teens, though, it’s the precedent that the fake news phenomenon set in 2016 that we’ll be worrying about long after Trump is gone.

[S] “I mean Veles in my mind from last year was only what you might call a proof of concept, right? I think they figured out a way in which they could propagate news that were patently fake. They figured out how to use tools that are already out there like Facebook. How to use these tools to do this kind of work, to propagate this kind of news.

But I think more than anything else, what’s interesting is that if it has been proven that this can be done, what stops election campaigns themselves, the campaign teams themselves from setting this up and running this as a little shop of their own?”

It may be that we’ve already crossed a point of no return, where the internet’s ease of access will allow misinformation to become as much a part of the media landscape as truthful journalism. Even with all the efforts by tech companies to crack down on it, the fact is that fake news has done more than give us incorrect facts – it has confused the issue of what is fundamentally true and what isn’t. If weaponized, who knows what’s next?

[S] “I mean teenagers using it in Macedonia. I mean if that was all we had to worry about, there’s not much to worry about, right? I mean what – the real fear is, is that there’s going to be sort of governments and corporations that use these technologies to sort of put out or spin versions of what is the truth. There are terrorist organizations that could use this as well. As you rightly pointed out, in times of warfare, there are governments that can use it against each other to the extent that nobody really knows what the truth is anymore. Once it gets into sort of this institutionalized method of operation, I think that’s when we really need to worry. I don’t think that’s very far away at all.”

The Sting

While staying in St. Petersburg, Adrian Chen spoke with former employees of the Internet Research Agency, revealing some of the organization’s inner workings. The agency employs some four hundred people, most of them highly fluent in English, who work 12-hour shifts from 9 am to 9 pm. They connect to the Internet through proxy services to hide the organization’s IP address, and each day they receive a list of external and internal political messages along with a daily quota of posts and comments of various kinds. The employees enjoy relatively high salaries and do not necessarily identify with the political goals of their employers.

Chen did not try to hide his activity from the Internet Research Agency and even interviewed one of its managers. But he soon discovered that he had underestimated the sophistication of the people behind this organization.

During his research, Chen came across a young woman named Katarina Aistova, who used to work for the Internet Research Agency. He approached her by email and asked for an interview. Katarina initially refused to cooperate, but eventually agreed to meet. Her condition was that her brother join the meeting, to protect her if necessary.

Chen agreed, and the three met in a small Chinese restaurant. The brother, Chen discovered, was a very unusual type: his head was shaven and his arms and shirt were decorated with neo-Nazi tattoos and symbols. Chen felt a bit anxious about the guy, but during the entire conversation with Katarina the brother just sat across the table in silence, watching them through dark glasses. Katarina told Chen that most of her work at the Internet Research Agency consisted of translating documents from English to Russian, and only occasionally was she asked to write pro-Russian comments on American sites. But, Katarina said, she didn’t mind it: she loves Russia and believes Putin is a good leader for her people.

Chen left Russia on April 28. A day later, an article appeared on a Russian website called the Federal News Agency – known to belong to the Internet Research Agency – which was titled:

“What does the journalist from The New York Times have in common with the Nazi from St. Petersburg?”

Chen read the article and could not believe his eyes. Translated from Russian, the article read –

“American journalist Andrian Chen, known for his investigations and revelations, […] visited the northern capital of Russia, where he met with the no less famous Russian Nazi Alexei Maksimov (a.k.a “The Fly”). What do they have in common, and what did the foreigner need – it’s a question that needs an answer. Nazi Maksimov is well known both in Russia and abroad: the media previously reported that he is the leader of the international neo-Nazi community Tottenkopf, which is engaged in the propaganda of neo-Nazi ideology in Russia and Europe.”

The article was accompanied by pictures of Chen sitting in the Chinese restaurant with Maksimov, and did not mention Katarina at all. What’s more, the camera angle was carefully chosen so that the young woman sitting next to Chen was not in the frame. Only then did Chen understand the sting operation the Internet Research Agency had orchestrated against him. Katarina was the lure – and Chen took the bait. The Internet Research Agency preempted the release of his investigative report into the troll farm so that it could claim that any findings against it were just part of a conspiracy designed to provoke internal unrest within Russia. Otherwise, why would Chen meet with a well-known neo-Nazi? Do you find this strategy familiar? … Chen experienced firsthand the way troll farms poison the Internet in Russia and fill it with false information, until truth and fiction are so interwoven that no one can tell fact from fiction.

Over the next few days, the propaganda article was published on a number of other websites, with some changes and alterations. One version accused Chen of being an undercover agent of the CIA; in another, he was an agent of the NSA. Dozens of Twitter accounts shared the stories with the hashtag “recruitment of Nazis”. Fortunately for Chen, he was already back in the United States when all this happened. In an interview on the podcast “Longform”, the host asked him –

“You were trending in Russia (on twitter) as a neo-nazi spy[..]. Was that scary?”

To which Chen answered –

“I didn’t know that there was that kind of counter-intelligence operation going on against me until I got back to the states. I think if it had happened [when I was still] in Russia, I’d be pretty freaked out.”

The sting that the Russians conducted on Chen demonstrates another type of Information Operation: one whose purpose is to threaten, silence or even physically harm those who are deemed worthy targets. Fortunately for Chen he was not in any immediate danger, but not everyone is equally lucky.

A Serious Thing

Graham Cluley is a British security expert who we met in the previous season of our show. Cluley experienced such an unpleasant event himself.

[Graham] I have had incidents in the past—there was a time—I don’t know if it was a virus writer who did this but someone took my photograph and they created a fake account on Facebook and they posted really unpleasant things on Facebook about child abuse and stuff like that. And as—

[Ran] It could be a serious thing.

[G] Yeah, it could.

[R] It could hurt your public image.

[G] At the time—well, at the time, I was actually on holiday in—where was I? I think I was in—not Thailand—Vietnam or something like that. Cambodia. I was in Cambodia. That’s right. And so I wasn’t on a great internet connection anyway and I began to get these messages from my company that the CEO had been contacted by people saying, “Graham Cluley has been saying, you know, this, this and this on Facebook.” They were even contacting my HR department, who received emails. So people were so upset about what someone they believed to be me was posting on Facebook, and they recognized the picture and said, “That’s Graham Cluley. He works for so and so.” And so they would contact my company. My wife received messages saying, “We know where you live and we’re going to—” I received messages saying they’re going to shoot my wife in the head because they were so offended as to what this person was posting. And, Facebook—

[R] This is very serious. I mean—

[G] Yeah. It was horrible. It was absolutely horrible. And they said they were going to burn my house down and all sorts of things and I was in Cambodia at the time. I was far away from England. And—

[R] How did you feel about that, being so far away?

[G] I felt terrible. And so, what I did was I contacted Facebook and I said, “Someone has taken my photograph and they’re posting all these things and they’re saying that I’m a pedophile and they’re riling people into, you know, getting—” My company was fine. They knew it wasn’t me but, you know, people were making unpleasant threats, and Facebook would do nothing. They said “If you’ve got a problem, you should go to the police, if people are making threats about you.” And I thought, “What kind of community is this that you’re not doing anything about this?” And the only point where I could get Facebook to care was not when I said they’re claiming that I’m a pedophile or they’re making death threats against my wife. The only point at which they cared was when I said, “They’re using a photograph of me which is the copyright of my company.” And then they said, “Oh, copyright infringement. We’ll do something.” And I was—

[R] It’s so silly.

[G] I was disgusted. Well, I was pleased that they did then, but I know other people this has happened to, where fake Facebook profiles have been used with their photographs, maybe by someone they had offended. I remember one young woman—someone obviously had a vendetta against her and they posted messages claiming she was a prostitute and asking people to phone her up. And she just got so much harassment and so many unpleasant phone calls. And, you know, her reputation was being destroyed. I had the advantage that I was known in the computer industry. I could go to the press and kick up a stink and say, “Facebook aren’t doing anything and this is how Facebook have treated me.” But regular members of the public have a much harder time and I’ve had a bad taste in my mouth about Facebook ever since.

So, what can we do against such sophisticated Information Operations? How can individuals defend themselves on this unexpected and chaotic battlefield of Social Networks and fake news sites?

The answer is unclear. All social networks, from Facebook to Twitter, are aware of the challenge: they all operate manual and automated tools to identify fake profiles and fake news stories, and do their best to warn those who might become victims of identity theft.

But not all social networks feel the same obligation to fight those who muddy the water for the rest of the users. Facebook has a relatively strict enforcement policy, but Twitter, for example, takes a more forgiving approach and does not tend to filter or remove problematic content unless it involves incitement to terrorism or pedophilia.

The New Soldiers

There are many voices among the public calling on governments to start treating social networks not as technology companies but as media companies, equal in duties and rights to other content organizations such as newspapers and TV channels. Such a redefinition would impose on Facebook or Twitter the same legal responsibility other media bear when they publish false and misleading information, and might encourage the companies to create more efficient mechanisms and algorithms for filtering false information. Facebook, of course, strongly opposes such a change. Mark Zuckerberg said in one of his speeches –

“We’re a technology company. We build tools. We do not produce the content. We exist to give you the tools to have the experience that you want, to connect with the people and businesses and institutions in the world that you want.”

And even if we place greater responsibility on social networks, who knows if they will be able to cope with all the troubles and threats of sophisticated information operations? It seems that the only practical solution is–as is the case for almost every negative phenomenon on the Internet–education. We must teach Internet users the skills needed to filter credible sources from untrustworthy sources, to question far-fetched claims, and to avoid over-sharing of personal information. Will we succeed? Who knows. What’s clear is that in the new battlefield of cyber warfare, each of us is a soldier.

Bibliography and Resources

http://edition.cnn.com/2017/09/21/politics/tpartynews-twitter-russia-link/index.html

http://talkingpointsmemo.com/muckraker/russian-trolls-tea-party-news-twitter-account

https://www.thedailybeast.com/exclusive-russia-used-facebook-events-to-organize-anti-immigrant-rallies-on-us-soil

https://www.stopfake.org/en/reporter-who-investigated-russian-trolling-shares-experiences-weighs-in-on-hacking-in-presidential-election/

https://www.nytimes.com/2015/06/07/magazine/the-agency.html

http://www.pbs.org/newshour/bb/russian-trolls-spreading-online-hoaxes-u-s/

http://www.npr.org/2015/06/04/412046928/russian-trolls-spread-false-information-on-the-internet

https://longform.org/posts/longform-podcast-171-adrian-chen

http://www.bbc.com/russian/society/2016/03/160315_smj_trolls_make_haram_video

https://www.rand.org/pubs/testimonies/CT473.html

https://www.youtube.com/watch?v=OhKcqJPsLd0