The term “Streisand effect” is named after Barbra Streisand, the famed singer and actress. In 2003, Streisand sued a photographer for 50 million dollars in an attempt to force him to remove from the web an aerial photo of her Malibu mansion. This ultimately turned out to be a very poor decision: the picture in question was one of 12,000 photos taken as part of a project documenting California’s coastline, and had things taken a different turn, it’s highly likely that only a handful of people would have even taken a second look at it. But ironically, the attention brought on by the lawsuit garnered it millions of views and made it famous to the point where it is now featured on Wikipedia.
In a sense, Barbra Streisand was a bit unlucky: had the silly affair happened only two years later, we would probably have ended up calling it the ‘Cisco effect.’
The Black Hat conference, which takes place every year in Las Vegas, was conceived as – and to a large degree still is – an industry-centric event. To be sure, Black Hat had its share of the usual shenanigans one might expect when bringing together thousands of security researchers, such as WiFi hijacking and the like – but compared to DEFCON, its older and more hacker-oriented sibling, Black Hat has a more ‘corporate’ feel to it, with much less excitement and drama.
But Black Hat 2005 was different.
It started Tuesday morning, on the very first day of the conference, when early attendees received the conference’s proceedings booklet, and noticed that some thirty pages were missing from it. And not just ‘missing’: the pages were very obviously ripped from the booklet, as if someone decided to hastily remove them from the conference’s program at the very last minute.
Rumors started circulating among the conference’s visitors: a security researcher named Mike Lynn had exposed a serious vulnerability in a major vendor’s product, and the vendor was trying to silence him with threats of legal action. Lynn was scheduled to give a talk at 10 AM that morning, and the Streisand effect made sure that the Palace Salon hall on the hotel’s 4th floor was packed full of curious attendees when the 25-year-old researcher took the stage.
Lynn was employed by ISS, an Atlanta-based security vendor, and had uncovered a serious vulnerability in Cisco’s routers. Cisco did indeed fix the faulty code, but Lynn was concerned that the company did not notify its clients of the security bug and didn’t urge them strongly enough to upgrade their routers’ software: he intended to expose the vulnerability in his planned Black Hat talk.
But a few days before he was due to step behind the podium, ISS, his employer, forbade him from giving the talk. Lynn didn’t budge: he quit his job, and got ready to present his research anyway. This is when Cisco itself got involved and tried to censor Lynn: the company obtained a temporary restraining order against Black Hat, forcing it to remove Lynn’s slides from the conference’s printed materials and CDs.
Unfazed, Lynn took to the stage and described the flaw and its dangers, and as expected, Cisco and ISS filed lawsuits against him. Fortunately for Mike, a lawyer who attended his talk volunteered to assist him and brokered a settlement deal between the two parties. The affair is commonly referred to as “CiscoGate”.
If companies such as Cisco view public disclosures as so threatening to their revenue and reputation, you’d imagine that they would at least welcome private ones. If so, you’d be surprised.
“[Bar Zik] I can give Shas, for example.”
That’s Ran Bar Zik, a prominent security researcher and a journalist who is well known in Israel for publicly disclosing breaches and data leaks. The thing he’s referring to — Shas — is a popular Israeli political party: in 2022, Bar Zik came across a massive security vulnerability in the party’s website.
“[Bar Zik] And we found a huge breach, a huge breach that allowed everyone with a browser, only with a browser, to get all of the information of the Israeli citizens: all of the Israeli citizens, not only Shas supporters, with for example family connections with your father, with your mother, with your brother. Also phone numbers, also of course addresses, stuff like that. A huge breach.”
Bar Zik reached out to the party’s CEO and reported the breach.
“[Bar Zik] So the CEO told me – no, it didn’t happen. It didn’t happen at all. But it happened. I saw it.”
Bar Zik says he’s used to organizations trying to brush him off when he approaches them.
“[Bar Zik] I will tell you a story or a tip, maybe for other journalists or also for other IT guys. […] I’m connecting to a VPN from a foreign company – but not a foreign country like the United States or Britain: an exotic country, okay? For example, Mozambique. Okay, Mozambique is a nice country, okay, but not a lot of Mozambique citizens or tourists are entering Israeli sites. […] So I go to a VPN from this country, and then I try, I’m harvesting the data. And then I disclose the breach, and then I ask the company: Hey, did you make an analysis of this breach? Did you notice who used the data, if someone took the data, or just, you know, found out the breach?”
Most of the time, says Bar Zik, companies don’t even bother to investigate his reports: he knows that for sure, because had they done a real investigation, they’d probably have uncovered the unusual network activity from Mozambique.
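Bar Zik’s trick is essentially a canary: if a company genuinely reviewed its access logs after a disclosure, the burst of traffic from an exotic network would stand out immediately. As a rough illustration – the log format, the network prefixes and their country mapping below are all hypothetical, and a real audit would use a proper GeoIP database – such a log review might look like this:

```python
import ipaddress

# Hypothetical mapping of network prefixes to countries, for illustration
# only -- a real audit would query a GeoIP database (e.g. MaxMind) instead.
PREFIX_COUNTRIES = {
    "41.220.0.0/20": "Mozambique",
    "5.29.0.0/16": "Israel",
}

def country_of(ip):
    """Return the country for an IP, or 'unknown' if it matches no prefix."""
    addr = ipaddress.ip_address(ip)
    for prefix, country in PREFIX_COUNTRIES.items():
        if addr in ipaddress.ip_network(prefix):
            return country
    return "unknown"

def flag_unusual(log_lines, expected=("Israel",)):
    """Flag log entries whose source country isn't in the expected set."""
    flagged = []
    for line in log_lines:
        ip, _, request = line.partition(" ")
        country = country_of(ip)
        if country not in expected:
            flagged.append((ip, country, request))
    return flagged

# A made-up access log: two requests from the 'canary' VPN stand out.
log = [
    "5.29.12.7 GET /members/1234",
    "41.220.1.9 GET /members/0001",
    "41.220.1.9 GET /members/0002",
]
print(flag_unusual(log))
```

The point is not the code, which is trivial, but the asymmetry it exposes: the check takes minutes, so a company that claims “nobody used the data” without having run it is simply guessing.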
“[Bar Zik] 99% of the companies say to me- ah, no, nobody used it. […] They just gave me, just, you know…waved me over.”
In Shas’s case, the CEO not only refused to investigate the breach –
“[Bar Zik] And he says, no, but it didn’t happen. […] If you write about it, we’ll see you in court. […] And they hang up the phone.”
Bar Zik published his findings in the newspaper anyway.
“[Ran] By the way, did Shas really place charges against you in court?
[Bar Zik] Of course not. OK. You can threaten – but between the threat and going to court, there is a huge gap.”
Mike Lynn’s and Ran Bar Zik’s bravery and determination are certainly admirable: their stories also demonstrate the challenges facing those who do vulnerability research. Some vendors are so terrified of the consequences of such disclosures that they are willing to go to great lengths to prevent researchers from revealing their findings to the public. On the other hand, an irresponsible disclosure of a vulnerability might allow cybercriminals to craft an exploit that takes advantage of the weakness and harms innocent users before the vendor manages to fix the issue and push the updated software to its users. The obvious ideal solution is to disclose the vulnerability to the vendor in private – but as we’ve seen, many organizations lack the suitable organizational culture or internal processes to handle such disclosures, and so ignore them or try to sweep them under the rug.
Unfortunately for us, the users, there are those who are willing to listen to what such researchers have to say – and even pay them handsomely for the information they bring: the cybercrime syndicates and nation-state actors who buy vulnerabilities on the black market for their own nefarious uses. Surveys clearly show, however, that most researchers would prefer to steer clear of the black market, either because of ethical and moral qualms, or simply because of the hassle involved in finding the shady clients, negotiating a deal and receiving the payment.
It is clear, then, that both vendors and researchers have an obvious incentive to collaborate on disclosing vulnerabilities safely and privately. But this stands in stark contrast to the fact that bug bounty programs – official schemes that allow researchers to share their findings with vendors and be compensated for their work – have gained prominence only in the past decade or so, and even today only a relatively small portion of vendors have such programs in place. Why is that?
A Short History of Bug Bounty Programs
The very first bug bounty program was launched in 1983 by Hunter &amp; Ready, a software vendor specializing in real-time operating systems: participants were presumably awarded a Volkswagen Beetle (commonly nicknamed ‘the Bug’) for each vulnerability they discovered. However, judging from the way the program was described in Hunter &amp; Ready’s print ads, this early program was less a bounty program and more of a marketing campaign designed to tout the software’s high quality.
The first “true” bounty program was created – as were many other important innovations – by Netscape, in 1995. Jarrett Ridlinghafer, a technical support engineer at the company, realized that many of Netscape’s users were sharing bugs they found in the software – and fixes and workarounds for these bugs – in various forums. He suggested to Netscape’s management that the company leverage the power of its community and create a ‘bug bounty’ program – a phrase he coined. His idea was accepted, and Netscape offered its users $1,000 and a T-shirt for the bugs they reported. According to Jarrett, his innovation saved the company millions of dollars.
Two years later, in 1997, a Danish programmer named Christian Orellana approached Netscape with details of a particularly nasty bug. When the company offered its usual compensation, however, Orellana refused: he believed the severity of the vulnerability he discovered was worth more to Netscape than the company was offering. Netscape refused to negotiate the price, and Orellana disclosed the bug publicly.
This incident highlights a basic problem with bug bounty programs: they can be prohibitively expensive for vendors. The researchers who invest countless hours combing the code for bugs feel – and rightly so – that they deserve to be paid fairly for their efforts, especially if the vulnerability they discovered saved the vendor a lot of money in potential damages. Most vendors, however, view this problem from a different angle: while creating new software is an effort which ultimately makes money for the company, paying for bugs is an expenditure that does not directly translate into future income. Thus vendors have limited incentive to pay large sums of money for vulnerabilities brought to their attention. This is probably why in spite of Netscape’s bold initiative, no major software vendor followed suit.
One vendor which had a particularly troublesome relationship with the research community was Apple. In its ads, Apple used to describe its Macs as almost unhackable, or at the very least much safer than the PC. This irked many hackers, who knew for a fact that Apple’s products were just as vulnerable as almost any other complex piece of software.
One of those irked researchers was Dragos Ruiu, a Canadian software developer and hacker. In 2007, Ruiu decided to run a small hacking competition, to demonstrate how misleading Apple’s ads were – and maybe shame the company into taking security more seriously. Ruiu bought two MacBook Pros and placed them on the floor of the CanSecWest conference in Vancouver. He named his challenge “Pwn2Own”: if you manage to hack the computer, you get to walk away with it.
As one might expect, Ruiu’s challenge attracted a lot of attention from both the media and the conference’s attendees, and a few hackers tried their luck with the machines. By the end of the first day, however, no one was able to break into the MacBooks. Ruiu decided to up the ante, and added a $10,000 prize to the pot.
It was then that Shane Macaulay, a security researcher who attended CanSecWest, decided to reach out to a former colleague of his, Dino Dai Zovi. Dino recalled their conversation in an interview for ZDNet.
“The interest for me was the challenge. I remembered it was happening but I wasn’t at the conference, so I didn’t give it much thought. I got a call on Thursday night from a friend [Shane Macaulay] saying that the machines survived the first day and maybe we should give it a shot, try to win it. He said they had added a $10,000 prize – so I said, OK, cool, let me sit down and take a look and see what I can find. I figured I’d stay up and write an exploit if I found something interesting.”
Dino worked all through that evening and into the night. Finally, at 3 AM, he called Shane and shared the good news: he had discovered a bug in a QuickTime library that was exploitable through any browser via a loadable Java applet. Dino sent the exploit to Shane, who crafted it into a website and emailed the URL to Ruiu. When the MacBook’s browser loaded the malicious web page, it launched a remote shell that gave the duo complete control over the laptop. Shane walked away with the pwned machine, and Dino collected the cash.
But winning Pwn2Own earned Dino Dai Zovi a lot more than the $10,000 prize: it basically changed his life. The exotic challenge – and in particular the focus on Apple, which was at the height of its success under Steve Jobs’ leadership – drew the attention of the media, and Dino quickly became a rockstar.
“It was a massive benefit to my career and really put it on a different and better trajectory. At the time, I had been writing exploits quietly as a personal hobby for almost a decade, but was not at all known for it.”
Following his success at Pwn2Own, Dino became a sought after security consultant, wrote a couple of books on hacking iOS devices, and became a regular speaker at prestigious conferences.
A Discussion Accelerator
The tremendous success of Pwn2Own’s first event attracted many more participants in the following years. The challenges became harder – and the prizes grew accordingly, as Brian Gorenc, Sr. Director of Vulnerability Research at Trend Micro, which became a regular sponsor of the competition, describes:
“We would put in really cosmic exploits, where it’s like – all right, first you have to compromise the browser, then you have to compromise the guest operating system, and then you have to compromise the virtual machine and then you’ve got to compromise the host operating system – and we’d give you 200,000 dollars for that. And those teams would come together and they would have specialists in each area and build an exploit chain that would actually do that.”
Initially, the challenges focused on web browsers, but over the years, Pwn2Own added more diverse targets, such as smartphones, IoT devices – and even a Tesla, which two researchers drove away with in 2019 after successfully leveraging a bug in the car’s infotainment system.
Pwn2Own’s success made it impossible for vendors to ignore the competition, as Dragos Ruiu told Security Weekly:
“It’s kind of an interesting role that the Pwn2Own competition has. It’s kind of like a discussion accelerator with vendors, because it happened under a big spotlight. The vendors really don’t have a choice but to pay attention to anything that happens there, otherwise it starts to become a little bit of a stinking pile. It’s going to get used [in the media] for clickbait. So it’s in their best interests and all of the vendors will be there and will do something about it.”
“What ends up happening is as the […] contest approaches, the vendors are forced to implement new security features to ensure that they are putting up the best defenses at the contest. What ends up happening is right before the contest all the vendors will start updating all their software. Tesla will push a new release or they’ll be the largest Patch Tuesdays right before the contest, things like that.”
One especially interesting consequence of Pwn2Own’s success was the renewed interest in bug bounty programs. Dino Dai Zovi says that it’s no coincidence that many prominent tech companies such as Google, Meta, Microsoft, Apple – even the US government – decided to kick start their own bug bounty programs shortly after Pwn2Own’s rise to prominence.
“Pwn2Own was the first competition that focused on demonstrating real, working zero-day exploits against real-world software, whereas before most security competitions were capture-the-flag competitions that focused on “mock” targets and vulnerabilities. It really puts the focus on what was possible against the software that millions, if not billions, of people use: To put a spotlight on how much we needed to improve security.”
“Dear Mark Zuckerberg”
But running a successful and effective bug bounty program isn’t easy, even for wealthy companies who can afford such costly initiatives.
Khalil Shreateh is a Palestinian researcher who in 2013 uncovered a serious flaw in Facebook’s system that allowed an attacker to post messages to any user’s page, including users who were not on the attacker’s Friends list. Khalil, who was unemployed at the time and working on a five-year-old laptop with broken keys and a faulty battery, submitted the bug through Facebook’s bug bounty program, hoping to win the promised reward. But the emails he sent were rejected: Facebook’s representative claimed that the issue he had reported was not an actual bug.
Khalil was certain that the weakness he uncovered was a serious one. He later told CNN that –
“I could sell (the information about the flaw) on the black (hat) hackers’ websites and I could make more money than Facebook could pay me, but for me – I am a good guy. I don’t deal with the black (hat) stuff. […] I never asked them, ‘I want $4,000 or $5,000’, I didn’t deal with them like that … (But) I really needed that money.”
So Khalil did the only thing he could think of that was certain to catch Facebook’s attention: he posted a message on Mark Zuckerberg’s page. Here’s Khalil’s original message, quoted verbatim:
“Dear Mark Zuckerberg.
First sorry for breaking your privacy and post to your wall. I has no other choice to make after all the reports I sent to Facebook team. My name is KHALIL, from Palestine. Couple days ago i discovered a serious Facebook exploit that allow users to post to other Facebook users timeline while they are not in friend list.
I report that exploit twice, first time i got a replay that my link has an error while opening, other replay i got was “sorry this is not a bug”. both reports i sent from www.facebook.com/whitehat, and as you see i’m not in your friend list and yet i can post to your timeline.”
As expected, the rogue post caught the attention of many users, and was reported extensively in various tech blogs and media outlets. Facebook acknowledged its mistake in ignoring Khalil’s messages – but refused to pay him for the bug he reported, because by posting the message on Zuck’s profile, Khalil had violated the platform’s terms of service… This obvious injustice enraged a lot of people, prompting a fellow security researcher to launch an online donation campaign to pay Khalil the money he deserved.
The Challenges of Bug Bounty Programs
As you’ve probably noticed, English isn’t Khalil’s first language. According to Matt Jones, a Facebook security team member who spoke with Hacker News, this language barrier is partly to blame for the incident – but only partly.
“We get hundreds of reports every day. Many of our best reports come from people whose English isn’t great: though this can be challenging, it’s something we work with just fine and we have paid out over $1 million to hundreds of reporters. However, many of the reports we get are nonsense or misguided, and even those […] provide some modicum of reproduction instructions. We should have pushed back asking for more details here.”
Facebook’s blunder sheds light on a major challenge facing vendors who wish to establish bug bounty programs. Depending on the number of people using the software, the sheer volume of bug reports can practically overwhelm the organization – especially after version upgrades and major changes, and especially if the internal processes established to handle those reports aren’t yet mature and efficient enough. Communication with bug hunters can also be challenging, as we’ve seen in Khalil’s case, and if bug reports aren’t handled with enough care and sensitivity, the bounty program can end up backfiring – harming the organization’s relationship with its users rather than strengthening it.
Another challenge is due to the inherent “fuzziness” of the term ‘vulnerability’. There’s no clear-cut definition of what constitutes a security vulnerability or what makes one vulnerability more serious than another, and so the organization needs to be prepared to handle plenty of ‘edge cases’ that require special attention.
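The industry has tried to make severity less fuzzy – most notably with CVSS, the Common Vulnerability Scoring System, which reduces a vulnerability to a handful of rated metrics and a score between 0 and 10. As a sketch of how mechanical this is, here is a simplified version of the CVSS v3.1 base-score arithmetic (weights taken from the v3.1 specification; this version handles only the common ‘unchanged scope’ case):

```python
import math

# Metric weights from the CVSS v3.1 specification (scope: Unchanged).
ATTACK_VECTOR  = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.2}
ATTACK_COMPLEX = {"low": 0.77, "high": 0.44}
PRIV_REQUIRED  = {"none": 0.85, "low": 0.62, "high": 0.27}
USER_INTERACT  = {"none": 0.85, "required": 0.62}
IMPACT_WEIGHT  = {"high": 0.56, "low": 0.22, "none": 0.0}

def base_score(av, ac, pr, ui, conf, integ, avail):
    """Simplified CVSS v3.1 base score (scope Unchanged only)."""
    # Impact sub-score: how badly confidentiality/integrity/availability suffer.
    iss = 1 - (1 - IMPACT_WEIGHT[conf]) * (1 - IMPACT_WEIGHT[integ]) * (1 - IMPACT_WEIGHT[avail])
    impact = 6.42 * iss
    # Exploitability sub-score: how easy the bug is to trigger.
    exploitability = (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEX[ac]
                      * PRIV_REQUIRED[pr] * USER_INTERACT[ui])
    if impact <= 0:
        return 0.0
    # CVSS rounds *up* to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# A remotely exploitable, low-complexity, no-auth bug with full impact:
print(base_score("network", "low", "none", "none", "high", "high", "high"))
# → 9.8, the familiar 'critical' rating
```

Of course, a formula like this only moves the fuzziness around: someone still has to decide whether privileges are ‘low’ or ‘high’, or whether an exposed database counts as ‘high’ confidentiality impact – which is exactly where the edge cases live.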
A great example of this is an incident that occurred in 2017, when a researcher named Kevin Finisterre informed DJI, a Chinese drone manufacturer, that large amounts of personal information belonging to the company’s customers were exposed to the open web due to poor handling of AWS private keys. DJI had recently established a bug bounty program, but apparently it was originally oriented towards potential vulnerabilities in the drones’ firmware: Finisterre’s finding, which had nothing to do with firmware but was obviously just as important, threw a monkey wrench into DJI’s internal processes. Initially, the company was willing to pay Finisterre $30,000 for the bug – but at a certain point it made a sudden U-turn: it claimed that the exposed servers were not in scope for the bounty program, accused Finisterre of being a hacker, and threatened legal action. Ultimately, Finisterre decided to forfeit the money, and disclosed the issue publicly on his blog.
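Leaked AWS credentials of the kind Finisterre found are common enough that scanning code and config files for them has become routine. AWS access key IDs follow a recognizable pattern – typically ‘AKIA’ followed by 16 uppercase alphanumeric characters – which makes a naive scanner easy to sketch (real tools such as truffleHog or git-secrets do this far more thoroughly, with entropy checks and many more patterns):

```python
import re

# AWS access key IDs typically start with 'AKIA' followed by 16
# characters from [A-Z0-9]. This is a deliberately naive pattern.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_keys(text):
    """Return suspected AWS access key IDs found in a blob of text."""
    return ACCESS_KEY_RE.findall(text)

# AWS's own documented example key ID, embedded in a config-like blob:
sample = """
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
some_other_config = true
"""
print(scan_for_keys(sample))   # → ['AKIAIOSFODNN7EXAMPLE']
```

The irony of the DJI story is that checks like this are cheap to run in-house: a key that a researcher can find on the open web is a key the vendor could have found first.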
These sorts of difficulties, coupled with the potentially high costs, are probably why bug bounty programs are still not widely implemented in the software industry except by the largest and wealthiest organizations. To fill that gap, a few companies – such as HackerOne, Zerodium, ZDI and others – have stepped up to play the role of ‘middleman’: they pay researchers for the vulnerabilities they find, and then sell these vulnerabilities to their clients.
Much like stock market brokers, these companies play an important role in reducing the friction involved in closing the deal between buyers and sellers. Many researchers who are reluctant to approach potential clients directly – either because of the difficulties we discussed earlier, or maybe because of certain dark patches in their personal histories – find these vulnerability clearinghouses to be a valuable service, as do the organizations who buy the vulnerabilities and favor dealing with a single broker over talking to hundreds or even thousands of individual researchers.
The problem with such middleman services is that more often than not, the researchers have no way of knowing the identity of the clients who buy the vulnerabilities they’ve uncovered. These clients are almost always legitimate – that is, lawful organizations as opposed to crime syndicates – but even so, some of them are of the sort that many researchers would object to doing business with. One such company is the Israeli firm NSO, whose Pegasus spyware was used to target human rights activists and journalists. As Dr. Max Smeets, a researcher at ETH Zurich’s Center for Security Studies, explained in an interview for TechMonitor,
“When you’re selling your tools to a government or another group, you may want to integrate some zero days to ensure much higher chances of access. They will integrate them into a package they are selling and suddenly that platform becomes a lot more valuable. So you see many of these companies being willing to pay a really high price for certain types of exploits.”
Alongside private companies such as NSO, says Smeets, a large portion of the brokers’ clients are governments.
“Many European countries won’t buy zero day exploits, but there are a select number of countries that will buy them. This includes the US government, which has a huge budget, and the UK government, and we know typically they are bought by the intelligence agencies like the CIA or the NSA, although as we see more countries establish military cyber commands they may be interested too.”
The problem is that when government agencies such as the NSA buy vulnerabilities, they don’t report them to the vendors: it’s well known that the NSA hoards zero-day vulnerabilities for use in its cyber weapons. That policy backfired miserably in 2017, when a vulnerability nicknamed EternalBlue – stolen from the NSA – was used in the WannaCry ransomware attack, one of the most destructive and costliest attacks in cyber history.
It seems, then, that the problem of software vulnerability disclosure is yet to be fully solved – if it ever will be. Although the basic ingredients for a viable economic market – researchers who wish to sell information about vulnerabilities and vendors who want to buy it – are all there, the practical and moral issues involved in dealing with zero-day vulnerabilities make such a trade much harder in practice than it is in theory.
Still, in the past 15 years, we’ve witnessed a sea change in the way vendors treat vulnerability disclosures. Where once such disclosures were treated as a nuisance at best and a potential threat at worst, nowadays having a bug bounty program is considered a sort of ‘badge of honor’ – a way of signaling to the rest of the world that the organization in question has reached a certain level of maturity and success. Let’s hope that this trend continues.