AI can change information security but only if users can understand the technology's benefits

In information security, the bad guys have an advantage over the good guys. But artificial intelligence can help close this asymmetry and give defenders a needed edge.

“AI’s advent in security will make our jobs actually tenable and improvable,” Sam Curry, Cybereason’s Chief Information Security Officer, said in a recent talk at Google’s Cambridge, Massachusetts, office.

“They get to choose from thousands of vulnerabilities, and when they’re in they expand and then you can’t find them,” Curry said of attackers. Meanwhile, defenders need to stop every attack every time to successfully protect their assets, an impossible task given the adversary’s intelligence and strong motivation to complete the operation.

“Men and women have to defend at scale everywhere all the time and bad guys can pick their spot. You know how that ends. AI can change that,” Curry said.

With every security company touting products with artificial intelligence, separating the hype from the substance can prove challenging if not impossible. But, said Curry, when done correctly, artificial intelligence can help security.

“There are clearly applications for AI but it is a very difficult thing to get right because the bad guys are going to find ways around whatever security you build. This is the only part of IT that has extremely intelligent opponents,” he said.  

Part of the definition of security means allowing the right people to access the right data and preventing the wrong people from accessing it, a process that artificial intelligence can help with, he said. With artificial intelligence, strong authentication to prove who you really are doesn’t have to involve tokens, a key fob with changing passwords or texting a code to a phone, he added.
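As a rough illustration of the kind of AI-assisted authentication Curry alludes to, the sketch below scores a login attempt from contextual signals (device, location, time of day, typing cadence) instead of relying on a token or texted code. The signal names, weights and thresholds are illustrative assumptions, not a description of Cybereason’s implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: risk-based authentication that scores a login from
# behavioral context instead of a hardware token or texted code.
# All signals, weights and thresholds below are illustrative assumptions.

@dataclass
class UserProfile:
    known_devices: set
    usual_countries: set
    usual_hours: range            # e.g. range(7, 20) for a typical workday
    avg_typing_speed_wpm: float

@dataclass
class LoginAttempt:
    device_id: str
    country: str
    hour: int                     # local hour of the attempt, 0-23
    typing_speed_wpm: float

def risk_score(attempt: LoginAttempt, profile: UserProfile) -> float:
    """Return a 0.0-1.0 score; higher means less likely to be the real user."""
    score = 0.0
    if attempt.device_id not in profile.known_devices:
        score += 0.4              # unfamiliar device
    if attempt.country not in profile.usual_countries:
        score += 0.3              # unusual location
    if attempt.hour not in profile.usual_hours:
        score += 0.1              # odd time of day
    if abs(attempt.typing_speed_wpm - profile.avg_typing_speed_wpm) > 25:
        score += 0.2              # typing cadence far from the user's baseline
    return min(score, 1.0)

def decide(attempt: LoginAttempt, profile: UserProfile) -> str:
    score = risk_score(attempt, profile)
    if score < 0.3:
        return "allow"            # looks like the real user
    if score < 0.7:
        return "step-up"          # ask for an extra factor
    return "deny"

if __name__ == "__main__":
    profile = UserProfile(
        known_devices={"laptop-123"},
        usual_countries={"US"},
        usual_hours=range(7, 20),
        avg_typing_speed_wpm=68.0,
    )
    attempt = LoginAttempt(device_id="phone-999", country="US", hour=23,
                           typing_speed_wpm=62.0)
    print(decide(attempt, profile))   # "step-up": new device at an odd hour
```

The point of the sketch is that the decision comes from learned context about the user rather than from something the user has to carry or type in.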

Device authorization is another area that can benefit from artificial intelligence.

“Think about how complex the communications are in the world with all these devices and people. You shouldn’t have to write a policy for how each one of these things is going to behave,” Curry said.
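One way to read that idea, sketched below under assumed data formats, is to learn each device’s normal communication pattern from observed traffic and flag deviations, rather than hand-writing a policy for every device. The flow-record fields and the “never seen before” rule are simplifying assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical sketch: instead of writing a static policy per device, learn
# each device's baseline of (destination port, protocol) pairs from observed
# traffic, then flag connections that fall outside that baseline.
# The record format and the deviation rule are illustrative assumptions.

def learn_baselines(flow_records):
    """flow_records: iterable of dicts like
    {"device": "thermostat-7", "dst_port": 443, "proto": "tcp"}."""
    baselines = defaultdict(set)
    for rec in flow_records:
        baselines[rec["device"]].add((rec["dst_port"], rec["proto"]))
    return baselines

def is_anomalous(record, baselines):
    """True if the device talks on a port/protocol it has never used before."""
    seen = baselines.get(record["device"], set())
    return (record["dst_port"], record["proto"]) not in seen

if __name__ == "__main__":
    history = [
        {"device": "thermostat-7", "dst_port": 443, "proto": "tcp"},
        {"device": "thermostat-7", "dst_port": 123, "proto": "udp"},   # NTP
        {"device": "camera-2", "dst_port": 443, "proto": "tcp"},
    ]
    baselines = learn_baselines(history)

    # A thermostat suddenly opening an SMB connection stands out.
    new_flow = {"device": "thermostat-7", "dst_port": 445, "proto": "tcp"}
    print(is_anomalous(new_flow, baselines))   # True
```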

Artificial intelligence can block malicious activity, such as malware execution, before it occurs, leading to fewer incidents like the recent WannaCry ransomware infection that spread around the world.
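A common shape for that capability, offered here as a hedged reading of the point rather than a description of any specific product, is a classifier that scores a file’s static features before it is allowed to run. The toy model below trains on a handful of synthetic feature vectors purely to show the mechanics; real pre-execution models rely on far richer features and training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch of pre-execution blocking: score static features of a
# binary with a trained classifier and refuse to run it above a threshold.
# The features, training data and threshold are synthetic and purely
# illustrative; this is not a real malware model.

# Toy feature vectors: [file_entropy, imports_count, is_packed, writes_to_system_dir]
X_train = np.array([
    [4.1, 120, 0, 0],   # benign-looking samples
    [4.8,  95, 0, 0],
    [5.0, 150, 0, 1],
    [7.6,  12, 1, 1],   # malicious-looking samples
    [7.9,   8, 1, 1],
    [7.2,  20, 1, 0],
])
y_train = np.array([0, 0, 0, 1, 1, 1])   # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

BLOCK_THRESHOLD = 0.8   # assumed cutoff

def allow_execution(features):
    """Return False (block) if the model rates the file as likely malicious."""
    prob_malicious = model.predict_proba([features])[0][1]
    return prob_malicious < BLOCK_THRESHOLD

# A high-entropy, packed binary with few imports gets blocked before it runs.
print(allow_execution([7.8, 10, 1, 1]))    # False -> block
print(allow_execution([4.3, 110, 0, 0]))   # True  -> allow
```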

Despite the potential for artificial intelligence to improve security, companies need to understand how the technology helps their business if AI is to prove impactful. Part of the responsibility for building that understanding lies with vendors, Curry said. Vendors that fail to understand their customer’s business and how artificial intelligence fits into it risk losing clients.

If a customer can’t understand how AI works and the vendor has “trouble explaining it beyond bits and bytes in a way that would make sense to a business stakeholder” then failure is inevitable, he said.

For AI to prove successful in aiding information security, vendors have to ask whether the technology will confuse the people who are going to use it. “Hopefully, the answer is no. If not, then it’s just not going to take off,” Curry said.

About the Author

Fred O'Connor

Fred is a Senior Content Writer at Cybereason who writes a variety of content including blogs, case studies, ebooks and white papers to help position Cybereason as the market leader in endpoint security products.