Cybersecurity has largely garnered attention for all the wrong reasons over the past 10 years. From Yahoo to Target to Equifax, it’s the lack of adequate security that has made headlines and sparked conversations.

For many, the advancement of Artificial Intelligence (AI) is the great equalizer against cybercriminals. Those who advocate its use in cybersecurity believe that, among other things, AI’s ability to use machine learning to rapidly process information and predict future attacks with sophisticated algorithms will be the industry’s white knight.

Security Struggles of AI

It’s not nearly that simple, however. AI isn’t just being used by the forces of good. Cybercriminals and hackers have shown a propensity for exploiting existing technology to overwhelm powerful security in the past. Is there any reason to think they won’t do the same with AI?

Speaking at a conference a week after the Equifax debacle in 2017, Google Director of Information Security and Privacy, Heather Adkins, expressed her belief that people will make a bigger difference than AI in the future of cybersecurity.

“AI is good at spotting anomalous behavior, but it will also spot 99 other things that people need to go in and check out, only to discover it wasn’t an attack,” Adkins said. “The problem in applying AI to security is that machine learning requires feedback to learn what is good and bad … but we’re not sure what good and bad is, especially when malicious programs mask their true nature.”
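Adkins’s false-positive point can be illustrated with a toy detector. A minimal sketch, assuming a naive “large upload = suspicious” rule; the events, sizes, and labels below are invented for illustration, and in practice an analyst would only learn which alerts were benign after investigating each one:

```python
# Each event: (bytes_out, is_actual_attack). The labels exist only so we
# can score the detector afterwards -- the detector itself cannot see them.
events = [
    (500, False), (48_000, False), (700, False), (52_000, False),
    (60_000, True), (45_000, False), (300, False), (55_000, False),
]

THRESHOLD = 40_000  # bytes: naive "large upload = suspicious" rule

# Flag every event above the threshold, then tally how the alerts break down.
flagged = [(size, attack) for size, attack in events if size > THRESHOLD]
true_positives = sum(1 for _, attack in flagged if attack)
false_positives = len(flagged) - true_positives

print(f"flagged {len(flagged)} events: "
      f"{true_positives} real, {false_positives} benign")
# → flagged 5 events: 1 real, 4 benign
```

Four of the five alerts turn out to be routine backups or syncs, each of which a human still has to triage. That lopsided ratio, scaled up to enterprise traffic volumes, is exactly the workload problem Adkins describes.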

The debate rages between two camps on what the definition of cybersecurity should actually entail. Is it building more firewalls and layers of defense to keep unwelcome guests out altogether? Or is it focused on following established protocol when something unknown is found in the system and using predictive logic to formulate the right response?

Robots Fighting Robots?

In the same time frame that Adkins was downplaying AI’s abilities in cybersecurity, Oracle CTO Larry Ellison was hailing the innovation with the same gusto as a battlefield commander with a weapon of mass destruction.

“It can’t be our people against their computers—we’re gonna lose that war,” Ellison told attendees at Oracle OpenWorld. “It’s gotta be our computers versus their computers.”

If that sounds a bit too close for comfort to the plots of The Terminator or the re-envisioned version of Battlestar Galactica, you’re not alone. Plenty of thought leaders aren’t too confident that the AI security robot should be switched on and sent to work all alone.

AI and machine learning allow computers to parse data exponentially faster than the human brain can, allowing them to race ahead of us in their analysis and decision-making. That does not mean they are flawless. Just like a human security officer can forget a patch, a computer can experience a bug, get hacked by a smart criminal, or fall victim to an exploit that it’s never seen before.

The Future Role of AI in Cybersecurity

Must AI have a role in the future of cybersecurity? The answer is yes; the open question is how big that role will be. The human element needs to remain the dominant factor, supported by technology that can minimize human errors, such as forgetting to apply patches. The same technology can expand the oversight humans exercise over the entirety of their security systems. Companies need to move beyond a blanket reliance on IT personnel and pursue security experts versed in the industry they serve, whether that’s healthcare, banking, or retail.

That harmonization matters because cybercriminals will be doing the same thing. Hackers thrive on their ability to break into restricted areas and outthink the latest forms of security. They won’t leave that to the whims of the latest AI package. Finding the balance between humans and AI is the only way forward for cybersecurity.
