Ultimately, AI will protect the enterprise, but it's up to the cybersecurity community to protect "good" AI in order to get there, RSA's Rohit Ghai says.


Artificial intelligence (AI) tools like ChatGPT can pass the bar exam and ace an Advanced Placement biology course, and in the hands of threat actors they can even create polymorphic malware. It's up to the cybersecurity community to put AI to work to combat that misuse.

RSA CEO Rohit Ghai used the opening keynote of this year's RSAC in San Francisco, Calif., to call on the cybersecurity community to use AI as a tool on the side of "good." First, that means putting it to work on solving cybersecurity's "identity crisis," he said.

To demonstrate, Ghai called up a ChatGPT avatar on screen, one he dubbed "GoodGPT."

"Calling it 'good' is somehow personally comforting to me," he added. He then asked it basic cybersecurity questions.

While "GoodGPT" spat out a sequence of words culled from a veritable ocean of available cybersecurity data, he went on to explain there are critical cybersecurity applications for AI far beyond simple language learning, and it starts with identity management.

"Without AI, zero trust has zero chance," Ghai said. "Identity is the most attacked part of the attack surface."

It's no big secret that the security operations center (SOC) is overwhelmed. In fact, Ghai said the industry average timeframe to identify and remediate an attack is about 277 days. But with AI, it's possible to manage access at the most granular level, in real time, and down to individual pieces of data, creating a framework that is truly based on the principle of least privilege.
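Ghai did not describe any specific implementation, but the idea of real-time, data-level, least-privilege decisions can be illustrated with a minimal sketch. Everything below is hypothetical: the `AccessRequest` structure, the `risk_score` placeholder (standing in for an AI model scoring each request), and the 0.5 threshold are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    resource: str   # data-level object, e.g., a single record or field
    action: str     # "read", "write", ...
    context: dict   # device posture, location, time of day, etc.

def risk_score(request: AccessRequest) -> float:
    """Hypothetical stand-in for an AI model that scores each request
    from 0.0 (benign) to 1.0 (high risk) based on identity and behavior."""
    score = 0.0
    if request.context.get("new_device"):
        score += 0.4
    if request.context.get("unusual_location"):
        score += 0.4
    if request.action == "write":
        score += 0.1
    return min(score, 1.0)

def authorize(request: AccessRequest, entitlements: set[tuple[str, str]]) -> bool:
    """Least privilege: deny unless explicitly entitled AND per-request risk is low."""
    if (request.resource, request.action) not in entitlements:
        return False                  # no standing grant -> no access
    return risk_score(request) < 0.5  # real-time risk gate on every request

# Example: a user entitled only to read one specific record
entitlements = {("customers/record-42", "read")}
req = AccessRequest("alice", "customers/record-42", "read",
                    {"new_device": False, "unusual_location": True})
print(authorize(req, entitlements))  # True: entitled, and risk 0.4 is under the gate
```

The point of the sketch is that the entitlement check enforces least privilege while the per-request score lets a model revoke access in real time as context changes, rather than relying on a one-time login decision.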

"We need solutions that ensure identity throughout the user lifecycle," he added.

At this year's RSAC, Ghai said, there are at least 10 vendors selling AI-powered cybersecurity solutions positioned as tools that will help human cybersecurity professionals. Ghai characterized the current pitch as AI-as-a-copilot. But that framing belies the reality, he warned.

"The copilot description sugarcoats a truth," Ghai said. "Over time many jobs will disappear."

The role of humans, he explained, will evolve into creating algorithms to ask important questions, along with supervising AI's activities and handling exceptions.

"It's humans who will ask those questions which have never been asked before," he said.

Ultimately, humans and enterprises will have to rely on AI to protect them.

"And the cybersecurity community can protect 'good' AI," Ghai added.
