OpenAI's chatbot promises to revolutionize how security practitioners work.

Matt Georgy, Chief Technology Officer, Redacted

January 27, 2023

5 Min Read
[Image: screen reading ChatGPT. Source: Greg Guy via Alamy Stock Photo]

ChatGPT took the world by storm after OpenAI opened it for testing on Nov. 30, 2022. For an industry calloused by years of largely unsatisfying AI and machine learning "innovations," the reactions have been quite telling. Like many who are excited by its potential, I believe this is finally the moment of clarity for how truly revolutionary AI can be for information security.

It's also quite sobering, as there are already countless examples of how it changes the game for black hats of all stripes. In one of the first proofs-of-concept, NYU professor Brendan Dolan-Gavitt used ChatGPT to exploit a buffer overflow vulnerability. Other examples include writing malware with lightning speed and crafting convincing, grammatically correct phishing emails.

The weaponization of AI within cybersecurity is not new, but what excites me the most about ChatGPT is its potential for closing information security's biggest gap: the lack of sufficient talent, in both breadth and depth of cybersecurity skills (i.e., specializations). To illustrate this further, here are three ways ChatGPT will change infosec in 2023.

Advancing Crowdsourced Threat Intelligence

For quite some time, one of the industry's holy grails has been successfully crowdsourcing threat intelligence. The promise stems from the ability to see what's happening across a wide swath of companies within a single vertical industry. Unfortunately, the greatest impediment has been the lack of trust between organizations to share the intelligence.

This is the problem that the array of ISACs across industries has been trying to solve — with mixed results. Going forward, an information sharing and analysis center (ISAC) could take an iteration of the ChatGPT model with its natural language interface and feed it log data submitted by ISAC constituents, based on implicit trust within the group. The ISAC could then use ChatGPT to correlate network connections, categories of malicious IP addresses and domains, and similar behaviors. The results could produce a set of IDS rules that the ISAC constituents should implement to protect themselves from threats. The ISAC also would gain insight into the overall risk posture of the industry it represents.
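To make the correlation step concrete, here is a minimal sketch of how shared indicators could be distilled into IDS rules. The member names, IP addresses, and rule format are illustrative assumptions, not any particular ISAC's pipeline; the rules follow simple Snort syntax.

```python
from collections import Counter

def correlate_indicators(constituent_logs, min_reports=2):
    """Count how many constituents reported each suspicious IP;
    keep only indicators seen by at least `min_reports` members."""
    counts = Counter()
    for logs in constituent_logs.values():
        for ip in set(logs):  # de-duplicate within one member's logs
            counts[ip] += 1
    return sorted(ip for ip, n in counts.items() if n >= min_reports)

def to_snort_rules(ips, sid_start=1000001):
    """Render the shared indicators as simple Snort drop rules."""
    return [
        f'drop ip {ip} any -> $HOME_NET any '
        f'(msg:"ISAC shared indicator {ip}"; sid:{sid_start + i};)'
        for i, ip in enumerate(ips)
    ]

# Hypothetical submissions from three constituents:
logs = {
    "bank_a": ["203.0.113.7", "198.51.100.2"],
    "bank_b": ["203.0.113.7"],
    "bank_c": ["203.0.113.7", "192.0.2.55"],
}
shared = correlate_indicators(logs)   # only the IP two or more members saw
rules = to_snort_rules(shared)
```

In practice the language model's contribution would be the correlation across messier, unstructured log formats; this sketch only shows the shape of the input and output.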

Doing More With Existing Resources

The uncertain economy is putting pressure on security organizations to implement hiring freezes and squeeze more productivity out of existing staff. ChatGPT can be extremely beneficial here as a force multiplier that enables one analyst to do the job of multiple people.

Generalists and entry-level staff can describe what they are seeing in alerts and detections, and then ask ChatGPT to decipher their observations to jumpstart the triage process. A specific example is de-obfuscating suspected malicious code, a daily task for practitioners that typically takes an hour or more. It can now be performed in seconds.
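As an illustration of the kind of mechanical de-obfuscation an analyst might now delegate, here is a sketch that undoes a common two-layer scheme (base64 wrapping a single-byte XOR). The sample payload string and XOR key are invented for the example.

```python
import base64

def deobfuscate(blob: bytes, xor_key: int) -> str:
    """Undo a common two-layer obfuscation: base64, then single-byte XOR."""
    decoded = base64.b64decode(blob)
    return bytes(b ^ xor_key for b in decoded).decode("utf-8", errors="replace")

# Obfuscate a sample "payload" the same way an attacker might,
# then recover it with the helper above.
plain = "powershell -enc JAB..."
obfuscated = base64.b64encode(bytes(b ^ 0x2A for b in plain.encode()))
recovered = deobfuscate(obfuscated, 0x2A)
```

Real samples layer many more tricks than this, which is exactly why walking through them by hand eats an analyst's hour.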

ChatGPT also has the potential to transform incident response. A team can use the existing model and natural language processing to feed all available data about an incident and describe the rationale for a potential response. ChatGPT could then immediately prove or disprove a theory about a compromise. Today, that involves several days of work by an incident response lead, an engineer, and several analysts to fully resolve an incident. I can foresee a future where the process doesn't need an analyst at all.

Taking the Malware Cat-and-Mouse Game to a New Level

Today, adversaries generate 100 million new malware samples per year. Because each one still requires manual coding, the volume remains finite and manageable for signature detection. With ChatGPT, however, a hacker can say, "Here's what I'm trying to do, and here's the OS I'm trying to do it on," and it can generate hundreds of thousands of iterations of one piece of malware.

This means the detection engines' ML models must be retrained far more often, and against a much larger data set, which makes the job considerably harder. Fortunately, ChatGPT will supercharge the reverse-engineering process and give anti-malware efforts a fighting chance.

For instance, a significant reverse engineering challenge is working with a generic file name, which doesn't provide necessary context about where it was found. This requires much more manual work to identify the system for which it was built. There are minor changes in binary assembly that have marked effects on the end result — e.g., was it written for a 32-bit or 64-bit architecture? Does the system use little-endian or big-endian byte order? The answers determine how the bytes of the machine code and its multi-byte values must be interpreted.
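Some of this context can be recovered directly from well-known file headers. As a small sketch — assuming the sample happens to be an ELF binary — the bitness and endianness live in fixed bytes of the identification header:

```python
def describe_elf(header: bytes):
    """Read bitness and endianness from the first bytes of an ELF file.
    e_ident[4] (EI_CLASS): 1 = 32-bit, 2 = 64-bit.
    e_ident[5] (EI_DATA):  1 = little-endian, 2 = big-endian."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    bits = {1: "32-bit", 2: "64-bit"}[header[4]]
    endian = {1: "little-endian", 2: "big-endian"}[header[5]]
    return bits, endian

# First six bytes of a typical x86-64 Linux binary:
sample = b"\x7fELF\x02\x01"
result = describe_elf(sample)
```

Stripped, packed, or headerless samples offer no such shortcut, and that is where the trial-and-error begins.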

All these efforts require trial and error if you have no context. ChatGPT can run through these iterations at blazing speed and hand reverse engineers the final assembly language to process from there. They can take it further and have ChatGPT tell them what it thinks the application is doing — in natural language. More importantly, ChatGPT could do all of this at scale, analyzing hundreds of thousands of binary samples and providing insights to an analyst.

It also can help fight back against common cat-and-mouse techniques. For example, malware often contains anti-reverse engineering techniques, such as nested loops, to make it much harder for reverse engineers to keep track of what is happening and the end state. ChatGPT can figure that out much faster than humans. It also can analyze the genetic code of the malware and see where there may be code reuse to identify the fingerprint of the author more quickly.
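One crude way to approximate that code-reuse analysis is to compare byte n-grams between samples: heavily shared n-grams suggest reused code and, potentially, a common author. This is an illustrative stand-in for real binary-similarity tooling, with made-up byte sequences:

```python
def ngrams(data: bytes, n: int = 4):
    """Set of byte n-grams — a crude stand-in for extracted code features."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity over byte n-grams: shared / total distinct n-grams.
    A high score between two samples suggests reused code."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

original = b"\x55\x48\x89\xe5\x48\x83\xec\x10\xc9\xc3" * 8  # repeated stub
variant = original + b"\x90\x90\x90\x90"   # same code, padded with NOPs
unrelated = bytes(range(80))               # different byte content entirely
```

Here `similarity(original, variant)` scores high while `similarity(original, unrelated)` scores zero. Production tools use smarter features (disassembled functions, control-flow graphs), but the principle of fingerprinting by shared code is the same.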

Long-Term Implications

Whenever new advances in AI come to the fore, there is the inevitable concern about whether they will replace humans and their jobs. I don't believe ChatGPT will make this happen, but it will make us more powerful consumers of information. The force multiplier effect will be profound at all levels. I can see CISOs feeding it a set of information about their risk registers for it to return policies and procedures, incident response plans, and more — all tailored to their environments.

While ChatGPT is only a research preview, I share the excitement of my industry colleagues about its promise to revolutionize how security practitioners work.

About the Author(s)

Matt Georgy

Chief Technology Officer, Redacted

Matt Georgy’s extensive experience spans the private and public sectors. After 17 years as a commissioned U.S. Air Force officer and a Defense Department Senior Executive, he transitioned to the private sector holding positions as Chief Technical Officer for a venture-backed cybersecurity firm, Senior Technical Director for Symantec Corporation’s Global Security Operations, and an Executive-in-Residence at Littoral Ventures.


Matt began his professional security career as a network security architect at Vision Engineering where he developed security solutions for PLC/SCADA systems. His military and public service include Chief of Intelligence of the 18th Fighter Squadron (USAF), Senior Intelligence Officer to Gen. James R. Clapper, Lead U.S. Government Forensics Analyst in Baghdad, Iraq, and several technical leadership roles at the NSA. He holds several commercial information security certifications, a B.S. from California State Polytechnic University, and an M.S./M.B.A. from the University of Maryland.
