Industry Insights with AJ Shipley, Vice President, Product - Threat Detection & Response

AI in XDR: When Does It Make Sense?

Cisco's AJ Shipley on When Generative AI Is Useful and When It's Dangerous

Everywhere you turn these days, you run into somebody or some company talking about large language models or generative AI. ChatGPT set the world on fire six months ago, and since then a slew of companies have released features or products built on or around generative AI - some of them completely legitimate, and some of them little more than snake oil.


It's no different in the cybersecurity space. Adversaries are using AI to write malware or phishing emails, and companies are rushing to deliver AI-based assistive technology for every use case imaginable.

Generative AI is really useful for summarizing and explaining things based on a set of inputs. For example, if I asked ChatGPT to write five paragraphs explaining the War of 1812 in iambic pentameter, it would have no problem doing that.

If we focus on security operations center use cases, there are areas where generative AI makes a ton of sense. If I asked it to summarize a security incident for me in three paragraphs based on a set of observables, TTPs and timestamps, it would have no problem doing that either. And even if it got the summarization a little bit wrong, that would probably be OK, because the intention of that use case is to rapidly explain what happened to a CISO or board member.

Getting "what happened" 1% wrong is probably OK. It's probably better than the SOC analyst could do in the heat of the moment with a CISO breathing down their neck while they are responding to an incident.

But there are other areas where generative AI is downright dangerous. That's because, most of the time, generative AI gets things really, really right. When it gets things wrong, though, it not only gets them really, really wrong, it wraps that 1% of wrongness in 99% of rightness, making the error almost impossible to identify.

In security, and in the SOC in particular, there are times when it is OK to be a little bit wrong, but there are also times when being even slightly wrong can have disastrous consequences.

So, if you ask ChatGPT to resolve an incident for you and give it free rein to automatically update policies, access controls, or email inboxes, getting that even a little bit wrong can be a huge problem. If that happens, that CISO breathing down your neck might be the last thing you'll feel just before you’re looking for another job.

Before you buy into the AI hype being thrown every which way, ask your vendor a few pointed questions, such as, "How exactly are you using AI?" and "What data sets are you training your AI on?" Most importantly, in the case of an incident, breach or SOC workflow, ask, "Is my incident information now in the public domain because of your use of AI in the XDR solution I bought from you?"

Does Cisco use AI? Absolutely. In June, we announced our plans to bring AI into the SOC to augment security analysts with the context to make the right decisions at the right time. Do we think AI makes sense everywhere for everything? Absolutely not. This public service announcement has been brought to you courtesy of Cisco XDR.



About the Author

AJ Shipley, Vice President, Product - Threat Detection & Response

As Vice President, Product, AJ is responsible for Cisco's Threat Detection & Response portfolio, which includes XDR, EDR, NDR, risk-based vulnerability management, malware protection, email security and Talos Threat Intelligence. Prior to Cisco, he was vice president of product management at Palo Alto Networks and has worked for NetApp, Wind River and Raytheon. He holds undergraduate and graduate degrees in electrical engineering and computer science and an MBA from the University of North Carolina.



