Running a custom-tuned model in a private instance allows for better security and control. Another way to have guardrails in place is to use APIs instead of letting analysts converse directly with the models. “We chose not to make them interactive, but to control what to ask the model and then provide the answer to the user,” Foster says. “That’s the safe way to do it.”
It’s also more convenient: the system can queue up answers and have them ready before the analyst even knows they want them, saving the user the trouble of cutting and pasting all the required information and coming up with a prompt. Eventually, analysts will be able to ask follow-up questions via an interactive mode, but that capability isn’t there yet.
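As a minimal sketch of this non-interactive pattern, assuming an OpenAI-style chat-completion API: the application, not the analyst, decides what the model is asked, slots alert fields into a fixed template, and queues the finished answer. The function name, model name, and prompt wording are illustrative, not the vendor’s actual code.

```python
# Sketch: the app controls the prompt; the analyst never types free text.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = (
    "Summarize this security alert for a SOC analyst. "
    "List likely root causes and recommended next steps.\n\n"
    "Alert type: {alert_type}\nSource host: {host}\nDetails: {details}"
)

def summarize_alert(alert: dict) -> str:
    """Build the prompt server-side from structured alert fields."""
    prompt = PROMPT_TEMPLATE.format(**alert)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; a custom-tuned private model in practice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Summaries can be generated as alerts arrive and queued, so the answer
# is already waiting when the analyst opens the ticket.
```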
In the future, Foster says, security analysts will probably be able to talk to the GenAI, the way Tony Stark talks to Jarvis in the Iron Man movies. In addition, Foster expects that the GenAI will be able to take actions based on its recommendations by the end of this year. “Say, for example, ‘We have 10 routers with default passwords — would you like me to remediate that?’” This level of capability will make risk management even more important.
He doesn’t think security analysts will eventually be phased out. “There’s still a human element in remediation and forensics. But I do think GenAI, combined with data science, will phase out tier-one analysts and maybe even tier-two analysts at some point. That’s both a blessing and a curse. A blessing because we’re short on security analysts worldwide. The curse is that it’s taking over knowledge jobs.” People will just have to adapt, Foster adds. “You won’t be replaced by AI, but you’ll be replaced by someone using AI.”
Analysts use GenAI to write scripts and summaries
Netskope has a global SOC that operates around the clock to monitor its internal assets and respond to security alerts. The company first tried using ChatGPT to find information on new threats but soon learned that ChatGPT’s information was out of date.
A more immediate use case was to ask things like: Write an access control entry for XYZ firewall. “This kind of query requires general knowledge and was within ChatGPT’s capabilities in April or May of 2023,” says Netskope deputy CISO James Robinson. Analysts used the public version of ChatGPT for these queries. “But we put guidelines in place. We tell folks, ‘Don’t take any sensitive information and put it into ChatGPT.’”
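One way a team could back such a guideline with tooling is a pre-submission screen that blocks queries containing obviously sensitive strings. The patterns below are illustrative assumptions, not Netskope’s actual rules; a real deployment would more likely lean on a DLP tool or proxy.

```python
# Rough sketch of a guardrail that screens queries before they reach a
# public model. Patterns are hypothetical examples, not production rules.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),            # IPv4 addresses
    re.compile(r"\b[\w.-]+\.corp\.example\.com\b", re.I),  # internal hostnames (placeholder domain)
    re.compile(r"(?i)api[_-]?key|password|secret"),        # credential keywords
]

def safe_to_send(query: str) -> bool:
    """Return False if the query appears to contain sensitive data."""
    return not any(p.search(query) for p in SENSITIVE_PATTERNS)

query = "Write an access control entry for an XYZ firewall blocking outbound SMB"
print("OK to submit" if safe_to_send(query) else "Blocked: strip sensitive details first")
```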
As the technology evolved over the course of the year, more secure options became available, including private instances and API access. “And we’ve done more engineering to take advantage of that,” says Robinson. “We felt better about the protections that existed with APIs.”
A later use case was assembling background information. “People are rotating into working on cyber threat intelligence and rotating out and need to be able to pick things up quickly,” he says. “For example, I can ask things like, ‘Have things changed with this threat actor?’” Copilot turned out to be particularly good at providing up-to-date information about threats, Robinson says.
When newly hired analysts can create threat summaries faster, they can dedicate more time to better understanding the issues. “It’s like having an assistant when moving into a new city or home, helping you discover and understand your surroundings,” Robinson says. “Only, in this case, the ‘home’ is a SOC position at a new company.”
And for SOC analysts who are already in their roles, generative AI can serve as a force multiplier, he says. “These advantages will likely evolve into the industry seeing automated analysts and even into an engineering role that can build custom rules and conduct detection engineering, including integrating with other systems.”
GenAI helps review compliance policies
Insight is a 14,000-person solutions integrator based in Arizona that uses GenAI in its own SOC and advises enterprises on how to use it in theirs. One early use case is to review compliance policies and make recommendations, says Carm Taglienti, Insight’s chief data officer and data and AI portfolio director. For example, he says, someone could ask, “Read all my policies and tell me all the things I should be doing based on the regulatory frameworks out there and tell me how far my policies are from adhering to those recommendations. Is our policy in line with the NIST framework? What do we need to do to tighten it?”
Insight uses OpenAI models running in a private Microsoft Azure instance, combined with a data store it can access via RAG — retrieval-augmented generation. “The knowledge base is our own internal documents plus any documents we can retrieve from NIST or ISO or any other popular groups or consortiums,” he says. “If you provide the correct context and you ask the right type of questions, then it can be very effective.”
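A minimal RAG sketch along those lines: embed internal policy text plus framework documents, retrieve the passages closest to a question, and let the model answer from that context. The document snippets, model names, and question are placeholders, not Insight’s actual setup.

```python
# Minimal RAG sketch: embed a corpus, retrieve top-k passages by cosine
# similarity, and answer a compliance question using only that context.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [
    "Internal policy: passwords rotate every 90 days ...",
    "NIST SP 800-53 IA-5: manage system authenticators ...",
    "Internal policy: vendor access reviews happen annually ...",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(corpus)

def answer(question: str, k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity against every document, then take the top-k as context.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(corpus[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for the Azure-hosted model
        messages=[{
            "role": "user",
            "content": f"Using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("Is our password policy in line with the NIST framework?"))
```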
Another possible use case is to use GenAI to create standard operating procedures for particular vulnerabilities that are in line with specific policies, based on resources such as the MITRE database. “But we’re in the early days right now,” Taglienti says.
GenAI is also not good at workflow yet, but it’s coming, he says. “Agent-based resolution is just around the corner.” Insight is already doing some experimentation with agents, he adds. “If you detect a particular type of incident, you can use agent-based AI to remediate it, shut down the server, close the port, quarantine the application — but I don’t think we’re that mature yet.”
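Reduced to its simplest form, that agent-based pattern might look like a dispatch table mapping incident types to remediation playbooks, gated behind human approval since, as Taglienti notes, the industry isn’t mature enough for fully autonomous action. The incident types and handlers below are hypothetical.

```python
# Sketch of agent-style remediation dispatch with a human-approval gate.
from typing import Callable

def shut_down_server(target: str) -> str:
    return f"server {target} shut down"         # stub: would call an infra API

def close_port(target: str) -> str:
    return f"port closed on {target}"           # stub: would push a firewall rule

def quarantine_app(target: str) -> str:
    return f"application {target} quarantined"  # stub: would isolate the workload

PLAYBOOKS: dict[str, Callable[[str], str]] = {
    "compromised_host": shut_down_server,
    "open_port_scan": close_port,
    "malicious_process": quarantine_app,
}

def remediate(incident_type: str, target: str, approved: bool) -> str:
    action = PLAYBOOKS.get(incident_type)
    if action is None:
        return "no playbook; escalate to an analyst"
    if not approved:
        return f"proposed: {action.__name__}({target!r}), awaiting human approval"
    return action(target)

print(remediate("open_port_scan", "10.0.0.12", approved=False))
```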
Future use cases for GenAI in security operations centers
The next step is for GenAI to go beyond summarizing information and providing advice to actually going out and doing things. Secureworks already has plugins that allow useful data to be fed to the AI system. But, at a recent hackathon, the company also tested out plugging the GenAI into its orchestration engine. “It reasons what steps it should take,” says Falkenhagen. “One of those could be, say, blocking a user and forcing a login. It could figure out which playbook to use, then call the API to execute that action without any human intervention.”
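One common way to wire that up, sketched here with an OpenAI-style tool-calling API, is to describe the playbooks as tools, let the model pick one for a given alert, and hand the model’s arguments to the orchestration engine. The tool schema and playbook names are stand-ins, not Secureworks’ actual integration.

```python
# Sketch: the model selects a playbook via tool calling; the orchestrator
# would then execute it. Playbook names and schema are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "run_playbook",
        "description": "Execute a SOC playbook against a user or host",
        "parameters": {
            "type": "object",
            "properties": {
                "playbook": {"type": "string",
                             "enum": ["block_user_force_login", "isolate_host"]},
                "target": {"type": "string"},
            },
            "required": ["playbook", "target"],
        },
    },
}]

alert = "Impossible-travel logins for user jdoe from two countries in 10 minutes"
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": f"Choose a playbook for this alert: {alert}"}],
    tools=tools,
)

# Assumes the model responds with a tool call; production code would handle
# the no-call case and log the decision for audit.
if resp.choices[0].message.tool_calls:
    call = resp.choices[0].message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(f"orchestrator would run {args['playbook']} on {args['target']}")
```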
So, is the day coming when human security analysts are obsolete? Falkenhagen doesn’t think so. “What I see happening is that they’ll work on higher-value activities,” he says. “Level one triage is the worst punishment for anybody. It’s just grunt work. You’re dealing with so many alerts and so many false positives. By reducing that workload, analysts can shift to doing investigations, doing root cause analysis, doing threat hunting, and having a bigger impact.”
Falkenhagen doesn’t expect to see layoffs due to increased use of GenAI. “There is such a cybersecurity skill shortage out there today that companies struggle to hire and retain talent,” he says. “I see this as a way to put a dent in that problem. Otherwise, I don’t see how we climb out of the gap that exists. There just aren’t enough people.”
GenAI is not a magic bullet for SOCs
Recent academic studies are showing a positive impact on the productivity of entry-level analysts, says Forrester analyst JP Gownder. But there’s a caveat. “The studies also show that if you ask the AI about something beyond the frontier of its capabilities, performance can start to degrade,” he says. “In a security environment, you have a high bar for accuracy. Generative AI can generate magical results but also mayhem. It’s built into the nature of large language models.”
Security operations centers will need strict vetting requirements and will have to put these solutions through their paces before deploying them widely. “And people need to have the judgment to use these tools judiciously and not simply accept the answers that they’re getting,” he says.
In 2024, Gownder expects many companies will underinvest in this training aspect of generative AI. “They think that one hour in a classroom is going to get people up to speed. But there are skills that can only be cultivated over a period of time.”