GenAI platforms are fueling a significant rise in cyberattacks and security risks, spawning a new crop of cybersecurity startups working specifically to address them.
“Powerful GenAI capabilities are now accessible to a wider audience instead of an elite group of AI and deep learning experts and it is important to consider the security implications and take steps to ensure privacy and security of company, partner, and customer data,” said Melinda Marks, senior analyst at ESG. “There are a number of startups addressing this, including Portal26, Prompt Security, CalypsoAI, etc.”
The idea is to help organizations assess which GenAI tools are in use, set policies to limit usage or put guardrails in place for safe usage, and then monitor activity to ensure data is protected, according to Marks.
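To make that assess/policy/monitor loop concrete, here is a minimal Python sketch of the kind of check such a tool might run against outbound traffic. The allowlist, host names, and heuristic are invented for illustration; they do not reflect how Portal26, Prompt Security, CalypsoAI, or any other named vendor actually works.

```python
# Minimal sketch of the "assess, set policy, monitor" loop described above.
# The allowlist and the substring heuristic are hypothetical, not a real product's logic.

APPROVED_GENAI_HOSTS = {"chat.openai.com", "gemini.google.com"}  # hypothetical allowlist

def classify_genai_request(user: str, destination_host: str) -> str:
    """Classify an outbound request against a simple GenAI usage policy."""
    if destination_host in APPROVED_GENAI_HOSTS:
        return "allow"
    # Crude heuristic to surface shadow GenAI usage for review rather than block it outright.
    if any(token in destination_host for token in ("ai", "llm", "gpt")):
        return f"flag: {user} -> {destination_host} is not on the approved GenAI list"
    return "allow"

print(classify_genai_request("alice", "chat.openai.com"))     # allow
print(classify_genai_request("bob", "unvetted-llm.example"))  # flagged for review
```

Flagging unknown GenAI-looking endpoints rather than blocking them reflects the discovery-first approach Marks describes: you cannot set sensible policy until you know what is actually in use.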
GenAI security built on data protection offerings
Almost all enterprise GenAI risks fall into two buckets: data leakage and bias. Tools designed to protect against the former therefore include data loss prevention (DLP) solutions. GenAI-related leakage, however, can involve the compromise of far larger volumes of data, since models are trained on large corpora.
“This does fall into DLP, but usage of GenAI also brings a scalability issue because there can be so much data transferred to and from LLMs between building the models, and then using the data and generating/changing new data in the natural language interactions and prompts,” Marks said. “Organizations need to ensure their sensitive data isn’t shared or used in other models, which is especially important for the regulated industries like healthcare and finance.”
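As a rough illustration of what prompt-level DLP can look like, the sketch below scans an outbound prompt for sensitive patterns before it reaches an external LLM. The patterns and function names are hypothetical and far simpler than what a production DLP engine would use.

```python
import re

# Hypothetical detectors for illustration only; real DLP engines use much richer ones.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Block a prompt before it leaves for an external LLM if it trips a DLP rule."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked by DLP policy: {', '.join(findings)}")
    return prompt

if __name__ == "__main__":
    try:
        guard_prompt("Summarize account 4111 1111 1111 1111 for me")
    except ValueError as err:
        print(err)  # Prompt blocked by DLP policy: credit_card
```

The scalability issue Marks raises is exactly why this check sits on the prompt path: the same data can flow into training sets, prompts, and generated output, so it has to be caught before it leaves the organization.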
Startups like Aim will need to demonstrate better visibility into and control over the security risks of GenAI use, including visibility into data uploads and the ability to identify out-of-policy data transfers, according to Marks.