The march of generative AI isn’t short on negative consequences, and CISOs are particularly concerned about the downsides of an AI-powered world, according to a study released this week by IBM.
Generative AI is expected to create a wide range of new cyberattacks over the next six to 12 months, IBM said, with sophisticated bad actors using the technology to improve the speed, precision, and scale of their attempted intrusions. Experts believe that the biggest threat is from autonomously generated attacks launched on a large scale, followed closely by AI-powered impersonations of trusted users and automated malware creation.
The IBM report included data from four different surveys related to AI, with 200 US-based business executives polled specifically about cybersecurity. Nearly half of those executives (47%) worry that their companies’ own adoption of generative AI will lead to new security pitfalls, while virtually all say it makes a security breach more likely. This has at least driven investment: cybersecurity budgets devoted to AI have risen by an average of 51% over the past two years, with further growth expected over the next two, according to the report.
The contrast between the headlong rush to adopt generative AI and the strongly held concerns over its security risks may be less an example of cognitive dissonance than some have argued, according to Chris McCurdy, IBM general manager for cybersecurity services.
For one thing, he noted, this isn’t a new pattern — it’s reminiscent of the early days of cloud computing, which saw security concerns hold back adoption to some degree.
“I’d actually argue that there is a distinct difference that is currently getting overlooked when it comes to AI: with the exception perhaps of the internet itself, never before has a technology received this level of attention and scrutiny with regard to security,” McCurdy said.