While more than half of developers acknowledge that generative AI tools commonly produce insecure code, 96% of development teams are using them anyway, and more than half use them all the time, according to a report released Tuesday by Snyk, maker of a developer-first security platform.
The report, based on a survey of 537 software engineering and security team members and leaders, also revealed that 79.9% of respondents said developers bypass security policies to use AI.
“I knew developers were avoiding policy to make use of generative AI tooling, but what was really surprising was to see that 80% of respondents bypass the security policies of their organization to use AI either all of the time, most of the time or some of the time,” said Snyk Principal Developer Advocate Simon Maple. “It was surprising to me to see that it was that high.”
Without testing, the risk of AI introducing vulnerabilities into production increases
Skirting security policies creates tremendous risk, the report noted, because even as companies quickly adopt AI, they are not automating the security processes needed to protect their code. Only 9.7% of respondents said their team automates 75% or more of security scans, leaving a significant security gap.
“Generative AI is an accelerator,” Maple said. “It can increase the speed at which we write code and deliver that code into production. If we’re not testing, the risk of getting vulnerabilities into production increases.”
“Fortunately, we found that one in five survey respondents increased their number of security scans as a direct result of AI tooling,” he added. “That number is still too small, but organizations see that they need to increase the number of security scans based on the use of AI tooling.”