Organizations continue to be excited about artificial intelligence (AI) in software. AI has the potential to accelerate software development by allowing developers to write code and ship features faster, as well as to better meet organizational deadlines and goals. While some early co-pilot and AI-powered code-writing tools show promise, the Snyk “AI Code Security Report” shows us that this powerful capability isn’t without risk.
False Sense of Security
One takeaway from the report is that developers have a false sense of security in AI-generated code. Snyk’s report found that code generation tools routinely recommend vulnerable open-source libraries, yet over 75% of respondents claimed that AI code is more secure than human code. The report did, however, acknowledge that despite this confidence, over 56% of survey respondents admitted that AI-generated code sometimes or frequently introduced security issues.
Snyk points out that AI-generated code therefore requires verification and auditing; over-relying on it without proper security activities and tools, such as software composition analysis (SCA), risks introducing vulnerabilities into production systems.
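To make that concrete, here is a minimal Python sketch (not from the report) of what even lightweight dependency verification can look like: it queries the public OSV vulnerability database for advisories against a package and version an AI assistant might suggest. The package name and version are placeholders, and a real SCA tool such as Snyk does far more than this; the point is simply to check before accepting a suggestion.

```python
# Minimal sketch: check an AI-suggested dependency against the public OSV
# vulnerability database (https://osv.dev) before accepting the suggestion.
# The package name/version below are placeholders; a real SCA tool does
# far more (transitive dependencies, licensing, fix guidance, policy gates).
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return the list of OSV advisories affecting name==version, if any."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
    return result.get("vulns", [])

if __name__ == "__main__":
    # Hypothetical AI suggestion: pin an old library version with known advisories.
    vulns = known_vulnerabilities("requests", "2.19.1")
    for v in vulns:
        print(v["id"], "-", v.get("summary", "(no summary)"))
    if not vulns:
        print("No known advisories for this package/version.")
```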
Security Policy Bypass
Potentially most concerning, Snyk’s survey found that nearly 80% of developers and software practitioners admitted to bypassing security policies, and only 10% scan most of the AI-generated code. This means that even though security leaders such as chief information security officers (CISOs) are implementing processes meant to let organizations use AI tools for software development securely, developers are simply ignoring or sidestepping those processes, inevitably introducing vulnerabilities and risk.
Open Source Software Supply Chain Security
Software supply chain security continues to be a pressing industry-wide issue, from cybersecurity executive orders (EOs) to private-sector efforts from leading organizations such as The Linux Foundation and OpenSSF. Software supply chain attacks continue to rise as attackers realize the high return on investment (ROI) of compromising popular open source software (OSS) projects and components, which can have massive downstream, cascading impacts.
Despite this industry-wide recognition, Snyk’s survey found that fewer than 25% of developers were using SCA tooling to identify vulnerabilities in AI-generated code suggestions before using them. In other words, the industry is accelerating its use of AI-generated open-source code suggestions without proper security measures, leaving organizations ripe for software supply chain attacks.
Pointing out a unique aspect of how these tools work, the Snyk report emphasized that because of reinforcement learning, AI tools are more likely to keep making similar code suggestions as developers accept them, leading to a self-reinforcing loop of vulnerable suggestions.
Risks Are Known, but Ignored
Notably, the survey found that developers recognize the risks of AI but turn a blind eye to them because of the benefits of accelerated development and delivery, the age-old problem of sacrificing security for other goals such as speed to market and delivery timelines.
Survey respondents also cited pressure to keep pace with peers whose use of AI code tools has increased their code velocity, even though they said they are very worried about the security risks and over-reliance that AI code generation tools can create.
The report also cited the challenge of cognitive dissonance: developers believe that because their peers and others in the industry are using AI coding tools, the tools must be safe, despite findings to the contrary.
However, developers did raise concerns about the potential for over-reliance on AI coding tools. Some worried about losing the ability to write their own code, and about becoming less able to recognize good solutions to development problems as they grow comfortable relying on the tools instead of their own skill sets and critical thinking.
Implications for Application Security (AppSec)
Lastly, the report discussed some of the implications for AppSec. Because AI coding tools can accelerate development timelines and code velocity, they inevitably put further strain on AppSec and security professionals trying to keep up with the pace of their developer peers. Over half of the teams surveyed said they were experiencing additional pressure as a result.
This underscores the need for AppSec and security practitioners to explore AI code security tools, since manual review is impractical at scale. Relying on automation is imperative, all while avoiding becoming a bottleneck or friction point for their development peers.
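As a toy illustration of the "automate, don't hand-review" idea, the sketch below uses Python's standard ast module to flag a couple of obviously risky constructs in a generated snippet before it reaches a human reviewer. It is not a substitute for real SAST/SCA tooling or CI policy gates, and the generated snippet shown is hypothetical.

```python
# Toy sketch of an automated check on AI-generated code: parse a snippet with
# Python's ast module and flag a few obviously risky constructs before it
# reaches review. Real AppSec automation (SAST, SCA, CI policy gates) is far
# more thorough; this only illustrates automating checks rather than relying
# on manual reading at scale.
import ast

RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list[str]:
    """Return human-readable warnings for risky calls found in the source."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Direct calls to eval()/exec().
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            warnings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Any call that passes shell=True (e.g. subprocess.run).
        if any(kw.arg == "shell" and isinstance(kw.value, ast.Constant)
               and kw.value.value is True for kw in node.keywords):
            warnings.append(f"line {node.lineno}: shell=True passed to a call")
    return warnings

if __name__ == "__main__":
    # Hypothetical AI-generated snippet a rushed reviewer might wave through.
    generated = (
        "import subprocess\n"
        "def run(cmd):\n"
        "    return subprocess.run(cmd, shell=True)\n"
    )
    for warning in flag_risky_calls(generated):
        print("WARNING:", warning)
```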
Final Thoughts
It’s clear that AI coding tools are here to stay and will likely grow in use across organizations. Developers are looking to meet project and product deadlines, ship feature releases, and keep up with peers in their niche who are developing at increased velocity thanks to these tools. But while code velocity may increase, so can the potential vulnerabilities and risks that come with it, as the Snyk report highlights. If the trend continues, the attack surface for exploitation will only expand.