Snyk’s AI Code Security Report Reveals Software Developers’ False Sense of Security

By Chris Hughes | January 12, 2024 | Updated: January 12, 2024 | 5 Mins Read

Organizations continue to be excited about artificial intelligence (AI) in software. AI has the potential to accelerate software development by allowing developers to write code and ship features faster, as well as to better meet organizational deadlines and goals. While some early co-pilot and AI-powered code-writing tools show promise, the Snyk “AI Code Security Report” shows us that this powerful capability isn’t without risk.

False Sense of Security

One takeaway from the report is that developers have a false sense of security in AI-generated code. Snyk’s report found that code generation tools routinely recommend vulnerable open-source libraries, yet over 75% of respondents claimed that AI code is more secure than human code. The report did, however, acknowledge that, despite this sense of confidence, over 56% of survey respondents admitted that AI-generated code sometimes or frequently introduces security issues.

Snyk points out that AI-generated code therefore requires verification and auditing: over-relying on it without proper security activities and tools, such as software composition analysis (SCA), risks introducing vulnerabilities into production systems.
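Conceptually, an SCA check compares each pinned dependency against known advisories before the code is merged. The sketch below is a minimal, hypothetical illustration: the package names and advisory data are invented, and a real tool such as Snyk would consult a live vulnerability feed rather than a hard-coded table.

```python
# Minimal SCA-style check: parse dependency pins and flag any that
# match a known-vulnerable version. Advisory data here is hypothetical,
# for illustration only; a real tool pulls from a vulnerability feed.

# Hypothetical advisories: package name -> set of vulnerable versions
KNOWN_VULNERABLE = {
    "leftpadx": {"1.0.0", "1.0.1"},   # invented package/versions
    "fastjsonlib": {"2.3.0"},
}

def parse_requirements(text: str) -> list[tuple[str, str]]:
    """Parse simple 'name==version' pins, ignoring comments and blanks."""
    pins = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
    return pins

def audit(requirements_text: str) -> list[str]:
    """Return human-readable findings for vulnerable pins."""
    findings = []
    for name, version in parse_requirements(requirements_text):
        if version in KNOWN_VULNERABLE.get(name, set()):
            findings.append(f"{name}=={version} has a known advisory")
    return findings

if __name__ == "__main__":
    sample = """
    requests==2.31.0
    leftpadx==1.0.1   # AI-suggested dependency
    """
    for finding in audit(sample):
        print(finding)
```

Running a check like this before accepting an AI-suggested dependency is exactly the verification step the report finds most developers skipping.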

Security Policy Bypass

Perhaps most concerning, Snyk’s survey found that nearly 80% of developers and software practitioners admitted to bypassing security policies, and only 10% scan most of their AI-generated code. This means that even though security leaders such as chief information security officers (CISOs) are implementing processes to let organizations use AI development tools securely, developers are simply ignoring or sidestepping those processes, inevitably introducing vulnerabilities and risk.
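Policy enforcement of this kind can be automated rather than left to developer discretion. Below is a hypothetical sketch of a pre-merge check that rejects changes whose files lack a recorded scan; the file names and the scan-ledger shape are invented for illustration, not drawn from any particular CI system.

```python
# Hypothetical pre-merge policy gate: every changed file must appear
# in a ledger of completed security scans, otherwise the merge is
# rejected, removing the option of quietly sidestepping the policy.

def unscanned_files(changed: set[str], scanned: set[str]) -> set[str]:
    """Files in the change set with no recorded scan."""
    return changed - scanned

def enforce_policy(changed: set[str], scanned: set[str]) -> bool:
    """Return True (allow merge) only if every changed file was scanned."""
    missing = unscanned_files(changed, scanned)
    for path in sorted(missing):
        print(f"POLICY: {path} has not been scanned")
    return not missing

if __name__ == "__main__":
    changed = {"app/api.py", "app/ai_generated_helper.py"}
    scanned = {"app/api.py"}
    print("merge allowed:", enforce_policy(changed, scanned))
```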


Open Source Software Supply Chain Security

Software supply chain security continues to be a pressing industry-wide issue, from cybersecurity executive orders (EO) to private sector efforts from leading organizations such as The Linux Foundation and OpenSSF. Software supply chain attacks continue to rise as attackers realize the high return on investment (ROI) of compromising popular open source software (OSS) projects and components, with massive cascading downstream impacts.

Despite this industry-wide recognition, Snyk’s survey found that fewer than 25% of developers were using SCA tooling to identify vulnerabilities in AI-generated code suggestions before using them. The industry is thus accelerating its use of AI-generated open-source code suggestions without proper security measures, leaving organizations ripe for software supply chain attacks.

Pointing out a unique aspect of how these AI tools work, the Snyk report emphasized that, due to reinforcement learning, AI tools become more likely to repeat code suggestions that developers have accepted, creating a feedback loop that reinforces vulnerable patterns.

Risks Are Known, but Ignored

Notably, the survey found that developers recognize the risks of AI but turn a blind eye to them because of the benefits of accelerated development and delivery: the age-old problem of trading security for goals such as speed to market and delivery timelines.

Survey respondents pointed to the pressure of keeping pace with peers whose use of AI code tools yields higher code velocity, forcing them to try to keep up, even as they said they are very worried about the security risks and over-reliance that AI code generation tools can create.

The report also cited an interesting case of cognitive dissonance: developers believe that because their peers and others in the industry are using AI coding tools, the tools must be safe, despite findings to the contrary.

However, developers did raise concerns about the potential for overreliance on AI coding tools. Some worried about losing the ability to write their own code, as well as becoming less able to recognize good solutions to development problems after growing comfortable relying on the AI tools instead of their own skills and critical thinking.

Implications for Application Security (AppSec)

Lastly, the report discussed some of the implications for AppSec. Because AI coding tools can accelerate development timelines and code velocity, they inevitably put further strain on AppSec and security professionals trying to keep up with the pace of their developer peers. Over half of the teams surveyed said they were experiencing additional pressure.

This underscores the necessity for AppSec and security practitioners to explore AI code security tools, as manual intervention proves impractical at scale. Relying on automated tools is imperative, all while striving to avoid becoming a bottleneck or friction point for their development peers.
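One common pattern for keeping automated checks from becoming a bottleneck is severity-based gating: the pipeline fails only when findings cross an agreed threshold, while lower-severity issues are logged for later triage. A minimal sketch follows, assuming a scanner that emits findings with a severity field; the data shapes here are hypothetical, not Snyk’s actual output format.

```python
# Hypothetical severity gate for a CI pipeline: block the build only
# when scanner findings meet or exceed a configured threshold, so the
# security step doesn't stall every merge over low-severity noise.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def should_block(findings: list[dict], threshold: str = "high") -> bool:
    """True if any finding's severity meets or exceeds the threshold."""
    limit = SEVERITY_ORDER[threshold]
    return any(SEVERITY_ORDER[f["severity"]] >= limit for f in findings)

def summarize(findings: list[dict]) -> dict:
    """Count findings per severity level for a triage report."""
    counts = {level: 0 for level in SEVERITY_ORDER}
    for f in findings:
        counts[f["severity"]] += 1
    return counts

if __name__ == "__main__":
    findings = [
        {"id": "VULN-1", "severity": "low"},
        {"id": "VULN-2", "severity": "critical"},
    ]
    print("block build:", should_block(findings))
    print("summary:", summarize(findings))
```

The threshold becomes a negotiated contract between security and development teams, which is one way automation can scale without becoming the friction point the report warns about.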

Final Thoughts

It’s clear that AI coding tools are here to stay and will likely grow in use across organizations. Developers are looking to meet project and product deadlines, ship feature releases, and keep up with peers in their specific niche who are developing at increased velocity thanks to these tools. But while code velocity may increase, so may the vulnerabilities and risks that come with it, as the Snyk report highlights. If the trend continues, the attack surface for exploitation will only expand.



Chris Hughes

CEO and Co-Founder
Aquia

Areas of Expertise
  • Cloud
  • Cybersecurity

Chris Hughes is a Cloud Wars Analyst focusing on the critical intersection of cloud technology and cybersecurity. As co-founder and CEO of Aquia, Chris draws on nearly 20 years of IT and cybersecurity experience across both public and private sectors, including service with the U.S. Air Force and leadership roles within FedRAMP. In addition to his work in the field, Chris is an adjunct professor in cybersecurity and actively contributes to industry groups like the Cloud Security Alliance. His expertise and certifications in cloud security for AWS and Azure help organizations navigate secure cloud migrations and transformations.

