Cloud Wars
AI and Copilots

AI Accelerates Software Development Cycles But Carries New Cybersecurity Risks

By Chris Hughes | September 3, 2024 | 3 Mins Read

While I’m excited about AI’s potential use cases and value for software development, there are reasons to be concerned. AI assistant tools, such as Microsoft Copilot, have been cited as helping developers produce up to 50% more code than they would without AI assistance. That is a massively appealing benefit in an industry where incentives such as speed to market, new feature development, and market share often reign supreme over considerations such as security.

However, studies are now showing that not only is AI-generated code full of vulnerabilities, but that it also may be less secure than human-written code. A New York University study found that 40% of the programs written with Copilot included at least one vulnerability or common vulnerabilities and exposures (CVE) entry. Couple that with the accelerated pace of development, and it is easy to see how potential vulnerabilities could pile up quickly.

This potential risk shouldn’t be a massive surprise because these tools are trained on large data sets, including coding samples, many of which have inherent flaws and vulnerabilities. As a result, it’s expected that the GenAI-produced code might inherit these vulnerabilities, reflecting the issues present in the training data.
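To make the inheritance problem concrete, here is a minimal, hypothetical sketch of the kind of flaw that appears constantly in public code samples and that an assistant trained on them may reproduce: SQL injection via string interpolation. The function names and schema below are illustrative, not drawn from any cited study.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable pattern (common in training data): user input is
    # interpolated directly into the SQL string, so a crafted value
    # can rewrite the query's logic.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe pattern: a parameterized query; the driver treats the
    # input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # 2 — the injection matches every row
print(len(find_user_safe(conn, payload)))      # 0 — no user is literally named that
```

Both versions look plausible at a glance, which is exactly why a reviewer rushing to keep pace with AI-accelerated output can wave the insecure one through.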

This is concerning because many organizations already have vulnerability backlogs in the hundreds of thousands or millions, so adding exponentially more production code with vulnerabilities could be a death knell for security teams trying to keep up with existing backlogs.

Speed vs. Safety

Developers have been shown to inherently trust GenAI code outputs despite being aware that those outputs may contain vulnerabilities or introduce organizational risks. This is because they are incentivized to produce software quickly, not to move slowly and be diligent.

Some are comfortable proceeding quickly on the assumption that GenAI will also help identify and resolve vulnerabilities and defects in code. While that sounds good in theory, so far the results don’t align with that aspirational goal. A Purdue University study found that ChatGPT was wrong over half the time when answering questions about computer programming. When I consider that most teams are already struggling with poor, low-fidelity findings from security tools, including false positives, adding even more noise doesn’t seem like a great idea — yet that’s exactly where we’re headed.

It is possible and likely that GenAI and copilot tools will improve over time in both their ability to produce more secure code and identify vulnerabilities in code that they write or scan. This, however, will take time, training, patience, and investment.

Conclusion

The tech industry, like most enterprise customers, is already moving ahead with these tools. This means that while the tools are still evolving and have room to improve, countless lines of code and applications with potential vulnerabilities are being produced, embedded into enterprises and products, and exposed to external customers.

It will be interesting to look back in 12 to 24 months on security incidents and vulnerability exploitation to see whether any of those incidents were influenced by AI-generated code. For now, we don’t have that visibility, but we do know that GenAI and copilot coding tools aren’t a panacea: While they may be speeding us up, there are lingering doubts about whether that speed comes at the expense of security.


Chris Hughes

CEO and Co-Founder
Aquia

Areas of Expertise
  • Cloud
  • Cybersecurity

Chris Hughes is a Cloud Wars Analyst focusing on the critical intersection of cloud technology and cybersecurity. As co-founder and CEO of Aquia, Chris draws on nearly 20 years of IT and cybersecurity experience across both public and private sectors, including service with the U.S. Air Force and leadership roles within FedRAMP. In addition to his work in the field, Chris is an adjunct professor in cybersecurity and actively contributes to industry groups like the Cloud Security Alliance. His expertise and certifications in cloud security for AWS and Azure help organizations navigate secure cloud migrations and transformations.

