The importance of effective security when deploying AI tools

Secure Code Warrior

By Matias Madou, Co-founder and CTO, Secure Code Warrior
Tuesday, 09 December, 2025


Artificial intelligence and large language models (LLMs) have quickly become a force multiplier for software developers, accelerating output and reshaping workflows across the sector. Yet industry leaders caution that, despite the substantial gains in speed and productivity, responsibility for security cannot be outsourced to machines.

Developers remain on the hook for producing code that is both robust and trustworthy. Increasingly, security lapses in the software development lifecycle are traced not to the underlying AI systems but to their deployment.

It’s imperative to focus on these potentially disruptive dynamics now, because AI in coding is here to stay. According to the 2025 State of AI Code Quality report from Qodo¹, more than four in five developers use AI coding tools daily or weekly, and 59% run at least three such tools in parallel.

AI’s impact on security, however, has emerged as a primary concern, with even the best LLMs generating incorrect or vulnerable code. This has led academic experts to conclude that the technology cannot yet produce deployment-ready code. Using one AI tool to generate code and another to review it, with minimal human oversight, creates a false sense of security and increases the likelihood of compromised software.

Diminishing human oversight in software development is already showing signs of eroding the rigour traditionally applied at the review stage. Industry observers warn that as accountability becomes less clearly rooted in individual developers or teams, organisations are also less inclined to invest in the long-term policies and practices needed to keep code secure and dependable.

The concern is straightforward: development teams may place undue confidence in AI tools that are not equipped to interpret the nuanced context in which many security vulnerabilities arise. Large language models, for instance, can struggle to understand an application’s authentication or authorisation architecture, increasing the likelihood of missing critical safeguards.
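To illustrate the kind of gap that can slip through, consider the minimal sketch below. The invoice-lookup scenario, names and data are hypothetical and purely illustrative, not drawn from any cited report: an AI-suggested handler returns a record to whoever asks for it, while the corrected version enforces the ownership check the model had no context to infer.

    # Hypothetical sketch: an AI-suggested handler that fetches a record by ID
    # without checking who is asking for it (the authorisation context an LLM
    # can miss), followed by a corrected version.

    from dataclasses import dataclass


    @dataclass
    class Invoice:
        invoice_id: int
        owner_id: int
        amount: float


    # Toy in-memory store standing in for a real database.
    INVOICES = {
        1: Invoice(invoice_id=1, owner_id=100, amount=250.0),
        2: Invoice(invoice_id=2, owner_id=200, amount=990.0),
    }


    def get_invoice_unsafe(invoice_id: int) -> Invoice:
        """Plausible-looking AI output: returns any invoice to any caller."""
        return INVOICES[invoice_id]


    def get_invoice(invoice_id: int, requesting_user_id: int) -> Invoice:
        """Corrected version: enforces ownership before returning the record."""
        invoice = INVOICES[invoice_id]
        if invoice.owner_id != requesting_user_id:
            raise PermissionError("User is not authorised to view this invoice")
        return invoice


    if __name__ == "__main__":
        # User 100 reading user 200's invoice succeeds in the unsafe version...
        print(get_invoice_unsafe(2))
        # ...but is rejected once the authorisation check is in place.
        try:
            get_invoice(2, requesting_user_id=100)
        except PermissionError as exc:
            print(exc)

A code review that understands the application’s access model catches this immediately; a second AI pass, lacking that context, may not.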

If developers become complacent, lulled into assuming the machine will catch every flaw, the risk profile only grows. Security leaders argue that maintaining vigilance and human judgement will be essential as AI systems take on a larger share of the coding workload.

Tackling the ethical and legal implications

Beyond security, team leaders and members must focus more on ethical and even legal considerations. According to the AI Impact Report 2025 from LeadDev², nearly 50% of software engineers are facing legal, compliance and ethical challenges in deploying AI, while 49% are concerned about security.

Copyright issues related to training data sets, for instance, can also have real-life repercussions. It’s possible that an LLM provider will pull from open-source libraries to build these sets. But even if the resulting output isn’t a direct copy from the libraries, it could still be based upon inputs for which permission was never given.

The ethical and legal scenarios can take on a highly perplexing nature. For example, a human engineer can read, learn from and write original code from an open-source library. However, if an LLM does the same thing, it can be accused of engaging in derivative practices.

Best practices for building expert-level awareness

So how do software engineering leaders and their teams cultivate a ‘security first’ culture and a universal awareness of ethical and legal considerations? The following best practices should be considered.

1. Establish internal guidelines for AI ethics/liability protection

Security leaders must establish traceability, visibility and governance over developers’ use of AI coding tools. As part of this, they need to evaluate the tools actually deployed, how they’re deployed (including ethical considerations), vulnerability assessments, code-commit data and developers’ secure coding skills, and fold those findings into internal guidelines for the safe and ethical use of AI.

2. Upskill and educate developers

To avoid vulnerability-driven rework and legal and ethical dilemmas, team leaders must upskill developers so they become more proficient in the software security, ethics and liability factors that could affect their roles and output. As part of this, they should implement benchmarks to determine team members’ skill levels on these topics, identify where gaps exist, and commit to education and continuous-improvement initiatives to close them.

3. Communicate and enforce best practices

This should include the rigorous review of AI-generated code; it should be standard that code created with these assistants receives the same quality and security review as any other code. For example, as part of their due diligence, teams could validate user inputs to prevent SQL injection attacks and apply output encoding to block cross-site scripting (XSS) vulnerabilities.
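A minimal sketch of those two controls follows, using only Python’s standard library; the table schema and function names are hypothetical and for illustration only.

    # Illustrative sketch: parameterised queries to prevent SQL injection, and
    # output encoding to block stored XSS. Schema and names are hypothetical.

    import html
    import sqlite3


    def find_user(conn: sqlite3.Connection, username: str) -> list:
        # Vulnerable pattern an unreviewed AI assistant might produce:
        #   conn.execute(f"SELECT id, bio FROM users WHERE name = '{username}'")
        # Parameterised query: the driver treats the input as data, not SQL.
        return conn.execute(
            "SELECT id, bio FROM users WHERE name = ?", (username,)
        ).fetchall()


    def render_bio(bio: str) -> str:
        # Encode user-supplied text before inserting it into HTML so that any
        # embedded <script> tags are displayed rather than executed.
        return f"<p>{html.escape(bio)}</p>"


    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, bio TEXT)")
        conn.execute(
            "INSERT INTO users (name, bio) VALUES (?, ?)",
            ("alice", "<script>alert('xss')</script>"),
        )

        # A classic injection payload returns no rows instead of dumping the table.
        print(find_user(conn, "alice' OR '1'='1"))

        # The stored payload is rendered harmlessly as text.
        for _, bio in find_user(conn, "alice"):
            print(render_bio(bio))

The point is not the specific snippet but the discipline: AI-generated code should be held to the same controls, whatever stack a team uses.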

Code ownership and the path forward

As software teams lean more heavily on artificial intelligence to meet tightening delivery schedules, security chiefs are under growing pressure to ensure developers retain clear ownership of their work, including the risks that can emerge from AI-assisted coding.

The path forward lies in setting firm guardrails around security, ethics and legal compliance, and backing those standards with proper training and performance benchmarks. Organisations that invest in this groundwork are finding their teams operate with greater confidence and capability, even as development cycles accelerate.

1. Qodo 2025, 2025 State of AI Code Quality, https://www.qodo.ai/wp-content/uploads/2025/06/2025-State-of-AI-Code-Quality.pdf
2. LeadDev 2025, The AI Impact Report 2025, https://leaddev.com/the-ai-impact-report-2025

Top image credit: iStock.com/Supatman
