Full steam ahead: how to accelerate AI agent development responsibly

By Adam Beavis, VP and Country Manager, Databricks ANZ
Wednesday, 03 September, 2025


Like the logical leap that saw steam engines put into locomotives to power rail at an industrial scale, artificial intelligence (AI) agents are fast becoming an effective way for businesses to adapt and enhance generative AI (GenAI) models for specific enterprise use cases.

As Gartner notes, AI agents are goal-driven software entities that use AI techniques to complete tasks and achieve goals. Unlike basic GenAI models, they don’t require explicit prompting from operators. Instead, AI agents are autonomous intelligent systems that use data to perform self-determined tasks in pursuit of predetermined goals.

The upside of such technology, which is increasingly being used by organisations of all kinds to automate complex business processes and take care of mundane tasks, is that companies can get more done without blowing through resources. Finance, health care, retail and transportation are among the industry sectors already making use of AI agents.

But as with any new technology, there are some things to watch out for when deploying and using AI agents in an organisation. For example, good data controls play an outsized role in the ability of businesses to use AI agents safely without jeopardising sensitive information.

Data quality directly impacts AI performance

It’s not only data controls that matter when using AI agents for business applications; data quality plays a considerable role in the efficacy of such tools. Returning briefly to the steam engine analogy: if the boiler can’t deliver steam at the right heat and pressure, the engine falters. The same can be said for data.

AI agents rely on the data they’re trained on or access in real time. That means that the quality of the data consumed by an AI agent is often directly related to the quality of the outcomes. Poor-quality data can lead to inaccurate outputs, misinformed decisions and loss of trust in the technology.

With that in mind, businesses should invest in data hygiene and governance. One approach is to use appropriate data platforms to clean and standardise data into a unified format before any AI is applied. Additionally, it’s a good idea to regularly audit the data that AI agents use and establish processes to keep it accurate, up to date and unbiased.
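
To make that concrete, here is a minimal sketch of what a recurring data-quality audit might look like in Python with pandas; the file name, column names and thresholds are all hypothetical and would depend on the platform and dataset in question.

```python
import pandas as pd

# Hypothetical dataset feeding an AI agent; file and column names are illustrative.
df = pd.read_csv("customer_records.csv")

def audit_data_quality(df: pd.DataFrame) -> dict:
    """Report basic hygiene metrics before the data reaches an AI agent."""
    last_updated = pd.to_datetime(df["last_updated"])
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fraction": df.isna().mean().round(3).to_dict(),
        # Records untouched for over a year are flagged as potentially stale.
        "stale_records": int((pd.Timestamp.now() - last_updated).dt.days.gt(365).sum()),
    }

report = audit_data_quality(df)

# Fail the pipeline early if quality drops below agreed (illustrative) thresholds.
assert report["duplicate_rows"] == 0, "deduplicate before training or retrieval"
assert all(v < 0.05 for v in report["null_fraction"].values()), "too many missing values"
```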

Infrastructure for production-grade AI agents

As AI agents move from experimental tools to enterprise-critical systems, there’s growing pressure to ensure they can operate at scale, in real time and on top of trusted data. This shift demands a new breed of infrastructure that can combine structured data handling with modern AI workloads.

Traditional databases weren’t built for the kind of interactivity or responsiveness that AI agents need and often become bottlenecks. In agent-based environments, latency and dynamic reasoning matter. That’s why we’re seeing increasing innovation around serverless, AI-native databases.

This kind of architecture reflects a broader shift towards deeply integrated data and AI stacks, where agents can operate efficiently, interact with live data, and drive outcomes reliably at scale in production environments.

Reinforce regulatory compliance through data governance

One of the most important considerations when implementing any kind of AI technology, and especially AI agents, is data governance. Governance is often central to maintaining regulatory compliance around the safety and security of sensitive information, as well as the use of AI technology itself.

In Australia, the federal government has established stringent regulatory requirements for handling consumer data, and regulations dictating how AI can be used are likely on the horizon. Certainly, plugging data into AI platforms can put businesses at risk of exposing sensitive information publicly, in contravention of legislation.

As such, businesses need to prioritise implementing robust data governance frameworks, backed by appropriate technology to keep data secure. They should also establish auditing frameworks and continuous monitoring systems to help ensure that AI agents make decisions aligned with ethical and legal standards, and to detect and correct algorithmic bias.
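
What such continuous monitoring looks like in code varies widely, but the core idea is straightforward: record every consequential agent decision with enough context to review it later. Below is a minimal sketch assuming a simple JSON log; a production system would write to a governed, access-controlled store rather than a local file, and all names here are illustrative.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

def log_agent_decision(agent_id: str, user_input: str,
                       output: str, data_sources: list[str]) -> None:
    """Capture enough context to reconstruct and audit a decision after the fact."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "input": user_input,
        "output": output,
        # Which governed datasets the agent consulted; key for compliance reviews.
        "data_sources": data_sources,
    }
    logging.info(json.dumps(record))
```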

Be clear with objectives and guardrails

To perform their tasks properly, AI agents need clear objectives and guardrails. Without specific goals and boundaries, they may generate irrelevant or even harmful outputs, impacting customer satisfaction, compliance or brand reputation. Like any autonomous system, AI agents need strong instructions and limitations.

A good starting point is to define specific objectives for each AI agent and set explicit parameters, such as avoiding sensitive topics. It’s also worthwhile incorporating regular performance reviews to ensure the agent stays aligned with company goals.
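
As a simple illustration of pairing an explicit objective with hard boundaries, consider the sketch below. The topic list and matching logic are deliberately naive (production guardrails would typically use classifier-based moderation rather than keyword checks), and every name here is hypothetical.

```python
# Hypothetical guardrail configuration for a single agent.
AGENT_OBJECTIVE = (
    "Answer questions about the company's product catalogue only; "
    "decline anything outside that scope."
)
BLOCKED_TOPICS = {"medical advice", "legal advice", "political opinions"}

def call_agent(objective: str, user_input: str) -> str:
    """Stub standing in for the real model invocation."""
    return f"[response constrained by objective: {objective!r}]"

def handle(user_input: str) -> str:
    # Refuse up front if the request strays into a blocked topic.
    if any(topic in user_input.lower() for topic in BLOCKED_TOPICS):
        return "I'm not able to help with that topic."
    return call_agent(AGENT_OBJECTIVE, user_input)

print(handle("Can you give me legal advice about my contract?"))
print(handle("Which of your laptops has the longest battery life?"))
```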

However, training AI agents is only the beginning of the process. Agents require ongoing monitoring to ensure they remain optimised for the business objectives that have been established. This can be done through LLM judges, which combine AI and human oversight to evaluate agent output, and through tools that auto-optimise the underlying LLMs. The process also involves defining appropriate KPIs and metrics, such as response accuracy and user satisfaction, to track the agent’s success.
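
The LLM-judge pattern mentioned above can be pictured roughly as follows; the prompt, scoring scale and threshold are illustrative, and `call_judge_model` is a stub for whichever model plays the judge role.

```python
from dataclasses import dataclass

JUDGE_PROMPT = (
    "You are grading an AI agent's answer.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Rate accuracy from 1 (wrong) to 5 (fully correct). Reply with the number only."
)

@dataclass
class EvalResult:
    question: str
    answer: str
    accuracy: int   # 1-5 score assigned by the judge model
    flagged: bool   # True if below threshold; routed to a human reviewer

def call_judge_model(prompt: str) -> str:
    """Stub standing in for a real judge-model call."""
    return "3"

def judge(question: str, answer: str, threshold: int = 4) -> EvalResult:
    score = int(call_judge_model(JUDGE_PROMPT.format(question=question, answer=answer)))
    return EvalResult(question, answer, score, flagged=score < threshold)
```

Aggregated over time, the judge’s scores, together with signals like user satisfaction, become the KPIs used to track whether the agent is drifting away from its objectives.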

AI tools still need humans in the loop

As it stands, AI agents cannot yet be entirely autonomous decision-makers for the business: they still make mistakes. They can handle specific tasks like customer support or data analysis efficiently, but human supervision is still required to provide a layer of accountability.
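
One common way to wire that supervision in is an escalation gate: routine, high-confidence actions proceed automatically, while risky or uncertain ones wait for a human. The action names, confidence threshold and stubs in this sketch are all hypothetical.

```python
# Hypothetical set of actions that always require human sign-off.
RISKY_ACTIONS = {"issue_refund", "delete_record", "send_external_email"}

def queue_for_human_approval(action: str, payload: dict) -> str:
    """Stub: in practice this would raise a review task in a workflow tool."""
    return f"pending human approval: {action}"

def run_action(action: str, payload: dict) -> str:
    """Stub standing in for the agent's actual tool call."""
    return f"executed: {action}"

def execute_with_oversight(action: str, confidence: float, payload: dict) -> str:
    # Escalate anything inherently risky, or anything the agent itself is unsure about.
    if action in RISKY_ACTIONS or confidence < 0.8:
        return queue_for_human_approval(action, payload)
    return run_action(action, payload)
```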

As AI agents become increasingly integrated into diverse aspects of our lives and work, it’s essential to emphasise the continued role of human oversight. While the future will undoubtedly see autonomous agents operating in many domains, their decisions and actions must remain subject to human judgment and approval.

This human-in-the-loop approach ensures accountability, ethical alignment and trust. Ultimately, we won’t be replaced; we’ll be elevated to the role of supervisors, responsible for guiding, validating and owning the outcomes of AI-powered processes. The future of AI is not autonomous in isolation; it is collaborative, with humans at the helm.

Keeping these considerations in mind when implementing AI agents can help organisations harness the technology’s potential while managing risks and building long-term value. With those foundations in place, there’s no reason enterprises can’t charge full steam ahead into implementing AI agents and reaping the benefits.
