The AI regulation debate in Australia: navigating risks and rewards
The debate over AI regulation in Australia has intensified, and the tightrope walk between AI's risks and rewards has become increasingly relevant, particularly regarding the use of de-identified information. To address these issues, it is essential to clarify the principles that define high-risk AI, which requires a thorough review of the risk landscape and of the interconnections between AI, economic growth opportunities and the many areas of risk.
Why we need AI regulation in Australia
The reality is that the widespread use of AI systems poses a number of challenges, from inaccurate model inputs and outputs, biased or low-quality training data and discriminatory outputs, to opacity over how and when these systems are used. Malicious manipulation of AI systems can also fuel misinformation and security threats, while misuse, such as the creation of deepfakes and biased decision-making, presents broader societal risks.
However, to remain competitive in the world economy, Australia needs to find a way to safely use AI systems — it just needs a clear set of guardrails to mitigate the risk of missteps.
Australia, with 44 AI-related policies and strategies, is lagging significantly behind global leaders like the US (82) and the EU (63) in the development of AI governance.
On one hand, the EU AI Act serves as a potential blueprint that could help inform Australia’s approach, offering guidance on how AI regulations can be adapted to suit various industries and sectors while addressing local needs. On the other, it’s important that Australia maps its own path forward, and uses its position as a late mover in this space to its advantage — to learn, evaluate and, ultimately, find a solution that works for Australians.
What reforms are needed?
A sector-specific approach, with dedicated regulators overseeing AI activities, is crucial. This should include governance for imported AI technologies, ensuring they comply with Australian standards around human rights, data protection and ethics.
Furthermore, assessing the potential negative impacts on global security and cooperation is critical. Regulatory bodies such as the Australian Competition and Consumer Commission (ACCC) and the Australian Communications and Media Authority (ACMA) must play a central role in overseeing AI-related activities, including having the authority to regulate AI-generated misinformation.
Australia’s current AI regulatory landscape takes a voluntary, safety-focused approach. In September 2024, the country introduced the Voluntary AI Safety Standard, featuring 10 guardrails that emphasise accountability, transparency and risk management in alignment with OECD guidance. These guardrails primarily target high-risk AI contexts, such as biometrics and law enforcement, and there are ongoing discussions about making them mandatory for such applications.
For effective AI adoption among businesses and developers, Australia needs to establish a more tailored regulatory framework that accounts for the distinct challenges and risks of different industries and markets. This would enable companies to innovate responsibly while maintaining public trust and safety.
Implementing AI governance without stifling innovation
AI governance must promote innovation while addressing ethical issues, managing risks and building public trust. A dedicated AI Act could achieve this by establishing consistent expectations, facilitating international interoperability and simplifying legislative complexity.
In October 2024, the Australian Privacy Commissioner released new guidance for businesses and developers explaining how existing privacy laws apply to AI while the national privacy law undergoes review. The Office of the Australian Information Commissioner’s (OAIC) guide aims to help businesses choose AI products that comply with their privacy obligations, streamlining the adoption of generative AI so that organisations can capture its efficiencies in ways that meet customer expectations and encourage brand loyalty.
To ensure effective and responsible AI governance, future AI regulations in Australia should focus on being technology-neutral to foster innovation, and avoid placing unnecessary burdens on customers, developers and businesses, all while aligning with global standards.
Australia must also continue to engage actively in international AI governance discussions to avoid imposing additional regulatory requirements that could put it at a disadvantage compared to major economies such as the US and the EU.
A strong focus on data governance, privacy and cybersecurity should be the foundation of AI regulation, as these are key to building trust and fostering innovation. The introduction of AI regulatory sandboxes would allow live testing in controlled settings, supporting both risk assessment and innovation. However, this approach will only succeed if regulatory expertise and broad participation across the ecosystem are ensured.
Additionally, ensuring human intervention and control is crucial to avoiding unintended consequences, while establishing complete transparency among all stakeholders in the AI supply chain is key to effectively managing risks and building trust.
Learning from our European counterparts
Regulatory guardrails help to ensure market stability, protect consumers and mitigate risks. Frameworks such as the General Data Protection Regulation (GDPR) and the Markets in Financial Instruments Directive II (MiFID II) have succeeded in enhancing transparency, safeguarding consumer interests while creating a trustworthy market. These examples show that balanced regulation can encourage innovation while enhancing stability.
Aligning with AI regulations from a global perspective
The EU AI Act is the first comprehensive legal framework regulating AI across the European Union; it entered into force in August 2024, with most of its obligations applying from August 2026. While it imposes strict obligations and penalties, particularly for high-risk AI systems, Australia’s approach remains consultative and flexible, emphasising adaptability.
However, the Act presents several challenges, including safety standards ill-suited to General-Purpose AI (GPAI) models, which have diverse applications and unpredictable risks. Its overly complex classification system, particularly the high-risk and systemic-risk categories, creates overlapping obligations and confusion around legal responsibilities. The risks it poses to innovation in open models, especially where third-party modifications are involved, and its inconsistent treatment of smaller versus larger language models could pose similar risks to Australia’s AI adoption. As an interim measure, before a formal conformity framework is established, government AI procurement could incentivise suppliers accredited to standards such as ISO 42001.
Australia stands at a crossroads in the development of AI, facing both the urgent need to address risks and a significant opportunity for economic growth. A sector-specific approach, overseen by regulatory bodies such as the ACCC and ACMA, can help ensure that AI technologies are in line with Australia’s ethical, legal and societal values. By learning from global leaders like the EU and adapting policies to meet local needs, Australia can take advantage of its late-moving status to create flexible, sector-specific regulations that emphasise safety, accountability and innovation.
AI regulation should not be seen as a burden, but as an opportunity for Australia to lead the way into the future.