Toward a new level of AI adoption in Australia

Tibco Software Australia Pty Ltd

By Lori Witzel, Director of Research for Analytics and Data Management, TIBCO
Tuesday, 01 March, 2022


Artificial intelligence (AI) has moved on from the fanciful manifestations imagined in movies and science fiction and become a tangible part of our lives through things like chatbots and shopping recommendations. But, as with many new and still-nascent technologies, there is a tendency to go through a period of sceptical assessment, socially fuelled distrust and hesitant partial adoption before we reach any widespread level of validated acceptance.

Executed, deployed and integrated properly, AI can provide a quantum leap forward in operational intelligence, driving more profitable business operations that deliver improved customer experience across the board. A new level of ‘user intimacy’ is achieved where business systems are reinvented and entire supply chains are augmented, advanced and enhanced. Given the challenges that AI faces in terms of ‘rise of the robots’ fear-mongering, how can we move to a new level of AI adoption where AI is trusted, tested, toughened and above all transparent?

A clear road to AI transparency

The responsibility for enabling this new era of AI rests not only with business leaders, but with all interested parties and stakeholders who seek the many benefits of faster, smarter systems. Work to achieve a new level of trust and transparency is underway at an international level, and we can hope it proves effective and robust.

While Australia does not currently have specific legislation governing the use of AI, newly proposed legislation in the EU, namely the European Union Artificial Intelligence Act (EU AI Act), will likely impact Australian enterprises given the nature of international business, and set a benchmark for local regulations in the future. The closest Australia has come to a similar regulation is the government’s voluntary Artificial Intelligence Ethics Framework, which consists of eight principles designed to ensure AI remains safe, secure and reliable in the future.

At its core, the EU AI Act insists that humans remain at the centre of AI innovation and governance. As we start to apply the advantage of AI and machine learning (ML) efficiencies to systems, services and products, it is equally important that human decision-making underpins the logic and algorithms that AI uses. This human-centred AI is needed to properly govern personal privacy, human ethics and corporate compliance.

Transparency starts with auditability

To increase trust and build for a potentially more regulated future, AI systems will need to be transparent and auditable. These AI systems must exhibit traceability and explainability, enabling clear communication channels to illustrate, clarify and ratify the AI models that they are constructed upon.

By having a clear line of sight into the algorithms and ML processes that go towards making an AI model function, there is transparency into the processes, tools, data and ‘actors’ (mathematical models of computation) involved in the production of the total AI process itself.

The most auditable (and therefore the most transparent) AI processes are ones that are built with documentation clear and comprehensive enough for auditors to access and use. An AI auditor should be able to use that documentation to reproduce the same results with the same AI method, even with a new data science team.
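To make the reproducibility requirement concrete, the sketch below shows one way an audit record for a training run might be captured. All field names and the `audit_record` helper are illustrative assumptions, not part of any standard or TIBCO product; the point is that a second team holding the record (plus the data and code it points to) could repeat the run and compare results.

```python
import hashlib
import json

def audit_record(model_name, data_version, seed, hyperparams):
    """Build a minimal, serialisable record of an AI training run.

    A record like this is what an auditor would use to reproduce a
    result with a new data science team. Field names are illustrative.
    """
    record = {
        "model_name": model_name,
        "data_version": data_version,   # e.g. a dataset snapshot ID
        "random_seed": seed,            # pins down stochastic training steps
        "hyperparams": hyperparams,     # everything needed to re-run training
    }
    # A content hash lets an auditor verify the record was not altered
    # between the original run and the audit.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["checksum"] = hashlib.sha256(payload).hexdigest()
    return record

rec = audit_record("churn-model", "dataset-2022-02-01", 42, {"max_depth": 6})
```

Because the checksum is computed over a sorted serialisation, the same inputs always yield the same fingerprint, which is exactly the property an audit trail needs.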

Down the line with AI model lineage

In order to achieve true transparency and trust in an AI system, it is important to understand its lineage. This is the set of associations between any given ML model used by an AI, and all the components involved in its creation. Tracking a model’s lineage is more difficult without robust, scalable model operations — and this is typically down to the number of components involved, which can be large, dynamic and difficult to trace.
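The idea of lineage as a set of associations between a model and the components that produced it can be sketched as a simple graph walk. The `LineageNode` structure and component names below are hypothetical, chosen only to show how an auditor might trace from a deployed model back to its source data.

```python
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    """One component in a model's lineage: a dataset, a feature
    pipeline, a training script, or an upstream model."""
    name: str
    kind: str                # e.g. "dataset", "pipeline", "model"
    parents: list = field(default_factory=list)

def trace_lineage(node, depth=0):
    """Walk from a model back through every component involved in its
    creation, yielding (depth, component) pairs for inspection."""
    yield depth, node
    for parent in node.parents:
        yield from trace_lineage(parent, depth + 1)

# A toy chain: raw data feeds a feature pipeline, which feeds a model.
raw = LineageNode("sales_2021.csv", "dataset")
features = LineageNode("feature_pipeline_v3", "pipeline", [raw])
model = LineageNode("demand_forecaster", "model", [features])

components = [n.name for _, n in trace_lineage(model)]
```

In real systems the number of components is large and dynamic, which is why the article stresses that lineage tracking is hard without scalable model operations; but the underlying question an auditor asks is the same as this walk: what produced this model, and what produced that?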

Both trust and transparency can be addressed with robust, scalable model management and model operations. Model operations, which is the development and management of ML models that support AI initiatives, is key to operationalising AI. But it can be tough and problematic to scale, so organisations need to think about working diligently with their data science and IT teams to understand their individual operational challenges.

In practice, robust transparency is a blend of proper disclosure, documentation and technology. In more specific technical terms, data fabrics and model operationalisation tools track and expose data transparency through changelogs and history. Access to these assets enables us to trace and play back the actions of AI models; these actions are the mechanics of transparent AI in operation.
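The changelog-and-playback mechanism described above can be illustrated with a toy append-only log. The `DecisionLog` class and its field names are assumptions for the sketch; real data fabrics and model operations platforms persist this history durably and at scale.

```python
import datetime

class DecisionLog:
    """Append-only log of model decisions, so any outcome can later be
    traced and 'played back' for review. A minimal sketch only."""

    def __init__(self):
        self._entries = []

    def record(self, model_version, inputs, decision):
        """Log one decision with a timestamp and the model that made it."""
        self._entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
        })

    def playback(self, model_version):
        """Return every logged decision made by a given model version,
        in the order it was made."""
        return [e for e in self._entries
                if e["model_version"] == model_version]

log = DecisionLog()
log.record("v1.2", {"credit_score": 710}, "approve")
log.record("v1.3", {"credit_score": 540}, "decline")
history = log.playback("v1.2")
```

Because every entry names the model version and the inputs it saw, a reviewer can reconstruct not just what was decided but which model, and which data, decided it.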

This data and model traceability, combined with proper disclosures and documentation, helps make the data used, the decisions made and the implications of those decisions more transparent across an entire organisation. Without transparency at this level, we cannot reasonably expect customers and partners (and indeed internal employees in any organisation) to engage with, let alone trust, business systems driven by AI decisions.

A better (more transparent) world

AI and ML have revolutionised many fields of science and engineering — just in the last two years, this technology has played an essential part in fast-tracking the COVID-19 vaccine rollout. Specifically, AI and ML enabled scientists to gain a clearer picture of the virus’s structure and predict which of the virus’s components would produce an immune response, which is a lengthy step in any vaccine design. Without the help of these tools, vaccine development would have undoubtedly taken longer. This is just one example of how AI and ML can be leveraged to augment human capabilities. We also see the effects of this technology in our everyday lives, such as when we use Google Maps to avoid traffic build-ups or are shown targeted advertisements that, with our permission, leverage our prior shopping habits and search terms.

Taking these actions forward within the boundaries of regulatory compliance and governance that communities, practitioners and lawmakers will set and establish is a powerful enabler of trust as we embrace AI and its many benefits: positive customer experience, medical breakthroughs and operational excellence.
