How Australian orgs can shift from AI laggards to leaders

By Carol Chris, Regional General Manager, Australia and New Zealand, GBG
Wednesday, 17 August, 2022



A recent report from CEDA shows Australia is falling behind its international counterparts when it comes to artificial intelligence (AI) adoption. Only 34% of Australian companies currently use AI in their operations, with just 12% prioritising consumer confidence and trust in their AI rollouts.

Australia’s AI adoption, or lack thereof, is usually discussed in a generalised business context, as the technology has a broad range of applications and outcomes. However, viewing it through a fraud and identity theft lens offers a new perspective on how businesses can more effectively harness AI’s capabilities.

Using AI in the complex customer onboarding process, particularly in industries such as banking, insurance and health care — where billions of lines of code power millions of transactions — is an effective way to reduce human error, eliminate bias and enable faster responses to emerging threats.

Increased digitisation opens doors for both threats and preventive AI solutions

According to the Australian Competition & Consumer Commission’s (ACCC) Scamwatch, instances of identity fraud in Australia rose by 234% in 2021, with total losses by September already exceeding those for the whole of 2020.

There are many reasons for this jump in fraud and identity theft. The most immediate is the accelerated digitisation of businesses during the COVID-19 pandemic, which drove consumers to conduct banking, shopping and government transactions online in greater numbers than ever before. Natural disasters and economic uncertainty also create more opportunities for financial fraudsters to take advantage of consumers at their most vulnerable moments.

Detecting and responding to identity fraud is a challenge for businesses dealing with millions of transactions daily, and it is unrealistic to expect they can stay ahead of fraudsters without the benefits of technology. The growing complexity of fraudsters’ strategies, and the resources at their fingertips, mean AI is now a critical tool for preventing identity fraud as, or before, it happens.

For example, AI and machine learning solutions can enable financial institutions to flag fraud, or signs of fraud, in the early stages of customer onboarding, such as when a customer applies for a new bank account, rather than later, once financial transactions have already begun.
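To make that concrete, the sketch below shows, in very simplified form, how a machine learning model might score a new account application for fraud risk at the point of onboarding and route high-risk applications to manual identity review. It is an illustrative example only, not GBG’s product or any particular institution’s system; the feature names, training data and review threshold are all hypothetical.

# Illustrative sketch only: a toy machine-learning check that scores new
# account applications for fraud risk during onboarding, before any
# transactions occur. Features, data and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical onboarding records, one row per application:
# [document_mismatches, device_seen_before (0/1), applications_from_ip_24h, email_age_days]
X_train = np.array([
    [0, 1, 1, 900],   # applications later confirmed legitimate
    [0, 1, 2, 1500],
    [3, 0, 9, 2],     # applications later confirmed fraudulent
    [2, 0, 7, 1],
])
y_train = np.array([0, 0, 1, 1])  # 1 = confirmed identity fraud

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new application at the point of onboarding.
new_application = np.array([[2, 0, 6, 3]])
fraud_probability = model.predict_proba(new_application)[0, 1]

REVIEW_THRESHOLD = 0.8  # hypothetical risk cut-off
if fraud_probability >= REVIEW_THRESHOLD:
    print(f"Flag for manual identity review (risk={fraud_probability:.2f})")
else:
    print(f"Proceed with automated onboarding (risk={fraud_probability:.2f})")

In practice such a model would be trained on far richer signals and combined with document and biometric verification, but the principle is the same: the risk decision happens before the customer is onboarded, not after losses appear.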

AI is a gateway to realising new technological benefits

One of the biggest challenges businesses face today is not the increasing volume of fraud, but the creativity and sophistication with which fraud attacks are executed. In the financial services sector, for example, where fraud is particularly widespread, institutions are seeing higher false positive rates, more fraud alerts and more missed frauds.

Amid this evolving threat landscape, it is critical that every layer of a business’s technology stack is effective. AI can help maximise the impact of other technologies and capabilities, such as biometrics, as businesses look for more seamless and secure ways to improve customer experiences.

The connection between AI and trust

The CEDA report highlighted that customer confidence and trust are a low priority for businesses in their AI rollouts. This has to change. When new technologies such as AI are incorporated into business operations without first establishing digital trust, customers may become suspicious or anxious that previously manual or physical interactions are now automated.

The process for establishing digital trust in AI rollouts does not need to be overly complex. A recent GBG Digital Trust survey, which asked over 1000 Australian consumers in July 2021 about their attitudes to sharing digital identity and personal information online, found that Australians’ trust in an organisation requesting personal information or a digital identity is most likely to increase when it demonstrates clear safety protocols on its website, such as multi-factor identification steps (27%).

Consumers feel more assured when organisations integrate front-end security measures. AI can play an effective role in achieving this while also enabling simple, safe and secure identity verification and customer experiences.

Ultimately, the decision for businesses to speed up their AI adoption and keep pace with international counterparts comes down to prioritising the safety and security of the customer journey from the beginning. Businesses that assume they can compete with fraudsters without leveraging AI will be bitterly disappointed. Organisations that take proactive, preventive AI-driven measures, however, will be able to withstand the current threat landscape and plan against future threats.

