Tech giants agree to fake news code


By Dylan Bushell-Embling
Tuesday, 23 February, 2021

Technology giants have agreed on a new industry code aimed at tackling disinformation and misinformation online.

The tech companies, represented by the Digital Industry Group, have agreed to the self-regulatory code, which has now been registered with the Australian Communications and Media Authority (ACMA).

Under the code, signatories have committed to developing and implementing measures to deal with misinformation on their services, such as labelling false content or demoting its ranking, and suspending or disabling accounts that spread fake news.

ACMA Chair Nerida O’Loughlin welcomed the code as a flexible and proportionate approach to what research has found is a pressing concern for Australians: in a 2020 survey, more than two-thirds of respondents expressed concern about what is real or fake on the internet.

“The code anticipates platforms’ actions will be graduated and proportionate to the risk of harm. This will assist them to strike an appropriate balance between dealing with troublesome content and the right to freedom of speech and expression,” she said.

“Signatories will also publish an annual report and additional information on actions that they will take so that users know what to expect when they access these services.”

The code also contains a range of non-mandatory objectives, including disrupting advertising and monetisation incentives for disinformation and empowering consumers to make better informed choices. Signatories have until May 2021 to sign up to commitments under the code.

But Reset Australia, the local arm of the global initiative aimed at countering digital threats to democracy, has insisted that the new code is both pointless and wholly inadequate.

Reset Australia Executive Director Chris Cooper argued that the code should be scrapped and replaced by an independent public regulator with the power to inspect and audit algorithms.

“This limp, toothless, opt-in code of practice is both pointless and shameless. It does nothing but reinforce the arrogance of giants like Facebook,” he said.

“This code attempts to suggest it can help ‘empower consumers to make better informed choices’, when the real problem is the algorithms used by Facebook and others actively promote disinformation, because that’s what keeps users engaged. Any voluntary, opt-in code is inherently untrustworthy because we know it’s not in the business interests of these platforms to take real action on misinformation.”

Signatories can choose which provisions they have to follow and can simply pull out if adherence to the code starts hurting their bottom line, Cooper noted.

“This is a regulatory regime that would be laughed out of town if suggested by any other major industry. It’s ludicrous to have the peak body for Big Tech regulating itself.”

Image credit: ©stock.adobe.com/au/AA+W

