Oversight needed for police use of AI
Digital Rights Watch has joined the chorus of media and privacy advocates calling for accountability and oversight of facial recognition technology after Clearview AI’s data breach allegedly revealed that Australian law enforcement agencies were using the company’s tools for identification purposes.
The breach comes just weeks after Australia’s state, territory and federal police either denied or declined to comment on their use of Clearview AI, despite the assertion of the company’s founder, Hoan Ton-That, to the ABC that Australian police were using its technology.
“We should be deeply concerned that our police forces are using Clearview AI’s facial recognition technologies here in Australia without any accountability or oversight,” Digital Rights Watch Chairperson Lizzie O’Shea said.
“We need an inquiry into the data broking industry, and into the use of Clearview AI technology and other facial recognition surveillance by Australian law enforcement agencies.
“Facial recognition technologies invade our privacy and can impact on our fundamental rights. There are huge concerns about its accuracy, and the potential for such data to be misused.”
Australia currently has only a limited framework governing biometric scanning, including facial recognition.
“Clearview AI scraped profile images of faces from social media platforms without the consent of users, breaching the terms of service of the platforms themselves, creating a database of billions of faces. The use of this database by Australian police raises a number of legal questions. We need transparent policies and regulatory frameworks that oversee the use of facial recognition technologies by government agencies and corporations,” O’Shea said.
Accordingly, Digital Rights Watch is calling for a “moratorium on the deployment and use of facial recognition technologies until we fully understand their implications and there are strong regulations that govern their use”, O’Shea continued.
“This is a growing trend worldwide, with a number of cities banning the use of facial recognition technologies, including San Francisco, and the European Commission considering a five-year ban. Australia should follow suit,” she concluded.