CSIRO's Data61 develops digital 'vaccine' for machine learning
Current AI and machine learning algorithms are vulnerable to adversarial attacks, according to Data61 Machine Learning Group Leader Dr Richard Nock.
These attacks involve adding a layer of noise (an adversarial perturbation) over an image or piece of data to confuse algorithms and cause them to misclassify that image or data.
“Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world,” Nock said.
To combat this, Nock and his team created a digital ‘vaccine’.
“We implement[ed] a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more ‘difficult’ training data set. When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks.”
These vaccines were built from the worst possible adversarial examples, allowing ‘inoculated’ data to withstand very strong attacks, according to a research paper accepted at the 2019 International Conference on Machine Learning.
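The "small dose of distortion" idea described above is a form of adversarial training, which can be sketched on a toy model. The article does not publish Data61's implementation, so everything below is an illustrative assumption: a tiny logistic-regression classifier whose training inputs are perturbed each epoch by a weak, gradient-sign adversary (an FGSM-style step), bounded by a small `eps`.

```python
import numpy as np

# Illustrative sketch only -- not Data61's actual method. A toy logistic
# regression is "vaccinated" by training on inputs perturbed with a weak,
# FGSM-style adversary; eps, lr and epochs are assumed parameters.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weak_adversary(x, y, w, b, eps):
    """Perturb each input by eps in the sign of the input-gradient of the
    logistic loss -- a small 'dose' of adversarial distortion."""
    p = sigmoid(x @ w + b)           # model's current predictions
    grad_x = np.outer(p - y, w)      # d(loss)/dx for each sample
    return x + eps * np.sign(grad_x)

def train(x, y, eps=0.0, lr=0.1, epochs=200):
    """Plain gradient descent; with eps > 0, each epoch trains on
    adversarially perturbed ('vaccinated') copies of the data."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        xt = weak_adversary(x, y, w, b, eps) if eps > 0 else x
        p = sigmoid(xt @ w + b)
        w -= lr * xt.T @ (p - y) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

# Toy two-class data: two Gaussian blobs centred at -1 and +1.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
               rng.normal(+1.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = train(x, y, eps=0.1)          # train against the weak adversary
acc = np.mean((sigmoid(x @ w + b) > 0.5) == (y == 1))
```

Because the perturbation is applied fresh at every epoch against the model's current gradients, the classifier never settles on a decision boundary that a small distortion can flip, which is the intuition behind the "immunity" claimed above.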
Data61 CEO Adrian Turner said this research is a significant contribution to the growing field of adversarial machine learning.
“Artificial intelligence and machine learning can help solve some of the world’s greatest social, economic and environmental challenges, but that can’t happen without focused research into these technologies,” Turner said.
“The new techniques against adversarial attacks developed at Data61 will spark a new line of machine learning research and ensure the positive use of transformative AI technologies.”