Voice assistant systems vulnerable to lasers


Monday, 09 December, 2019

In a world of ever-increasing cyber threats, lasers probably weren't high on anyone's list of concerns, but research from the University of Michigan and Tokyo's University of Electro-Communications has shown that a strategically pointed laser can be used to hijack voice assistants in smart speakers, phones and tablets.

From up to 110 metres away, and through glass, researchers were able to trick voice-activated virtual assistants such as Siri, Alexa or Google Assistant into unlocking a smart lock-protected front door; opening a connected garage door; shopping on e-commerce websites at the target's expense; and locating, unlocking and starting a vehicle connected to the target's account, according to the University of Michigan.

The vulnerability, which the researchers dubbed "Light Commands", lies in the devices' microphones: just five milliwatts of laser power (roughly the output of a laser pointer) was enough to move a smart speaker's microphone diaphragm in the same way sound, such as a voice command, would. The diaphragm's movement generates electrical signals representing the command, which the voice assistant can then act on, the University of Michigan explained. The same effect could be achieved on phones and tablets with about 60 milliwatts of power, the university said.
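
As an illustration of that mechanism, the short Python sketch below shows how an attacker's recorded voice command could be mapped onto laser power so that the diaphragm's light-induced motion tracks the original audio. It is not taken from the researchers' work; apart from the 5 mW figure cited above, the sample rate, modulation depth and function names are illustrative assumptions.

import numpy as np

SAMPLE_RATE = 16_000   # Hz; a common sample rate for voice audio (assumption)
BIAS_MW = 5.0          # milliwatts of laser power cited for smart speakers
MOD_DEPTH = 0.8        # fraction of the bias power used for modulation (assumption)

def modulate_laser_power(command_audio):
    """Map a voice-command waveform (values in -1..1) to laser power over time.

    The microphone diaphragm responds to these light-intensity changes much as
    it would to sound pressure, so the electrical signal it produces tracks
    command_audio and the assistant interprets it as a spoken command.
    """
    audio = np.clip(command_audio, -1.0, 1.0)
    return BIAS_MW * (1.0 + MOD_DEPTH * audio)   # keeps the power positive

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE            # one second of samples
    fake_command = 0.5 * np.sin(2 * np.pi * 440 * t)    # stand-in for a recorded command
    power = modulate_laser_power(fake_command)
    print(f"laser power range: {power.min():.2f} to {power.max():.2f} mW")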

University of Michigan Assistant Professor of Computer Science and Engineering Daniel Genkin said the research shows that “hijacking voice assistants only requires line of sight rather than being near the device”.

“The risks associated with these attacks range from benign to frightening depending on how much a user has tied to their assistant,” he said.

“In the worst cases, this could mean dangerous access to homes, e-commerce accounts, credit cards and even any connected medical devices the user has linked to their assistant.”

The researchers are currently working with Google, Apple and Amazon on hardware and software fixes to mitigate these risks.

In the meantime, University of Michigan Research Investigator in Computer Science and Engineering Sara Rampazzi said voice assistant users could protect themselves from Light Commands by “avoid[ing] putting smart speakers near windows, or otherwise attacker-visible places”.

“While this is not always possible, it will certainly make the attacker’s window of opportunity smaller. Another option is to turn on user personalisation, which will require the attacker to match some features of the owner’s voice in order to successfully inject the command,” she said.

