Hacking by Voice Command: The New Threat to Mobile Phone Security
Hackers have a powerful new tool at their disposal, according to security researchers at Georgetown University and the University of California: the power of their own voice. Voice command technology is increasingly common on smartphones, so it was only a matter of time before hackers started hijacking it.
What’s more, one needn’t be tech-savvy or familiar with speech recognition systems to exploit this weakness.
The siren song of the mobile phone hacker
In Her, Scarlett Johansson gave AI voice command technology a sultry sound. Unfortunately, the synthesized voice recordings in the study sounded more like the Borg from Star Trek.
In the researchers’ YouTube video demonstration, a voice mutters something barely intelligible. Yet the smartphone’s speech recognition system is able to recognise and respond to the voice command. The voice instructs the smartphone to access XKCD.com, and the device duly obliges.
Voice command risks
The obvious concern is that voice commands could hijack smartphone web browsers, and direct them to websites containing malware. Since the voice commands are disguised, users wouldn’t know that their phones had been targeted.
The fact that the command can be inserted into a YouTube video is disturbing, as hackers could target multiple devices by using voice commands in a viral video.
Apple fans will be pleased that Siri didn’t respond as readily to the hacking voice commands as Okay Google did. However, both Apple and Android devices running Okay Google were successfully hacked.
The second part of the experiment, dubbed the ‘white-box’ test, involved sending voice commands to open-source speech recognition software such as Sphinx. This experiment assumed hackers have full knowledge of the systems, so they could make adjustments on the fly. The white-box experiment was more successful in producing effective voice commands that were indecipherable to humans.
According to Micah Sherr, a professor in the computer science department at Georgetown, Siri and Amazon Echo inspired the research. As speech recognition technology moves into smart homes, the prospect of voice command infiltration becomes ever more relevant.
That said, there’s no guarantee that the trick will work every time. Factors such as the proximity of the smartphone to the voice command source and environmental noise affected the chance of success.
Protect yourself from voice command hacking
Fortunately, there are a number of measures users can take to protect themselves. For example:
- Passive defences notify users when an action is taken. The risk is that users may miss or ignore these alerts.
- Active defences require users to verify their identity before carrying out commands. This can frustrate users in the long run, as they have to jump through hoops to execute legitimate voice commands.
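In code terms, an active defence boils down to a confirmation gate placed in front of sensitive actions. The sketch below is purely illustrative, not from the study: the command names and confirmation phrase are hypothetical, and a real assistant would verify the speaker’s voice rather than a fixed phrase.

```python
# Hypothetical sketch of an "active defence" for voice commands:
# sensitive actions are blocked until the user explicitly confirms them.

SENSITIVE_COMMANDS = {"open website", "send message", "call number"}
CONFIRMATION_PHRASE = "yes, do it"  # stand-in for real identity verification

def handle_command(command: str, confirmation: str = "") -> str:
    """Execute a voice command, demanding confirmation for sensitive ones."""
    if command in SENSITIVE_COMMANDS and confirmation != CONFIRMATION_PHRASE:
        # Block the action and prompt the user -- this is the "hoop"
        # legitimate users have to jump through.
        return "blocked: please confirm the command"
    return f"executing: {command}"
```

The trade-off described above shows up directly: a hidden command like `handle_command("open website")` is blocked, but so is a legitimate one until the user confirms it.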
Ultimately, the best defence is to pay attention to your voice activation settings. According to Henry Hoggard, security consultant at MWR InfoSecurity, users need to enable device-wide voice commands for hacks to succeed. Users should ensure that voice recognition is not set to ‘always on’.
Even when device-wide voice commands are disabled, Android users can still execute voice commands via the Google voice command widget, so there’s little incentive to have device-wide voice commands enabled in the first place.
So while the experiment introduces some alarming possibilities, the researchers admit that hackers have far more effective methods available to them, methods that have been in use for a long time, such as phishing. For all the panic that the prospect of voice-activated infiltration elicits from smartphone users, the lesson remains the same: Always be vigilant, and take mobile phone security seriously.