Skip to main content

Study reveals extent of privacy vulnerabilities with Amazon’s Alexa

image of Amazon Alexa device

A recent study outlines a range of privacy concerns related to the programs users interact with when using Amazon’s voice-activated assistant, Alexa. Issues range from misleading privacy policies to the ability of third parties to change the code of their programs after receiving Amazon approval.

“When people use Alexa to play games or seek information, they often think they’re interacting only with Amazon,” said Anupam Das, co-author of the paper and an assistant professor of computer science. “But a lot of the applications they are interacting with were created by third parties, and we’ve identified several flaws in the current vetting process that could allow those third parties to gain access to users’ personal or private information.”

At issue are the programs that run on Alexa, allowing users to do everything from listen to music to order groceries. These programs, which are roughly equivalent to the apps on a smartphone, are called skills; there are more than 100,000 skills for users to choose from. Because the majority of these skills are created by third-party developers, and Alexa is used in homes, researchers wanted to learn more about potential security and privacy concerns.

They used an automated program to collect 90,194 unique skills found in seven different skill stores. The research team also developed an automated review process that provided a detailed analysis of each skill.

One problem the researchers noted was that the skill stores display the developer responsible for publishing the skill. This is a problem because Amazon does not verify that the name is correct. In other words, a developer can claim to be anyone. This would make it easy for an attacker to register under the name of a more trustworthy organization. That, in turn, could fool users into thinking the skill was published by the trustworthy organization, facilitating phishing attacks.

The researchers also found that Amazon allows multiple skills to use the same invocation phrase.

“This is problematic because, if you think you are activating one skill, but are actually activating another, this creates the risk that you will share information with a developer that you did not intend to share information with,” Das said. “For example, some skills require linking to a third-party account, such as an email, banking or social media account. This could pose a significant privacy or security risk to users.”

In addition, the researchers demonstrated that developers can change the code on the back end of skills after the skill has been placed in stores. Specifically, the researchers published a skill and then modified the code to request additional information from users after the skill was approved by Amazon.

The paper was co-authored by Sheel Jayesh Shah, a graduate student at NC State, and William Enck, an associate professor in the Department of Computer Science.


Return to contents or download the Spring / Summer 2021 NC State Engineering magazine (PDF, 52.0 MB).

Magazine Archives