When it comes to anonymous humans listening in on your audio clips and conversations, tech companies are finally getting their privacy settings in line.
But what’s the good of having options if you don’t know how to use them?
On Monday, Google unveiled a new audio policy and settings that let people opt in to having human reviewers listen to the audio clips captured by Google Assistant. That came after revelations in July that some Google contractors were listening to and leaking recordings from Google Assistant.
Apple, Amazon, and, most recently, Facebook (through its Portal device) have all undergone similar cycles: a report comes out that contractors are listening to audio clips, the tech company apologizes and pauses collection, and then eventually rolls out controls (and in some cases, policy changes) that allow users to opt in to or out of this sort of data collection.
So, why would anyone agree to let humans analyze their voices, and why were companies using contractors to listen in the first place?
“Opting in to Voice & Audio Activity (VAA) helps the Assistant better recognize your voice over time, and also helps improve the Assistant for everyone by allowing us to use small samples of audio to understand more languages and accents,” Google’s blog post on the new settings reads.
Essentially, humans listen to audio clips in order to transcribe them (or check the accuracy of automated transcriptions), and then feed the audio and its transcription back into companies’ systems to make their voice assistants smarter.
Google and its competitors continually improve their natural-language-processing A.I. by feeding it more real user data along with the matching meaning. By allowing Google, Apple, Amazon, and Facebook to listen to your conversations and commands, you’re helping their tech get smarter. Think of it as being a good digital citizen (and the companies say the assistants will work better for you, personally, by better recognizing your voice).
However, it is totally understandable if this is something you don’t want to do. Portions of Apple’s audio recordings actually leaked in Denmark. Plus, these transcribers are often contractors, which means they’re not necessarily as fully vetted as tech companies would like you to believe.
So now that the tech giants want to justify the practice by giving their users the option to participate, it’s time you make a proactive choice and exercise your data privacy rights, isn’t it?
Here’s how to opt in to or out of voice data collection on Apple, Amazon, Facebook, and Google devices.
Google’s newly released controls are pretty easy to navigate. When you go to the “Your data in the assistant” page, scroll down to the “Voice & Audio Activity” box. By default, this should be “paused” (this sort of data collection is opt-in).
If you want to change the default setting, hover over the paused bar and click.
This will take you to another page where you can switch the toggle on if you want to allow Google storage and analysis of your voice commands. You can also choose to delete what it’s already recorded by clicking “Manage Activity.”
Transcription contractors might review clips collected by Portal that begin with “Hey Portal.”
Storage and review of “Hey Portal” audio recordings on Facebook’s Portal is opt-out, but you will be prompted to choose your settings the first time you log in if you’re a new Portal user. If you’ve already been using Portal, Facebook recently sent this push notification to users after making the adjustments:
People on existing devices will get a notification that explains how their voice data is used with a link to Settings where they can turn off storage. New Portal users will have the option to turn off storage of voice data when they set up their device for the first time.
You can also go into your settings yourself. From Facebook:
You can also go directly to Settings on Portal or the Facebook Activity Log anytime to turn off storage. If you turn off storage, none of your “Hey Portal” voice interactions will be stored or reviewed by people.
In your Portal settings, you can turn off “Storage.” In your Facebook Activity Log (located underneath your cover photo), you can navigate to the Voice Interactions option in the left side bar, and click the “delete all voice interactions” button in the upper right hand corner.
Apple’s new settings aren’t out yet, but human review of transcriptions will be opt-in when they are. Apple has paused transcription review for the time being (since The Guardian reported that contractors “regularly hear confidential details” in July), and plans to roll out the new settings later this fall. All future transcription reviewers will be Apple employees, not contractors. Says Apple:
Users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.
Following an April report from Bloomberg, Amazon gave Alexa users the option to decide whether humans would review their voice commands and messages in August. Users can adjust the settings in the Alexa app or on the Alexa privacy settings page on desktop. This service is opt-out.
Once you get to Alexa privacy from settings, navigate to “Manage How Your Data Improves Alexa.” Then you can toggle two options off (or keep them on): “Help Improve Amazon Services and Develop New Features” and “Use Messages to Improve Transcriptions.”
If you haven’t been ignoring the news for the last few weeks, you’ve surely seen headlines about employees and contractors at Amazon, Google, Apple, and Microsoft listening in on (and even recording) your conversations with voice assistants like Alexa, Google Assistant, Siri, and Cortana. The news that your private conversations with AI may be heard by humans proved disturbing to many users, and plenty of them went looking for ways to stop Amazon or Google from listening to what they say.
In this post, we’ll walk through the steps to stop companies from listening to your conversations with your digital assistants.
Are Voice Assistants Listening to Me?
The main function of most digital voice assistants is to recognize your speech commands and perform simple actions like running a web search or playing a music track. However, as has recently been revealed, most companies have employees whose job is to listen to snippets of your conversations with voice assistants and other services.
Those listening to your conversations typically do so for only a few minutes, and your name and other personal information are not revealed. The main purpose is to gauge how well the assistant “understood” what you said. If the assistant did not get your query, the employee listening to the recording will try to determine why, and this information will later be used to improve the assistant’s performance.
When you pressed “I agree” on the user agreement for your AI assistant, you may have missed the fine print saying that the company reserved the right to do this. It’s understandable, then, that users may feel vulnerable about having third parties listen to bits and pieces of their dialogues, especially considering that voice assistants can be activated by mistake and record random conversations.
Does Alexa Listen In On Your Conversations?
The fact is that the vast majority of companies providing voice-powered assistant services use the techniques described above.
According to a Bloomberg report from April 10, 2019, Amazon employs thousands of workers whose job is to listen to Alexa audio clips. A report by the Belgian public broadcaster VRT stated that Google had contractors listening to audio clips from Google Assistant. A Guardian report from July 26, 2019, described Apple contractors regularly listening to Siri recordings. And a Motherboard report from August 7, 2019, said that Microsoft employees were listening to Cortana voice commands as well as portions of some Skype calls.
Since the news came out, some companies, including Apple and Google, have announced that they have, for now, stopped listening to these recordings. That said, the practice may resume in the near future, and if this is something you feel uneasy about, you will probably want to take steps to stop tech company employees from listening to your conversations with your voice assistant.
How to Stop Google from Storing Voice Recordings?
So, how do you stop Google from storing your voice recordings? You will need to manage your voice activity. Here’s how:
- Go to the Activity Controls page for your Google account.
- Navigate to “Voice & Audio Activity”.
- Turn this option off — this will stop Google from making and saving new voice recordings of your conversations with the voice assistant.
You will have the option to re-enable this feature later should you ever want to.
You can also delete the recordings that Google has already gathered. Here’s how to proceed:
- Under Voice & Audio Activity, go to Manage Activity.
- Here, you will see all the records of your voice activity that have been stored.
- In order to delete all the audio activity from Google’s storage, click Delete Activity By, select All Time and click Delete.
How to Stop Amazon from Listening to What You Say?
There is a new feature that allows you to opt out of human review of your Alexa recordings as well. The new feature became available very recently — on August 2, 2019.
Here’s how you can make use of the option to stop Amazon from listening to your conversations with Alexa:
- Go to the Alexa app or website.
- Click Settings.
- Navigate to Alexa Privacy > Manage How Your Data Improves Alexa.
- Here, disable the following option: Help Improve Amazon Services and Develop New Features.
You will also see a description of this feature: “With this setting on, your voice recordings may be used to develop new features and manually reviewed to help improve our services. Only an extremely small fraction of voice recordings are manually reviewed.”
Once again, if at any point you want to allow Amazon to review your recordings, you can go back and re-enable this option.
How to Stop Microsoft from Listening to Cortana Recordings?
If you want to stop Microsoft from reviewing your Cortana voice commands and conversations, there’s a way to do that. Here’s how:
- Go to Settings > Privacy > Speech.
- Here, disable the “Online speech recognition” option.
And that’s it — Microsoft will no longer assign employees to listen in on your conversations.
However, there is currently no way to stop Microsoft from listening to bits and pieces of your Skype conversations. The only thing you can do is switch to another voice or video call service and avoid Skype altogether.
How do you feel about tech companies’ employees listening in on your conversations with your AI assistant? Share in the comments below.
Amazon now lets Alexa users opt out from having recordings of their conversations with the voice assistant analysed by humans.
The move comes after criticism was aimed at Amazon, following the revelation that it uses humans to analyze a small number of conversations between Alexa and its users. Subsequent reports revealed Apple and Google also employ contractors to listen to a small number of similar recordings with Siri and the Google Assistant.
Added to the Alexa smartphone app – and the alexa.amazon.com website – at the start of August, the new setting lets you stop Amazon contractors from using your conversations with Alexa to analyze how the voice assistant is working.
It was previously possible to opt out of Amazon using your interactions with Alexa to improve the system, but at the time the company did not reveal that this involved humans listening to the recordings.
To switch off human analysis of your Alexa conversations:
- Open the Alexa smartphone app or go to alexa.amazon.com and log in
- Tap the menu icon in the top-left corner
- Tap on Settings
- Tap on Alexa Privacy
- Now tap on 'Manage How Your Data Improves Alexa'
On this page, Amazon justifies listening to a small number of Alexa recordings as follows:
"Training Alexa with recordings from a diverse range of customers helps ensure Alexa works well for everyone. With this setting on, your voice recordings may be used to develop new features and manually reviewed to help improve our services. Only an extremely small fraction of voice recordings is manually reviewed."
Amazon then warns: "If you turn this off, voice recognition and new features may not work well for you."
Tap the toggle switch to turn the feature off, then tap Turn Off on the pop-up window to confirm your decision.
The wording here has changed in recent days, as Amazon previously did not mention that recordings were “manually reviewed.” It used to say: “When this setting is enabled, your voice recordings may be used in the development of new features.”
Just two days after this switch was added to the Alexa app, a report by German newspaper Welt Am Sonntag (translated by Google Translate) claimed Amazon contractors employed in Poland to analyse Alexa recordings could do so while working from home. "User data is practically unprotected," the report said, adding that contractors described it as the "ideal housewife job" as new mothers could analyze the recordings while looking after children at home.
As has previously been reported, these recordings sometimes contained private conversations not meant to be heard by Alexa, but picked up when a smart speaker mistakenly hears someone say "Alexa" nearby, then records for a few seconds.
This week, Facebook came under fire for having hired hundreds of contractors to listen to and transcribe users’ conversations.
Last week, a Vice Motherboard report revealed that Microsoft contractors were listening to audio recordings of personal conversations of Skype users who used the app’s AI translation service and voice commands sent to Cortana, the company’s AI-powered voice assistant.
This no longer comes as a surprise. Microsoft and Facebook aren’t the first tech companies whose employees or remote contractors listen to users’ voices. Amazon, Google, and Apple have been caught doing the same thing with their voice assistants in the past year (and they’ve all used cleverly worded EULAs to gain users’ consent without explicitly telling them humans would listen to their voices). Earlier this month, Apple and Google stopped their programs for listening to audio recordings.
But this is not a tirade about the privacy concerns of voice assistants and smart speakers (which is an important topic). In this post, I’ll be diving into why every company that offers a voice assistant inevitably resorts to hiring human workers (often low-paid) to correct the stupid mistakes its AI algorithms make.
Voice assistants and deep learning
AI-powered voice assistants and translation services use deep learning, the branch of artificial intelligence that develops behavior through experience. At the heart of deep learning algorithms are artificial neural networks, software structures that are especially good at finding correlations and patterns in vast sets of data.
If you train a neural network with multiple audio recordings of the same word with different accents and background noises, it will tune its inner parameters to the statistical regularities between the different samples and will be able to detect the same word in new audio recordings. Likewise, if you provide a neural network with different texts corresponding to the same request, it will be able to answer to the different ways of uttering the same command.
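The pattern-finding described above can be sketched in a few lines. The following toy example is purely illustrative: the synthetic “clips,” the two-word vocabulary, and the one-layer model (standing in for a real deep network) are all invented here to show how noisy samples of the same word share statistical regularities that training captures.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_clip(word, n=64):
    """Synthesize a fake 'audio clip': a sine wave whose frequency
    depends on the word, plus background noise."""
    t = np.linspace(0, 1, n)
    freq = 3.0 if word == "yes" else 7.0
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

# Training set: many noisy samples of the same two "words".
words = ["yes", "no"] * 100
X = np.stack([make_clip(w) for w in words])
y = np.array([1.0 if w == "yes" else 0.0 for w in words])

# A one-layer network (logistic regression) trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad = p - y                            # cross-entropy gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def predict(clip):
    return "yes" if (clip @ w + b) > 0 else "no"

# New, unseen clips are classified from the learned regularities.
test_words = ["yes", "no"] * 20
correct = sum(predict(make_clip(tw)) == tw for tw in test_words)
accuracy = correct / len(test_words)
```

A production assistant uses far deeper networks and real labeled speech, but the principle is the same: the tuned parameters encode what different utterances of a word have in common.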
Deep learning and neural networks have helped solve problems that were historically challenging for classic, rule-based software systems. This includes speech recognition, natural language processing (NLP), machine translation, and computer vision. These are tasks that were previously thought to require human intelligence.
It is thanks to deep learning and neural networks that you can talk to Alexa almost as if you were talking to another person (as long as you don’t ask it anything too complicated—but we’ll get to that later).
Anthropomorphizing neural networks
Ironically, the biggest strength of neural networks also amplifies their greatest weakness. Given the complicated tasks they perform, neural networks and deep learning applications are often mistaken for, or compared to, human intelligence.
But despite the remarkable feats they perform (and the name they’ve inherited from their biological counterparts), neural networks and deep learning algorithms are vastly different from the human mind.
Neural networks are only as good as their training data. The more quality training data you provide to a neural network, the better it will become at its intended task. Also, the narrower the problem domain a neural network tackles, the less data it will need to reach accuracy. Consequently, a lack of training data and a broad problem domain are two of the worst enemies of deep learning.
Large tech companies usually have access to vast stores of data to train their AI. But the problem with voice assistants is that they are tackling a very broad problem domain, and they create the wrong expectations in users. They have human names and human-like voices, and their commercials always give the impression that you can ask them anything.
The Wizard of Oz effect
When you apply deep learning to an open and limitless domain, you never have enough training data. No matter how much you train your AI model, there will always be edge cases, scenarios that the neural network has not seen before. That’s why the companies that develop these services must constantly collect new data and retrain their AI models. This means they must monitor users’ behavior for things that confuse their AI.
Another problem is that the neural networks used in voice assistants require supervised learning, which requires human operators to annotate the training examples. When a voice assistant finds a certain command confusing, it can’t figure out the real meaning for itself. A human operator must map it to the right command and steer the AI in the right direction.
This is why companies hire human contractors to listen to voice recordings and annotate them with the right labels, which the neural network will then use to fine-tune its inner parameters. And this entails a host of privacy and ethical concerns.
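The human-in-the-loop cycle described above can be sketched as a simple review queue. This is an illustrative sketch only; the class names, fields, and threshold are invented, and real pipelines add anonymization, sampling, and periodic retraining stages.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    audio_id: str
    transcript_guess: str  # what the model thinks was said
    confidence: float      # model's confidence in its guess, 0..1

@dataclass
class ReviewPipeline:
    threshold: float = 0.8
    review_queue: list = field(default_factory=list)
    training_data: list = field(default_factory=list)

    def ingest(self, utt):
        """Trust confident guesses; queue uncertain ones for a human."""
        if utt.confidence < self.threshold:
            self.review_queue.append(utt)
        return utt.transcript_guess

    def human_review(self, corrections):
        """A reviewer listens to each queued clip and supplies the true
        transcript; each (audio, label) pair becomes new training data
        used to fine-tune the model."""
        while self.review_queue:
            utt = self.review_queue.pop()
            label = corrections.get(utt.audio_id, utt.transcript_guess)
            self.training_data.append((utt.audio_id, label))

pipe = ReviewPipeline()
pipe.ingest(Utterance("clip-1", "play music", 0.95))   # trusted as-is
pipe.ingest(Utterance("clip-2", "play muesli", 0.40))  # sent to a human
pipe.human_review({"clip-2": "play music"})            # human fixes it
```

After the demo calls, the corrected pair for "clip-2" sits in `training_data`, which is exactly the data the next training round would consume.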
The Microsoft story is just the latest manifestation of the “Wizard of Oz” effect, where companies try to automate tasks with AI technologies, but end up using human labor to perform those same tasks or to train the AI to avoid repeating its mistakes.
As the AI encounters more and more edge cases and is retrained to handle them, it will become better and better. With more training, the need for human help will become less significant. But when your problem domain is too broad, chasing edge cases turns into an endless war of attrition, and humans will always remain part of the equation.
One stark example is moderating online content with AI, which requires common sense, reasoning, and abstract thinking that neural networks don’t possess.
What this means is that, if you’re using a general-purpose voice assistant like Siri, Cortana or Alexa, you can expect it to become smarter. But those smarts will continue to come at the expense of your data.
Most of the time, digital voice assistants use artificial intelligence (AI) to decipher your requests. Knowing that, people expect that machines are listening to them when they’ve given a “wake up” cue. What they may not have realized is that companies are retaining recordings of what they say, even when they’re not directly talking to their device, and that actual humans at Amazon, Apple, and Google might also be listening in. People have even less reason to expect that services beyond voice assistants are recording and reviewing their statements, yet Facebook has reportedly “paid contractors to transcribe audio clips from users of its Messenger service.”
Tech companies defend these practices as necessary to the development of voice-recognition and virtual assistant AI systems. Still, having people, in effect, eavesdropping on daily conversations — important though it may be for algorithm performance — raises a number of questions. What do companies do with that data? Do they sell it to other companies? How do they protect it from unauthorized access, and how long do they keep it? How identifiable are individual users in their voice recordings? How often are voice-activated devices actually recording — and do users know when they’re being recorded? How many of those recordings are listened to by people rather than computers? These concerns aren’t hypothetical. At least 1,000 Google recordings were leaked this year, many of which contained enough information to identify individual users.
The leaked recordings weren’t merely innocuous requests for weather forecasts or reminders; they included “bedroom conversations, conversations between parents and their children, … blazing rows and professional phone calls containing lots of private information.” 153 of those recordings “were conversations that should never have been recorded,” as the trigger command was never given. Nor are leaks the only concern; “last year, a bizarre and exceedingly complex series of errors on behalf of Alexa ended up sending a private conversation to a coworker of the user’s husband.”
In response to the news reports about humans listening to digital assistant recordings, Facebook, Google, and Apple have, at least temporarily, ceased the practice. Amazon has enabled a setting that allows people to delete their recordings. Of course, to delete recordings, people have to know that there are recordings in the first place — and one of the central criticisms has been that companies aren’t doing enough to inform customers about their data practices. How should companies advise users about the data they’re collecting and what they’re doing with it? Will people stop using voice assistants due to concerns about privacy? Only time will tell.
Amazon is joining Google and Apple in taking steps to allow users to opt out of human review of its voice assistant recordings, the company announced on Friday. These changes come after many consumers railed against a practice that they saw as encroaching on their privacy, especially after a Guardian report revealed that contractors working for Apple heard cuts of drug deals and couples having sex while reviewing Siri recordings for accuracy.
News first broke in April that employees at Amazon were transcribing users’ Alexa recordings. Last month, a whistleblower told the Guardian that Apple was also listening in on smart speaker snippets, and a Google worker leaked 1,000 recordings to expose the company’s recording and reviewing practices. Human review is supposed to improve the quality of smart speakers’ responses to queries and prevent unintentional activation of the devices, according to the companies. But none of them were up front in letting users know that real people would be privy to their chats with Google Assistant, Alexa, or Siri—or to the unprompted exchanges that the assistants picked up. For example, Apple’s policy only noted that, “certain information such as your name, contacts, music you listen to, and searches is sent to Apple servers using encrypted protocols.” Most users likely didn’t consider that “certain information” could also include intimate moments or illegal transactions.
Google suspended the practice in Europe after regulators slapped the tech giant with a three-month ban on listening to recordings, and Apple has halted voice review worldwide. It’s not clear whether these moves are permanent. Meanwhile, Amazon hasn’t officially discontinued transcription but is offering a clear way for users to opt out. The official statement from Apple explains that the company is conducting a “thorough review,” while Amazon announced it would be “updating information we provide to customers to make our practices more clear.”
Clarity has been hard to come by as Apple, Google, and Amazon have scrambled to explain themselves and adjust their policies. If you’re looking to disable smart speaker recording or opt out of human review, here’s a straightforward guide.
Amazon provided a specific set of directions to opt out of human review. To do so, first open the Amazon Alexa app and click on Settings. From there, tapping through “Alexa Privacy” and “Manage How Your Data Improves Alexa” will bring you to a screen with an updated explanation of the policy. At the bottom, unchecking a box labeled “Help Improve Amazon Services and Develop New Features” means that no recordings can be poached for human review.
Turning this off does not mean that Alexa recordings won’t still be uploaded to Amazon’s servers. To delete recordings, navigate through Settings > Alexa Privacy > Review Voice History in the app. Then you can choose to delete specific recordings or to delete through a certain timespan.
Contractors aren’t going to be listening to Siri recordings anymore, but it’s not clear if Apple will delete those that are already on its servers. Currently, the company stores clips for six months, after which time it removes user IDs from copies that could linger on the server for up to two years. The best you can do now is disable Siri. Here’s how:
In iOS, go to Settings > Siri & Search. Then turn off “Listen for ‘Hey, Siri’ ” and “Press Side Button for Siri.” You’ll get a notification asking if you want to disable Siri, where you’ll select Turn Off Siri. The last step is to go to Settings > General > Keyboard. There, turn off “Enable Dictation.”
On a Mac desktop or laptop, go through System Preferences > Siri. Then, turn off “Enable Ask Siri.” After that, return to System Preferences > Dictation. Finally, turn off “Dictation.”
To thwart Google Assistant, tap the Google Home app, then go to Account > More settings > Your data in the assistant > Voice & Audio Activity. You can deactivate voice and audio recording using the toggle on that screen.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
After reports of humans listening to and leaking private recordings, Apple and Google paused human reviews.
Apple and Google have suspended human reviews of voice recordings made by Siri and Google Assistant, their respective voice control programs, amid consumer concerns over privacy. Amazon, which operates the Alexa voice assistant, has not.
Over the past month, reports have detailed how contractors for tech giants have accessed recordings most people assumed were private, only heard by artificial intelligence algorithms. Apple contractors told the Guardian in July that they had often overheard people having sex, making drug deals, or describing medical symptoms.
Bloomberg reported in April that Romanian Amazon contractors listened to Alexa’s recordings and were able to view the associated users’ locations. In July, reports surfaced that a Google contractor leaked a thousand voice recordings from Google Assistant to a Belgian news outlet, which was able to identify individuals from the recordings and locations. Tech companies employ people to rate the quality of the responses the voice assistants return to users.
Both companies have suspended human reviews worldwide, they said, but they didn’t say for how long or when human reviews would resume. German authorities mandated that Google halt human reviews for at least three months, though a Google spokesperson said the company had taken action before the German inquiry began.
It’s possible to change the settings on your Google account so that the company stores none of your audio recordings or automatically deletes them after a given period of time.
An Amazon spokesperson said in a statement, “For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features.” The spokesperson added that the number of Alexa recordings reviewed by humans was “a fraction of one percent.”
Samsung, which owns the Bixby assistant, and Microsoft, which owns Cortana, did not immediately respond to requests for comment on human reviews.
Josh Hendrickson has worked in IT for nearly a decade, including four years spent repairing and servicing computers for Microsoft. He’s also a smarthome enthusiast who built his own smart mirror with just a frame, some electronics, a Raspberry Pi, and open-source code. Read more.
Aug 5, 2020, 1:04 pm EDT | 1 min read
When you use a voice assistant like Google Assistant, Alexa, or Siri, the A.I. doesn’t always get your command right. Until last summer, companies were using human reviewers to listen to your commands and double-check the results. But the practice wasn’t clear to users. Google paused its human reviewer program, but now it’d like to start listening to your voice again—with your permission.
Last summer feels like more than a year ago, so it may be hard to remember the controversy. In April 2019, it became evident to users that everything they said to a voice assistant went to Google, Amazon, and other companies.
The idea was to have humans listen to the audio and double-check that the assistant understood correctly and responded appropriately. But false positives led to voice assistants uploading audio that users didn’t direct at their speakers. Family dinners, medical details, and more all made their way to cloud servers.
After people realized how often human reviewers listened to their conversations, the outrage began. All the companies paused human reviewers initially, but one by one each went back to the practice.
Now it’s Google’s turn. In an email sent to users, the company explains that it’s turning off the setting that allows Google to store audio for every user. That setting is what empowers human reviewers, so by default, no audio will be sent to Google for review. The idea is to make it your choice whether Google can listen to your voice after you finish talking to Assistant.
But Google would like you to opt back into the audio storage and human reviewing. The practice helps it improve its service and respond more accurately.
The company didn’t say how many emails it’s sending out, but it’s likely anyone who interacts with Google Assistant will get one. The email contains a link to your Assistant settings to enable audio storage.
If you don’t want humans listening to your voice, you don’t have to do anything. Hopefully, more companies follow Google’s lead and make features like this opt-in in the future, as opposed to opt-out.
Voice assistants (or, if you’re very into cyber stuff, virtual assistants) are gaining popularity every day. Siri, Alexa, and others are making inroads into our houses – and especially the homes of families with multiple children. But is this electronic stranger in our midst really safe? Are voice assistants listening to our every word? Let’s find out.
Is my voice assistant listening to me all the time?
Is Alexa listening to you all the time? Yes, because it wouldn’t work otherwise. Your voice assistant has to listen constantly to pick up on your code words – called “hotwords” – that activate it. Companies maintain that nothing said before the hotword is recorded. Anything said afterward is fair game.
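A minimal sketch of how that hotword gating might work. Everything here is a hypothetical stand-in – the wake phrase, the text-based “frames,” and the matching function substitute for a real on-device acoustic model – but the flow matches what the companies describe: pre-hotword audio is discarded, and only what follows the wake word is captured.

```python
HOTWORD = "hey assistant"  # hypothetical wake phrase


def detect_hotword(frame: str) -> bool:
    # Stand-in for an on-device acoustic model; here we just match text.
    return frame == HOTWORD


def process_stream(frames):
    """Buffer audio locally; only capture what follows the hotword."""
    captured = []
    listening = False
    for frame in frames:
        if not listening:
            # Pre-hotword audio is dropped and never uploaded.
            if detect_hotword(frame):
                listening = True
        elif frame == "<silence>":
            listening = False  # end of the spoken command
        else:
            captured.append(frame)  # this is what gets sent to the cloud
    return captured


# Example: only "order lard" follows the wake phrase.
stream = ["family chatter", "hey assistant", "order lard", "<silence>", "more chatter"]
print(process_stream(stream))  # → ['order lard']
```

The design point is that detection runs locally in a loop, and the device only starts transmitting once the hotword fires – which is also why false positives in that detector are exactly how unintended conversations end up on cloud servers.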
However, things like Google Assistant privacy concerns suddenly became a front-page topic in 2019 when such practices came under review. What prompted this change of heart? It was the news that companies were using third-party contractors to process some of the audio samples. Much of the concern came from the fact that some of the audio included geographical information and intimate conversations.
Afterward, a rush of damage control followed, culminating in more transparent user options and, in the case of Google, automatically opting users out of voice data collection in 2020. So while your voice assistant is still listening in case you say the magic words, the process behind it is now more transparent and more in your control.
What kind of data is my voice assistant collecting?
Your voice assistant is collecting your audio data. This means that what you say is recorded and then sent to the developer. The thing is, quite a lot can be gleaned from a recording, from your location to your shopping habits. For example, the Belgian data leak that revealed Google’s use of contractors to analyze voice data included recording samples that allowed specialists to identify the locations of the people doing the talking.
However, other data is recorded along with your voice activity. For a more specific example, we turn to an Apple support page, which outlines what else gets transmitted along with Siri recordings:
- Contact names, nicknames, and relationships (for example, “my dad”), if you set them up in your contacts
- Music and podcasts you enjoy
- Names of your and your Family Sharing members’ devices
- Names of accessories, homes, scenes, and members of a shared home in the Home app
- Labels for items, such as people’s names in Photos, Alarm names, and names of Reminders lists
- Names of apps installed on your device and shortcuts you added through Siri
However, Apple is quick to point out that this data is tied not to your Apple ID but to a random identifier generated by the device submitting it. The company is also adamant that this data isn’t sold or used to build a marketing profile.
That may actually be true, as one of the most important consumers of your voice data is the R&D operation behind the voice assistant.
What is the voice assistant data used for?
One of the most useful applications of voice assistant-collected audio is teaching the voice recognition model. The more samples of people saying “Hey, Google” in various circumstances you have, the better the assistant will be at recognizing that hotword (instead of, say, mistaking a cat’s meow or a blender going off for an activation).
However, you can’t just feed a bunch of audio data into an algorithm and hope for the best. Instead, humans have to process the raw data and prepare it for computer consumption. That’s where the issue with Google’s voice collection came in: human reviewers listening to user audio clips. They have to listen to the audio, transcribe it, annotate it, and more to turn it into material useful for training voice recognition algorithms.
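As a rough illustration of that review step, here is a sketch of turning raw clips into de-identified training examples. The record fields, the account-stripping behavior, and the hotword check are all assumptions for illustration – not Google’s actual pipeline – but they mirror the process described above: the reviewer’s transcript travels with a random identifier, not your account.

```python
import uuid
from dataclasses import dataclass


@dataclass
class TrainingExample:
    clip_id: str          # random identifier, not tied to an account
    transcript: str       # what the human reviewer heard
    hotword_present: bool  # annotation used to train wake-word detection


def annotate(raw_clips):
    """Turn raw (account_id, audio_text) pairs into de-identified training data.

    Hypothetical sketch: the account link is dropped, and only a random
    id travels with each transcribed clip.
    """
    examples = []
    for account_id, audio_text in raw_clips:
        examples.append(TrainingExample(
            clip_id=uuid.uuid4().hex,
            transcript=audio_text,
            hotword_present=audio_text.lower().startswith("hey google"),
        ))
    return examples


batch = annotate([("henry@example.com", "Hey Google, order a tub of lard")])
print(batch[0].hotword_present)  # → True
```

Note that de-identification only goes so far: as the next paragraph points out, if the transcript itself contains your name, detaching the account id doesn’t hide who you are.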
As of the August 2020 policy change, Google now explicitly states that your voice data is reviewed by trained personnel. The data is detached from your account, so the reviewer can’t tell that it’s you saying, “Hey Google, order a tub of lard.” However, if you said, “Hey Google, order a tub of lard on my Henry Surfsharkreader’s account,” then the reviewer would know that you, Henry, are ordering a tub of lard.
Can I stop the voice assistant from collecting data?
2020 was the year when voice assistant corporations were finally convinced that people may object to their data gathering measures. That’s why Google has automatically opted everyone out of voice collection.
Nevertheless, if you want to have a more hands-on approach to how your voice assistant treats your data, there are things you can do:
- Check the support article on voice assistants’ data security and privacy settings.
- Fiddle with Google Assistant Activity by going to https://myactivity.google.com/, where you can choose whether to save web and app activity as well as sound, and set up an auto-delete feature.
- Visit the support article detailing the ways Ask Siri treats your data and how to switch the options.
- Disable Siri (if you want to, of course) by going to Settings > Siri & Search, then tap “Listen for ‘Hey Siri’” and “Press Home or Side Button for Siri” to turn them off.
- Disable Dictation by going to Settings > General > Keyboard and turning off Enable Dictation. Disabling both Siri and Dictation will delete the Siri data associated with the random identifier.
- Open up the Alexa app, and go to Settings > Alexa Privacy > Manage Your Alexa Data. Then, turn off the scary-sounding “Use Voice Recordings to Improve Amazon Services to Develop New Features” setting.
- Delete everything Alexa has stored about you by going to Settings > Alexa Privacy > Review Voice History. Then, choose “Delete All Recordings for All History.”
Finally having control over voice privacy is nice
It wasn’t from the goodness of their hearts that the corporations behind voice assistants gave us more control over our data – it all followed reports of contractors being used to analyze your voice. But now we have some control over our data – and where it goes. Still, this is only a single aspect of our digital lives. Consider getting a VPN to improve your online privacy and security.