
Apple and Google halt human review of voice data after privacy backlash, but transparency is the real issue



Both Google and Apple are suspending some of their voice data review practices, after separate reports last month revealed the extent to which the companies allow humans to listen in on users' private conversations.

After a data leak last month, Google confirmed that some of its contractors listen back to recordings of what people say to Google Assistant – the company said this helps it improve support for multiple languages, accents, and dialects. While employees and contractors reportedly cannot correlate recordings with user accounts, many of the recordings contained personally identifiable data, including addresses, names, and other private information. Furthermore, many of the recordings had been captured accidentally, without the user intending to activate the assistant.

Later in the month, a separate report stated that Apple routinely allowed workers to access up to 30 seconds of "random" Siri recordings as part of its voice grading program. While it was already known that Apple listened to some Siri recordings to improve quality, the new report found that recordings were accessible not only to internal staff but also to contractors with high turnover rates. And again, Siri could be triggered by accident – for example, by the sound of a zipper, or by words that sound like "Siri" – with 30-second excerpts recorded without the user's knowledge.

Yesterday, news emerged that a German privacy authority had ordered Google to stop harvesting Google Assistant voice data in Europe for human review. In reality, the authority only has the power to enforce the ban for three months, because Ireland serves as Google's lead jurisdiction in Europe. The Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) said [translated from German]:

The Hamburg Commissioner for Data Protection and Freedom of Information has opened an administrative procedure to prohibit Google from conducting such employee or third party evaluations for a period of three months. This should protect the privacy rights of those affected for the time being.

Google said at the time it had already stopped processing such information in July following the initial public backlash.

Earlier today, Apple confirmed that it had suspended its grading program globally, pending a "thorough review", according to a statement from the Cupertino company.

It is also worth noting the quiet elephant in the room here – Alexa. Amazon's digital assistant is arguably the market leader in the US from a smart speaker perspective, though Siri's and Google Assistant's installed bases, of course, extend deeper into the tech realm through billions of smartphones and tablets. However, a Bloomberg report from April confirmed that Amazon also allows workers to listen to voice recordings captured from its users to train and improve Alexa. So far, Amazon has not confirmed any plans to stop its voice review practices in response to any of these latest privacy reports.

Transparency

While there is undoubtedly growing concern about how technology companies process user data, the bottom line is that for artificial intelligence to improve, it will need humans at the helm to oversee and annotate data for some time yet. But that is not necessarily the core problem at play here – the underlying issue may be more about transparency, and whether people are adequately informed about how their private conversations can be accessed.

What is needed is a clearer permission structure: no lengthy privacy policies or hidden opt-out settings, but a clear pop-up that asks users whether they are happy for third parties to listen in on their home activities. No obfuscation. On that front, Apple today confirmed that it will allow users to opt out of the voice grading program through a future software update, although it remains to be seen how clear that opt-out will be.

Europe plays a prominent role in the push to hold companies accountable for the user data they utilize. Last month, British Airways (BA) was issued with a record £183.39 million ($230 million) fine over a 2018 security lapse, followed shortly after by hotel giant Marriott, which was hit with a £99 million ($123 million) fine for similar violations. This was made possible by Europe's GDPR regulations, which came into force in May last year.

Transparency is also a key aspect of the GDPR, and back in January, Google was fined €50 million ($57 million) by the French data protection agency CNIL for what it called "a lack of transparency, inadequate information and lack of valid consent" regarding its ad personalization practices.

"The use of language assistance systems in the EU must comply with the data protection requirements of the GDPR," noted Johannes Caspar, Hamburg Commissioner for Data Protection and Freedom of Information, in a statement yesterday. "At Google Assistant, it is currently significant doubt. The use of language assistance systems must be done in a transparent manner, so that informed consent from users is possible. In particular, this involves providing sufficient information and transparently informing those concerned about the processing of voice commands, but also about the frequency and risk of mis activation. "

