The Internet company Google scans its cloud customers' images and emails for photos and videos of child abuse. After a cross-check with a joint clearing house, the company reports hits from its automated searches to local police authorities. What sounds sensible has already led to accusations against innocent parents in the United States.
Parents had sent images of their sick infants' genitals to paediatricians for consultation. In at least one case, Google blocked a user account and deleted the user's image archives as well as years' worth of emails. The New York Times reported on the false accusations and their consequences over the weekend.
In an interview with the newspaper, "Mark", one of the affected Google users, who remained anonymous, described how Google made all of his account data, emails and calendars inaccessible, how he thereby lost access to his digital identity and his digital image archives, and what the consequences are for his everyday life. He also reports that Google has not lifted the ban, even after the local police reviewed the case and declared it harmless.
The cases raise the question of how far Google protects its users' privacy - and to what extent the company has the results of its automatic searches verified by humans before forwarding them to the police.
The cases also show how dependent users become on the Internet giant's goodwill when they tie all their Internet services, logins, photo archives and mail to Google's cloud and user accounts via Gmail, Google Cloud and the group's "Chrome" browser. The more intensively users rely on Google's services, the more vulnerable they are to an unjustified account suspension.
According to 2021 research by WELT AM SONNTAG, Google also scans user data in its cloud storage and email services in Germany, automatically searching for child abuse material. This is not limited to pictures actively sent by email - it is enough for a user to have activated Google's cloud backup for photos on an Android phone, for example.
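Google has not published the details of its scanning systems; in general, however, such automated searches are commonly built on matching file hashes against a database of known material supplied by a clearing house. A minimal, simplified sketch (real systems additionally use perceptual hashing and machine-learning classifiers; the hash set and function names here are hypothetical):

```python
import hashlib

# Hypothetical database of hashes of known abuse material, of the kind
# distributed by clearing houses. The one entry is the SHA-256 of an
# empty file, used purely for demonstration.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_file(data: bytes) -> bool:
    """Return True if the file's SHA-256 digest matches a known entry."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES
```

Exact-hash matching of this kind only finds byte-identical copies; it is the perceptual-hash and classifier stages, which tolerate variation, that can misfire on legitimate medical photos like those in the cases described.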
Google expert Claire Lilley had assured WELT AM SONNTAG that all cases are examined in detail by humans using a standardized procedure before being passed on to the authorities. But this procedure appears to fail, at least in the cases described by the New York Times.
In Germany, too, Google's investigators produce a lot of false alarms, according to WELT AM SONNTAG's 2021 research: experts at the North Rhine-Westphalia State Criminal Police Office put the past error rate at around 40 percent. "So some algorithm produces a bunch of false accusations - and the taxpayer then has to sort through this garbage," commented Patrick Breyer, Pirate Party member of the European Parliament, on the companies' contribution to fighting abuse images on the Internet.
In the meantime, the debate has taken on a whole new relevance: the EU is currently discussing whether to oblige providers to automatically scan messages on the Internet. This concerns not only depictions of child abuse but also so-called cybergrooming, i.e. attempts by adults to initiate contact with minors via chat.
The plans are highly controversial among the governments of the EU member states - in June, the German government, for example, sent the EU Commission a list of 61 questions, some of them sharply critical.
In the document, published by the platform Netzpolitik, the federal government refers to commitments in the traffic light coalition's agreement, according to which private communication is to be protected and end-to-end encryption maintained as the standard for data protection and cyber security.
The federal government's experts asked, among other things: "How mature are state-of-the-art technologies for avoiding false hits?" and "What proportion of false positive hits can be expected (...)?"
The EU Commission's answers are vague - among other things, the Commission declines to set a minimum level of maturity for the scanning technology, measured by its error rate. The Commission also considers error rates of up to ten percent of all hits acceptable. How exactly the scanning is supposed to work without violating users' privacy or breaking end-to-end encryption does not emerge from the answers.
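At the scale of EU-wide messaging, even a seemingly modest error rate translates into large absolute numbers of wrongly flagged people. A back-of-the-envelope illustration (all figures below are assumptions for the sake of the arithmetic, except the ten-percent share of hits the Commission deems acceptable):

```python
# Illustrative base-rate arithmetic - the volume and hit-rate figures
# are assumed, not taken from the article.
messages_per_day = 10_000_000_000   # assumed EU-wide daily message volume
hit_rate = 0.0001                   # assumed share of messages flagged as hits
false_share_of_hits = 0.10          # error rate the Commission finds acceptable

hits = messages_per_day * hit_rate
false_hits = hits * false_share_of_hits
print(f"{int(hits)} hits/day, of which {int(false_hits)} false")
```

Under these assumed numbers, a ten-percent error rate would mean on the order of a hundred thousand false flags per day - each one, as the NRW figures suggest, work for investigators and a potential ordeal for an innocent user.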
On-device scanning, the only technical approach that would circumvent encryption, is rejected by large providers. WhatsApp's parent company Meta, for example, says it wants to protect users' rights and warns of possible abuse by authoritarian regimes. Apple had announced on-device scanning for the iPhone but backed away from it after protests.