Clearview AI, the facial recognition tool built on a database of faces scraped from Facebook and elsewhere, is facing legal complaints from privacy watchdogs in Austria, France, Greece, Italy and the United Kingdom. The complaints allege that Clearview AI violates privacy protections established under the EU's GDPR data privacy law and its UK equivalent. The New York City-based company claims to have helped thousands of U.S. law enforcement agencies arrest criminals and predators.
Gizmodo reports that the Hermes Center for Transparency and Digital Human Rights in Italy, Homo Digitalis in Greece, and the European Center for Digital Rights in Austria have filed complaints against Clearview.
Privacy International legal officer Ioannis Kouvakas said, “European data protection laws are very clear when it comes to the purposes companies can use our data for … extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.”
Clearview AI founder and chief executive Hoan Ton-That said the company has no contracts “with any EU customer and is not currently available to EU customers.”
The AI-based company has been under scrutiny since it was revealed to have “amassed a database of more than three billion images scraped from Facebook and elsewhere without consent.” Faces are searchable via a mobile app. Emphasizing the value of Clearview to law enforcement, Ton-That said, “national governments have expressed a dire need for our technology because they know it can help investigate crimes like money laundering and human trafficking, which know no borders.”
The facial recognition system has also reportedly “been used or tested at more than 200 companies, including many retail giants such as Best Buy, Home Depot, and Walmart.” Police officers at “at least dozens of departments” have apparently “downloaded and used the app without their department’s knowledge.”
The privacy authority in Hamburg, Germany declared Clearview’s practices “unlawful” in response to a complaint by computer scientist Matthias Marx, whose biometric profile is in the company’s database “without his knowledge or consent.”
Clearview “denied that Marx had been monitored over any length of time … [but only provided a] snapshot of some photos available on the Internet.” But the Hamburg Data Protection Authority (DPA) countered that one of the photos of Marx scraped by Clearview included text identifying him as a student and placing him in Hamburg on a specific date. That, it said, is considered monitoring, because Clearview “evidently also archives sources over a period of time.”
“Systematic recording is not necessary,” it added. “The sensitivity of the monitored behavior is irrelevant. The motive for the monitoring is also irrelevant.” The DPA’s deletion order, however, was “narrowly focused,” disappointing Marx and privacy groups.
“This surveillance machine is terrifying,” said Marx. “Almost one year after my initial complaint, Clearview AI doesn’t even have to delete the pictures that show me. And even worse, every individual must submit their own complaint. This shows that our data is not yet sufficiently protected and that there is a need for action against biometric surveillance.”