European University Institute researchers, working with the European Consumer Organisation (BEUC), created AI-enabled software to scrutinize the privacy policies of 14 major technology companies for violations of the new GDPR. They found that one-third of the clauses were “potentially problematic” or contained “insufficient information,” with 11 percent of the policies’ sentences using “unclear language.” Among the companies examined were Alphabet, Amazon and Facebook. The researchers did not reveal which companies were in violation.
Bloomberg reports that the researchers, who conducted the study in June, published only “aggregate findings for all of the companies in the study.” The new General Data Protection Regulation requires “clear and comprehensive explanations of what data a company collects, how it uses the data, and who it shares the information with.”
The AI software, dubbed Claudette, found “policies that did not identify third parties a company might share personal data with, policies that stated users would be deemed to have agreed to a plan simply by using the company’s website and others that used vague and confusing language.”
BEUC director general Monique Goyens said the findings of the research were “very concerning” and urged EU regulators to look at the possible violations the researchers spotted.
“Many privacy policies may not meet the standard of the law,” she wrote. “AI can be used to keep companies in check and ensure people’s rights are respected.” She added that such software would make it easier for EU data privacy regulators to monitor the vast number of businesses they are now responsible for policing and to start legal action against those who break the law.
Claudette uses natural language processing to “compare the wording of companies’ policy documents to model policy clauses that have been developed by an EU body that represents all of the bloc’s national data protection authorities.”
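The general approach of comparing policy wording against model clauses can be illustrated with a simple sketch. This is not Claudette’s actual code (the tool’s real pipeline is far more sophisticated); it is a minimal, hypothetical example using bag-of-words cosine similarity to flag sentences that diverge too far from a model clause, with the model clause and threshold invented here for illustration.

```python
# Illustrative sketch only — NOT Claudette's implementation.
# Flags policy sentences whose wording is too dissimilar from a
# model clause, using a simple bag-of-words cosine similarity.
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens from a sentence."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    """Cosine similarity between the word-count vectors of two sentences."""
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical model clause, standing in for the templates developed
# by the EU data protection body mentioned above.
MODEL_CLAUSE = ("We share personal data with the following named third "
                "parties for the purposes listed below.")

def flag_vague_clauses(policy_sentences, threshold=0.3):
    """Return sentences too dissimilar from the model clause to match it."""
    return [s for s in policy_sentences if cosine(s, MODEL_CLAUSE) < threshold]
```

A sentence that names specific third parties scores close to the model clause and passes, while a vague one such as “We may share your information with partners” falls below the threshold and is flagged for review.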