Clearview Facial Recognition Adds Deblur and Mask Removal

Undeterred by lawsuits and demands to stop scraping social media, facial recognition firm Clearview AI is plowing ahead with efforts to expand its database and introduce new tools. Company co-founder and CEO Hoan Ton-That said Clearview has collected more than 10 billion images from social media and the Internet, and the company is adding new tools to help its users, often law enforcement, obtain matches. Most recently, the company developed a deblur tool and a mask removal tool, the latter using machine learning to recreate the covered part of a person’s face. However, use of such tools raises concerns that individuals could be wrongly identified or that the tools could introduce bias.

Mask removal uses “a best guess based on statistical patterns found in other images,” reports Wired. “I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process I would expect a plethora of unintended bias to creep in,” said MIT professor Aleksander Madry, a machine learning specialist.

People with certain features are more likely to be wrongly identified, and the ethics of “unmasking” are problematic. “Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy,” Madry explained. NEC has also developed an algorithmic process to identify people beneath their masks.

Ton-That noted that Clearview’s new tools improve accuracy, and that enhanced images will be designated as such. Deblur also uses machine learning to help envision what a clearer picture would look like.
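The underlying idea, as Wired's sources describe it, is that the missing or blurred pixels are filled in with a statistical best guess learned from other images. The toy sketch below is not Clearview's method; real systems use deep generative models trained on large face datasets. It only illustrates, with a crude stand-in (replacing hidden pixels with the mean of the visible ones), why a statistical guess can confidently produce the wrong content:

```python
import numpy as np

def fill_masked(image, mask):
    """Replace hidden pixels with the mean of the visible pixels.

    image: 2D array of grayscale values
    mask:  2D boolean array, True where pixels are hidden

    A deliberately crude stand-in for the learned "best guess based on
    statistical patterns" that real inpainting models produce.
    """
    guess = image.astype(float).copy()
    guess[mask] = image[~mask].mean()  # statistical guess from visible data
    return guess

# Tiny 3x3 "image": the rightmost column (true value 50) is masked.
img = np.array([[10., 10., 50.],
                [10., 10., 50.],
                [10., 10., 50.]])
mask = np.array([[False, False, True],
                 [False, False, True],
                 [False, False, True]])

restored = fill_masked(img, mask)
# The hidden column is "reconstructed" as 10.0 (the mean of the visible
# pixels), not its true value of 50.0 -- a plausible-looking but wrong guess.
```

A learned model makes far better guesses than a mean fill, but the failure mode is the same in kind: the output reflects the training data's statistics, not the actual hidden face, which is the basis of Madry's accuracy and bias concerns.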

Clearview’s technology has been called a “surveillance tool,” sparking public outrage over invasion of privacy in an era where smartphones, social media and AI have blurred boundaries. Law enforcement uses Clearview’s database to help identify suspects. Ton-That “says he believes most people accept or support the idea of using facial recognition to solve crimes,” reports Wired.

That hasn’t stopped the ACLU from suing Clearview in Illinois under a law that restricts the collection of biometric data. Clearview has also been targeted with class action lawsuits in New York and California.

Wired writes that Facebook and Twitter have demanded that Clearview stop scraping their sites, and quotes Facebook spokesperson Jason Grosse as saying, “Clearview AI’s actions invade people’s privacy, which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services.”

Jonathan Zittrain, director of Harvard’s Berkman Klein Center for Internet & Society, believes the greatest danger of Clearview’s model is that it will make routine use of facial recognition the new norm. “And we know how this play enters its next act,” he stated. “LinkedIn and Facebook continue to object and throw up enough chaff, but then they just license access and it becomes about sharing the money.”

Ton-That says Clearview has 3,100 law enforcement and government customers. Wired was able to identify the FBI, U.S. Immigration and Customs Enforcement, and U.S. Customs and Border Protection among them. The executive notes that investigators have used artificial intelligence to aid their investigations for years, and that as long as the technology is supervised “under human control,” potential harm will be minimized.