Microsoft Pulls AI Analysis Tool Azure Face from Public Use

As part of an overhaul of its AI ethics policies, Microsoft is withdrawing several AI-powered facial analysis tools from public use, including a controversial algorithm that purports to identify a subject’s emotion from images. Other features Microsoft will cut off for new users this week and phase out for existing users within a year include those that claim to identify gender and age. Advocacy groups and academics have raised concerns about such facial analysis features, characterizing them as unreliable, invasive and subject to bias.

The changes follow a two-year review during which a Microsoft team created a “Responsible AI Standard,” a 27-page document of best practices for ensuring AI systems do not inflict harm on society. The document and policy changes, summarized in a Microsoft blog post, indicate the company is trying to place tighter controls on its artificial intelligence technology.

“In practical terms, this means Microsoft will limit access to some features of its facial recognition services (known as Azure Face) and remove others entirely,” writes The Verge, explaining that “users will have to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they’ll be deploying its systems.”

More commonplace use cases — such as blurring faces automatically in stills and videos — will reportedly remain generally available.
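To make the distinction concrete, here is a minimal, hypothetical sketch of that kind of commonplace use, assuming the azure-cognitiveservices-vision-face Python SDK: the Face service is asked only for face bounding boxes (no identification), and the blur itself is applied locally with Pillow. The endpoint, key and file names are placeholders, and this is an illustration rather than Microsoft’s own implementation.

```python
# Illustrative sketch: detect faces with the Azure Face SDK, then blur them
# locally with Pillow. Endpoint, key, and file names below are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from PIL import Image, ImageFilter

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

# Request only bounding boxes; no face identification is involved.
with open("photo.jpg", "rb") as image_stream:
    detected_faces = face_client.face.detect_with_stream(
        image_stream, return_face_id=False, detection_model="detection_03"
    )

# Blur each detected face region client-side.
image = Image.open("photo.jpg")
for face in detected_faces:
    r = face.face_rectangle
    box = (r.left, r.top, r.left + r.width, r.top + r.height)
    image.paste(image.crop(box).filter(ImageFilter.GaussianBlur(radius=15)), box)
image.save("photo_blurred.jpg")
```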

The document makes a point of assuring a consistent quality of service across demographic groups, including marginalized groups. “Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, healthcare, financial services or a life opportunity” will be subject to review by Microsoft chief responsible AI officer Natasha Crampton and her team, according to The New York Times.

As with any firm interested in deploying AI responsibly, Microsoft is navigating uncharted terrain as it tries to eliminate ineffective or inappropriate uses while keeping the technology available for legitimate goals that can offer social benefit. Pro-social applications include Uber’s use of facial recognition to confirm that its drivers match their licenses and on-file IDs, and software that interprets images for people who are blind or have impaired vision.

NYT also identifies Microsoft’s Custom Neural Voice as useful but potentially subject to abuse: the technology “can generate a human voice print, based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they don’t speak,” but it also poses obvious perils with regard to deepfakes.
