Washington Inks Facial Recognition Law Backed by Microsoft

In Washington state, governor Jay Inslee just signed a Microsoft-backed law regulating facial recognition that could become a model for other U.S. states. The law allows government agencies to use facial recognition but bars them from using it for broad surveillance or for tracking innocent people. That makes it more permissive than the outright bans at least seven U.S. cities have placed on government use of the technology over fears of privacy violations and bias, but stricter than the status quo in states without such laws.

CES 2020: Smart Devices Enter an Anticipatory Tech World

When your smart home takes stock of who’s there before turning the heat up to their favored temperature, that’s anticipatory technology. CNET editor-at-large Brian Cooley and Lindsey Turrentine, CBS Interactive senior vice president of content strategy for tech sites, led a CES discussion on how data — including location, human behavior, facial recognition and object recognition — can help smart homes and smart devices anticipate human needs. “Some things will get better,” said Cooley. “And others might be unnerving.”

CES Panel Examines Problem of Bias in Artificial Intelligence

As artificial intelligence is increasingly embedded into devices and experiences, the problem of racial and gender bias has become apparent in several embarrassing and disturbing incidents. The industry has turned its attention to studying how bias is introduced — often via the underlying data — and how to fix it. Former FCC commissioner Mignon Clyburn, now at MLC Strategies, led a discussion of the topic with Helloalice.com president Elizabeth Gore and Uber head of inclusive engagement Bernard Coleman.

Federal Agency Reveals Bias in Facial Recognition Systems

The National Institute of Standards and Technology reported that most commercially available facial recognition systems — often used by police departments and federal agencies — are biased. The highest error rate involved Native American faces, but African-American and Asian faces were incorrectly identified 10 to 100 times more often than Caucasian faces. The systems also had more difficulty identifying female faces and falsely identified older people up to 10 times more often than middle-aged adults.

More Details Emerge About Facebook’s Upcoming News Tab

Facebook is slated to launch a News Tab as early as the end of October, but according to sources only a few of the publishers whose headlines appear there will get paid. The News Tab, which will appear on the toolbar at the bottom of the Facebook mobile app, will feature links for up to 200 publications, but sources say the social media giant never intended to pay all those news outlets. Sources note that it is similar to how Facebook built its Watch section, which includes videos it doesn’t pay for.

Twitter Will Warn Users of Politicians’ Inappropriate Tweets

Twitter announced that it plans to hide tweets from politicians that violate the company’s abuse or harassment policies. Such tweets will be placed behind a warning label but will not be removed from the service, since Twitter still considers them a matter of public interest. The notices will inform readers that a tweet violates rules regarding harassment or violent threats, and readers will then have the option of clicking through to view the questionable message. The move could complicate the ongoing debate over political bias on Twitter, as well as the balance social platforms are struggling to strike between free speech and offensive content.

Facebook Using Artificial Intelligence to Reduce Bias/Abuse

At this week’s annual Facebook F8 developer conference in San Jose, California, company CTO Mike Schroepfer discussed the progress being made by internal teams dedicated to reducing the spread of misinformation, hate speech, and abuse on the social platform using various artificial intelligence techniques. In the course of a single quarter, according to Schroepfer, Facebook takes down more than a billion “spammy” accounts, more than 700 million fake accounts, and tens of millions of items containing violent content or nudity.