Sony Debuts Benchmark for Measuring Computer Vision Bias

Sony AI has introduced the Fair Human-Centric Image Benchmark (FHIBE, pronounced “Fee-bee”), a new global benchmark for fairness evaluation in computer vision models. FHIBE addresses the industry challenge of identifying biased and ethically compromised training data for AI, aiming to trigger “industry-wide improvements for responsible and ethical protocols throughout the entire life span of data — from sourcing and management to utilization — including fair compensation for participants and clear consent mechanisms,” Sony AI says. The FHIBE dataset is publicly available now, following publication in the science journal Nature.

“The dataset includes images of nearly 2,000 paid participants from over 80 countries,” reports Engadget, noting that all likenesses “were shared with consent — something that can’t be said for the common practice of scraping large volumes of web data.”

FHIBE participants can remove their images whenever they wish. The photos “include annotations noting demographic and physical characteristics, environmental factors and even camera settings,” according to Engadget.

“FHIBE was created to address issues with current publicly available datasets that lack diversity and are collected without consent, which can perpetuate bias and present a persistent challenge to AI developers and users,” Sony AI explains in a news post.

“Additionally, the lack of adequate and available evaluation datasets can result in biased or harmful models being deployed, making it difficult to assess potential harms and the ability of a model to function equitably on a global scale,” Sony adds.

The accompanying Nature paper examines FHIBE's utility across both narrow computer vision models and large-scale multimodal generative models, assessing biases across demographic attributes and their intersections and comparing FHIBE against existing human-centric fairness datasets.
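
To make the idea of "assessing biases across demographic attributes and their intersections" concrete, the sketch below shows how accuracy can be disaggregated by one annotated attribute or by a combination of attributes. This is an illustrative example only, not Sony's published evaluation code; the attribute names, values, and records are invented placeholders.

```python
# Illustrative sketch only -- not Sony's evaluation pipeline.
# Shows disaggregated (per-group and intersectional) accuracy, the kind of
# fairness measurement a demographically annotated dataset makes possible.
from collections import defaultdict

# Hypothetical per-image records: whether the model was correct, plus
# demographic annotations (attribute names here are placeholders).
records = [
    {"correct": True,  "age_group": "18-29", "skin_tone": "light"},
    {"correct": False, "age_group": "18-29", "skin_tone": "dark"},
    {"correct": True,  "age_group": "60+",   "skin_tone": "dark"},
    {"correct": True,  "age_group": "60+",   "skin_tone": "light"},
]

def grouped_accuracy(rows, keys):
    """Accuracy broken down by one attribute or an intersection of several."""
    totals, hits = defaultdict(int), defaultdict(int)
    for row in rows:
        group = tuple(row[k] for k in keys)
        totals[group] += 1
        hits[group] += row["correct"]
    return {group: hits[group] / totals[group] for group in totals}

# Single-attribute breakdown, then an intersectional one.
print(grouped_accuracy(records, ["age_group"]))
print(grouped_accuracy(records, ["age_group", "skin_tone"]))
```

Large gaps between groups in such a breakdown are the kind of disparity a fairness benchmark is designed to surface before a model is deployed.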

“A common misconception is that because computer vision is rooted in data and algorithms, it’s a completely objective reflection of people,” says Alice Xiang, Sony’s Global Head of AI Governance and lead research scientist at Sony AI, in a video about the release, as quoted by The Register. “But that’s not the case,” Xiang continues, explaining that “computer vision can warp things depending on the biases reflected in its training data.”

Xiang notes that facial recognition systems on mobile phones in China have mistakenly let family members unlock each other’s phones and make payments, an error that could stem from a lack of images of Asian people in the model’s training data or from undetected model bias.

The Register points out that there are other fairness benchmarks for computer vision, including Meta’s FACET (FAirness in Computer Vision EvaluaTion).

Related:
Fair Human-Centric Image Dataset for Ethical AI Benchmarking, Nature, 11/5/25
