January 13, 2021
During a CES 2021 panel moderated by The Female Quotient chief executive Shelley Zalis, AI industry executives probed issues of gender and racial bias in artificial intelligence. Google head of product inclusion Annie Jean-Baptiste, SureStart founder and chief executive Dr. Taniya Mishra, and ResMed senior director of health economics and outcomes research Kimberly Sterling described the parameters of such bias. At Google, Jean-Baptiste noted, “the most important thing we need to remember is that inclusion inputs lead to inclusion outputs.”
Mishra, a career AI scientist with more than 12 years in the field, said that “the most important issue in diversity in AI … has to do with the lack of representational role models.” “You cannot be what you cannot see,” she said. Sterling, a pharmacist who now works in medical devices and digital health, said that “often it’s the unintended consequences that get uncovered.”
“Without diverse datasets we end up with bad models,” she said. “It’s predicting commonalities, yet differences matter in healthcare. Representation matters.”
Zalis asked panelists to describe “specific examples of a miss when we didn’t have diversity at the table.” Jean-Baptiste related that, when her team was developing Google Assistant, they were all “working for representation in our teams.” “In the absence of that, we found the possibility of it saying alienating, biased stuff,” she said. Adversarial testing helped “break the product before it launched and add positive cultural references.”
Mishra said that, when she was a PhD student focused on speech synthesis and recognition, the data came from “mostly white men who spoke with standard U.S. accents in polished, almost unnatural ways.” The datasets have since become much larger, “yet, some of the issues around not being understood persist,” with the voices of children and the elderly still underrepresented.
Sterling added that both are good examples. “Companies have to be intentional,” she said. When Zalis brought up facial recognition systems failing to detect darker skin tones, Sterling pointed to MIT Media Lab’s Gender Shades project, which found that commercially available products were built on limited datasets. “If you don’t have diversity in terms of shades of color and gender, you begin to not be identified or misrepresented,” she said.
On solutions, Jean-Baptiste emphasized education, which at Google involves teaching its engineers. “We have to be really proactive around the inflection points in the process and have accountability,” she said.
Mishra asked how to build “a much larger pool of AI technologists and leaders.” To those who say diversity is important but that they want the “best” candidates, Mishra pointed out that “we confuse equal with equitable.” “Equal means that everyone will be measured by the same yardstick, and equitable means the yardstick holistically encompasses your skills, experiences and perspectives,” she said. “We have to be intentional in terms of opening those doors more widely.”
Sterling added, “we often hear about privacy by design, security by design.” “We also have to think about eliminating bias by design,” she said. “That will encourage us to move a little slower in certain areas, challenging our assumptions. If you have people around the table that look like you and have the same background — you’re probably not being intentional.”