March 12, 2020
Microsoft Research, working with nearly 50 engineers from a dozen technology companies, created a checklist for AI ethics intended to spur conversation on the topic and raise some “good tension” within organizations. To that end, rather than posing “yes” or “no” questions, the list suggests that teams “define fairness criteria.” Participants were not identified by name, but many work in AI-related fields such as computer vision, natural language processing and predictive analytics. The group hopes to inspire future efforts.
VentureBeat reports that, “altogether contributors to the checklist are working on 37 separate products, services, or consulting engagements in industries like government services, finance, health care, and education.” The authors pointed out the “disconnect between the focus of the AI ethics community today and the needs of ML practitioners.”
In describing the checklist, the authors wrote that, “despite their popularity, the abstract nature of AI ethics principles makes them difficult for practitioners to operationalize.” “As a result, and in spite of even the best intentions, AI ethics principles can fail to achieve their intended goal if they are not accompanied by other mechanisms for ensuring that practitioners make ethical decisions,” they added.
The paper noted that, “few of these checklists appear to have been designed with active participation from practitioners … yet when checklists have been introduced in other domains without involving practitioners in their design and implementation, they have been misused or even ignored.” Such “misused or ignored checklists” include examples from aviation, medicine and structural engineering.
The results are shared in “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI,” which was honored with a Best Paper Award at the ACM CHI Conference on Human Factors in Computing Systems.
The paper was “compiled in conjunction with Microsoft’s Aether Working Group on Bias and Fairness,” and its authors include Hanna Wallach, a Microsoft Research researcher and co-founder of Women in Machine Learning; Microsoft Research’s Luke Stark and Jennifer Wortman Vaughan; and Carnegie Mellon University PhD candidate Michael Madaio.
In response to practitioners’ demands, the checklist is “based on six stages of the AI design and deployment lifecycle rather than on a standalone set of ethics principles,” and the team interviewed ML practitioners to “understand how an AI ethics checklist can effectively help them do their jobs and confront challenges they’ve encountered.”
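To make the idea of “defining fairness criteria” concrete, here is a minimal sketch of one such criterion a team might adopt. The metric (demographic parity gap), the function names, and the 0.1 tolerance are all illustrative assumptions, not details from the Microsoft checklist itself:

```python
# Illustrative sketch: one possible fairness criterion a team could define.
# The metric, names, and threshold are hypothetical, not from the checklist.

def selection_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 outputs."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups.
preds = {
    "group_a": [1, 0, 1, 1, 0, 1],  # selection rate 4/6
    "group_b": [1, 0, 0, 0, 1, 0],  # selection rate 2/6
}

gap = demographic_parity_gap(preds)
fair_enough = gap <= 0.1  # team-chosen tolerance; here the check fails
print(round(gap, 2))
```

The point of such a definition is exactly the “good tension” the checklist aims for: the team must argue over which metric and which tolerance count as fair, rather than checking a “yes” box.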