IEEE Publishes First Draft Guidelines for ‘Ethically Aligned’ AI

The IEEE just published the first version of a 136-page document that it hopes will help technologists build ethically designed AI systems that benefit humanity and avoid the pitfalls inherent in the new field. Ethics, says the IEEE, is something technologists should consider when building autonomous systems. The recommendations in the new document, titled “Ethically Aligned Design,” are based on input from more than 100 specialists in AI, law, ethics, philosophy and policy.

TechCrunch reports that, “the IEEE is hoping it will become a key reference work for AI/AS technologists as autonomous technologies find their way into more and more systems in the coming years.”

The IEEE Global Initiative’s website also features submission guidelines for feedback on the document by March 6, 2017. All comments and input “will be made publicly available,” as part of the work to create consensus for IEEE standards.

Konstantinos Karachalios, managing director of the IEEE Standards Association, notes that, “by providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future.”

Among its contents, the document lists general principles, including the need to ensure AI respects human rights, operates transparently and makes automated decisions that are accountable. It also recommends “how to embed relevant ‘human norms or values’ into systems, and tackle potential biases, achieve trust and enable external evaluating of value alignment,” as well as considering “methodologies to guide ethical research and design.”

Relevant to the latter point, the IEEE flags “the tech industry’s lack of ownership or responsibility for ethics” as a problem, and suggests that ethics be a routine topic in tech degree programs.

The IEEE notes that the lack of “an independent review organization to oversee algorithmic operation, and the use of ‘black-box components’ in the creation of algorithms” stymies the creation of ethical AI. It recommends that “a multidisciplinary and diverse group of individuals” build AI solutions, and that “an independent, internationally coordinated body” oversee whether products meet ethical criteria.

“When systems are built that could impact the safety or well-being of humans, it is not enough to just presume that a system works,” according to the IEEE. “Engineers must acknowledge and assess the ethical risks involved with black-box software and implement mitigation strategies where possible. Technologists should be able to characterize what their algorithms or systems are going to do via transparent and traceable standards. To the degree that we can, it should be predictive, but given the nature of AI/AS systems it might need to be more retrospective and mitigation oriented.”