January 8, 2019
Industry leaders gathered at CES to discuss the ethics of artificial intelligence. Isaac Asimov’s Three Laws of Robotics protect humans from physical harm by robots, moderator Kevin Kelly of BigBuzz Marketing Group noted at the outset, but how do we protect ourselves from other types of technology-driven harm? AI experts Anna Bethke from Intel, David Hanson from Hanson Robotics, and Mina Hanna from the IEEE had a wide-ranging discussion on how to identify, shape, and possibly regulate aspects of AI development that have ethical and moral ramifications.
Anna Bethke, head of AI for social good at Intel, discussed the concept of a “counterfactual check.” In brief, present the same problem to the AI a second time, changing only one or two input parameters (such as changing the person from a man to a woman). If the AI then produces a different answer that cannot be rationally explained, there is probably something untrustworthy in its design or training. (Counterfactual: a claim, hypothesis, or other belief that is contrary to the facts.)
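The idea behind such a check can be sketched in a few lines of code. The model, feature names, and tolerance below are all hypothetical illustrations, not anything Bethke described; the point is simply to show the mechanics of re-running a model with one attribute flipped and comparing the outputs.

```python
def toy_loan_model(applicant):
    # Hypothetical scoring model: the score depends only on income and
    # debt. The 'gender' field is (deliberately, for this demo) ignored.
    return applicant["income"] * 0.5 - applicant["debt"] * 0.3

def counterfactual_check(model, applicant, attribute, new_value, tolerance=1e-6):
    """Re-run the model with a single sensitive attribute changed.

    Returns True if the two outputs agree within tolerance (the model
    passes the check), False if changing that attribute alone moved
    the result, which would call for rational explanation.
    """
    original = model(applicant)
    counterfactual = dict(applicant, **{attribute: new_value})
    changed = model(counterfactual)
    return abs(original - changed) <= tolerance

applicant = {"income": 60000, "debt": 10000, "gender": "man"}
passes = counterfactual_check(toy_loan_model, applicant, "gender", "woman")
print(passes)  # True: changing only gender does not change the score
```

In a real audit, the comparison would run over many inputs and attributes, and a failed check would trigger a closer look at the training data rather than an automatic verdict.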
Mina Hanna, chair of the IEEE-USA’s AI and Autonomous Systems Policy Committee, shared that the IEEE has published documents on ethically aligned design.
David Hanson, CEO of Hanson Robotics, pointed out that any AI today is a reflection of the ethics of its creators. He raised the question: What happens when the AI has agency, when it can choose actions it judges to be right or wrong, actions that are maximally beneficial and minimally harmful? With animals, he said, such choices ultimately come down to survival.
When it comes to regulating AI, Hanna said he expects discussions in Washington this year on fiduciary responsibility in a number of areas, including data protection, privacy, and AI systems whose outputs could violate existing laws and regulations. Fiduciary responsibility would be an important consideration because it could potentially hold every employee of a company responsible for violations, motivating the organization to train everyone in ethical behavior and in ‘if you see something, say something.’
Both Bethke and Hanson stressed the importance of diversity when developing AIs — diversity in data sources and data sets, diversity in the development team, diversity in testing.
Finally, Hanson commented that we project our own lives onto the machine, so we need to consider agency in AI: some of the laws assume agency and depend on motivational bias in decision making. AI will need conscious creativity before we can depend on it to make ethical decisions, he said.