Tech Firms and Investors Develop AI Ethics, Best Practices

A growing number of venture capital and technology executives are pushing for a code of ethics for artificial intelligence startups, along with tools that make algorithms’ decision-making more transparent and best practices such as open, consistent communication. At Google, chief decision scientist Cassie Kozyrkov believes humans can fix AI problems. But the technology is still under intense scrutiny from the Department of Housing and Urban Development, the city of San Francisco and the European Commission, among others.

The Wall Street Journal reports that “this summer, the European Commission plans to assess a summary of ethical guidelines for AI technology.” Some AI companies are already developing ethics guidelines on their own.

San Diego-based Analytics Ventures began thinking about ethics after being approached by horse breeders who wanted to use AI to identify the best thoroughbreds to breed. The company both invests in companies and builds them from scratch; its eight startups use Klear, an internally built tool “that forensically explains why an AI system made the decision it did.”

“I see explainability as a core component of having an ethical guardrail around AI,” said managing partner Andreas Roell, who added that he wants every entity to also have a “designated AI ethics officer.”
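Klear’s internals are not public, but a minimal sketch of the kind of model-agnostic explainability such a tool could rely on is shown below, using permutation feature importance; the dataset, model, and technique here are illustrative assumptions, not a description of how Klear actually works.

```python
# Illustrative sketch only: Klear's method is not public, so this uses
# permutation feature importance, a common model-agnostic explainability
# technique, to show the general idea of attributing a model's decisions
# to its input features.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# features whose shuffling hurts the most are driving the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Techniques like this report which inputs most influence a prediction, which is one way an “AI ethics officer” or auditor could begin to scrutinize a system’s behavior.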

Enterprise software venture studio High Alpha began to think about AI ethics, said partner Eric Tobias, in the wake of the backlash against Geofeedia, social-media geo-tagging software that could be used by law enforcement to identify people at protests. Four of High Alpha’s partners had invested in Geofeedia as individuals. The company began inviting non-technologists to meetings “to bring in different perspectives,” a strategy that proved fruitful.

A tech accelerator run by Innovation Works in Pittsburgh “introduced a voluntary ethics component to its 27-week program for startups, in collaboration with Carnegie Mellon University,” and all 12 companies in the program participated.

VentureBeat reports that, at Google, Kozyrkov “helps Google push a positive AI agenda — or, at the very least, convince people that AI isn’t as bad as the headlines claim.” She believes that “artificial intelligence is merely an extension of what humans have been striving for since our inception.”

“Humanity’s story is the story of automation,” said Kozyrkov. “Humanity’s entire story is about doing things better.” She also argued that AI outperforming people is not in itself a danger, because “all tools are better than humans” at what they do, whether a barber’s scissors or the Gutenberg printing press. “If you can do it better without the tool, why use the tool?” she noted.

“And if you’re worried about computers being cognitively better than you, let me remind you that your pen and paper are better than you at remembering things.” She added, “biases demonstrated by AI are the same as existing human biases,” because data sets have human authors.

Kozyrkov believes the four principles of teaching human students can apply to safe and effective AI: think about what you want to teach your students; incorporate relevant and diverse perspectives; create well-crafted tests; and build safety nets for when something goes wrong. “It’s time for us to focus on machine teaching, not just machine learning,” she said.
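As a concrete illustration of the last principle, a safety net, the hedged sketch below wraps a classifier so that low-confidence predictions are deferred to a human reviewer rather than acted on automatically; the `guarded_predict` helper and its `threshold` parameter are hypothetical, not anything Kozyrkov or Google describes.

```python
# Hedged sketch of a "safety net" in the sense Kozyrkov describes, not a
# Google implementation: defer to a human when the model is not confident.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    label: Optional[int]   # model's answer, or None if deferred
    deferred: bool         # True when the case goes to a human reviewer
    confidence: float


def guarded_predict(model, x, threshold=0.9):
    """Return the model's prediction only when it is confident enough.

    `model` is assumed to expose scikit-learn's predict_proba interface;
    anything below `threshold` is flagged for human review.
    """
    proba = model.predict_proba([x])[0]
    confidence = float(proba.max())
    if confidence < threshold:
        return Decision(label=None, deferred=True, confidence=confidence)
    return Decision(label=int(proba.argmax()), deferred=False, confidence=confidence)
```

Pairing a wrapper like this with well-crafted tests is one way “machine teaching” can be made operational rather than aspirational.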
