CES: A Look at Robots, Conscious Computers and Humanity

During CES, Gigaom publisher Byron Reese discussed the ideas in his latest book, “The Fourth Age: Smart Robots, Conscious Computers and the Future of Humanity,” with CTA’s Cindy Stevens. Reese’s first three ages are language and fire; agriculture and cities; and writing and wheels. The fourth age, defined by smart robots and conscious computers, divides people into two camps, he said. “People like Stephen Hawking see AI as an existential threat,” he said. “Whereas Mark Zuckerberg and others think that’s ridiculous and can’t fathom that point of view.”

“That really puzzled me, why these smart technology people saw this so differently,” he said. “What I learned is that they believe very different things about what humans are.”

The first question he posed was: are people machines? “There are three answers to that,” he said. “If you’re a machine, everything that happens to you can be explained by physics. I have a podcast about AI, and 95 percent of the people on my show believe that.” The second and third choices are to be an animal or a human, the latter being an animal but with “something else special, be it a soul or consciousness.”

“The people afraid of AI all begin with the assumption that we’re machines,” he said. “If you think you’re a machine, you do have something to worry about. But outside of Silicon Valley, most people think they’re humans.” Stevens asked Reese if a computer can become conscious. “A computer can measure temperature, but you can feel warmth,” he said. “Whatever that difference is, is consciousness.”

He added that there are eight theories as to what creates that difference. “A few of the theories say a computer can be conscious and the rest don’t,” he said. “But I don’t think you want a machine to become conscious. The minute they are aware of the world, we have to afford them rights to the extent they can feel pain and suffer. You can’t send them in to defuse the bomb, for example.”

He described MIT professor Joseph Weizenbaum’s early work in AI, writing a chatbot that could act like a psychotherapist. “He said that people poured their hearts out even though they knew it was a computer,” he said. “When the computer said, I understand, it was telling a lie.”

Weizenbaum turned against AI after this experiment. “I think it’s a problem if we build machines that mimic empathy,” said Reese. “We’ve had to come a long way to give humans rights. If we create machines that look and talk and act like humans — but then throw them away — could that have a numbing effect on human rights?”

Many people are worried that robots will take away our jobs, but Reese is sanguine. “That worry begins if you think we’re machines,” he said. “If there’s a job you can imagine a machine doing — like a drone that cleans windows — that’s a terrible job that you want machines to do. Meanwhile, there are things that only people can do. My hope is we build machines to do all the things they can do, freeing up people.”
