We do have technologies that can have conversations. My team at Google created Smart Reply, as you know. We're writing millions of emails. It has to understand the meaning of the email it's responding to, even though the proposed suggestions are brief. But your question is a Turing-equivalent question; it's equivalent to the Turing Test. And I'm a believer that the Turing Test is a valid test of the full range of human intelligence. You need the full flexibility of human intelligence to pass a valid Turing Test; there's no simple natural-language-processing trick you can do to achieve that. If the human judge can't tell the difference, then we consider the AI to be of human intelligence, which is really what you're asking.

That's been a key prediction of mine. I've been consistent in saying 2029. In 1989, in The Age of Intelligent Machines, I bounded that between the early 2020s and the late 2030s; in The Age of Spiritual Machines, in '99, I said 2029. The Stanford AI department found that daunting, so they held a conference, and the consensus of AI experts at that time was hundreds of years. Twenty-five percent thought it would never happen. My view and the consensus or median view of AI experts have been getting closer together, but not because I've been changing my view.
In 2006, there was a Dartmouth conference called AI@50. The consensus then was 50 years; at that time I was saying 23 years. We just had an AI ethics conference at Asilomar, and the consensus there was around 20 to 30 years, while I was saying, at that time, 13. I'm still more optimistic, but not by that much, and there's a growing group of people who think I'm too conservative.
Source: www.wired.com