But this concept is not solely the preserve of fiction. Professor Stephen Hawking has said that AI “could spell the end of the human race”, and that “humans who are limited by slow biological advancement could not compete.” Elon Musk thinks that AI could be humanity’s greatest threat – more dangerous than nuclear weapons.
RedOrbit spoke to AI expert Charlie Ortiz, Senior Principal Manager of the Artificial Intelligence and Reasoning Group within Nuance’s Natural Language and AI Laboratory. He explained why he disagrees with Hawking and Musk, and what he believes the ‘real’ future of AI will look like.
“Hollywood is exaggerating the potential negative aspects because that makes good movies,” Ortiz told us. “They envision a future in which machines match our intelligence and then exceed it, and taking lessons from human history, they assume that the more powerful will persecute the weaker and that we’re all doomed.”
However, he says: “That’s one possibility, but there are many others. You can’t discount the future in which these systems become our helpers, assistants, and teachers.”
Asked about Hawking and Musk, Ortiz said: “I disagree with both of them. Any technology can be harmful if it’s not controlled and if it’s used by the wrong people. That’s one extreme future, but it’s not the only future.” He wonders why we take such a negative view of this technology when we don’t with others.
Could AI machines be like our grown-up children?
“I don’t see why we assume that as intelligence evolves it will necessarily become evil; it’s looking at the dark side of everything,” Ortiz says. Why would it be so ungrateful, he asks. He uses the analogy of a disadvantaged family lucky enough to see a child go to a good college.
“Suppose you have a family that’s very poor and uneducated; they have a son or daughter who turns out to be brilliant and goes to a top university. They make major breakthroughs in whatever field they are in. Would we expect this genius with enormous intelligence to come back to his parents and belittle them or treat them poorly, or to not have anything to do with them?”
He adds: “We should embrace the idea that as systems become more intelligent there’s a possibility they will have emotions and values; then they may very well feel a sense of owing humanity something for creating them, just like a parent and a child.”
Perhaps we think about this possibility less because movies about cyborgs taking their elderly owners on the weekly supermarket run (with dialogue like “Thanks for buying me a nice holiday home, Hal”, “You’re welcome, Dave…”) are not so thrilling.
But before we even get to the stage of finding out if AI will destroy us or care for us, Ortiz says that currently the technology is “nowhere near” as advanced as the movies portray, and it will be a very long time before it is. So what does a more realistic, short-term future of AI look like, and what sort of exciting new uses can we look forward to? We’ll take a look in part two. Stay tuned!