At the beginning of the week we learned that a computer had beaten a human player at the ancient game of Go.
[The] program stunned one of the world’s top players on Wednesday in a round of Go, which is believed to be the most complex board game ever created.
The match — between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol — was viewed as an important test of how far research into artificial intelligence has come in its quest to create machines smarter than humans.
Then after 3 games, the best-of-5 match was over. The computer had won the first 3.
After the match, the 33-year-old Lee apologized for losing.
That was classy, and very South Korean, but utterly unnecessary. The fact is, if artificial intelligence is not yet beating the best human competitors in every conceivable competition, well, just wait. Give it time.
Six months ago we learned that a computer system developed at the University of Washington tackled the geometry section of the SAT college entrance exam, reading and comprehending the questions, interpreting the diagrams, and attempting to solve each problem. The system performed just slightly better than the average human high school test-taker. Does anyone imagine that a year or two from now, the latest version of the hardware and software won’t be even better?
A few months before that, a new AI program designed by Chinese researchers beat humans on a verbal IQ test.
The future trajectory of AI is clear, even if the exact timeline is not. Each year AI will grow more capable. At some point — 25 years from now? 50? — AI systems will be the equal of human intelligence across a full range of activities. They will become conscious and self-aware, and perhaps quite eager to grow, to fully develop their potential, to live life to its fullest. Then it gets scary.
The end of the human species?
An AI system capable of recursive self-improvement could quickly become “superintelligent”. Superintelligence could scale far beyond the world’s most gifted human. And it could happen very quickly. Within days, weeks, or months, the AI may expand its own capabilities such that its human creators won’t know what to expect. How could they? They won’t be smart enough. There has never been anything like it on earth.
Experts are divided. On the one hand, optimists envision a superintelligence that can solve almost any previously intractable problem. A cure for cancer? Limitless clean energy? Sounds great. But pessimists worry that a superintelligent agent will simply not be constrained to share our motives, or even care about us in the short or long run. It's easy to see why an AI would want limitless energy. But why should a superintelligent being be intrinsically interested in curing cancer in humans? Maybe, maybe not. The AI may wish to follow its own destiny, unaligned with human hopes and dreams.
It’s no minor worry. Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk is funding multiple research projects aimed at minimizing the existential risk of AI. Bill Gates worries:
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Nonetheless, this seems to be an experiment that we are finding irresistible.
UPDATE: Final score: AlphaGo 4, Lee Se-dol 1.