INTELLIGENCE vs COMPASSION – GOOGLE PROGRAM DEFEATS GO CHAMPION
In board games like Chess, players move pieces on the surface of a board according to the rules of the game. A player loses the game when his ‘King’ is under ‘Check’ and cannot escape, the condition called ‘Checkmate’. The game does not allow any external force to ‘uplift’ the King and grant him release from the threat.
Artificial Intelligence cannot match the uplifting power of Compassion, for Compassion acts like a physical force, causing motion not governed by the Laws of Motion described in Physics. Compassion has nothing to do with intelligence or the ability to perform intelligent actions; it is unrelated to intellectual functions, and intelligent actions cannot be equated with compassionate actions. Compassion is a response evoked when a Subject witnesses the pain and suffering of another individual or living entity. Compassion not only provides the motivation or drive to perform a chosen sequence of actions; it provides the physical energy to perform that task with ease, without stress or strain. The Force of Compassion acts upon both parties, the one performing the compassionate action and the one receiving its benefit, and both experience a sense of satisfaction and happiness while under its influence. For a natural instinct like Compassion, there can be no artificial counterpart. In my view, Google DeepMind’s Artificial Intelligence program AlphaGo cannot fathom the depths of Compassion.
Ann Arbor, MI 48104-4162 USA
THE NEW YORK TIMES
MASTER OF Go BOARD GAME IS WALLOPED BY GOOGLE COMPUTER PROGRAM
It challenged Mr. Lee because it was ready to take on someone “iconic,” “a legend of the game,” Mr. Hassabis said. Google offered Mr. Lee $1 million if he wins the best-of-five series.
Some computer scientists said Wednesday that they had expected the outcome.
“I’m not surprised at all,” said Fei-Fei Li, a Stanford University computer scientist
who is director of the Stanford Artificial Intelligence Laboratory. “How come we are not surprised that a car runs faster than the fastest human?”
By CHOE SANG-HUN and JOHN MARKOFF
MARCH 9, 2016
SEOUL, South Korea — Computer, one. Human, zero.
A Google computer program stunned one of the world’s top players on
Wednesday (March 09, 2016) in a round of Go, which is believed to be
the most complex board game ever created.
The match — between Google DeepMind’s AlphaGo and the South Korean
Go master Lee Se-dol — was viewed as an important test of how far research
into artificial intelligence has come in its quest to create machines smarter than humans.
“I am very surprised because I have never thought I would lose,” Mr. Lee said
at a news conference in Seoul. “I didn’t know that AlphaGo would play such
a perfect Go.”
Mr. Lee acknowledged defeat after three and a half hours of play.
Demis Hassabis, the founder and chief executive of Google’s artificial
intelligence team DeepMind, the creator of AlphaGo, called the
program’s victory a “historic moment.”
The match, the first of five scheduled through Tuesday, took place at a Seoul hotel amid intense news media attention. Hundreds of reporters, many of them from China, Japan and South Korea, where Go has been played for centuries, were there to cover it. Tens of thousands of people watched the contest live on YouTube.
[Video caption: Lee Se-dol, the world’s top player of the board game Go, lost the first of five matches to a computer program, AlphaGo, designed by Google DeepMind. By Reuters, March 9, 2016; photo by Google, via Getty Images.]
Go is a two-player game of strategy said to have originated in China 3,000 years ago.
Players compete to win more territory by placing black and white “stones” on
the intersections of a grid of 19 lines by 19 lines.
The play is more complex than chess, with a far greater possible sequence of
moves, and requires superlative instincts and evaluation skills. Because of that,
many researchers believed that mastery of the game by a computer was still
a decade away.
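The scale gap the article alludes to can be made concrete with back-of-the-envelope arithmetic. The branching factors and game lengths below are commonly cited rough estimates, not figures from the article:

```python
import math

# Commonly cited rough estimates (assumptions, not from the article):
CHESS_BRANCHING, CHESS_PLIES = 35, 80    # ~35 legal moves, ~80 half-moves per game
GO_BRANCHING, GO_PLIES = 250, 150        # ~250 legal moves, ~150 half-moves per game

def game_tree_magnitude(branching, plies):
    """Order of magnitude (power of ten) of branching ** plies."""
    return int(plies * math.log10(branching))

chess = game_tree_magnitude(CHESS_BRANCHING, CHESS_PLIES)
go = game_tree_magnitude(GO_BRANCHING, GO_PLIES)

# Each of the 361 points on a 19x19 board is empty, black, or white,
# so an upper bound on board configurations is 3 ** 361.
go_positions = int(361 * math.log10(3))

print(f"Chess game tree: ~10^{chess}")                    # ~10^123
print(f"Go game tree:    ~10^{go}")                       # ~10^359
print(f"Go board configurations: at most ~10^{go_positions}")  # ~10^172
```

Even with generous pruning, a Go game tree hundreds of orders of magnitude larger than chess’s puts brute-force search out of reach, which is why researchers expected mastery to take another decade.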
Before the match, Mr. Lee said he could win 5-0 or 4-1, predicting that
computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.
But after reading more about the program he became less upbeat, saying that AlphaGo appeared able to imitate human intuition to a certain degree and predicting that artificial intelligence would eventually surpass humans in Go.
AlphaGo posed Mr. Lee a unique challenge. In a human-versus-human Go match,
which typically lasts several hours, the players “feel” each other and evaluate styles
and psychologies, he said.
“This time, it’s like playing the game alone,” Mr. Lee said on the eve of the match.
“There are mistakes humans make because they are humans. If that happens to me,
I can lose a match.”
To researchers who have been using games as platforms for testing
artificial intelligence, Go has remained the great challenge since
the I.B.M.-developed supercomputer Deep Blue beat the world chess champion
Garry Kasparov in 1997.
“Really, the only game left after chess is Go,” Mr. Hassabis said on Wednesday.
AlphaGo made headlines in January, when its developers revealed that it had defeated Fan Hui, the European Go champion, five games to none. But Mr. Lee, 33, is one of the world’s most accomplished professional Go players, with 18 international titles under his belt. He has called the European champion’s level in Go “near the top among amateurs.”
AlphaGo has become much stronger since its matches with Mr. Fan, its developers said.
Mr. Hassabis said AlphaGo did not try to consider all the possible moves
in a match, as a traditional artificial intelligence machine like Deep Blue does.
Rather, it narrows its options based on what it has learned from millions of
matches played against itself and from 100,000 Go games available online.
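The contrast the article draws can be sketched in a toy form. This is not DeepMind’s code; the function names are illustrative, and a seeded pseudo-random score stands in for a trained policy network:

```python
import random

def policy_score(move):
    """Stand-in for a learned policy network's prior probability for a move.
    A real system would score moves with a trained neural network; a seeded
    pseudo-random number keeps this sketch deterministic."""
    return random.Random(move).random()

def exhaustive_candidates(all_moves):
    """Traditional style (as the article describes Deep Blue):
    every legal move enters the search."""
    return list(all_moves)

def policy_candidates(all_moves, top_k=5):
    """AlphaGo style (as the article describes it): keep only the handful
    of moves the learned policy considers most promising."""
    return sorted(all_moves, key=policy_score, reverse=True)[:top_k]

legal_moves = range(250)  # roughly the number of legal moves in a Go position
print(len(exhaustive_candidates(legal_moves)))  # 250 moves to search
print(len(policy_candidates(legal_moves)))      # 5 moves to search
```

Shrinking each position from hundreds of candidate moves to a handful is what makes deep lookahead tractable in a game as large as Go.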
Mr. Hassabis said that a central advantage of AlphaGo was
that “it will never get tired, and it will not get intimidated either.”
Kim Sung-ryong, a South Korean Go master who provided commentary
during Wednesday’s match, said that AlphaGo made a clear mistake early on,
but that unlike most human players, it did not lose its “cool.”
“It didn’t play Go as a human does,” he said. “It was a Go match with
human emotional elements carved out.”
Mr. Lee said he knew he had lost the match after AlphaGo made a move
so unexpected and unconventional that he thought “it was impossible to make
such a move.”
On Tuesday, before the match began, Oren Etzioni, the director of the Allen Institute for Artificial Intelligence, a nonprofit research organization
in Seattle, conducted a survey of leading members of the Association for
the Advancement of Artificial Intelligence.
Of 55 scientists, 69 percent believed that the program would win,
and 31 percent believed that Mr. Lee would be victorious.
Moreover, 60 percent believed that the achievement could be
considered a milestone toward building human-level artificial intelligence software.
That question remains one of the most hotly debated within the field of
artificial intelligence. Machines have had increasing success in the past
half-decade at narrow humanlike capabilities, like understanding speech.
However, the goal of “strong A.I.” — defined as a machine with
an intellectual capability equal to that of a human — remains elusive.
Other artificial-intelligence scientists said that humans might still find refuge
if the goal posts for the competition were moved.
“I wonder what would happen if they played on a 29-by-29 grid?” wondered
Rodney Brooks, a pioneering artificial intelligence researcher. By enlarging the
playing space, humans might once again escape the machine’s computing power.
Correction: March 9, 2016
An earlier version of this article described Google DeepMind’s AlphaGo incorrectly.
It is a computer program, not a computer. The error was repeated in the headline.
Choe Sang-Hun reported from Seoul, and
John Markoff from San Francisco.
A version of this article appears in print on March 10, 2016, on page A1 of
the New York edition with the headline: