AlphaGo vs. Lee Se-dol: Why a win for AI is not a loss for humanity

When machines beat men, they make us think seriously about our place in a technologically advanced world, and we tend to overestimate machines and underestimate men.

N S Ramnath

[Photograph: A finished game of Go on a 13x13 beginners' board, by Chad Miller, under Creative Commons]

Two men sitting on opposite sides of a table at a luxury hotel in Seoul, taking turns to place black and white stones on a board, were the centre of attention for the artificial intelligence (AI) community across the world over the last few days. One of them is Lee Se-dol, one of the best Go players the world has ever seen. The other is an operator who merely follows instructions from a computer next to him that’s running a program called AlphaGo.

In many ways, the match is a replay of another board game contest played 19 years ago in New York, when Deep Blue, a program running on an IBM supercomputer, defeated Garry Kasparov, the reigning world chess champion at the time. As in 1997, the present match involves a tech giant: AlphaGo was developed by DeepMind, a company Google has owned since 2014. As in 1997, this match too carries tremendous symbolic value, pitting man against machine. And as in 1997, the machine won, taking four out of five games.

AlphaGo’s victory was not expected for another 10 to 15 years. Go is a more complex game than chess, even though it’s based on a simpler set of rules. This quality has attracted some of the most powerful minds to it. Albert Einstein is known to have played it. Alan Turing, a key figure in AI, spent hours on it. US astronaut Daniel Barry has played it in space. We might have seen John Nash (brilliantly portrayed by Russell Crowe in A Beautiful Mind) play the game in a Princeton quadrangle.

Go is played on a bigger board. In chess, it’s eight squares by eight; in Go, it’s 19 by 19. In chess, on average, there are about 35 legal moves at any point; each of those would in turn branch out to 35 more options, and so on. In Go, there are about 250 options at any point, each one branching out to 250 more, and so on. This means that in chess a computer can rely on brute force, searching many moves ahead and evaluating the resulting positions before selecting the best, which is broadly what Deep Blue did. That approach won’t work in Go, because of the enormity of the options it would need to consider. Again, in chess, an expert can look at a half-played board and reasonably take a call on who would win. In Go, an expert can’t do that. The game is more intuitive; the strategic implications of a tactical move are too hard to calculate. Go players tend to go by principles that are general, even vague: the opponent’s key point is yours; beware of going back to patch up; the empty triangle is bad, and so on.
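To get a feel for why brute force breaks down, here is a back-of-the-envelope calculation using the branching factors quoted above (roughly 35 for chess, 250 for Go). It is only an illustrative sketch, not a precise count of legal positions:

```python
# Rough arithmetic only: assume every position offers about 35 legal moves
# in chess and about 250 in Go. The number of move sequences a brute-force
# search must examine then grows roughly as branching_factor ** depth.

for depth in (2, 4, 6):
    chess = 35 ** depth
    go = 250 ** depth
    print(f"{depth} moves ahead: chess ~{chess:,} sequences, Go ~{go:,}")

# 2 moves ahead: chess ~1,225 sequences, Go ~62,500
# 4 moves ahead: chess ~1,500,625 sequences, Go ~3,906,250,000
# 6 moves ahead: chess ~1,838,265,625 sequences, Go ~244,140,625,000,000
```

Looking just six moves ahead, the Go search space is already more than a hundred thousand times larger than the chess one, and real games run to hundreds of moves.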

AlphaGo’s developers tackled this by making the program first choose a limited number of promising moves (using an algorithm that learned from millions of moves across thousands of games) and then pass those candidates to another algorithm that evaluates their statistical probability of success. AlphaGo learnt the game by studying a huge number of games and arriving at its own strategy and tactics. In this it is unlike Deep Blue, whose developers could tweak the program (and in fact did tweak it) during the course of the match to make it play differently. AlphaGo’s developers don’t have that luxury; AlphaGo would have to play thousands of games before its behaviour changed.
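A minimal sketch of that two-stage idea, in Python, might look like the following. The `policy` and `value` functions here are crude placeholders standing in for AlphaGo’s trained networks, and the board representation is invented for illustration; none of this is DeepMind’s actual code.

```python
# Toy illustration of two-stage move selection: one model proposes a handful
# of promising moves, another scores each candidate's chance of winning,
# and the best-scored move is played.

import random

BOARD_SIZE = 19

def legal_moves(board):
    """All empty points on the board (board is a dict of occupied points)."""
    return [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)
            if board.get((r, c)) is None]

def policy(board, moves, top_k=8):
    """Stand-in for the move-proposal step: pick a few 'promising' moves.
    Here we sample randomly; AlphaGo's network learned from human games."""
    return random.sample(moves, min(top_k, len(moves)))

def value(board, move):
    """Stand-in for the evaluation step: estimate the probability of winning
    after playing `move`. Here it is just a random number in [0, 1]."""
    return random.random()

def choose_move(board):
    candidates = policy(board, legal_moves(board))
    return max(candidates, key=lambda m: value(board, m))

if __name__ == "__main__":
    empty_board = {}  # {(row, col): "black" or "white"}
    print(choose_move(empty_board))
```

The real system also runs a tree search on top of these trained networks, but the division of labour the paragraph above describes, one model to narrow the choices and another to judge them, is what the sketch shows.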

Despite this difference, both Deep Blue and AlphaGo have similar lessons to offer. One, they throw light on how human beings see and relate to technology. Two, they give a sense of how fast technology can advance. Finally, they make us seriously think about our own place in a technologically advanced world.

Neither chess nor Go is poker, where players look for clues in the slightest gestures and micro-expressions of their opponents. In chess or Go, all the information one may need is right there on the board, at least in theory. In practice, though, a lot depends on what’s outside the board. Players study their opponent’s previous games and spend a lot of time figuring out the opponent’s strategy and state of mind. During the game, body language matters; Kasparov, for instance, is known to intimidate people with his presence. When playing against a machine, however, you don’t have these benefits. A machine is a black box; you don’t know what’s happening inside. Both Kasparov and Lee mentioned this lack of information as a key challenge.

In Kasparov’s case, this turned ugly. Watching clips of the 1997 match, one can see that Kasparov felt something was seriously amiss. Deep Blue didn’t play like a machine; some of its moves seemed too human to him. In one of the press conferences, he said Deep Blue had used the “hand of god” to defeat him (alluding to Diego Maradona, who used the phrase after scoring a goal with his hand in a quarter-final match against England during the 1986 World Cup). Kasparov demanded that IBM share the rationale behind each of Deep Blue’s moves (IBM didn’t). He refused to accept that he had lost.

The passage of time made no difference. A 2003 documentary, Game Over: Kasparov and the Machine, directed by Vikram Jayanti, looked at the match from Kasparov’s point of view. Its central metaphor was The Turk, a fake mechanical chess player from the 18th century; fake because the moves were actually made by chess masters hiding inside its cabinet. In his 2002 book on how the machine was built, Behind Deep Blue: Building the Computer that Defeated the World Chess Champion, Deep Blue’s principal architect Feng-hsiung Hsu looked at it differently: “Garry’s accusations of cheating both during and after the 1997 match confirmed that Deep Blue passed the chess version of the Turing Test (a blind test to tell whether you are interacting with a human or a computer).” More recently, Nate Silver, in The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t, attributed Deep Blue’s random moves to a bug.

The key lesson, though, is that human beings tend to look at machines with suspicion. It will get increasingly difficult to convince sceptics as technology pushes deeper into the black box that is AI. One way to tackle this is to do exactly what Kasparov demanded: explain the rationale behind the machine’s actions. Manuela Veloso, a recipient of a grant funded by Tesla founder Elon Musk to keep AI beneficial to humanity, is working on exactly that. “To build AI systems that are safe, as well as accepted and trusted by humans, we need to equip them with the capability to explain their actions, recommendations, and inferences,” Veloso, a professor at Carnegie Mellon University, wrote in her proposal for the grant.

Deep Blue and AlphaGo also stand as testament to how fast technology can progress. A year before the 1997 match, Deep Blue played Kasparov and lost; it took just a year to better him. When AlphaGo defeated Fan Hui, the European Go champion, in 2015, few saw its success as an indicator that it would beat Lee Se-dol. But AlphaGo was improving rapidly, learning from thousands of games and from playing against versions of itself, over and over again.

Some argue that Deep Blue had a limited purpose. IBM made just two of those machines. One of them is now in a museum (the Smithsonian), and the other is at IBM’s offices, resting on its past glory. No one believes it could win against the present chess champions; a new Deep Blue would have to be built to achieve that. In his book, Hsu writes: “Would I agree to build a new machine to play against Vladimir Kramnik or Viswanathan Anand? I seriously doubt it, unless someone makes me an offer that I cannot refuse, as I am having too much fun with new projects.”

It might be tempting to think of AlphaGo as the new Deep Blue. However, a better comparison would be IBM’s Watson, which was developed specifically to beat human opponents in the TV quiz show Jeopardy!, and did so in 2011. But it didn’t stop there. Today, Watson is used in a range of areas including pharmaceutical research and development, healthcare and retail. It’s used by doctors at Memorial Sloan Kettering Cancer Center and Cleveland Clinic, among others, and is a billion-dollar business within IBM.

DeepMind seems to have bigger ambitions. In a speech, DeepMind co-founder Demis Hassabis (who also has a PhD in neuroscience) said the company’s ambition was to solve intelligence first, and then use that to solve everything else. His comment on AlphaGo is an indicator of that ambition: he called it “just a prototype”.

Where would all this leave human beings? In the context of AlphaGo or Deep Blue, the question comes with an inherent bias. It’s the same bias that creeps in when we discuss it in the context of movies such as The Terminator and The Matrix. In these films, machines are pitted against men. The machines can destroy human lives, or at the very least their jobs. It’s a pessimistic view. A recent study, released during the World Economic Forum earlier this year, put expected job losses due to automation at over 5 million.

There is another way to look at the question. Steve Jobs liked to think of computers as bicycles for the mind. (In fact, in Apple’s early days, he wanted to rename the Macintosh the Bicycle.) A man on a bicycle is more efficient in locomotion than even a condor in flight, and computers, he believed, could make the mind many times more efficient. It’s an optimistic view. Here, technology becomes a tool in the hands of man.

Neither view is false, for there will be winners and losers. However, it’s too early to speculate on the exact shape the future will take. During the early days of industrialisation, many feared mass unemployment as machines outperformed human beings. It’s true that many lost their jobs. Ultimately, though, it didn’t turn out badly for human beings.

When machines beat men, we tend to overestimate machines and underestimate men. It’s easy to forget that, for all its complexity, AlphaGo didn’t have to consider group dynamics, or worry about the vague goals and unclear rules that human beings face in their daily lives. Machines are good at what they do, and humans are good at what they do.

Thus, a useful question to ask would be: what skills and competencies are unique to humans and cannot be copied by machines? In Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, Geoff Colvin argues that these skills will become more and more important as machines get better and better at performing ever more complex tasks. These skills, he says, have to do with human interaction: empathy, creativity and teamwork.

Watching Lee Se-dol in the press conference, with his broad smile and polite manners, one cannot help but think that he is in many ways the opposite of Garry Kasparov. But they are alike in one important way: they represent what humans are capable of.

In 1997, after losing to Deep Blue, Kasparov came across as aggressive, even rude, unable to accept the fact that he had lost. But then, it’s those very same qualities that made him stand up against someone as powerful as Vladimir Putin in Russia (Kasparov is a member of The Other Russia, a coalition that opposes Putin’s policies). And Lee Se-dol, in his own gentle manner, pointed to something that’s easy to forget if we get too caught up in the Man vs. Machine narrative. He said, “I would like to express my respect to the programmers for making such an amazing program.”

Ultimately, it’s Man vs. Man.

About the author

N S Ramnath

Senior Editor

Founding Fuel

NS Ramnath is a member of the founding team & Lead - Newsroom Innovation at Founding Fuel, and co-author of the book, The Aadhaar Effect. His main interests lie in technology, business, society, and how they interact and influence each other. He writes a regular column on disruptive technologies, and takes regular stock of key news and perspectives from across the world. 

Ram, as everybody calls him, experiments with newer storytelling formats tailored for the smartphone and social media, and shares the outcomes with the team. These become part of a knowledge repository at Founding Fuel that is continuously used to experiment with content formats across platforms.

He is also involved with data analysis and visualisation at a startup, How India Lives.

Prior to Founding Fuel, Ramnath was with Forbes India and Economic Times as a business journalist. He has also written for The Hindu, Quartz and Scroll. He has degrees in economics and financial management from Sri Sathya Sai Institute of Higher Learning.

He tweets at @rmnth and spends his spare time reading on philosophy.
