‘As machines become more intelligent, they also become unpredictable’

In 2016, Go world champion Lee Sedol lost to AlphaGo, Google’s Go-playing AI program. How did the program do this? And why does this feat matter? Kartik Hosanagar answers these questions in this extract from his book, ‘A Human’s Guide to Machine Intelligence’

Kartik Hosanagar

AlphaGo wasn’t based on rules programmed by human experts; instead, it programmed its own rules using machine learning. It was trained on a database of more than 30 million past moves by expert Go players. Additionally, it played millions of games against itself—more than “the number of games played by the entire human race since the creation of Go,” according to the research team that created it. This preparation came to fruition most vividly in the second game of the Sedol-AlphaGo match of March 2016. After a resounding defeat in game 1, Sedol was playing more carefully, and even though AlphaGo had gained a slight early edge, the contest was still anyone’s to win. Then move 37 happened.

AlphaGo, playing black, made an unusual move on the center-right side of the board. “I thought it was a mistake,” said one commentator within seconds of the play. Sedol stepped out of the room briefly and took nearly fifteen minutes to respond. Fan Hui, a European Go champion who had previously lost to AlphaGo, remarked, “It’s not a human move. I’ve never seen a human play this move.” He later described it as “so beautiful. So beautiful.” AlphaGo went on to win the game (and I hope was a good sport about it).

Move 37 was, in fact, in some ways an utterly human move—the kind of mysterious, creative, and unpredictable action of which humans are routinely capable but that we don’t expect from machines. What made it unusual was that it couldn’t be understood by AlphaGo’s human developers, let alone programmed by them. They had simply provided the input—millions of Go moves from past games—and stood back to observe the output: that stunning 37th move.

Given that such deep learning systems independently combine simple concepts to create abstract patterns from the data, computer scientists don’t actually know what’s going on under the hood of their systems. How or why AlphaGo and its peers behave in certain ways is often not clear even to their designers.

Why might this be disturbing? On the one hand, there is the sheer eeriness of computers displaying the sort of creativity that we believed belonged to humans alone. But there is also a more practical concern. Think about the way we teach young children to operate in the world. We might simply give them sets of rules and watch, satisfied, as they follow them: wash your hands before eating; wait your turn; wrestle that winter coat on by laying it on the floor, sticking your hands in the arm openings, and flipping it over your head. But children also routinely surprise us, pretending to have washed their hands when they’ve patently not done so; finding another activity rather than waiting in line to do the original one; deciding that they’re coordinated enough to put on a jacket the way they’ve seen adults do it, one arm at a time. Such abilities are not the result of following sets of rules they have been taught but rather stem from observing people around them and picking up new skills on their own. A capacity to surprise is core to a child’s normal development. It also makes them admittedly difficult to handle at times.

The same is true of algorithms. Today, some of the most accurate machine learning models that computer scientists can build are also the most opaque. As machines become more intelligent and dynamic they also become more unpredictable. This suggests a fundamental conundrum in algorithm design. You can either create intelligent algorithms in highly curated environments—for example, programming explicit rules they might follow, expert systems style—to ensure they are highly predictable in behavior, while accepting that they will run up against problems they weren’t prepared for and therefore can’t solve; or, you can expose them to messy real-world data to create resilient but also unpredictable algorithms. I call this a predictability-resilience paradox.
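The trade-off can be sketched in a few lines of code. The scenario below (a loan-approval task, the feature names, and both toy “algorithms”) is a hypothetical illustration, not anything from the book: an explicit rule set is fully auditable but falls silent on cases its authors never anticipated, while a learned classifier always produces an answer whose only justification is “it resembled a past case.”

```python
# Toy illustration of the predictability-resilience trade-off.
# The loan-approval task and all thresholds below are hypothetical.

def rule_based(income, debt):
    """Expert-system style: explicit, auditable rules.
    Perfectly predictable, but only covers cases its authors foresaw."""
    if income > 50_000 and debt < 10_000:
        return "approve"
    if debt >= 10_000:
        return "deny"
    return "undefined"  # a case the rule authors never anticipated

def nearest_neighbor(train, income, debt):
    """Learned style: classify by the closest past example.
    Always answers, but the 'why' is only 'it resembled a past case'."""
    best = min(train, key=lambda ex: (ex[0] - income) ** 2 + (ex[1] - debt) ** 2)
    return best[2]

past_cases = [
    (80_000, 5_000, "approve"),
    (20_000, 15_000, "deny"),
    (45_000, 2_000, "approve"),
]

# A messy real-world case the rules never anticipated:
# modest income, low debt.
print(rule_based(40_000, 3_000))                    # "undefined"
print(nearest_neighbor(past_cases, 40_000, 3_000))  # "approve"
```

The rule-based version is predictable to a fault: anyone can read off exactly why it decided as it did, and exactly where it gives up. The nearest-neighbor version is resilient, handling inputs no one wrote a rule for, yet its reasoning is opaque in miniature the same way a deep network’s is at scale.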

The unpredictability of a game-playing algorithm may not be particularly troublesome to anyone outside the Go universe. However, when life-altering decisions are placed in the cybernetic hands of algorithms, the ability to understand and predict their decisions becomes more urgent. We can’t always know why an airline pilot pulled rather than pushed the control wheel amid turbulence; why a manager decided to hire someone with nontraditional qualifications; why a doctor decided not to order a blood test for her patient. And yet most of us find unfathomable the idea that we would let computers make these decisions and still not know the reasoning. But as much as we may desire fully explainable and interpretable algorithms, the balance between predictability and resilience inevitably seems to be tilting in the latter direction.


[This excerpt from ‘A Human’s Guide to Machine Intelligence’ by Kartik Hosanagar has been reproduced with permission from Penguin Random House]


About the author

Kartik Hosanagar

John C. Hower Professor of Technology and Digital Business and Professor of Marketing

The Wharton School, University of Pennsylvania

Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a Professor of Marketing at The Wharton School of the University of Pennsylvania. Kartik’s research work focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing and e-commerce.

Kartik has been recognized as one of the world’s top 40 business professors under 40. He is a ten-time recipient of MBA or Undergraduate teaching excellence awards at the Wharton School. His research has received several best paper awards. Kartik cofounded and developed the core IP for Yodle Inc, a venture-backed firm that was acquired by Web.com. Yodle was listed by Inc. Magazine among America’s fastest growing private companies. He is a cofounder of SmartyPal Inc. He has served on the advisory boards of Milo (acq. by eBay) and Monetate and is involved with many other startups as either an investor or board member. His past consulting and executive education clients include Google, American Express, Citi and others. Kartik was a co-host of the SiriusXM show The Digital Hour. He currently serves as a department editor at the journal Management Science and has previously served as a Senior Editor at the journals Information Systems Research and MIS Quarterly.

Kartik graduated at the top of his class with a Bachelor’s degree in Electronics Engineering and a Master’s in Information Systems from the Birla Institute of Technology and Science (BITS Pilani), India, and he has an MPhil in Management Science and a PhD in Management Science and Information Systems from Carnegie Mellon University.

Outside of Wharton, he likes to make short films, start companies, and spend time with his kids.
