What AI can show us about what it means to be human

Brian Christian, author of ‘The Most Human Human’ and ‘Algorithms to Live By’, discusses the gaps and overlaps between humans and machines

CKGSB Knowledge

By John Christian

Decades before Siri and Alexa began battling it out for best virtual assistant, computer scientist Alan Turing invented the eponymous Turing Test of machine intelligence. The test goes: if a human judge cannot, after a text conversation, determine whether they are talking to a human or a computer, then the computer is “intelligent” as far as that judge is concerned.

Today, the Loebner Prize, an annual competition in artificial intelligence, sets a panel of judges the task of finding via the Turing Test the “Most Human Computer” and also the “Most Human Human,” or the person the judges least often mistake for a computer. In 2009, author Brian Christian entered the competition and later produced the bestselling book The Most Human Human, which investigates the nature of intelligence. His second book, Algorithms to Live By, co-authored with cognitive scientist Tom Griffiths, was published last year.

In this interview, Christian dives into the ideas underlying both books.

Q: You hold degrees in philosophy and computer science. Why did you start with such a dual-track approach and how has that influenced your career?

A: I have always been motivated by curiosity and by the big questions: “What does it mean to have a mind?”, “What is the nature of intelligence?”, “What is the nature of reality?” Philosophy gives us a way of framing these questions, but the rigour available in computer science offers a set of tools and insights that, for me, are also strikingly applicable to that set of questions. There are fertile intersections between the two areas, which I have explored in my books.

The first, The Most Human Human, investigates the question of intelligence. What are the hallmarks of intelligent behaviour? What is the nature of interpersonal communication? And, at the broadest level, what have we learned about what it means to be human by attempting to build machines in our own image? In large part, that’s the story of what we have learned about ourselves from our failures to replicate certain aspects of our own intelligence.

The second book, Algorithms to Live By, in a way takes the question from the flip side—what do minds and machines have in common? And what are the things that we can learn from the sometimes unexpected parallels between problems in computer science, and problems in our everyday lives?

There is a dialogue between the two books where they almost ask the same question from two different sides: what do we learn about the differences between humans and machines, and what do we learn from the similarities?

Q: The first book is a critique of the perception that a certain dehumanisation results from our constant interactions with and through machines. But given the pervasiveness of digital culture, how do you begin to fight back?

A: There is a paradox that as communication tools become more powerful, we are communicating with one another in ever lower-bandwidth forms. In the last century, we went from meeting in person to talking on the phone. We went from talking on the phone to writing emails. Then we went from writing emails to texting. And now from the text message to the emoji, or to the single-button “Like.” We have almost reduced human conversation to its logical minimum, literally in some cases to a single bit of information. I think this has a homogenising effect.

Another example is the Gmail Smart Reply, which includes automatic suggested replies to messages. If someone proposes a meeting, it might offer “sounds good” and “sorry, I can’t make that.” But we should be mindful of what we are trading off in that equation of efficiency. I think the Turing Test gives us the perfect illustration. In a Turing Test you have nothing except the idiosyncrasies of your word choice to assert your identity.

Q: Before we get into talking about the second book, can you demystify the term “algorithm”?

A: The concept of algorithms far predates the computer, and arguably predates mathematics, and so one of the goals of the project was in fact to re-humanise them. You can think of an algorithm as just a discrete series of steps, a process that you follow to get something done. Any process that you can break down into steps is an algorithm, including a cooking recipe.

Computer science gives us a way of recognising some really fundamental things in everyday life. One of the examples the book gives is if you are hosting a party, or if you are at a large dinner, there is a moment where everyone shakes hands with one another in greeting. You might have noticed that when there are more than a few people there, it takes a noticeably long time for everyone to make sure that they have shaken everyone else’s hand.

Computer science gives us a language for identifying what’s going on here. And so, for example, the number of handshakes that need to happen grows on the order of n squared, the square of the number of guests at the party; computer scientists would call this a quadratic algorithm. It doesn’t scale well! Part of the real value of computer science is that it gives us a vocabulary and a rigorous set of tools for identifying even these everyday things that are around us in life.
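
To make that scaling concrete, here is a minimal Python sketch (an illustration added here, not something from the interview): among n guests the number of distinct handshakes is n(n − 1)/2, which grows quadratically with n.

```python
from itertools import combinations

def handshake_count(n_guests: int) -> int:
    """Distinct handshakes among n guests: n * (n - 1) / 2."""
    return n_guests * (n_guests - 1) // 2

# Doubling the guest list roughly quadruples the handshakes, the
# quadratic growth that computer scientists flag as scaling poorly.
for n in (5, 10, 20, 40):
    # Cross-check the formula against brute-force enumeration of all pairs.
    assert handshake_count(n) == len(list(combinations(range(n), 2)))
    print(n, handshake_count(n))  # 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```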

Q: You make the point in the book that despite computers being extremely powerful, there are problems that cannot be solved by brute force of calculation. To arrive at solutions, you need to introduce an element of randomness or simplification. How has developing algorithms to tackle these types of tasks helped change our understanding of handling difficult tasks more broadly?

A: One of the most valuable contributions of theoretical computer science has been complexity theory, which is a way of understanding and ranking how difficult problems are. In broad terms, you could say mathematics is about finding the correct answer to a problem and computer science is about deciding how hard the problem is.

Computer scientists deal with what are known as “intractable problems,” or “NP-hard” problems. For this set of problems there is simply no known scalable way to get the exact correct answer every time. To address them, computer scientists turn to a toolkit of strategies. These include things like settling for approximate solutions, or settling for algorithms that are correct only most of the time.

One of my favourite examples comes from the world of encryption. If you want secure banking or commerce, the starting point is usually generating an enormous random prime number, and that requires finding efficient ways of determining whether a large random number is in fact prime. One of the best ways to do this is using the Miller-Rabin test, which can be wrong as much as 25% of the time on any single run.

We asked the developers of OpenSSL, an open-source library for secure communications, which uses this test, what they do about that, and the answer was that they just run the test 40 times and accept that a margin of error of 25% to the 40th power, roughly one in a trillion trillion, is good enough. And this is in banking and even in military applications.
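
The interview compresses the details, so here is a standard textbook sketch of the Miller-Rabin test in Python. It illustrates the idea rather than OpenSSL’s actual implementation; the function names and the 512-bit default below are illustrative choices, not anything from the interview.

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test. A composite n slips past a single round with
    probability at most 1/4, so the error bound after `rounds` independent
    rounds is (1/4)**rounds, about 1e-24 when rounds is 40."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a proves n composite
    return True  # probably prime

def random_prime(bits: int = 512) -> int:
    """Sample random odd numbers of the requested size until one passes."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probably_prime(candidate):
            return candidate
```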

The deeper point is that computer science really gives us a way of thinking in new terms about what it means to be rational. Behavioural economics has highlighted the idea that people are fallible: they make mistakes, have cognitive biases, and behave “irrationally.” Computer science, I think, offers a bit of a different story: many of the problems that we face in life are simply hard, that is, computationally intractable. In many real-life situations we trade off the quality of the answer or decision that we ultimately get against the pain, or the cost, of actually thinking about it.

Q: Tell us about one such trade-off situation.

A: A classic one is the explore/exploit trade-off—how much time do you spend gathering information, and how much time do you allocate for using the information you’ve got? Computer scientists refer to this as the “multi-armed bandit” problem, which references the “one-armed bandit,” a nickname for casino slot machines.

It goes like this: in a casino, each slot machine is set to pay out with some probability, and it is different for each machine. If you go to play for the afternoon you will want to maximise your return. This involves some combination of trying different machines out and some amount of time cranking away on the machine that seems the best.

For much of the 20th century, the question of what exactly constitutes the best strategy was considered unsolvable, but a series of breakthroughs on the problem over the last several decades yielded some exact solutions and broader insights. The details of the optimal algorithms are difficult to explain concisely, but the key consideration is how much time you have. If it is your final moment in the casino, you should pull the handle of the best machine you know about. But if you are going to be in the casino for 80 years, then you should spend almost all your time initially just trying things out at random.
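
The exactly optimal strategies Christian alludes to (the Gittins index among them) are indeed involved, but a deliberately simple strategy such as epsilon-greedy captures the explore/exploit trade-off he describes. The sketch below, with made-up payout probabilities, is an illustration rather than anything from the interview.

```python
import random

def epsilon_greedy(payout_probs, pulls=10_000, epsilon=0.1):
    """Deliberately simple explore/exploit strategy: with probability
    epsilon pull a random machine (explore); otherwise pull the machine
    with the best observed payout rate so far (exploit)."""
    n = len(payout_probs)
    wins, plays, total = [0] * n, [0] * n, 0
    for _ in range(pulls):
        if random.random() < epsilon or 0 in plays:
            arm = random.randrange(n)  # explore
        else:
            # Exploit: highest empirical payout rate so far.
            arm = max(range(n), key=lambda i: wins[i] / plays[i])
        reward = 1 if random.random() < payout_probs[arm] else 0
        wins[arm] += reward
        plays[arm] += 1
        total += reward
    return total, plays

# Three hypothetical slot machines; play should concentrate on the 0.6 arm.
total, plays = epsilon_greedy([0.2, 0.45, 0.6])
print(total, plays)
```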

These algorithms are now powering huge parts of the digital economy. Google, for example, has an enormous pool of ads that they could serve for any particular search query. They could always serve the ad that got the most clicks historically, but on the other hand they have a lot of ads that they have never served and need more information about. Bandit algorithms optimise that explore/exploit trade-off.

In more personal terms, I also feel like this is an idea that helps us make sense of the arc of a human lifespan—why children seem so random and older people seem so set in their ways. Well, in fact they are both behaving optimally, with respect to how long they have in life’s casino.

Q: Might thinking about life in terms of algorithms take some of the magic out of it? For example, there seems to be a qualitative difference between “trying to find the optimum romantic partner” and “falling in love.”

A: In many areas of life there is a mixture of an intuitive, emotional, ineffable process and a more deliberate, intentional, rational process. Buying a house is one example of the two working together.

Sometimes you walk into a house and something doesn’t feel right, and you may not ever be able to articulate why. Or on the other hand, you might feel good as soon as you set eyes on it. Nobody can tell you what is good, or what isn’t—but there is an algorithm that can help you with the more rational part of the equation, which is whether to settle for something good or hold out for something even better. This is called an “optimal stopping problem,” and the answer is surprisingly specific: 37% (more precisely, 1/e, or about 36.8%). The optimal way to pick a candidate is to get through 37% of the available options or of the time allotted, and then commit to the next option that is better than all previous ones.
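
As a sanity check on that figure, here is a quick Monte Carlo sketch of the classic “secretary problem” strategy: look at the first 37% of candidates without committing, then take the first one better than everything seen so far. The 100-candidate setup and random scores are invented for illustration.

```python
import random

def simulate_37_rule(n_options=100, trials=10_000, look_fraction=0.37):
    """Monte Carlo estimate of how often the look-then-leap rule picks
    the single best option out of n_options."""
    cutoff = int(n_options * look_fraction)
    successes = 0
    for _ in range(trials):
        scores = [random.random() for _ in range(n_options)]
        best_seen = max(scores[:cutoff])  # look phase: observe, never commit
        # Leap phase: take the first option beating everything seen so far;
        # if none does, you are stuck with the last option.
        chosen = next((s for s in scores[cutoff:] if s > best_seen), scores[-1])
        successes += chosen == max(scores)
    return successes / trials

# The success rate should land near 1/e, about 0.37, the classic result.
print(simulate_37_rule())
```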

When it comes to something like romance, most people, including me, are resistant to the idea of a methodical approach—it’s just not romantic. But in practice, we are more logical about it than we realise. If you’re the parent of a teenager and the teenager says, “You know, I met this amazing person who is going to a totally different college, so I’m just going to put my life trajectory on hold and follow them across the world…” you would say, “No way! You think this is the relationship you should stake the direction of your life on, but maybe if you just go to your college you will meet someone else.” But if someone at 35 says the same thing, “I met this incredible person, and I am going to move across the world,” one is more inclined to say, “Go for it! You know what you’re looking for at this point.”

This is anecdotal, but I think it is interesting that 37% of the average life expectancy in the first world comes to about 28-29 years, and the average age at which people marry is also 28-29. There is sort of a funny sense in which these principles may offer us a macro-level understanding of societal norms and patterns, even if we are reluctant to apply them at the individual level.

Q: What will you work on next?

A: I am working on a book about the intersection of computer science and ethics. I think that’s the next big thing. As we were discussing, I think we are at a point where philosophy and computer science are very much in dialogue with one another, and this to me seems like the next wave that’s breaking. We are increasingly deploying automated systems to make consequential moral judgments, like who gets parole. There is the question of how we ensure that the systems we entrust with such decisions actually uphold our sense of human and civic values. There is a fascinating conversation that is just beginning to happen, and that is what I am researching right now.

[This article has been reproduced with permission from CKGSB Knowledge, the online research journal of the Cheung Kong Graduate School of Business (CKGSB), China's leading independent business school. For more articles on China business strategy, please visit CKGSB Knowledge.]

About the author

CKGSB Knowledge

Knowledge Partner

CKGSB Knowledge (knowledge.ckgsb.edu.cn) is the online publication of the Cheung Kong Graduate School of Business (CKGSB), China's first faculty-governed independent business school. Headquartered in Beijing, and with campuses in Shanghai and Shenzhen, and offices in London, Hong Kong and New York, CKGSB has a finger on China's pulse as well as a good understanding of global business trends, and China's role on the global stage. CKGSB Knowledge features articles, videos and interviews on the intricacies of doing business in China, local competition, the evolution of "Made in China", policy issues, the globalization of Chinese multinationals and foreign multinationals' strategy and operations in China. It also features interviews with influential thought leaders and CEOs on trending topics and stories of global significance.