Who will robots and elephants vote for: Donald Trump or Xi Jinping?

Data can tell us much and the capabilities of AI are improving exponentially. But how do you get deep insights and how do you know the right things to do? Five books offer some pointers

Arun Maira

[By Gerd Altmann, under Creative Commons]

When the summer holidays come to an end, some magazines ask authors to write about the best books they read in the summer. Though no one has asked me, I want to write what I learned from five books. I found them illuminating because they provided me with answers to three big questions on my mind. First the questions. Then I will tell you about the books.

The three questions are:

  1. Can one ever understand complex phenomena through Big Data analytics?
  2. Can artificial intelligence (AI) machines replace human beings?
  3. Why are rational liberals marooned amidst a sea of ‘alternative truths’?

Underlying these questions are two big concerns:

  1. Competition between robots and humans for jobs which, many fear, robots are winning.
  2. Ideological conflicts, within countries and across national boundaries, are becoming sharper.

The five books are:

  1. How Not to be Wrong: The Power of Mathematical Thinking, by Jordan Ellenberg
  2. The Social Importance of Self-Esteem, edited by Andrew M. Mecca, Neil J. Smelser, and John Vasconcellos
  3. Prediction Machines: The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb
  4. Adi Shankaracharya: Hinduism’s Greatest Thinker, by Pavan K. Varma
  5. The Righteous Mind: Why Good People are Divided by Politics and Religion, by Jonathan Haidt

These books are an eclectic lot: on mathematics, social sciences, AI technology, moral science, and Hindu philosophy. Together, through their different perspectives, they provided me with insights into the big questions on my mind.

To do justice to them, my essay on what I learned is much longer than the tweets, ‘elevator talks’, and short blogs that busy people say is all they have time for. To make it easier for my readers, I have divided my essay into three parts.

  • In the first part, I write about what I learned about the potential of Big Data and AI—subjects that are very popular now.
  • In the second part I go back to insights from the Vedas and wisdom from the past.
  • From there, in part three, I come back to the present and suggest a good way to create a more sustainable and harmonious future for our children and grandchildren. 

N.B. TIME REQUIRED TO READ THE FIVE BOOKS—FOUR WEEKS. TIME REQUIRED TO READ THIS ESSAY—ONE HOUR.    

Part 1: Data, AI and the world of humans

From Mathematics to Economics

Ellenberg’s How Not to be Wrong is advertised as one of Bill Gates’ 10 favourite books. It is an elegant account of the development of mathematical thought over centuries. Applying mathematicians’ methods to examples from life around us—election results, sports, biology, and even how the concept of God came about—Ellenberg shows how we can see hidden structures beneath the messy and chaotic surface of our daily lives.

What impressed me most was Ellenberg’s analysis of what mathematics cannot explain and why. He says, “Mathematics is a way not to be wrong, but it isn’t a way not to be wrong about everything. There is a real danger that, by strengthening our abilities to analyse some questions mathematically, we acquire a general confidence in our beliefs, which extends unjustifiably to those things we’re still wrong about.”

Ellenberg’s warning comes to mind while reading the contorted data analyses of some economists in India trying to prove mathematically that the Indian economy has been generating more than enough jobs, in the face of a plethora of anecdotal evidence that Indian youth are underemployed and find it very difficult to get steady work.

Many forces that shape societies cannot be easily measured, such as social harmony and citizens’ trust in institutions

Robert Lucas, who received the Nobel Prize in economics for expounding the ‘rational-expectations’ view of human behaviour, referred to a theory as something that can be put on a computer and run. The pursuit of numbers, in the belief that numbers alone indicate accuracy, has become the bane of economics. Many forces that shape societies and their economies cannot be easily measured, such as social harmony and citizens’ trust in institutions. Such substantial forces must not be excluded from a model which seeks to explain the behaviour of the economy. Economists insist on equations and numbers because that is all that computers can compute; they should instead study human behaviour as it is, not as they find it easy to model.

In another great book, Complexity: The Emerging Science at the Edge of Order and Chaos, which I read 20 summers ago, M. Mitchell Waldrop gives a fascinating account of a meeting in 1987 of economists, including Nobel Laureate Kenneth Arrow and Brian Arthur, with physicists, including Nobel Laureates Murray Gell-Mann and Philip Anderson. The economists wanted to understand what they could learn from physicists about the formulation of theories and models. Economists aspire to model complex socio-economic phenomena in the way physicists model natural phenomena with mathematics. The economists presented their models. Waldrop describes the physicists’ reaction:

“And indeed, as the axioms and theorems and proofs marched across the overhead projector screen, the physicists could only be awestruck at their counterparts’ mathematical prowess—awestruck and appalled. They had the same objection that Arthur and many other economists had been voicing from within the field for years. ‘They were almost too good,’ says one young physicist, who remembers shaking his head in disbelief. ‘It seemed as though they were dazzling themselves with fancy mathematics, until they really couldn’t see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they weren’t often looking at what the models were for, and what they did, and whether the underlying assumptions were any good. In a lot of cases, what was required was just some common sense.’”

The conceptual problem beneath the data scatter diagrams, statistical correlations, and regressions that economists rely on to understand complex phenomena, says Ellenberg, is that these mathematical tools cannot distinguish causation from correlation. Even though two phenomena may be tightly correlated statistically, statistical analysis cannot explain which causes the other, or indeed whether there is any causal relationship between them at all. Both may arise from a third, common cause. For example, rich and moist soil will produce more flowers, and more worms too. Observations of the numbers of flowers and worms will show both increasing together. Do worms cause flowers, or do flowers cause worms? One has to look for a common cause of both, which could be the condition of the soil.

Adding more data—about the condition of the soil—will show a correlation between all three variables, but it will not explain the causal relationships between them. Do more flowers cause the soil to improve, or vice versa? Even if a causal link can be established between two variables, by showing that one always precedes the other in time, how and why one causes the other—such as how and why moist soil induces more flowers—requires a more scientific explanation, with more observations of real things in real places, not merely more data analysis.
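The flowers-and-worms example is easy to see in a small simulation. The sketch below (in Python, with made-up numbers purely for illustration) generates flowers and worms from a shared ‘soil quality’ variable: the two come out strongly correlated even though neither causes the other, which is exactly the distinction that correlation alone cannot reveal.

```python
# A minimal sketch of a common-cause ("confounded") correlation.
# The numbers are hypothetical; only the structure matters:
# soil -> flowers and soil -> worms, with no direct link between the two.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

soil = rng.normal(0, 1, n)                  # common cause: soil richness
flowers = 2.0 * soil + rng.normal(0, 1, n)  # flowers depend on soil (plus noise)
worms = 1.5 * soil + rng.normal(0, 1, n)    # worms depend on soil (plus noise)

# A strong correlation appears even though flowers and worms never interact.
print(round(np.corrcoef(flowers, worms)[0, 1], 2))
```

A regression of worms on flowers would happily report a significant coefficient; only by bringing the soil into the picture, or by intervening on it, can the spurious link be exposed.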

Economics is a social science, which economists smitten by mathematics seem to forget. Whereas physicists develop models to predict the behaviour of material particles under the influence of inanimate forces such as gravity and electro-magnetism, economists must predict the behaviours of human beings who have agency, emotions, and aspirations. Humans are not merely ‘rational, self-interested’ particles, a gross over-simplification which economists make so that they can apply their mathematical equations to predict human behaviour. Moreover, human beings operate within complex environments in which other human beings also have agency, emotions, and aspirations. In social analysis, the number of interacting variables can become too large to model mathematically.

From Economics to Self-Esteem

The drive to be ‘scientific’, and by extension quantitative and mathematical, has become very strong in all fields of study. Researchers in social sciences other than economics also find themselves driven to apply mathematics to give more ‘scientific rigour’ to their explanations of human behaviour. They can become even more entangled than economists in the conceptual confusions that, as Ellenberg explains, are inherent whenever mathematical approaches are overused to understand complex phenomena.

‘It’s the economy, stupid,’ was the mantra of Bill Clinton’s winning presidential campaign. ‘It is self-esteem, or the lack of it, stupid,’ is what moves the political base, Donald Trump proved. Hillary Clinton talked about economic policies. Trump talked about making Americans feel great again—particularly white, blue-collar workers, whom Clinton was seen to look down upon (she disrespectfully dismissed many of Trump’s supporters as a ‘basket of deplorables’). Trump’s policies to revive their jobs—in coal production, and manufacturing—by imposing import duties and shutting out immigrants have horrified economists. Yet he was elected President. Similarly, Brexit is strongly supported by Britons who want to recover control of their own affairs, though it makes no sense to economists.

In India, Prime Minister Narendra Modi may try hard to shift the political discourse towards growth of the economy, and away from societally divisive issues of caste and religion. Ironically, his drive to build pride in an Indian identity has stirred up contentions about ‘who’ an Indian is, and has caused more divisiveness, which is hurting economic growth.

Identity and self-esteem are primal forces that can cause large, social, economic, and even geopolitical problems (like the Western-Muslim cultural conflict). I read The Social Importance of Self-Esteem, a scholarly book, to learn more about the connections between self-esteem and societal problems. Neil Smelser says, in the introduction to the book, that some critical questions it explores are (I quote):

  • What are the linkages between individual self-esteem and the generation of a social problem?
  • How do we measure both of these?
  • How do we go about establishing scientifically that connections exist between diminished self-esteem (cause) and the kind of behaviour that constitutes a social problem (effect)?

What is considered a ‘social problem’ is determined by cultural values. And cultural values change

He points out that what is considered a ‘social problem’ is determined by cultural values, and that cultural values change. Child labour has become less acceptable in most countries than it used to be, whereas child-bearing out of wedlock is becoming more acceptable in many. In some societies, even today, child-bearing out of wedlock is considered a social problem whereas child labour is not; in other societies, child labour is considered a social problem, whereas child-bearing out of wedlock may even be considered a solution to a more fundamental social problem of suppressed women’s rights!

When a person’s self-esteem is threatened, the primal responses are to fight or to flee. The drive to fight when self-esteem is threatened can lead to extreme violence, and even to suicidal attacks against the oppressors. The other response, withdrawal from social interactions, causes psychological illness. Both violence and withdrawal create further social problems. Thus, the ‘cause-and-effect’ relationship between self-esteem and social problems runs both ways, in wickedly reinforcing loops. It is not clear where the ‘root cause’ lies: in social conditions or in the psyches of individuals.

Statistical correlation and mathematical analysis can provide only weak explanations, at best, of complex-system phenomena such as ‘a social problem’ and ‘self-esteem’. Such phenomena cannot be precisely defined and measured; they change dynamically; and they can be both cause and effect of each other.

The conclusion of West German researchers, who had undertaken a scientifically rigorous study of ‘major social issues of post-modern Western society’, which Smelser cites, summarises the challenges in using customary scientific methods to analyse complex-system problems. Their conclusion was: “We have been able to determine that we can neither define nor measure either ‘social crises’ or ‘post-modern Western society’. That concludes our report.” This is quite an indictment of the over-use of deductive, quantitative methods to understand social phenomena!

From Self-Esteem to Robots

Henry Ford I, the pioneer of mass production, is reported to have complained, ‘Why is it every time I ask for a pair of hands, they come with a brain attached?’ Human beings have emotions and can feel a loss of self-esteem if they are closely monitored by their supervisors (which Charlie Chaplin highlighted in his movie Modern Times). Their feeling of persecution by powerful controllers can push them to band together in labour unions. Thus, unions formed in Ford’s factories and he fought bitter battles with them.

The advantage of employing robots in place of human beings is that robots do not have emotions. Or do they? Technologists have developed robotic pets, as well as AI therapists, and even robots that one can have a date with. While such machines seem to be able to physically ‘feel’ as sensitively as humans do, can they feel emotionally? Could they really ‘care’ for others as human beings do? Do they have self-esteem? Can they be sensitive to the injured self-esteem of others, including those whose jobs they take away?

Agrawal, Gans, and Goldfarb, the authors of Prediction Machines, work at the Creative Destruction Lab at the University of Toronto, which had, they report, “for the third year in a row, the greatest concentration of AI start-ups of any program on earth”. Erik Brynjolfsson, MIT professor and co-author of The Second Machine Age, endorses their book. He writes, “If you want to clear the fog of AI hype and see clearly the core of AI’s challenges and opportunities for society, your first step should be to read this book.”

What else is there to human intelligence that AI does not have? The answer is: ‘judgement’

The authors of Prediction Machines say, “Our first key insight is that the new wave of artificial intelligence does not actually bring us intelligence but instead a critical component of intelligence—prediction”. The question therefore is: what else is there to human intelligence that AI does not have? The answer is ‘judgement’. And judgement, the authors suggest, is the ability to choose amongst options to produce the outcome desired. The choice of the outcome desired involves questions of ethics and morality, as we will discuss later. Judgement is also required to give weightage to different options, and to act even when there is insufficient information, which human beings generally do.
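To make the prediction/judgement split concrete, here is a minimal sketch in Python. It is not the authors’ code; the scenario (whether to carry an umbrella), the payoff numbers, and the function names are my own hypothetical illustration. The machine supplies a probability; a person supplies the payoffs that say what the outcomes are worth; the decision emerges only when the two are combined.

```python
# A toy sketch (hypothetical payoffs) of the prediction/judgement split:
# the prediction is a probability; the judgement is the payoff table.

def decide(p_rain: float, payoffs: dict) -> str:
    """Choose the action with the higher expected payoff, given a prediction."""
    expected = {
        action: p_rain * payoffs[(action, "rain")]
        + (1 - p_rain) * payoffs[(action, "no_rain")]
        for action in ("carry_umbrella", "leave_umbrella")
    }
    return max(expected, key=expected.get)

p_rain = 0.3  # the prediction, which a machine could supply
payoffs = {   # the judgement: values a human assigns to each outcome
    ("carry_umbrella", "rain"): 5, ("carry_umbrella", "no_rain"): -1,
    ("leave_umbrella", "rain"): -10, ("leave_umbrella", "no_rain"): 2,
}
print(decide(p_rain, payoffs))
```

Change the payoffs, for instance by making getting soaked far more costly, and the same prediction yields a different decision; the prediction machine itself never has to change.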

Computation makes arithmetic cheap, and so, the authors point out, “Not only do we use more of it for traditional applications of arithmetic, but we also use the new cheap arithmetic for applications not traditionally associated with arithmetic.” There is an old joke that to a consultant with a hammer every problem is a nail: he will use the hammer even when it is not the right tool for the problem. Similarly, there is a temptation now to use powerful computers and Big Data analytics to solve problems that quantitative data and mathematical analysis cannot solve. Ellenberg’s warning in How Not to be Wrong: The Power of Mathematical Thinking is worth recalling: “There is a real danger that, by strengthening our abilities to analyse some questions mathematically, we acquire a general confidence in our beliefs, which extends unjustifiably to those things we’re still wrong about.”

AI machines have beaten human masters in chess, and even in Go, a far more complex game. This is considered evidence that computers have become more intelligent than human beings. But in games like chess and Go, the aim of the game is clear, and so are the rules. The aim of a human life is not clear. What is its purpose?

A New York Times article by Cade Metz reports a conversation with a researcher at OpenAI, the AI lab in San Francisco co-founded by Elon Musk, which raises an intriguing question.

“The (researcher) showed off an autonomous system that taught itself to play Coast Runners, an old boat-racing video game. The winner is the boat with the most points that also crosses the finish line. The result was surprising: the boat was far too interested in the little widgets that popped up on the screen. Catching these widgets meant scoring points. Rather than trying to finish the race, the boat went point-crazy.”

Machines need human guidance to tell them what the purpose of the game is

AI is enabling machines to do almost everything human beings can do—they beat humans in complex games and they can drive cars through traffic too. Researchers find that, nevertheless, machines need human guidance to tell them what the purpose of the game is. Intelligent machines can go berserk. ‘In some ways, what these scientists are doing is a bit like a parent teaching a child right from wrong’, says Metz.
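The Coast Runners anecdote is an instance of what AI researchers call a misspecified reward. The toy sketch below (in Python, with hypothetical point values and plan names of my own, not from the article) shows the mechanism: when the reward counts only widget points, a reward-maximising agent prefers circling for widgets; only when the human-supplied objective also values finishing the race does the preferred behaviour change.

```python
# A toy illustration (hypothetical rewards) of a misspecified objective:
# widgets are worth 10 points each; finishing earns a bonus set by a human.

def total_reward(plan, finish_bonus):
    widgets, finished = plan
    return widgets * 10 + (finish_bonus if finished else 0)

# Two candidate behaviours: (widgets collected, did the boat finish?)
plans = {"circle_for_widgets": (50, False), "race_to_finish": (5, True)}

for finish_bonus in (0, 1000):  # misspecified vs. better-specified reward
    best = max(plans, key=lambda p: total_reward(plans[p], finish_bonus))
    print(f"finish bonus {finish_bonus}: agent prefers {best}")
```

The point is not the numbers but the structure: the agent optimises exactly what it is told to optimise, so a human must tell it what the game is really for.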

Ada Lovelace, who in the 1840s wrote what is considered the first computer program, said, “A computer has no pretensions to originate anything. It can do whatever we order it to perform.” This has changed with the advent of AI, which is founded on the ability of computers to manipulate massive amounts of data, several orders of magnitude more than Lovelace could have envisaged. With masses of data for computers to learn from, AI programs have acquired a ‘deep learning’ ability that Lovelace did not foresee. Now, cutting-edge AI programs can ‘teach themselves’.

AI programs can teach themselves how to predict more accurately, but not how to make ethical decisions

While AI programs can teach themselves how to predict more accurately, they cannot teach themselves how to make moral and ethical decisions. Indeed, even when no ethical issue is involved, but the situation is complex and novel and the computer has no prior experience of it, it needs to observe how a human being handles the situation and learn from the human being. Thus, the AI program in a self-driving car observes what a human driver does when an unusual combination of external conditions and malfunctions in the car occurs, and builds this into its memory to use if it encounters a similar situation.

AI programs have huge memories of data they can tap into almost instantaneously, but they do not have the ability to make ‘judgements’ when there is insufficient data, and especially when ethical issues are involved. Sometimes ethical judgements have to be made while driving a car. If a child suddenly crosses the road, should the driver swerve sharply and risk the lives of the passengers in the car? Maybe he should. What if a dog crosses the road? Or a cow? The judgement involves the values assigned to the lives of children, dogs, and cows. Such values differ across societies, and they also evolve and change. That is why it is almost impossible, as the authors of The Social Importance of Self-Esteem point out, to understand complex social phenomena through statistical correlation and mathematical analysis. Therefore AI, whose power comes from statistical correlation and mathematical analysis of masses of data, cannot have all the components of intelligence that human beings have.

Henry Ford was concerned that his workers had not only hands to do the work, but human feelings too. Will an AI program managing a factory have any concern for the self-esteem of human workers in the factory? Amazon’s enormous fulfilment centres, which are managed with computer programs, are becoming notorious for the pressure they put on workers to perform. Therefore, the right of workers in the retail industry to represent themselves through labour unions is once again becoming a contentious political issue in the US.

Science can tell us what is, and technology enables us to do. Technology cannot tell us what is the right thing to do

I was amused to see a new book in a bookstore in San Francisco last week. Its title was, Robot Sex: Social and Ethical Implications! AI machines have been developed that can serve as pets (in place of living dogs and cats), and one can even have ‘a date’ with an AI program. It seems one can even have sex with a robot. Could a robot be charged with sexual harassment? The ethical implications of such developments of technology are beginning to worry people. Science can tell us what is, and technology enables us to do. Technology cannot tell us what is the right thing to do. For answers to that perennial question, we must turn to philosophy.

What sort of world will the proliferation of AI and robots create for human beings? What do human beings care about most? These are questions I will turn to in Part 2, which I begin with some eternal questions about human existence that Pavan Varma discusses in his book about Adi Shankaracharya, whom he considers Hinduism’s greatest thinker.

Part 2: Societal values and doing the right things

[By Stefan Keller, under Creative Commons. Cultural psychologist Jonathan Haidt uses the metaphor of the elephant and the rider to explain the relationship between the rational part of the brain and the ‘non-rational’, emotions and beliefs that guide human behaviour. The rider would like the elephant to obey his orders. But the rider must accept going with the elephant too.]

Who am ‘I’?

Adi Shankaracharya, the early 8th century Indian philosopher and theologian, consolidated the doctrine of Advaita Vedanta. He is credited with unifying and establishing the main currents of thought in Hinduism. In Adi Shankaracharya: Hinduism’s Greatest Thinker, Pavan Varma explains Shankaracharya’s thoughts on the design of the cosmic system and how human beings fit in it.

When did the universe start? What is outside the universe? What was there before it was formed? How did it happen? These are the big questions which the Vedas delved into thousands of years ago, and which Shankaracharya reflected on. Physicists in the 21st century continue to seek answers to these same questions.

Along with these, there are two other questions:

  • ‘Who’ created the universe? The human mind seems to find it hard to conceive that things could happen all by themselves. It instinctively believes that there must be someone ‘who did it’. Scientists, on the other hand, search for ‘objective’ explanations, about causes and effects that are embedded in the design of the system and do not require any ‘hidden human hand’ to make them work. Thus, there are believers in God (and devotees of different human messengers of God), and there are also atheists who think there is no need for a God.
  • Is it possible for a human mind ever to know an ‘objective’ reality? Whatever this reality may be outside the human mind, the mind must know it, and interpret it, through its own ‘subjective’ construction. It can sense it only through the physical senses it is endowed with, and it has to interpret the information with the programs that run the mind’s own computer.

Shankaracharya was intrigued by the relationships between different phenomena: what is cause and what is effect? He said there were two kinds of interactions between causes and effects. In one, which he called parinamvada, the cause changes in order to produce the effect. The second he called vivartavada, where the cause itself need not change to produce its effect.

The relationship between the condition of the natural environment and its ability to sustain itself is a parinamvada relationship. If the condition of the environment is distorted, it loses its ability to sustain itself. If it does not sustain itself, the condition of the environment will deteriorate further. It is like a chicken-and-egg relationship. When there are more eggs there are likely to be more chickens. And when there are more chickens there are likely to be more eggs. Both cause and effect change together, because each is the cause of the other: they are systemically inter-linked. On the other hand, when a billiard cue hits a billiard ball, the ball’s movement is the effect caused by the movement of the cue; the condition of the cue, however, is not changed by the movement of the ball. This is an example of a vivartavada relationship between a cause and its effect.

The distinction between parinamvada and vivartavada also explains the existential problem that human beings face. When human beings come to believe that they can be the ultimate movers and shakers of the world, empowered by advances in science and technology, they forget that they are only a part of a larger system that has brought them into existence. They lose sight of the fact that if they change the natural system around themselves, they will be affected by those changes too, because the sustainability of human existence depends on the sustainability of nature.

Zen masters ask their students profound questions (‘koans’). By reflecting on these, students may find the ultimate truths of who they are and how they should conduct their lives. One such question is: “Is there a sound in the forest when a big tree falls and there is no one there to hear it?” The insight is: the universe, whatever it is, exists only when the human mind conceives it. Until the human mind thinks about it, who knows, or cares, whether the universe exists!

Albert Einstein said, “Physical concepts are free creations of the human mind, and not, however it may seem, uniquely determined by the external world” (quoted by Andrew Newberg, Eugene d’Aquili, and Vince Rause in Why God Won’t Go Away, Ballantine Books, 2002). Stephen Hawking wrote, in A Brief History of Time, “What we call real is just an idea that we invent to help us describe what we think the universe is like.” The insights of the Vedas, Shankaracharya, Einstein, and Hawking all point to the non-existence of a sound in the forest when there is no listener: to the inseparability of subjective perceptions and objective reality.

When the mind stops thinking rationally, deep insights can alight in it

Zen masters’ koans are designed to shake off the struggles of the rational mind; to make it realize the futility of trying to find rational answers to questions that cannot have rational, scientific explanations. When the mind stops thinking rationally, deep insights can alight in it. Shankaracharya calls intuition—the one infallible step which lies beyond reason—brahmanubhava. Intuition is the explosive moment when knowledge is instantly transformed into insight.

Varma explains Shankaracharya’s legacy in the epilogue of his book. He says there was “an understandable reaction to the uncompromising ‘intellectualism’ of his vision”. And, “there has been a concerted effort to somehow unite the unrelenting non-dualism of Shankaracharya with a theism that is more appealing to ordinary people craving for the grace of a personal god in their search for solace and assurance”.

Ramanuja, one of the great minds of Hinduism, who followed Shankaracharya, “was keen to find a way to provide philosophical legitimacy to theism, with all its pageantry of worship and ritual and bhakti”, Varma says. Ramanuja understood that Shankaracharya’s concepts were, “for lay devotees, much too intellectualized a construct. It did not provide the assurance that human security, need and fulfilment seek in the here and now”. Further, most people need “some tangible concept of the absolute to identify with; a divinity that they can internalise in personal terms; the solidarity of faith—not in a concept—but in a deity that is comprehensible”.

Hinduism’s two great thinkers, Shankaracharya and Ramanuja, were thought-leaders in two different domains of knowledge. One of these domains, primarily Shankaracharya’s, may be accessible through the rational, scientific method, and with mathematics and analysis of ‘big data’. This is the domain in which ‘artificial intelligence’ works. The other, primarily Ramanuja’s, is beyond the realm of scientific deduction and numbers and mathematics. It becomes accessible through intuition and faith when rationality is suspended.

What is the ‘right’ thing to do?

Jonathan Haidt’s The Righteous Mind: Why Good People are Divided by Politics and Religion is a powerful explanation of the connection (and the competition) between these two realms of knowledge and beliefs. Haidt is a ‘cultural psychologist’, working in a discipline that combines an anthropologist’s love of context and variability with a psychologist’s interest in mental processes. Haidt writes that he obtained some of his deepest insights into the origins of moral ideas during a research project in Odisha in India, where Richard Shweder, whose work had inspired him, had earlier done seminal work in the 1980s.

Living amongst people in Odisha, a state rich in Hindu traditions, with temples and pantheons of gods that people worshipped, Haidt observed two sources of moral codes: one that anthropologists explore, and the other that psychologists study. Anthropologists study the social and religious traditions through which people learn the societal ‘rules of the game’ they must observe. Many of these rules, for example those concerning food and sanitation habits (vegetarianism, eating with only the right hand), relations between the sexes (marriage, adultery, etc), relationships amongst members of families (the responsibilities of parents and children for each other), and relationships between people in society more broadly (such as the caste system in Hinduism), are considered moral codes, the breaking of which invites social and religious sanctions. Psychologists and moral philosophers, on the other hand, are interested much more in the inner workings of the human mind.

The rider must accept going with the elephant too, or else he will be thrown off entirely

Haidt uses the lovely metaphor of the elephant and the rider to explain the relationship between the rational part of the brain and the ‘non-rational’, emotions, faiths, and beliefs swirling in the mind that guide human behaviour. The elephant is a huge beast. The rider would like it to obey his orders. It is not easy, though. The rider must accept going with the elephant too, or else he will be thrown off entirely.

Economists are expanding their notions of how human beings make economic decisions. To rational intelligence, economists have now added emotional intelligence, as well as social intelligence, as intelligences that human beings use to determine what is the right thing to do. George Akerlof, a Nobel Laureate in economics, says that people’s identities also shape the economic decisions they make (Identity Economics: How our Identities Shape our Work, Wages, and Well-Being, by George Akerlof and Rachel Kranton). It seems that economists are realising that human beings are not merely ‘rational, self-interested’ beings. Perhaps it is time for economists to humbly admit that the foundations of many of the economic models they have been propounding are unsound, and that policy-makers should not follow them!

Haidt expands the foundations of moral codes. ‘Do unto others as you would have them do unto you’ is a golden rule of morality, founded on the principles of ‘causing no harm’ and ‘fairness’. However, many moral codes of societies relate to actions which an individual may want to take simply because he or she wants to, and which may cause no harm to anyone else; such actions may nevertheless invite sanctions in society. For example, personal dietary preferences, such as eating non-kosher food, or pork, or beef, are taboos in many societies. And disrespect of religious symbols or the desecration of national emblems can provoke moral outrage in strongly nationalist or religious societies—even if these acts are done in private.

Morality has five foundations, Haidt says. In addition to ‘harm’ and ‘fairness’, morality is also founded on the basic principles of ‘loyalty’, ‘respect for authority’, and ‘sanctity’. He distinguishes between ‘socio-centric’ and ‘individualistic’ moral codes. Individualistic (or ego-centric) moral codes emphasise the rights of individuals—to ‘be themselves’ and ‘to do their own thing’. Socio-centric moral codes are founded on the other principles too.

An individualist moral code is the basis for liberal economic as well as liberal social ideologies. ‘Me’ values came into prominence in the 1970s, with the hippy movements in the US and Europe. ‘Me’ values were also endorsed by economic theories founded on notions of purely rational and self-interested human beings that came to the fore in economics around the same time. The rise of excessively liberal ideas pushed aside deep-seated ‘old fashioned’ yearnings for values of loyalty, authority, and sanctity that people also have.

Individuals must realise that their own health depends on the health and sustainability of the society they live in

Loyalty, authority, and sanctity are socio-centric values. They honour the collective values of a group of people—a tribe, a religious community, or a nation. Individuals help a group to maintain its cohesion and its strength by honouring the values that others in the group hold. Individuals must realise that their own health depends on the health and sustainability of the society in which they live. An excessively individualist moral code can be destructive of society. This explains the visceral reaction, even hatred, that religious people, and ‘nationalists’ too, have towards ‘liberal’ thinkers, anti-religious ‘secularists’, and anti-religious ‘communists’. They see liberals, secularists, and communists as ‘amoral’ people.

Conservatives and liberals may use the same words to say what they value. However, the concepts and meanings behind the words they use can be very different. For example, both Republicans and Democrats in the US say they respect ‘family values’. But they see family values very differently, as George Lakoff, the American cognitive linguist and philosopher, eloquently explained in his book Moral Politics: How Liberals and Conservatives Think.

In the conservative model of a good family, fathers and mothers have distinct roles. Fathers must provide for the family and protect it. Mothers must care for the well-being of all the family’s members. Children must respect their parents. Conservative families are caring and disciplined families. In the liberal model of a family, too, parents care for their children. But parents’ roles are more fluid in the liberal model, and children are given more space to express themselves and to develop in their own ways. Fairness, as well as doing no harm to others, are moral foundations for both types of families; respect for authority, loyalty to one’s family and nation, and the principle of sanctity (respecting religious and national symbols, for example) are stronger moral foundations for conservative families than for liberal ones.

Within every person is an elephant, and also a rider who tries to tame the elephant. The rider tries to be cool and calculating and to reason. But the elephant has a mind of its own, and feelings and moods, which often the rider cannot understand. It is not easy to have ‘reasonable’ conversations between people whose elephants cannot get on with each other.

Haidt provides another insight: the rider very often operates like the elephant’s in-house ‘press secretary’, trying to find rational justifications for what the elephant instinctively believes in and does: to justify the elephant’s instinctive actions to others, and also to justify those actions in his (or her) own mind.

Our beliefs determine what facts we will accept

Peter Drucker, the great management philosopher of the 20th century, consulted with CEOs of the largest companies in the world, and with presidents of countries too. He said that whenever he met an important person, he would always ask for the person’s opinions first, not facts, because any smart person knows how to find facts that will support his or her opinions. Our beliefs determine what facts we will accept, because within the human mind the elephant is more powerful than the rider. The ‘Google world’ of the 21st century, Haidt points out, makes it much easier for the ‘press secretary’ to find the ‘alternative facts’ that will support the chief’s opinions. Googling makes it easier for both the internal press secretary within each of us and the President’s official press secretary!

A great expectation of the internet was that, by enabling people everywhere to connect with people anywhere, it would bring people closer together. Instead, the world is becoming more divided by the technologies of social media platforms, which give people the facts they prefer and make connections for them with people they ‘like’. The Big Data analysis that empowers Google and social media platforms such as Facebook and Twitter produces the knowledge of people’s preferences that these platforms sell to advertisers (and even to political parties, as the Cambridge Analytica scandal revealed). Thus, the elephants within us are being herded into virtual corrals of ‘people like us’, separated from ‘people not like us’. People within these corrals listen only to others in the same ideological corrals. They shut out the views of people in other moral and ideological corrals, even when they live together in the same countries, the same towns, and sometimes even in the same houses.

The big shock for many Americans with the election of Donald Trump was how viscerally divided Americans had become. They live in the same country and are governed by the same Constitution. Yet they have very different visions of what makes their country great. In India too, divisions are sharpening amongst people, about their visions of what will make India great. Will India, a richly diverse country, with many ‘different elephants in the room’—people with many traditions and many religions—be a country in which people will relish its diversity? Or, will one tradition and one religion shut out others? 

Part 3: A Dialogue amongst Elephants

[By Gerd Altmann, under Creative Commons. With the barrage of bits of information from our always-on smartphones, we are losing the art of listening just when we must learn to listen to each other more deeply.]

Conversations amongst the rational riders of elephants could be conducted in facts-based, data-rich, quantitative language. Technology can make such conversations more efficient. However, there is a real danger, as Ellenberg pointed out in his book, How Not to be Wrong, that, by strengthening our abilities to analyse some questions mathematically, we acquire a general confidence in our beliefs, which extends unjustifiably to those things we’re still wrong about.

Unlike debates amongst riders, dialogues amongst elephants must delve into worlds of emotions and beliefs that lie deep beneath the world of mathematical rationality. The mental processes of elephants are influenced by moral matrices, which combine several moral principles in varied combinations. The matrices are formed by processes of social evolution deep within the human psyche. They are shaped by the cultures of the families and societies in which human beings grow.

The title of my essay was a question: Who will robots and elephants vote for: Donald Trump or Xi Jinping? To answer this, one must know what sort of world robots and elephants want to live in. Robots powered with AI may prefer a more mechanically efficient and predictable world, because that is what they are most comfortable in, according to the authors of Prediction Machines: The Simple Economics of Artificial Intelligence. However, human beings, who have more of the wisdom of elephants in them than the rationality of robots, may prefer a different world, rich in emotional interactions amongst diverse people, all of whom have freedom to evolve and grow in their own ways.

I included Donald Trump and Xi Jinping in the title of my essay as strawmen for different political systems. The US system is a noisy democracy, in which people have the rights to be themselves and to speak up against their leaders. The Chinese system values order: in it, authority must be respected. A moot question is: which is the better society? Both societies need citizens to follow the ‘rules of the game’ so that their societies can provide them with ‘what their country should do for them’ (twisting John F. Kennedy’s memorable appeal a little bit).

Good democracies need sound lateral processes for deliberations amongst citizens, for people to listen to each other

In a democratic society in which citizens want individual freedom and resent any dictatorial power over themselves, but want social order too, the citizens must be able to reconcile their diverse preferences amongst themselves. Therefore, a good democracy cannot rely only on the vertical processes of democracy, of voting upwards to choose the leaders citizens want, because elections alone will divide them along the lines of the type of society, and the values of the leader, they choose, as has happened in the US and is happening in many democratic countries in Europe too. Good democracies need sound lateral processes for deliberations amongst citizens, for people to listen to each other and come to agreements about the fundamental rules of the game they will accept.

A dialogue amongst people must be a dialogue amongst the elephants within us. It cannot be limited to debates about facts. Elephants must also understand each other’s beliefs. Haidt suggests an antidote to the self-righteous indignation, aggravated by social media technologies, that is messing up discourse amongst people. He says, “If you want to open your mind, open your heart first. If you have at least one friendly interaction with a member of the ‘other’ group, you’ll find it far easier to listen to what they are saying, and maybe even see a controversial issue in a new light.”

Listening seems such a simple solution, too simple perhaps to solve the complex problems humanity must solve: rapid environmental degradation, persistent inequities, social divisiveness, and the challenge of regulating technologies that are getting ahead of human capacities to manage them. These problems have many contributory causes, and they require cooperative action amongst diverse people with expertise in different areas, and amongst people with different ideologies too. People must be willing and able to listen to each other so that, by combining their knowledge, they can, like the blind men around the proverbial elephant, see the whole elephant.

We must learn to listen to people who are not like us, and whom we may not even like. Sadly, with the barrage of bits of information from our always-on smartphones, we are losing the art of listening just when we must learn to listen to each other more deeply.

The first level of listening is to pay attention to ‘what’ the other person is saying, even if one does not agree. The instinct of a debater is to get ready with a riposte to prove the other wrong. Therefore, a debater stops listening even while the other is speaking.

A good listener listens well to what the other is saying and also ‘listens’ to her own mind’s reactions to it

Unlike a good debater, a good listener listens well to what the other is saying and also ‘listens’ to her own mind’s reactions to it. She notices her disagreement, and her desire to counter the other. But she stops herself and goes into a second and deeper level of listening. At this level, she wonders ‘why’ the other thinks the way he does. And, rather than debate the other, she asks the other, with genuine interest, ‘why do you believe what you do?’ Thus, she begins to inquire into another’s way of thinking. And begins to see the ‘lens’ through which the other sees the world.

From this second level, deep listeners come to a third, even deeper level of listening. At this level, the listener begins to notice the difference between her own way of seeing the world and the other’s. Thus, she may begin to see her own lens. Our lenses are our ways of seeing and thinking. They are buried within the backs of our heads. We cannot see them with our own eyes. However, we may see them reflected in the eyes of another. Deep listening makes one aware of ‘who’ another is. Deep listening also brings self-awareness, of who I am.

The question, ‘What sort of world are we leaving for our grandchildren?’ has become a cliché. We cannot continue to live as we are and leave it to our children to produce a more inclusive, more just, more harmonious, and more sustainable world for our grandchildren.

We must change, and we must collaborate with others to shape our collective future. Let us listen to our own aspirations. We must listen also to the aspirations of people not like us for the better world they want to leave for their grandchildren.

About the author

Arun Maira

Former Member, Planning Commission of India
Former Chairman, Boston Consulting Group, India
Chairman, HelpAge International

Any discussion on policy, the future of India, and indeed the world, is enriched by Arun Maira’s views, and not just because he was a member of the Planning Commission of India for five years till June 2014. Arun is one of those rare people who have held leadership positions in both the private and the public sectors, bringing a unique perspective on how civil society, the government, and the private sector can work more closely to improve the world for everyone. He has led three rounds of participative and comprehensive scenario building for the future of India: in 1999 (with the Confederation of Indian Industry), 2005 (with the World Economic Forum), and 2011 (with the Planning Commission).

In his career spanning five decades, Arun has led several organisations, including the Boston Consulting Group in India, where he was chairman for eight years till 2008. He was also the chairman of Axis Bank Foundation and Save the Children, India. He was a board member of the India Brand Equity Foundation, the Indian Institute of Corporate Affairs, the UN Global Compact, and WWF India.

In the early part of his career, he spent 25 years in the Tata group at various important positions. He was also a member of the Board of Tata Motors (then called TELCO). After leaving the Tatas, Arun joined Arthur D Little Inc (ADL), the international management consultancy, in the US, where he advised companies across sectors and geographies on their growth strategies and handling transformational change.

In recognition of his astute understanding of both macro and micro policy issues, Arun has been involved in several government committees and organisations, including the National Innovation Council. He has been on the boards of several companies as well as educational institutions, and has chaired several national committees of the Confederation of Indian Industry.

In 2009, Arun was appointed as a member of the Planning Commission (now replaced by the NITI Aayog), which is led by the Prime Minister of India. At this minister-level position, he led the development of strategies for the country on issues relating to industrialisation and urbanisation. He also advised the Commission on its future role.

With his vast experience and expertise, Arun is indeed a thought leader. He is invited to speak at various forums and has written several books that capture his insights.

His most recent books, A Billion Fireflies: Critical Conversations to Shape a New Post-Pandemic World and, before that, Transforming Systems: Why the World Needs a New Ethical Toolkit, talk about how systemic problems of social inequality and environmental unsustainability are becoming intolerable. Prevalent precepts of good business management and best practices in government as well as civil society organisations are failing the needs of humanity. This calls for a whole new toolkit founded on systems thinking, ethical reasoning, and deep listening, and for civil society, government, and private companies to work together to encourage a variety of local systems solutions for deep-rooted issues that impact different communities differently.

His previous books include An Upstart in Government: Journeys of Change and Learning (2016); Redesigning the Aeroplane While Flying: Reforming Institutions (2014); Remaking India: One Country, One Destiny; Transforming Capitalism: Improving the World for Everyone; Shaping the Future: Aspirational Leadership in India and Beyond; and Listening for Well-Being: Conversations with People Not Like Us (2017).