[Image by Gerd Altmann from Pixabay]
“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
– From the open letter signed by over 8,000 experts in AI, technology & engineering, science, and social science.
It is a sunny morning in Bangalore. Rahul is a young machine learning expert working for a globally dominant technology platform company. He has come up the hard way from a low-income family. He secured excellent marks in school and cracked the engineering entrance exam. Rahul also won a merit scholarship to complete his engineering degree. His dad is a security supervisor and his mother is on the janitorial staff in one of the many tech parks that dot Bangalore. Rahul wants his parents to have a comfortable retired life. He wants to move with his family to an apartment in a good locality. Rahul has identified the perfect apartment and needs a loan to buy it. The bank that holds his salary account is offering him an interest rate 1.5% higher than the lowest rate it gives his colleagues. Rahul is a trifle surprised but not worried. He is trying other banks with competitive interest rates and is expecting decisions from two of them today. By afternoon, Rahul is devastated when both banks reject his loan application outright.
Slowly it dawns on him. His current address is that of his parents’ rented house, and all his identity proofs carry this address. The house is in a locality that has historically had a high number of two-wheeler loan defaults, and probably no one from his locality has ever approached a bank for a home loan. He clenches his fist and mutters, “bloody training data... bloody machine learning system that has no ethics”. If only he could explain to the banks the bias in the machine learning system, and that it is not ethical. Rahul calls the banks’ representatives, who tell him they cannot do anything if the system rejects an application outright. They too are surprised, given where Rahul works. One of them asks Rahul if he has ever been arrested by the police in the past. Rahul cuts the call. This was turning out to be a very bad day...
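Rahul’s hunch about the training data can be illustrated with a minimal sketch. All names, pincodes, and figures below are invented for illustration, not drawn from any real bank: a naive model trained on historical defaults happily learns a locality’s pincode as a proxy for creditworthiness, and then rejects an applicant from that locality regardless of his actual income.

```python
# Hypothetical sketch of proxy bias in a loan-approval "model".
# All data below is invented for illustration.

# Historical loan records: (pincode, defaulted?). Two-wheeler loan
# defaults dominate the records from Rahul's locality (560001).
history = [
    ("560001", True), ("560001", True), ("560001", False),
    ("560002", False), ("560002", False), ("560002", False),
    ("560002", True),
]

def default_rate(records, pincode):
    """Fraction of past loans from this pincode that defaulted."""
    outcomes = [defaulted for p, defaulted in records if p == pincode]
    return sum(outcomes) / len(outcomes)

def approve(applicant, records, cutoff=0.5):
    """A naive scorer that looks only at locality default rate,
    ignoring the applicant's actual income -- the proxy bias."""
    return default_rate(records, applicant["pincode"]) < cutoff

rahul = {"name": "Rahul", "income": 2_500_000, "pincode": "560001"}
peer  = {"name": "Peer",  "income": 2_500_000, "pincode": "560002"}

print(approve(rahul, history))  # False: rejected despite identical income
print(approve(peer, history))   # True
```

Two applicants with identical incomes get opposite decisions purely because of where their identity proofs say they live; this is the kind of learned bias the rest of the article is concerned with.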
Some of us remember only a few fundas (concepts) from the innumerable courses we completed in university. For me, one such funda along with a striking example still rings clear from an introductory course on methods for management research. It is the concept of ethical frameworks. Let me describe the frameworks with an example.
The objective for a cigarette company is to determine which of several TV ads in contention will be most effective in selling a new product: a cigarette whose primary feature is reduced tar. While the ads suggest that the lower tar content is a “healthier” option, in reality a smoker may have to inhale more frequently from a lower-tar cigarette to get the flavour of a regular one. Let us analyse this case from the three dominant ethical perspectives in management research.
1. The egoistic perspective states that one takes actions that result in the greatest good for oneself. The cigarette company is likely to sell more cigarettes overall with this brand through the “healthier” messaging, assuming the new cigarette wins customers from the competition and more than offsets the cannibalisation of the company’s existing brands. And if this company does not launch such a brand, a competitor may do so and take some customers away. From the egoistic perspective, it is a go-ahead for identifying the most effective advertisement.
2. The utilitarian perspective states that one should take actions that result in the greatest good for all, even if the good is not evenly distributed. The launch is obviously good for the cigarette company, as seen from the egoistic perspective. The new brand provides a “healthier” choice, or at least a choice, for existing smokers and hence is good for customers. From the utilitarian perspective, it is a go-ahead for identifying the most effective advertisement.
The egoistic and utilitarian perspectives are together called teleological perspectives, where the focus is on the results.
3. The deontological perspective focuses on the intention behind the decision rather than the results. Clearly, the “healthier” positioning of this new cigarette is a deception, and the product endangers the health of smokers. Knowingly endangering human health is not an ethical intention, and advertising such a product is not ethical either. From the deontological perspective, it is a no-go for launching this cigarette, releasing any of the advertisements, or identifying the most effective one.
AI and ethics
Let us now focus on the artificial intelligence (AI) systems context. My hypothesis is that commercially available AI systems are based on algorithms that are often optimised using the teleological perspectives of ethics and not the deontological perspective. To be fair, the creators of these AI systems may not even be aware of the larger ethical implications. It is possible that the creators realise the implications only after these AI systems are made public, and are used in real world contexts. What does this lead us to? Let us analyse a context where AI’s success is often celebrated.
Facial recognition is now synonymous with AI. Those of us active on social media take the automatic tagging of people in photographs for granted. Facial recognition is part of the larger context of image recognition by AI systems. The year 2015 was a heady one for image recognition, with major digital platform companies announcing impressive breakthroughs: both Microsoft and Google announced that their AI systems were outperforming humans in recognising images. However, an AI system introduced in 2015 with much fanfare in the USA failed to recognise faces of African Americans with the same accuracy as Caucasian Americans. Google, the creator of this AI system, quickly took remedial action when the flaw was made public.
From a teleological perspective, this AI system gets a go ahead. According to the 2010 census, Caucasian Americans constitute 72.4% of the population of the US, and hence, are the majority users of AI systems. So, an AI system that identifies Caucasian American faces better is more useful for a majority of users, and hence to Google.
From a deontological perspective, it appears that the intention of this AI system was not to pursue an objective of identifying faces of all races that constitute America. In fact, shouldn’t digital platform companies whose market spans a large number of countries across the globe aim to identify faces of all races with equal accuracy?
One would have thought that four years is a long time in AI, and that by now facial recognition systems would have fixed this for good. Recent research from Wake Forest University in the US suggests that is not the case: popular AI-based emotion-reading systems consistently scored African American faces as angrier than Caucasian American faces, even when they were smiling.
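One simple audit that surfaces this kind of disparity is to compare a system’s accuracy per demographic group rather than in aggregate. The sketch below uses made-up labels and predictions, not real benchmark data: an impressive overall accuracy number can hide a stark gap between groups.

```python
# Hypothetical per-group accuracy audit. The sample data is invented
# for illustration; "A" and "B" stand for two demographic groups.
from collections import defaultdict

def accuracy_by_group(samples):
    """samples: list of (group, true_label, predicted_label).
    Returns a dict mapping each group to its accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in samples:
        totals[group] += 1
        hits[group] += (truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

samples = [
    # (group, true identity, predicted identity)
    ("A", "p1", "p1"), ("A", "p2", "p2"), ("A", "p3", "p3"), ("A", "p4", "p4"),
    ("B", "p5", "p5"), ("B", "p6", "p7"), ("B", "p8", "p9"), ("B", "p10", "p10"),
]

per_group = accuracy_by_group(samples)
overall = sum(t == p for _, t, p in samples) / len(samples)
print(overall)    # 0.75 overall looks respectable...
print(per_group)  # ...but group A is at 1.0 while group B is at 0.5
```

Reporting only the aggregate figure is, in the terms used above, a teleological summary; breaking the results down by group is the minimum needed to even ask the deontological question.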
Those of us in India who dismiss the contexts discussed so far, saying that skin colour differences are not as stark in India, may be in for a surprise. A study by a team of researchers from around the world, including from Cambridge University in the UK and the Centre for Cellular and Molecular Biology (CSIR) in India, suggests that skin colour in South Asians is enormously diverse, with a colour range three times larger than that of East Asians or Europeans. How facial recognition AI systems will work in a setting as diverse as India is anybody’s guess, especially in a cultural milieu traditionally biased in favour of lighter skin tones.
Then there are some who may say that AI facial recognition is used only for identifying faces in photos shared on social media, where accuracy does not matter much. But that is not the only context where these systems are put to use: they are increasingly used for recruitment and for identifying threats to public safety. Imagine the implications of being labelled a threat to public safety just because limited data for your skin colour was used to train the AI system.
Americans are taking note. Recent news reports suggest that the city of San Francisco has banned use of facial recognition by law enforcement, and the state of California may also follow soon.
Often we wrongly ascribe these ethical biases in an AI system to the black box nature of its algorithm.
We don’t need to unravel the black box of AI systems, since their past track record or results will tell us about their efficacy. But a good track record of results does not tell us about the ethical basis. Shouldn’t we know the ethical basis of every AI system that is used by government or industry? An ethical basis resting on both teleological and deontological perspectives will give us more faith in AI systems compared to a system resting only on a teleological perspective.
As a student of the history of technology and business, I understand that it is not fair to blame technology platform companies and their AI systems alone for their ethical bias. They are probably perpetuating a deep-rooted bias in technology that has existed for decades. Imaging technologies in the past like photography are also skewed in their ethical basis. They too have favoured the teleological perspective more than the deontological perspective.
From the 1940s, Kodak colour film processing for skin tone was based on a reference image (the Shirley card) of a Caucasian lady. This often meant that if a frame contained both Caucasians and Africans, the Africans’ facial features would barely be visible. It was only in the 1990s that colour film processing was designed to produce good-quality images of people with darker skin tones, and the Shirley card came to include an African and an Asian lady alongside a Caucasian lady.
Inclusivity does not always come with ethical intentions. Polaroid’s ID-2 camera and film were among the first to produce good-quality photographs of people with darker skin. One of the enhancements was boosting the intensity of the camera’s flash by about 40% to compensate for the additional light that darker skin absorbs. In an ordinary context, this would have been welcomed as a move towards the deontological perspective. But that was not to be. There are reports that Polaroid developed this system for use in the ‘dompas’ (colloquially, ‘dumb pass’), a compulsory identification document that black South Africans were forced to carry during apartheid. Polaroid has been dismissing this association since the early 1970s.
The equivalent of the Shirley card in motion pictures was the China Girl, a colour reference based on a Caucasian lady, used by photo-lab technicians to process colour film, especially in the mid-20th century. Filmmakers had to use different lighting on dark-skinned actors while filming them. The first colour movie to adjust lighting for dark-skinned actors was the 1967 classic In the Heat of the Night, featuring Sidney Poitier. This was almost thirty years after colour became mainstream in Hollywood with the 1939 releases of The Wizard of Oz and Gone with the Wind.
What this means for India
I’m sure many of us are now wondering what this means for India. We are just ramping up our adoption of AI systems, and India is at an inflection point. The NITI Aayog report National Strategy for Artificial Intelligence estimates that AI’s impact on the Indian economy will be in the order of $1 trillion by 2035. This is the right time for us to understand and educate ourselves on the ethical basis of AI systems. Doing so requires a multidisciplinary, multi-stakeholder approach, along with appropriate policy interventions. None of this is easy, given the magnitude of the digital divide in India, but we must begin the journey now. The journey is likely to be arduous, but it is vital to fulfilling our aspiration to become a developed country. It is also time for us to break away from the shackles of the past.
NITI Aayog has already identified healthcare, agriculture, education, smart cities, and smart mobility as focus sectors for AI. Let us take the case of healthcare in India. The potential of AI is so alluring that policy makers see it as an important part of India’s arsenal for leapfrogging its current healthcare bottlenecks. There is a shortage of both qualified healthcare professionals and infrastructure: India has 0.76 doctors and 2.09 nurses per 1,000 population, against WHO recommendations of 1 doctor and 2.5 nurses per 1,000. Access to healthcare is not uniform across the country, physical access continues to be the major barrier to both preventive and curative health services, and there is a glaring disparity between rural and urban India. AI’s use cases in keeping well, early detection, diagnosis, decision making, treatment, and research can help improve healthcare in resource-constrained contexts like India. While this is a fantastic objective, healthcare policy makers should also be sensitised to the ethics of AI in healthcare. For example, training data sets should include a representative sample of all Indians, and the decisions made by an AI system should be explained clearly enough that a digitally illiterate Indian can understand the context, implications, and options.
AI has already shown us its transformative technological abilities. Let AI become transformative in the equity of its ethical basis as well. The silver lining is that multidisciplinary discussions on AI and ethics have begun in India. Computer science professionals are beginning to realise that they cannot wash away responsibility by stating that, given the training data, their algorithm is accurate to a certain degree; that is, by blaming the efficacy of the AI system on the training data. They understand that AI systems may be making important decisions that fundamentally affect the lives of Indians. A multidisciplinary approach with experts from the domain, the humanities, public policy, and law will be required to implement AI systems in the Indian context. For example, an AI system deciding on a loan application will need expertise from computer science, retail banking, anthropology (to ensure that unjust biases in the historical data are not perpetuated), and law. This approach will lead to the adoption of better AI that benefits all Indians.
Let me end with words that ought to echo more often. Stephen Hawking, the renowned cosmologist, made a poignant observation at the Zeitgeist 2015 Conference: “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” While we don’t know exactly when AI will overtake humans, we must constantly ensure that AI’s goals are aligned with the good of all humankind.
(itihaasa has signed a MoU with IIT Madras for researching AI and Ethics, Fairness and Explainability and works with the Robert Bosch Center for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras. Views are personal.)