The unintended consequences of technology and why it matters

In this podcast, Wharton professor Kartik Hosanagar talks about how automated decisions affect billions and what consumers can do to take back control, whether a VC firm can be the Pixar of VC firms, and the organisational changes Google made to become AI-first

N S Ramnath

Kartik Hosanagar works at the intersection of business and technology. He is a professor of Technology and Digital Business and a professor of Marketing at Wharton. He is also an entrepreneur, an advisor to companies, and an investor.

In this podcast, based on his recently published book, A Human’s Guide to Machine Intelligence (read an excerpt here), he dwells on:

  • The unintended consequences of technology and why it matters.
  • A bill of rights for the AI age: what consumers should expect when more and more of the things that affect their lives are informed and driven by AI.
  • What large firms can learn from digital-native firms like Google that have made big transitions—in Google’s case, its decision to become AI-first was not trivial. It required a change in organizational DNA.
  • How does Pixar deliver a blockbuster every time, where other film companies manage one in every 10-odd films they produce? And can venture capital firms benefit from the way Pixar goes about developing winning ideas?

Edited Transcript

Founding Fuel: Any decision that we take or any intervention that we make, whether as a business or as a policy maker, has unintended consequences. So what is different about artificial intelligence that we should be more concerned about its unintended consequences?

Kartik Hosanagar: I have a chapter in my book called ‘The Law of Unintended Consequences’. I start off by talking about the concept of unintended consequences from sociology, which predates...

FF: You talk about the cobra effect.

KH: ...all of these things, right? So it all comes from decisions humans are making. I'm pointing to the fact that the decisions technology is making also have unintended consequences. But I feel that the unintended consequences of technological decisions require a little more attention because technology decisions scale in a way that human decisions don't.

What I mean by that is, if you talk about decisions made by a politician, they impact billions of people. But if you talk about the decisions that we all make in our workplaces: an editor makes a decision that affects his readers; a doctor makes a decision that affects his or her patients; a lawyer makes a decision that affects the defendants who come to court. Those are limited to, say, a few thousand people who are treated by a doctor or affected by a judge's decisions.

But technology just scales. If you have a technology that works, the nature of technology is that it looks to scale on its own. You cannot constrain it. So if the Google search engine works, it doesn't work for just a hundred or a thousand or 10,000 people. It works for billions of people. And so anything that's wrong with it can affect billions of lives. It is for that reason that we should be careful about it.

FF: I remember in one of your speeches you mentioned that it is easier to fix technology than it is to fix human beings. So the fix that you offer is a bill of rights [for when we use algorithms]. Can you take me through what it is?

KH: There are a lot of potential issues with technology, and the book is focused specifically on automated decisions. As you were saying, my contention is that there are biases and problems with human decisions, and there are biases and problems with automated, tech-driven decisions. But I think we can fix automated decisions more easily than human decisions. Because if you meet a person with prejudice and try to convince them otherwise, it is a very hard task; we have so many conscious and unconscious biases. It is very hard to retrain a person. But with technology, especially AI, which is driven by data, it's all about data. If we can build checks and balances, we can fix it.

So what are those checks and balances? One of the key ones that I have in mind is to just have an audit process.

Today if you look at automated decisions, or you look at the whole world of data science, there isn't an idea of an audit. There isn't even an idea of QA. In software development you have programmers and you have test engineers, so you have a QA process, a quality assurance process. But in data science, there's no QA process. The data scientist builds it, the same data scientist tests it, and then releases it. So the idea is that you need another person who is thinking independently, who can come in with a fresh pair of eyes, evaluate the code, evaluate how it's working, and who explicitly has to test for certain things. So what should they test for? One is bias and fairness—is the data reflecting biases in society, and are those biases therefore captured in the machine learning algorithm? That's one.
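
To make that concrete, here is a minimal sketch of the kind of bias and fairness check such an audit might run, purely as an illustration rather than anything prescribed in the book. The loan-decision data, the group labels and the 0.8 "four-fifths" review threshold are all invented assumptions.

```python
# A minimal sketch of one fairness check an independent auditor might run on
# a batch of automated loan decisions. The data, group names and the 0.8
# threshold are hypothetical, for illustration only.
from collections import defaultdict

decisions = [  # (applicant_group, model_approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, was_approved in decisions:
    total[group] += 1
    approved[group] += int(was_approved)

# Approval rate per group.
rates = {group: approved[group] / total[group] for group in total}
print("Approval rate by group:", rates)

# Disparate-impact ratio: the least-favoured group's approval rate divided by
# the most-favoured group's. A common rough rule of thumb flags anything
# below 0.8 for closer review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}", "-> review" if ratio < 0.8 else "-> ok")
```

A real audit would go much further, but even a check this simple is more than most data science teams run today, which is the point about needing a second, independent pair of eyes.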

Second is around safety and security. 

Because there are ways in which adversaries can affect machine learning systems as well. One example that I mentioned in the book is a driverless car. It can be trained on a lot of data and work really well with that data. But if somebody wants to trip it up, they can. All they have to do is go out and create signs that are confusing. In the US there was a study where, if you put a post-it note on a stop sign, the driverless car couldn't recognise the stop sign. And that's all it took to fool the driverless car. So you have adversaries, and you have to have a security mindset and do security assessments. There are a few of these kinds of tests. So audit is one, but my bill of rights is more focused on consumers.

So what should consumers expect?

One is transparency. Sometimes we don't even know when an algorithm is making a decision. We don't know what kinds of data the algorithm is using. We don't know what the key factors behind that decision are. So transparency is about sharing these things with consumers.

Europe has actually started that process. They have this privacy initiative called GDPR, and within GDPR there is a section on the right to explanation. The idea is that, not in every setting but in important settings like loans or criminal sentencing, when a decision is made by an automated algorithm, or a decision maker is informed by an automated algorithm, the consumer has a right to an explanation. That is very important. That actually has an effect.

I spoke with a bank that is using machine learning to make credit and loan approval decisions, and they said, we are not able to use black-box algorithms like neural nets because we cannot provide explanations, and regulations require us to provide explanations.

It does have an impact. That's transparency.
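
As a rough illustration of why that constraint matters, here is a minimal sketch, not the bank's actual system, of an interpretable loan-approval model whose decision can be explained factor by factor; the feature names, data and use of scikit-learn are assumptions made for the example.

```python
# A sketch of an interpretable loan model: because it is linear, each factor's
# weight can be reported as an explanation, which a black-box neural net
# cannot easily provide. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_lakhs", "existing_debt_lakhs", "years_employed"]
X = np.array([
    [12, 1, 5], [4, 3, 1], [20, 2, 8], [6, 6, 2],
    [15, 0, 10], [3, 4, 0], [18, 5, 6], [5, 2, 3],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 1])  # 1 = approved in historical data

model = LogisticRegression().fit(X, y)

applicant = np.array([[8, 3, 4]])
print(f"Approval probability: {model.predict_proba(applicant)[0, 1]:.2f}")

# The 'right to explanation': report how each factor weighs on the decision.
for name, weight, value in zip(features, model.coef_[0], applicant[0]):
    print(f"{name}: weight {weight:+.2f}, applicant value {value}")
```

Swapping in a deep neural network might raise accuracy on a large dataset but would lose this factor-by-factor account, which is the trade-off the bank was describing.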

The other thing I bring up is control. The idea there is that we should have some control, whether it is to say I don't want the algorithm making decisions, or to give feedback to the algorithm. If you look at Facebook's fake news issue, a lot of people were seeing fake news in the news feed. They were aware of it. They were telling their friends about it. But they had no way of informing the algorithm. So control could be as simple as: now, with two clicks, you can inform Facebook's backend, the algorithms, that you think a post is fake news or that you think a post is offensive—things like that. Or imagine a driverless car. Earlier, some of the driverless cars, like Google's early prototypes, did not even have a steering wheel.

You're sitting in a driverless car and you're not happy with the choices it's making, but you can't take back control. So I think control is something the designer or engineer has to really think about: did I give the user enough chances to take back control? Unless you bring it into the design ethic, nobody is explicitly going to build the UI [user interface] to give users some control.

I think audit, transparency and control are some of the main ones.

FF: Will this be self-regulation? Will companies regulate themselves, or will it have to be enforced by some outside agency? The reason I am asking is that in your Fast Company piece, you actually start with the atom bomb and how the scientists themselves took on the responsibility of undoing the wrongs they had unleashed upon the world.

But if you look at the technologists… I mean, a bomb is a bomb; it’s designed and created to kill people. But talk to anyone from Google or Facebook, and they think that they're actually doing a social service. And in many ways you can say it is so. Given that the technologists who create these tools actually think it is good—and to a large extent it is good—does that make self-regulation a little hard to impose, or is there any other explanation for why technology companies are not regulating themselves? Why should Mark Zuckerberg ask the government to regulate them; why not regulate themselves?

KH: I did a bunch of talks at lots of companies as part of my book tour, and this question and this discussion came up many times. One of the things I was trying to tell these companies and their executives was: you had better take self-regulation seriously. Because if you don't, heavy regulation is coming your way.

If you can prove to your consumers that you are responsible and can self-regulate, then the demands for government regulation will not be the same. If you can't, you might end up with very heavy regulations. If you look at GDPR, it has had a very negative impact on a lot of companies because the compliance costs are so high. And you could even argue that the compliance cost might exceed the value to users in many settings. Why is that? Because the industry was not responsible.

Privacy concerns have been there for 20 years now, from the early days of the Internet. And yet companies didn't do anything; they kept saying, we can continue doing what we're doing. Now you have this heavy regulation, and now they're complaining so much.

The way I see it is, there is hope for companies to self-regulate. But I don't think that will be the end of it. I think there will be companies that misbehave, because at the end of the day companies are still driven by the goal of maximizing shareholder value rather than maximizing all stakeholders’ value. And that means focusing on profitability. That means focusing on whatever it takes to push that up, whether it's at the expense of privacy or fairness and so on.

I think self-regulation is important. Companies should do it. And if you notice, in the book I didn't really talk a lot about government regulation; mostly my message is self-regulation, because I think we have to give that a chance first before we do aggressive regulation. But I do think regulation is coming. The US already has a proposal for audits. In fact, the book came out in March, and in April there was a proposal for auditing algorithms in socially important settings: all companies with over $50 million in revenues or over 1 million users would need it. So that is being proposed in the US.

I think government regulations are also coming in, especially when companies like Facebook, Mark Zuckerberg himself, are saying: please regulate us, tell us what you want us to do. Because this is a tough area as well.

What is needed at the end of the day is for everyone to play a role—consumers, firms, regulators. Firms have to first self-regulate and show, you can trust us. Consumers have to look at that self-regulation and ask whether we are happy with it or want more. And if we want more, we need to make a noise about it. And then the government has to listen to the noise we make and come in and intervene.

All three things are going to be necessary.

FF: Shifting the conversation to what companies have been doing. One example is the shift Facebook made to mobile, or the shift Google made to AI. Is there anything that large organizations can learn from the way Google went about it?

KH: You mentioned Google. Incidentally, was it 10 years back or 12 years back, they announced that they wanted to be a mobile-first company, and then a few years back they announced that they want to be an AI-first company. It's really interesting, because back then Google realized that most of search was going to happen on mobile devices, and if they didn't, first of all, deliver search well on mobile, and also didn't control the gateway to search, they could lose the search wars.

They not only invested in making sure Google search works well on mobile, they also went and acquired Android, invested heavily in it, and made sure Android is successful. And of course Google search is the default search engine in Android. So they did all that.

Again with AI-first, what is interesting is that many companies say, ‘Oh, you know, it's easy for Google to be AI-first. How can I be AI-first?’ But that misses the point. For Google to say, we are going to be AI-first, was not trivial, because when Google announced that, they already had about 30,000 engineers. Most of those engineers were trained in computer science at a time when AI was not hot, when machine learning was not hot. Most of them probably did not take a single course in machine learning.

They were writing code the old way, which means they were writing every instruction, every step, every part of the logic. Now you're saying, we want to be AI-first. We want to learn from data: don't write the logic, instead learn the logic from data. This is not something most of them had done before.
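
As a toy illustration of that shift, and emphatically not Google's actual code, the sketch below contrasts hand-written logic with logic learned from labelled examples; the spam-filter framing, the word list and the use of scikit-learn are all assumptions for the example.

```python
# Old way: the programmer writes every instruction and every rule by hand.
def is_spam_rules(message: str) -> bool:
    spam_words = {"winner", "free", "prize"}
    return any(word in message.lower() for word in spam_words)

# AI-first way: the programmer supplies labelled examples, and which words
# matter (and how much) is learned from the data instead of being hand-coded.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = [
    "you are a winner claim your free prize",
    "free money click now",
    "lunch at noon tomorrow?",
    "draft agenda for the board meeting",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

learned = make_pipeline(CountVectorizer(), MultinomialNB()).fit(examples, labels)

message = "claim your free prize now"
print("Rules say spam:", is_spam_rules(message))
print("Learned model says spam:", bool(learned.predict([message])[0]))
```

The point is not the toy model itself but that an engineer trained only in the first style has to learn a different way of working, which is why the retraining effort described next mattered.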

They embarked on massive organizational changes. Of course, when they went AI-first, they did the obvious things: they made big acquisitions, they bought DeepMind and made it their learning centre for AI, they created the Google Brain project. But they also did a few other interesting things.

One of the things they did was start a training programme: training for every person, first online, then in person. Then they announced a programme where engineers could take a six-month break from their product team, spend those six months with the centralized AI team, and be assigned an AI mentor who would work with them, guide them and advise them. Do a project, then go back to your team.

What that did was, earlier you had a centralized team that was trained in AI and exceptional at it, but the knowledge was not diffused. Now the knowledge became diffused, because people went, did the projects, and came back. And that changed how they were coding.

The other thing Google's management did was tell every team: you have to roll out some AI in your product within six months. It doesn't matter what it is, it doesn't matter if the ROI is negative. We want you to do it. What is interesting is, they said, we want you to roll it out in six months, not three years. And they also said, we are not going to measure ROI. You want resources for that, you take the resources, you go do it—six months. But you have to roll something out in six months.

What that did was force each team to do something, and they did. It wasn't focused on ROI and so on. It was focused on changing the organizational DNA. All of a sudden, everyone was comfortable with AI.

These are really interesting lessons in how you do massive technology-enabled business transformation, which is not just about having some big AI strategy and announcing a big AI initiative like a driverless car, but about making sure it seeps throughout the organization.

Almost every company I see that says, we are going to do things in AI, will have two or three big projects. That's it. And they will hire a new team to work on those independently, but it's not changing the organizational DNA. Google has done that really well.

FF: Are there any other examples apart from Google? Or is this kind of detailed execution unique to Google? What about other organizations? How easy will it be for them to replicate what Google did?

KH: If you look at previous technology disruptions—mobile, you mentioned Facebook—they did that well. You look at the Internet: again, so many companies went under, but there are some companies that managed that change well. Walmart, in the US, has survived as an online retailer. It is still struggling to compete with Amazon, but it's still in the race, still a huge retailer, and one of the largest online retailers in the US. And that's because they managed that change well; they invested in that transformation. There are other examples that did that too.

Again, go back to the Internet. When some company said, ‘Okay, we need the internet, we need to have an internet strategy’, they hired a consulting firm, an outside agency, and said, set up an ecommerce store for us. So the agency set up an ecommerce store that looked fine, and then they had an ecommerce operation. That doesn't make your company a digital-native company, because you don't know how digital touches your store. You don't know how to create an omni-channel experience where a customer can order online but pick it up at the store, and do those kinds of things.

Those things happen when you really invest in these organizational changes. Those are the technology-enabled business transformation efforts that I see working well.

FF: A somewhat related question. In the super hilarious speech you gave about your dalliance with Bollywood (it was very funny, I was laughing out loud when I was listening to it), you talk about Pixar and how it has managed to innovate, while other studios get one blockbuster in 12 or so movies. That ratio reminded me of VC firms. VC firms typically get huge returns on one investment; the others aren't that good. Are there any lessons from Pixar for investing, which you also do? Can a VC firm be the Pixar of VC firms? Is that even possible? Or are these two completely different things that we shouldn't even be comparing?

KH: That's again a brilliant question, and I think it's possible. Just to set the context, the idea is that if you look at movie studios, one in 10, at best two in 10, movies are actually hits. The vast majority are flops, and the studios are hoping the hits are such big hits that they make up for all the flops. But if you look at Pixar, every movie has been a blockbuster. I've actually spent some time studying Pixar and interviewed a few people there. One of the directors at Pixar, Lee Unkrich, once said: it's not that we don't fail, it's just that we don't fail by the time the movie comes out. So their whole idea is, they come up with lots of ideas. They have an internal process to figure out which idea goes to the next phase. For the next phase they might develop the idea further into what is known as a treatment, which is a three-to-four pager on the idea. Again, they have internal discussions, voting and so on to figure out which ideas go to the next phase. They kill some ideas, saying, this is not working. Then they develop an idea into a script. Again, some of those they kill there; some go to the next phase. During production, again, some get killed. By the time a movie comes out, only one idea has survived. They make very few movies. In a given year, they may have a maximum of two movies; some years, no movie at all. They're killing the bad ideas early. They're coming up with just as many ideas, but the other thing they're doing is creating the ideas in-house. Most studios today are buying the ideas. They don't have writers in-house; writers come and pitch ideas, and the studios buy a few. Pixar develops the idea in-house, so they can be part of the process of idea development, idea change and so on.

I think those ideas are applicable in the venture world as well. Pixar says, we'll have writers in-house; the equivalent is a model where the founders are in-house. You develop the ideas in-house, figure out how to kill ideas early, see which ones work and then spin them off, and then work on the next several ideas. Kill a lot of the bad ideas early; have the exceptional ideas get through the process. When something is working, spin it off, start working on the next thing, and so on. There are some companies trying this. It's very hard to pull off because it's not natural to the venture world. The venture world is: invest and you are done.

There is a company called Idealab, which is run by Bill Gross. He's been doing it for about 15 years and has a very good track record; some very good companies have come out of Idealab. Their record is also, I would say, mixed: it's not like Pixar's 100% success, but their success rate, I would argue, is pretty good. And their returns are good.

There's one venture studio that I actually work with; I've been working with them for the last few months. It's called Atomic Labs. Atomic is also a venture studio, so the idea there is that, like a studio, they create the ideas internally and then spin them off. It's early days, and we have to see how this will work, but I actually believe there is potential in that approach.

FF: How easy will it be for a large organization to adopt this? Let's say Unilever, or take any of the Indian organizations. Will it work within a large organization's setup, or will culture play such a major role that larger companies cannot adopt that kind of thing?

KH: I think culture plays a major role. It's harder for large organizations to pull this off; the odds are stacked against them. Corporate entrepreneurship in general is very hard, because you could argue that at some level it's an oxymoron. Corporate is about process and about making sure you're not rocking the boat, and entrepreneurship is all about taking gambles, which companies don't like to take. So making corporate entrepreneurship work is hard. Some companies have successfully managed it. But within that, if you're talking about the kind of model I outlined, it's hard. I don't think it's impossible, but it requires a lot of investment in creating the right processes and support for a unit that's in charge of creating the next several big successes for the organization. If you have one big product, or even three big products, but you're looking for the next big hit, it's hard for large organizations to create that. But I think they need corporate entrepreneurship, and an approach like this could help.

FF: On some of the points you mentioned earlier, about self-regulation and making sure companies don't repeat the mistakes the technology industry made with respect to privacy: do you see any leaders on the horizon? Do you see that happening? Or will someone like a privacy-native or AI-native company have to come in and set things right?

KH: I think that in terms of machine learning and AI, and extracting the opportunities from them, it requires a visionary CEO, somebody who can see the future and connect the dots: here's the technology today, here's how it's expected to evolve; this is impossible today, but it can be enabled 10 years from now, so we need to invest now. So that requires a visionary. But on your other point about managing the risks: the visionary is not the one who worries about that, because visionaries break rules to try and get to new places. It's the Tim Cooks of the world who worry about risk management and governance and so on, and they are better at these kinds of things. But one interesting example of where company culture plays a role is Google.

Google's employees are very involved and will campaign and canvass for principles and ethics. So when Google was working with the government, even though, as I've learned, Google's AI work with the US government was not particularly diabolical and was fairly simple, the employees protested, saying that AI can be used by governments in potentially bad, dangerous ways, ranging from AI-enabled weaponry to surveillance of citizens and so on. Google then pulled out and added it to its set of guiding principles that they will not do that kind of work with the government, because employees pushed them.

The other thing Google tried to do, though it has not yet succeeded, was to create an AI ethics board. Unfortunately, what happened was they announced it, then some employees complained about one individual on it, and they had to dissolve it.

FF: One of the things I found in the few speeches of yours that I read is that you highlight the risks; you come across as a sceptic of artificial intelligence and machine learning. At the same time you also say that you are an optimist. Is this ambiguity the ideal approach to dealing with AI? Should we all be sceptical optimists? And where do you personally draw the line?

KH: Actually, I would hopefully not be classified as a sceptic. I'm actually an AI optimist. But I think even optimism should be guarded, because it is when we become overly optimistic that we make foolish choices.

So some amount of, maybe you could call it scepticism, is needed; somewhere in between, as you were saying. I think of myself as being more of an optimist, but not a complete optimist. I think we do need some checks and balances. Mostly, like I was saying, my message is not that we should run away from AI or fear it, but that we should use it carefully, because this is a tool, like many tools we've had in the past, that can backfire. One of the analogies I provide in the book is that running away from AI might be like a caveman running away from fire. Fire has caused a lot of damage, but fire has also been responsible for a lot of human progress, because of our ability to control it. AI is like that to me: if we don't control it, it can create damage, but when we learn to control it, it can create a lot of value.

FF: So how do we prepare ourselves, as individuals, consumers, entrepreneurs and the state, as we get into this brave new world? What should we expect from the state, what should we expect from businesses, and most importantly, what are the skill sets and mental models we should develop to make sure we get whatever benefits technology has to offer without being harmed by the risks?

KH: My answer is going to be somewhat mundane, but I think it's important and under-appreciated: the power we have as individuals comes down to education, our votes and our wallets.

Education is just about awareness; we need to be more aware. Our use of technology is extremely passive. For example, how many people consume content on WhatsApp and Facebook and YouTube without thinking for a second about whether it might be false? It is a unique problem of this century. Earlier, there were editors who were responsible for curating the news. There is no curation today, and there is so much fake stuff being circulated. So education is very important, and we underestimate its value.

Look at Facebook recently announcing that they're going to support encryption of messages and short-lived messages.

It was not regulation that caused it. It was because people complained. That is our impact, when we are informed about these things. Or take Google Duplex, where the system could call a restaurant, make reservations on your behalf, call other companies and so on, but would make the other person believe they were talking to a human being. Some people immediately tweeted saying it is unethical to fool the other person like that, and immediately Google said, okay, apologies, we will now let them know. So I think when we are aware and when we push back, companies are forced to take action.

The same thing with our wallets. At the end of the day, we are buying and using these products, and if something is not agreeable to us, we have to say, I will not cross this line.

Maybe that line is different for different people. Maybe you say privacy is important to me and I feel Facebook is violating my privacy; then you should not use it. I'm not saying it is, but each person should draw a line somewhere.

And votes are the same way. We need to take our vote seriously; it has an impact. And then it comes back to the companies and the government. Companies need to self-regulate. If they don't, then… we're already hearing talk in the US about big tech regulation. In Europe this has been going on: a lot of big fines, and anti-trust analyses of these big companies are going on now. So there's a big cause for concern for these big companies. If they don't self-regulate, they're going to be in deep trouble.

FF: Will that translate into a danger to technology itself? Will the pendulum swing so much that we face an AI winter once again, or stop all self-driving cars? Is that danger also there, when activism from society closes doors?

KH: It is in the realm of possibility. I hope it doesn't happen, because sometimes I see people who I would call misleading and irresponsible critics: people who don't understand the technology deeply and who use highly negative terminology to refer to it, painting technology as uniquely, or exclusively, bad for society.

FF: All our technology-based movies are dystopian. 

KH: They are all dystopian. But now it's not just science fiction, right? It's media reporting that talks about oppressive algorithms, destructive algorithms and so on. Yes, there are problems, and that's why in the book I tried to take an in-between stand and talk about the positives and the negatives. But I think there is that risk. It is in the realm of possibility that there could be a severe backlash. I hope it won't happen. I do think we are already getting to a point where technology is the new Wall Street, in the sense that there's a negative view of finance and Wall Street, whereas technology was always, ‘Oh, there's this visionary entrepreneur in a garage who is an innovative inventor.’ It was positive words: ‘Do no evil’ and those kinds of things. But now that's shifting.

Also Read

‘As machines become more intelligent, they also become unpredictable’
In 2016, Go world champion Lee Sedol lost to AlphaGo, Google’s Go-playing AI program. How did the program do this? And why does this feat matter? Kartik Hosanagar answers that in this extract from his book, ‘A Human’s Guide to Machine Intelligence’

Virtuoso

Virtuoso features conversations with a cross-section of veteran entrepreneurs, business strategists and thought leaders from India and abroad



About the author

N S Ramnath

Senior Editor

Founding Fuel

NS Ramnath is a member of the founding team & Lead - Newsroom Innovation at Founding Fuel, and co-author of the book, The Aadhaar Effect. His main interests lie in technology, business, society, and how they interact and influence each other. He writes a regular column on disruptive technologies, and takes regular stock of key news and perspectives from across the world. 

Ram, as everybody calls him, experiments with newer story-telling formats, tailored for the smartphone and social media as well, the outcomes of which he shares with everybody on the team. It then becomes part of a knowledge repository at Founding Fuel and is continuously used to implement and experiment with content formats across all platforms. 

He is also involved with data analysis and visualisation at a startup, How India Lives.

Prior to Founding Fuel, Ramnath was with Forbes India and Economic Times as a business journalist. He has also written for The Hindu, Quartz and Scroll. He has degrees in economics and financial management from Sri Sathya Sai Institute of Higher Learning.

He tweets at @rmnth and spends his spare time reading on philosophy.
