Navigating shades of grey

This Week: A framework for making tough calls, the Patanjali phenomenon, artificial intelligence, LEGO’s platform strategy, and more

Founding Fuel

Dear Friend,

The two new stories this past week explore two related challenges. One, why are polarizing views flying thick and fast in the country's current discourse, and how do leaders shield their teams from the chaos so they stay focused on their mission?

Charles Assisi asked one of the finest minds of our times—a gentleman whose work changes people's lives "at a scale the untrained eye cannot comprehend"—that question. Read Charles's intriguing takeaways from the conversation that followed.

Two, how many times have you come across problems at work that test your humanity, your analytical skills as well as your judgement?

In his column, Subroto Bagchi presented his perspective on some such problems: For instance, should you let go of a star performer who is somewhat dishonest? What if his replacement is honest but not very effective at work? Or let's say you are hungry to grow your business and you land a client whose requirements are a little beyond your current capabilities. Should you take the project?

Your decisions will impact the lives of many people who work with you, and there are often no black-and-white answers. In his recent book, Managing in the Gray, Joseph L Badaracco presents a five-question framework that helps explore the various possibilities and arrive at optimal decisions. Read PepsiCo India's CEO D Shivakumar's summary of the book.

On the theme of decision making, I've picked out another piece from our archives: Dan Ariely's insights on our innermost decision-making foibles, and how they shape our choices in the modern world.

Like the newsletter? Do share it with your friends and ask them to subscribe to it.

Do write in with your comments and feedback. You can reach us on Twitter, Facebook or the comments section on www.foundingfuel.com. We’d love to hear from you.

I wish you a great week ahead.

Sveta Basraon

On behalf of Team Founding Fuel

Featured Articles

Two idiots and three pigs

How do generals win a war? By pacing themselves right even as they use idiots and pigs to protect their soldiers, finds Charles Assisi. (Read Time: 7 mins)

Making tough calls in business and life

In his book ‘Managing in the Gray’, HBS ethics professor Joseph L Badaracco presents a framework to arrive at the best possible decisions in a hard situation. D Shivakumar presents the gist of the book. (Read Time: 6 mins)

From Our Archives

The Patanjali phenomenon

How and why a low-key Ayurvedic brand is stealing a march on established FMCG firms.


Patanjali is in the news again with Baba Ramdev claiming that the group will double its turnover from the existing Rs 10,561 crore within a year.

In this article from a year ago, Indrajit Gupta says, "For the FMCG companies the disruption that Patanjali has wrought is for real. Because it challenges many of the long-held assumptions in the classical FMCG world." (Read Time: 3 mins)

What We Are Reading

How to prepare for an automated future


How do you educate people for an automated world? Can we change education fast enough to outpace the machines? What can workers do now to prepare?

These are some of the questions that a new research study seeks to answer. Pew Research Center and Elon University surveyed 1,408 people who work in technology and education to find out if they think new forms of schooling will emerge in the next decade to successfully train workers for the future. Two-thirds said yes.

What computers can’t do

Professor Hubert L. Dreyfus, whose 1972 book "What Computers Can't Do" became an inspiration to researchers in artificial intelligence, died on April 22. This obituary in The New York Times, published earlier this week, details why he thought artificial intelligence couldn't replicate what the human brain can do.

“Professor Dreyfus argued that the dream of artificial intelligence rested on several flawed assumptions, chief among them the idea that the brain is analogous to computer hardware and the mind to computer software.

“In this view, human beings develop an accurate picture of the world by adding bits of information and rearranging them in a procedure that follows predictable rules…. Inevitably, he said, artificial intelligence ran up against something called the common-knowledge problem: the vast repository of facts and information that ordinary people possess as though by inheritance, and can draw on to make inferences and navigate their way through the world.”

AI’s PR problem

“While it’s true that today’s machines can credibly perform many tasks (playing chess, driving cars) that were once reserved for humans, that doesn’t mean the machines are growing more intelligent and ambitious. It just means they’re doing what we built them to do.”

Platform strategy and walled gardens in the toy industry

How Lego, Hot Wheels, and others create walled gardens to compete for your child’s attention

“When you think toys and interoperability, the first thing that comes to mind is, of course, LEGO…. A few core components form the infrastructure and the other components attach on as complements. Once a kid is entrenched enough with the core infrastructure, the company can keep selling complements to the kid…. you only need one LEGO set to get entrenched with a kid. Every subsequent LEGO set is interoperable in a way that the value of owning multiple LEGO sets, especially those with widely varying themes and scope, scales non-linearly.”
