Are LLMs Just Clever Parrots?

Inside the growing chorus of doubt from AI’s own pioneers

Shishir Prasad

On October 21, OpenAI unveiled Atlas—a web browser that embeds ChatGPT throughout the browsing experience. Within hours, Alphabet’s shares fell 2%, erasing tens of billions in market value. Built on Chromium—the same open-source platform that powers Chrome—Atlas triggered panic in the very house whose foundation it stands on.

Call it the AI boogeyman effect—the mere hint of AI entering a new domain is enough to rattle trillion-dollar companies. And the artillery behind this advance is formidable: NVIDIA plans to invest up to $100 billion in OpenAI as it builds 10 gigawatts of AI data centers. Microsoft, SoftBank, Oracle, and AMD have poured in billions more. OpenAI’s latest share sale values the company at $500 billion.

The very exuberance that sent AI valuations soaring is now drawing scepticism—from inside the tent.

After three years of unbridled euphoria, the champagne is starting to go flat. Yes, large language models can write slick code, churn out spreadsheets, polish presentations, even compose poems. But is that all?

As corporations weigh bets that could reshape their entire operations, the questions are getting sharper—and some of AI’s most celebrated pioneers are offering uncomfortable answers.

When the Architects Turn Critics

The scepticism isn’t coming from Luddites or technophobes. It’s emerging from the architects of the revolution itself.

At CES 2025, Yann LeCun, Meta’s Chief AI Scientist and a Turing Award winner, didn’t mince words: “There’s absolutely no way that autoregressive LLMs, the type we know today, will reach human intelligence. It’s just not going to happen.”

LeCun’s criticism cuts to the core of how these systems work. LLMs, he argues, are trained to predict tokens—discrete units of text. “When you train a system to predict tokens, you can never train it to predict the exact token that will follow,” he said. More fundamentally, “every attempt to get a system to understand the world by predicting videos at a pixel level has failed,” he told audiences at NVIDIA’s GTC 2025.
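To see what "predicting tokens" means in practice, here is a deliberately toy sketch in Python (a simple bigram table, not any real LLM): it learns only which word tends to follow which, and can then generate plausible-looking text without any model of the things the words refer to.

    # A toy illustration, not any specific model: an "autoregressive" predictor
    # that only learns which token tends to come next, with no model of the
    # world those tokens describe.
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count, for each word, how often each other word follows it.
    next_counts = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts.setdefault(prev, {}).setdefault(nxt, 0)
        next_counts[prev][nxt] += 1

    def predict_next(token):
        # Sample the next token in proportion to how often it followed `token`.
        options = next_counts.get(token, {"the": 1})
        words, weights = zip(*options.items())
        return random.choices(words, weights=weights)[0]

    # Generate one token at a time, each choice conditioned only on the past.
    token = "the"
    sequence = [token]
    for _ in range(6):
        token = predict_next(token)
        sequence.append(token)
    print(" ".join(sequence))  # plausible-looking text, zero understanding

Real LLMs are vastly larger and subtler, but LeCun's point is that the training objective is of the same kind: predict the next piece of text, nothing more.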

In LeCun’s view, today’s LLMs will be “largely obsolete within five years,” replaced by systems with genuine world models—able to reason, plan, and understand causality, not merely pattern-match from data.

Two other pioneers have since joined this growing chorus—each with a distinct, but equally troubling, critique.

Richard Sutton, a founding father of reinforcement learning and co-recipient of the 2024 Turing Award, offered a stark assessment on Dwarkesh Patel’s podcast. “Having a goal is the essence of intelligence,” he said. “Something is intelligent if it can achieve goals… You have to have goals or you’re just a behaving system.”

His argument? LLMs are mimetic machines—they imitate rather than act. “The next token is what they should say, what the actions should be. It’s not what the world will give them in response,” Sutton explained. They predict words, not consequences; complete sentences, not objectives. And crucially, they cannot learn from surprise or feedback the way true intelligence does.

A few days later, Andrej Karpathy—a founding member of OpenAI and former head of AI at Tesla—added his own, equally damning critique. Asked when LLMs might function like human interns, he replied: “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking—and it’s just not working.”

His critique of reinforcement learning—the very technique meant to make LLMs smarter—is especially telling. He describes it as “sucking supervision through a straw,” where “you may have gone down the wrong alleys until you arrived at the right solution,” yet every mistaken step still gets rewarded simply because it appeared in a successful trajectory.
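A rough way to picture Karpathy's "straw" complaint: in outcome-based reinforcement learning, the reward arrives only at the end of an attempt, so every intermediate step is credited equally. The toy Python sketch below (hypothetical steps, not anyone's actual training code) makes that smearing of credit explicit.

    # A minimal sketch of the credit-assignment problem Karpathy describes.
    # Illustrative only: hypothetical steps, not anyone's actual training code.
    # With outcome-only rewards, every step in a successful attempt gets the
    # same credit, including the detours that led nowhere.

    trajectory = [
        ("try approach A", False),  # a wrong alley
        ("try approach B", False),  # another wrong alley
        ("try approach C", True),   # the step that actually solved the problem
    ]

    final_reward = 1.0  # the attempt ended in a correct answer

    # Outcome-based credit: one number, smeared across every step.
    for step, was_useful in trajectory:
        credit = final_reward  # same reward whether the step helped or not
        print(f"{step:<16}  useful={was_useful}  credit={credit}")

A human reviewer would instead reflect on the transcript and reinforce only the step that worked; that per-step judgment is what Karpathy says current pipelines lack.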

Humans, by contrast, learn through reflection—analyzing which paths worked and why. Current models, Karpathy notes, lack the hippocampus, amygdala, and other ancient brain structures that underpin memory, emotion, and instinct. “We wouldn’t hire today’s LLMs as interns,” he quipped, “because they’re missing too many cognitive parts.”

A Pattern Emerges

Strip away the technical jargon, and a clear pattern emerges. LLMs are brilliant at compression and mimicry, but they lack the deeper architecture for true intelligence.

They cannot:

  • Form goals or pursue them adaptively (Sutton)
  • Learn continuously from experience (Karpathy)
  • Build causal world models that enable reasoning (LeCun)

Together, their critiques suggest that today’s AI boom rests on clever prediction, not real understanding.

There’s a historical echo here. When Google launched Chrome in 2008, Microsoft’s Internet Explorer seemed unassailable. Yet Chrome triumphed by being faster and built on a better architecture. The question now is whether today’s LLMs are the new Internet Explorer—dominant but doomed by design—awaiting a smarter successor.

The Irony of Scale

The irony runs deep. The debate is now subverting AI’s own conventional wisdom.

For years, the AI community has lived by The Bitter Lesson, Sutton’s famous essay arguing that simple, scalable methods powered by massive computation will always beat approaches based on human insight.

Large language models seemed to prove him right. Just keep scaling—more data, more parameters, more compute—and the machines would keep getting better.

Now Sutton himself has turned sceptic. He argues that LLMs are a dead end precisely because they feed on human-generated text instead of learning from real-world experience.

LeCun adds that scaling is “saturating”—producing diminishing returns. And even OpenAI’s Sam Altman has conceded that the economics are strained: despite charging $200 a month for ChatGPT Pro, the company still loses money on those subscriptions.

What Do We Want AI to Become?

All this leads to an uncomfortable question: What exactly are we trying to build?

Many researchers dream of systems that can reason, plan, and understand the world—something edging toward superintelligence. Their frustration is that LLMs, for all their surface polish, still can’t get there.

Others have issued stark warnings about pursuing exactly those capabilities. Geoffrey Hinton, who won the 2024 Nobel Prize in Physics for his work on neural networks, offered a chilling metaphor: “We are like someone with a cute tiger cub. Unless you’re sure it won’t want to kill you when it grows up, you should worry.”

Last Christmas, Hinton estimated a “10 to 20 percent chance” that AI could wipe out humanity within three decades. He’s joined by more than 800 prominent figures—from scientists to artists like Stephen Fry—who have signed open letters urging restraint.

Maybe that’s why today’s LLMs remain bounded—impressive but limited tools, not runaway intelligences. Their constraints may not be flaws but features: accidental safety valves, buying humanity time to face questions it still doesn’t know how to answer.

The Corporate Reckoning

For companies weighing big bets on AI, these technical debates have immediate, high-stakes consequences.

If LeCun, Sutton, and Karpathy are right, today’s LLMs may already have hit a ceiling—impressive, but ultimately bounded. Companies pouring billions into scaling the same architecture could be chasing a dead end, while the next breakthrough may demand a completely different foundation.

Yet the opposite scenario is just as unsettling. What if the critics are wrong? What if relentless scaling really does produce more autonomous, more capable systems? Then Hinton’s warnings come into play, and the question shifts—from Can we build it? to Should we?

Looking Ahead

AI has survived more than one “winter,” when inflated expectations froze under the weight of reality. Another turning point may be approaching—though this time, the outcome is far from clear.

One path forward is optimistic: the current limits spark innovation, leading to new architectures that can truly reason, plan, and learn—developed responsibly and safely.

Another is bleak: we hit a wall, enthusiasm collapses, or worse, we build systems too powerful for us to control.

And then there’s the ironic middle: LLMs stay useful but limited—transformative for business, yet never truly intelligent.

What’s clear is that a reckoning has begun. The easy gains from scaling are over; the hard questions—about what intelligence truly requires, and what we want to build—are only beginning.

Three years ago, ChatGPT’s release sparked a gold rush. Today, even the prospectors are beginning to wonder if they’ve been digging in the wrong mountain.

The question now isn’t whether LLMs are impressive—they are. It’s whether impressive is enough, or if the next leap demands something altogether different.

Dig Deeper

  • Richard S. Sutton—The Bitter Lesson
    The 2019 essay that shaped modern AI thinking—arguing that scalable, computation-driven methods always outperform approaches built on human knowledge.

  • Yann LeCun—Why Current LLMs Won’t Reach Human Intelligence
    Meta’s Chief AI Scientist explains why today’s autoregressive models can’t reason or understand the world—and why new architectures are needed.

  • Andrej Karpathy—“Sucking Supervision Through a Straw”
    The former OpenAI founding engineer dissects why reinforcement learning struggles to make LLMs truly intelligent—and what’s missing from their design.

About the author

Shishir Prasad
Senior Business Journalist

Shishir is a senior business journalist who has written extensively on technology, business strategy and finance. He loves making complex topics accessible to readers. In his last assignment, he built ET Prime into India’s leading business media subscription platform as its founding editor. Earlier, he worked with Forbes India, Businessworld, The Economic Times, and Business Standard. He has also had stints outside journalism, including at TCS.
