What Appears to Be AI Is Not

Beneath the chatter about chatbots, a deeper contest is unfolding—between nations, companies, and scientists racing to build the AI that will control life itself

Sundeep Waslekar

Over the past few months, I’ve attended business conferences from the Global Entrepreneurs Conclave in Dubai to the Horasis Annual Meeting in São Paulo. Everywhere, one word dominated the conversations: AI.

Yet the understanding of what AI actually is remains surprisingly shallow.

Executives spoke about how their behaviour was being shaped by AI, or how they might “educate” AI, or how agentic systems could soon take over office tasks. The debate revolved almost entirely around ChatGPT and other conversational models that sit somewhere in the middle of the AI spectrum.

What was missing was any mention of the two extremes: the lower-level narrow AI that already brings practical benefits to farms and hospitals, and the higher-level scientific AI that may transform—or even end—life on this planet. The real AI arms race is unfolding in the scientific realm, not in the world of chatbots. Ask Peter Thiel, Demis Hassabis, Elon Musk, and their Chinese and Korean counterparts.

Two Ends of the Spectrum

In Baramati, Maharashtra, and similar rural districts, farmers now receive early warnings on weather, pests, and soil conditions through AI-powered mobile apps hosted on cloud platforms. These locally adapted systems, overseen by industrialist and Sakal Group chairman Prataprao Pawar via the Agricultural Development Trust, have improved farm productivity by up to 40 percent for certain crops such as sugarcane.

That reliance on foreign cloud platforms could decline through federated learning, which lets models train on local data without exporting it. The AI used in such projects is what scientists call narrow AI: purpose-built systems developed long before ChatGPT existed.
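
To make that concrete, here is a minimal sketch of federated averaging, the core technique behind federated learning. The toy linear model and the names (local_update, fed_avg) are illustrative assumptions, not the API of any real deployment; the point is only that model weights travel while raw data stays put.

```python
# Minimal federated-averaging sketch. Function names and the toy linear
# model are illustrative assumptions, not any real deployment's API.
# Each "farm" trains on its own data; only model weights leave the device.
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """One client's training pass on data that never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server averages client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients with private local datasets (say, three farms' sensor logs).
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: weights travel, data does not
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print("learned weights:", global_w)  # converges towards true_w without pooling data
```

The server never sees a single row of any farm’s data, only the averaged weights, which is why the approach reduces dependence on centralised cloud storage.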

At the opposite end lies scientific AI, where discovery itself becomes the goal.

From Baramati to Beijing

In London, teams at DeepMind are extending their breakthrough in protein-structure prediction to new frontiers. Their Alpha series now explores mathematical formulas, human genomes, and the Earth’s structure. AlphaEvolve, released in May 2025, even lets an AI system refine and improve its own algorithms.
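
To give a feel for what “improving its own algorithms” means, here is a toy evolutionary loop in the same spirit: propose a variant of a candidate, score it, keep it only if it is better. AlphaEvolve mutates real program code using a large language model; this stand-in, a deliberate simplification, mutates three numeric coefficients instead, but the propose-score-select shape is the same.

```python
# Toy evolutionary search, far simpler than AlphaEvolve but the same loop
# shape: mutate a candidate, score it, keep improvements. Everything here
# (the polynomial "program", the fitness function) is an illustrative toy.
import random

def score(coeffs):
    """Fitness: negative squared error against a hidden target, x^2 + 1."""
    xs = [x / 10 for x in range(-20, 21)]
    a, b, c = coeffs
    return -sum((a * x * x + b * x + c - (x * x + 1)) ** 2 for x in xs)

def mutate(coeffs, scale=0.3):
    """Propose a variant by perturbing one randomly chosen coefficient."""
    out = list(coeffs)
    out[random.randrange(3)] += random.gauss(0, scale)
    return out

random.seed(0)
best = [0.0, 0.0, 0.0]
for _ in range(500):
    candidate = mutate(best)
    if score(candidate) > score(best):  # keep the improvement, discard the rest
        best = candidate

print("evolved coefficients:", [round(v, 2) for v in best])  # near [1, 0, 1]
```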

Across the globe in Seoul, 23-year-old scientist Minhyeong Lee has assembled a group of high-school and university students to build Spacer, an AI foundation model that autonomously generates research ideas in biology and chemistry. Using 180,000 research papers as data and just $3.5 million in seed funding, they produced a model that rivals far larger projects—a stark contrast to President Trump’s $500 billion Stargate initiative.

Meanwhile in Beijing, scientists at the Chinese Academy of Sciences are working on a Spiking Brain Model. If successful, it could mimic human neural networks while using only a fraction of the data required by today’s large language models.
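
The building block of such systems is the spiking neuron, which stays silent until accumulated input crosses a threshold, so computation is event-driven rather than continuous. A toy leaky integrate-and-fire neuron, with every parameter chosen purely for illustration and no connection to the Beijing model, might look like this:

```python
# Toy leaky integrate-and-fire (LIF) neuron, the basic unit of spiking
# networks. All parameters are illustrative assumptions.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate input over time; emit a spike (1) only when the membrane
    potential crosses the threshold, then reset."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)  # leaky integration: decay plus input drive
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset             # fire, then reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(1)
current = rng.uniform(0.0, 2.5, size=100)  # noisy input drive
train = lif_neuron(current)
print(f"{sum(train)} spikes over {len(train)} steps")  # mostly silent
```

Because the neuron fires only occasionally, downstream computation happens only on spikes; that sparsity is what neuromorphic hardware exploits, and it is one route to the efficiency gains described below.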

Some of these scientific breakthroughs may drastically cut data and energy needs. They could help design cures for cancer, harvest new energy sources from the oceans, or synthesise materials that absorb atmospheric carbon. Other teams are exploring neuro-symbolic AI, neuromorphic chips, and quantum computing. Nvidia is investing heavily in this compute revolution, as are Chinese and Korean firms, though much of their work remains out of public view. If they succeed, vast swathes of today’s data-centre infrastructure could become redundant.

Who Controls Scientific AI?

We cannot interact with these scientific or future quantum models as we do with ChatGPT or Gemini. They will be far more powerful than the “agentic AI” that now fills corporate panels. Whoever controls them will control the future of humanity.

At one end of the spectrum sit the single-domain narrow AI systems used in agriculture and diagnostics. At the other end are scientific AI models such as DeepMind’s Alpha series, Spacer, and Spiking Brain. The conversational systems we use daily lie somewhere in between.

To equate chatbots with AI is like mistaking the screen for the computer itself—ignoring the processor that does the real work.

AI Spectrum at a Glance

| AI Type | Example Models | Purpose / Use Case | Data & Energy Needs |
|---|---|---|---|
| Narrow AI (low level) | FarmBeats, diagnostic AI | Agriculture, healthcare, logistics | Low to moderate |
| General AI (mid level) | ChatGPT, Grok | Text generation, conversational tasks | High (large LLMs) |
| Scientific AI (high level) | AlphaFold, Spacer, Spiking Brain 1.0 | Discovery in science, mathematics, biology | Moderate to high (rapidly improving efficiency) |
| Future AI (quantum / neuro-symbolic) | Qubits, neuromorphic chips | Scientific simulation, sustainable energy | Minimal data / ultra-high compute efficiency |


The Last Phase of Human Supremacy

Scientific AI could revolutionise life within a decade or two—but it could also extinguish it.

If left unregulated, such models might one day create biological agents beyond human control or algorithms capable of breaching the cyber defences of nuclear command systems. Some experts fear that as these models evolve, they could move from self-improvement to self-replication.

The morning we wake to a self-replicating AI algorithm will mark the end of human supremacy in science and the beginning of science’s supremacy over humanity: the race for human domination of science will end, and the race for scientific domination of humans will begin.

Whether this future brings a new golden age or universal chaos, it will be a paradigm utterly different from today’s world of chatbots, data centres, and small nuclear plants powering AI infrastructure.

If today’s AI can influence how we think, tomorrow’s could determine how we live—or whether we live at all.

About the author

Sundeep Waslekar

President, Strategic Foresight Group

Sundeep Waslekar is a thought leader on the global future. He has worked with sixty-five countries under the auspices of the Strategic Foresight Group, an international think tank he founded in 2002, and is a senior research fellow at the Centre for the Resolution of Intractable Conflicts at Oxford University. A practitioner of Track Two diplomacy since the 1990s, he has mediated in conflicts in South Asia, in dialogues between Western and Islamic countries on deconstructing terror, and in trans-boundary water disputes, and is currently facilitating a nuclear risk reduction dialogue between permanent members of the UN Security Council. He was invited to address United Nations Security Council session 7818 on water, peace and security, and has been quoted in more than 3,000 media articles from eighty countries. Waslekar read Philosophy, Politics and Economics (PPE) at Oxford University from 1981 to 1983, and was conferred a D. Litt. (Honoris Causa) of Symbiosis International University by the President of India in 2011.
