The future of AI will be decided not by brilliance, but by who operates it
The debate on artificial intelligence is being framed incorrectly.
Will AI destroy jobs? Will it replace humans? Will it upend entire industries?
These questions dominate headlines, conferences, and boardroom conversations. They also miss the point.
New technologies rarely eliminate existing tools or even whole industries. What they do—again and again, with remarkable consistency—is rearrange where value is captured. They turn yesterday’s profit centres into tomorrow’s utilities. They make some players indispensable and others interchangeable. Activity continues; profitability shifts.
AI will not be an exception.
Personal computers became universal, yet most of the value migrated away from companies assembling PCs and toward the layers built above them—operating systems and software. Microsoft became a multi-trillion-dollar company while standalone PC makers faded into low-margin obscurity. PCs did not vanish; they became economically ordinary.
Smartphones followed a similar arc. They spread everywhere. Dozens of manufacturers sold hundreds of millions of devices. But the bulk of industry profits accumulated with Apple—not because Apple made phones, but because it controlled the interface, the ecosystem, and the customer relationship. The phone became a gateway to something larger. The tool persisted; the profit pool concentrated elsewhere.
Tools survive. Profit pools don’t.
There is another reason the dominant narrative around AI is incomplete.
As AI improves, many assume humans will gradually disappear from the loop. In reality, in most high-stakes systems they cannot. AI can generate answers, but it cannot bear responsibility for them. And where consequences matter, someone still must.
That distinction—between intelligence and responsibility—turns out to matter more than it first appears. In some economies it will be incremental. In others, it may be decisive.
Before turning to India, however, it is worth returning to a more basic question: when intelligence becomes abundant, what happens to value?
Survival is not the same as value
The past year has seen sharp corrections across enterprise SaaS stocks.
On the meltdown, Nvidia CEO Jensen Huang said: “It's the most illogical thing in the world. There's this notion that the tool is in decline and being replaced by AI.”
[Nvidia CEO Jensen Huang speaking at the Cisco AI Summit]
That observation is true—but incomplete.
Tools rarely disappear. We still use screwdrivers centuries after their invention, and we will still use enterprise software decades from now. Even in a future where robots repair other robots, something—or someone—will still need tools.
But survival is a low bar.
Whenever a capability becomes abundant, its economics change.
For SaaS, the issue is not survival but value capture. AI agents and intelligent interfaces are inserting a new layer between users and underlying software—one that orchestrates workflows across systems and abstracts complexity away from individual tools.
As users increasingly interact with this intelligence layer, the interface becomes the product. The underlying software becomes replaceable infrastructure. Competitive intensity rises. Renewals are negotiated harder. Pricing power softens. Switching costs fall.
More software will be sold. Less value will be captured per unit.
SaaS will survive. SaaS multiples will not.
Why this matters more for India
The SaaS debate matters only marginally to India. What matters far more is a $250-billion software services industry that employs lakhs of young professionals and anchors the aspirations of engineering graduates as they leave college.
For India, the question is not what happens to SaaS firms elsewhere—but what AI does to Indian IT and to the broader economy it supports.
The reason is structural.
For more than three decades, Indian software services benefited from a particular form of scarcity: skilled technical labour that was significantly cheaper than in developed markets. That scarcity allowed firms to scale rapidly while maintaining margins. Labour arbitrage worked because capability was scarce, switching was costly, and alternatives were limited.
AI changes that equation.
Coding, testing, documentation, integration, maintenance—tasks that once required large, specialised teams—are increasingly assisted or automated. Productivity jumps. Effective capacity expands. And when capacity expands faster than demand, pricing power weakens.
AI is not just an automation technology. It is a scarcity-destroying one
This is not a story about extinction. It is a story about repricing.
For countries already operating near the global productivity frontier, the implications are limited. AI will be used primarily to optimise—to shave costs, accelerate research, and refine already efficient systems.
For a country like India, still grappling with deep inefficiencies across large parts of its economy, the implications are far more consequential.
India does not suffer from excessive automation. It suffers from too little.
This is where India’s AI conversation often goes astray.
The central issue is not whether India can build frontier models or produce world-class AI researchers—it can and will. The more urgent question is whether AI can be deployed quickly and widely enough to raise everyday productivity across millions of workers, small enterprises, and public systems.
A middle-income economy gains more from broad productivity uplift than from technological prestige. The breathless AI news cycle makes us think that we have lost the battle—but in the real world things move far, far slower. Even a decade of aggressive AI adoption across India’s core sectors would not exhaust the country’s productivity gaps. That is the scale of the opportunity—and the backlog.
Indian IT: repricing without extinction
The same forces reshaping global SaaS economics play out even more starkly in India’s software services industry.
AI will not destroy Indian IT. It will reprice it.
For decades, Indian IT firms scaled by assembling large teams to deliver predictable outcomes at lower cost. Utilisation mattered more than differentiation. Execution mattered more than judgment. That model worked because effort was scarce and coordination was expensive.
AI-assisted tools are changing those fundamentals. Smaller teams can now do more. Work that once required weeks of coordinated effort can be delivered in days. Effective capacity expands without proportional increases in headcount.
Lower costs and shorter timelines may increase demand. Enterprises need more software, not less; many projects are abandoned because of cost or time overruns. The demand-supply gap can be bridged if Indian IT majors play it well. But the transition will not be painless—things could get worse before they get better, as the market resets pricing, contracts, and expectations. It’s likely that we’ll see more layoffs in the interim.
However, when capacity expands faster than demand, pricing power weakens. Competition intensifies. Margins compress.
The pattern is familiar. Airlines still fly planes. They simply do so in an environment where fares are relentlessly competed down, utilisation is squeezed, and profitability becomes structurally fragile. The industry remains large, visible, and essential—yet far less rewarding than it once was.
Indian IT risks a similar transition.
Large firms will not disappear. Their client relationships, compliance capability, and institutional memory remain real strengths. But the economics of their core activities will not remain untouched. Clients will push harder on pricing. Renewals will be negotiated more aggressively. Smaller, AI-assisted teams will compete for work that once belonged naturally to incumbents.
Sridhar Vembu, founder of Zoho, has been unusually candid about this shift. Long before AI became fashionable, he argued that SaaS economics were more fragile than they appeared. More recently, in a post on X, he pointed out that AI-assisted code engineering has altered productivity so dramatically that tasks once requiring weeks of senior engineering effort can now be completed in days. That kind of gain does not eliminate software or services. It eliminates the scarcity that once protected margins.
What is now emerging is not just an operational shift, but a valuation problem.
Markets are beginning to sense this unease. The recent reset in Fractal’s IPO expectations is telling—not because Fractal lacks capability, but because as one of the first AI-driven Indian firms to approach public markets, it has become an early test case for how investors price AI-centric services as intelligence becomes more widely accessible.
Indian IT faces the same question confronting SaaS vendors globally: not whether it will remain relevant, but where it will sit in the value chain once intelligence becomes abundant.
Like global SaaS, Indian IT’s challenge is not survival, but its place in a value chain reshaped by abundant intelligence
Answering that question requires a new playbook.
Application before invention
Much of the public imagination around AI is captured by generative systems that write, design, and converse. These are powerful and visible. But for India, the larger gains will come from closed-loop systems embedded directly into workflows—systems designed for specific tasks and outcomes, often quietly and invisibly, but at massive scale.
These systems do not need to be universally intelligent. They need to be reliably useful.
AI does something more fundamental than automate tasks. It compresses the cost of routine skills and expands access to higher-level capabilities.
In many fields, the binding constraint has never been demand, but expertise. There are too few doctors, too few skilled teachers, too few experienced legal professionals. If AI lowers the cost of delivering expertise, demand is likely to expand rather than contract.
History offers familiar precedents. Spreadsheets did not eliminate accounting; they multiplied it. ATMs did not eliminate bank tellers; they changed what tellers did. Cheaper computation did not reduce economic activity; it created entirely new industries.
AI is likely to follow a similar path. Some roles will shrink. Others will expand. The total volume of work requiring human oversight, judgment, and interaction is likely to grow.
For labour-rich economies, this shift is decisive.
Countries with high baseline productivity will use AI primarily to optimise. In economies where productivity per worker remains low and expertise is scarce relative to need, AI can act as a multiplier—raising the capability of millions at once.
India sits squarely in the second category.
For India, the central AI question is not whether machines will replace humans. It is whether AI can make its humans dramatically more capable.
Humans remain in the loop
Much of the AI discourse assumes a steady removal of humans from decision-making. This misunderstands a basic institutional reality: AI systems do not possess agency.
They cannot bear legal responsibility. They cannot sign contracts. They cannot be sued.
Modern AI systems are powerful pattern recognisers. They learn from vast amounts of historical data and perform impressively when situations resemble what they have seen before. But when they encounter genuinely unfamiliar situations—edge cases, anomalies, or novel combinations—they do not know what to do. They generate probabilistic responses based on past patterns, not judgment shaped by experience.
In low-stakes contexts, this limitation is manageable. In high-stakes environments, it is not.
In medicine, a misdiagnosis can be fatal. In finance, a flawed decision can destroy capital. In infrastructure or governance, errors can cascade into systemic failure.
AI can generate answers. It cannot bear responsibility for them
AI can achieve high statistical accuracy. What it cannot possess is agency. Even a system that is right 98% of the time still fails 2% of the time—and in high-stakes systems, that residual uncertainty is precisely where risk sits.
This is not a temporary safeguard until AI improves. It is a structural feature of how critical systems operate.
AI will draft legal briefs, but lawyers will sign them. It will assist diagnoses, but doctors will own treatment decisions. It will recommend trades, but portfolio managers will carry responsibility.
The human remains in the loop—not because AI is weak, but because accountability cannot be automated.
For Indian IT firms, this distinction is consequential. Productivity gains do not destroy demand; they destroy the ability to charge a premium for routine work. Firms must move up the value chain—towards work that combines judgment, accountability, and system-level integration.
At the same time, the work that makes systems reliable—deployment, testing, edge-case resolution, continuous support—does not disappear. As software becomes cheaper and more ubiquitous, demand for dependable human-in-the-loop operations grows.
That is not residual labour. It is an operating layer.
What is true for firms can also be true for countries. That’s where India’s opportunity lies.
India does not need to win the race to build intelligence. It needs to win the race to operate it
India’s playbook: building capacity, not chasing the frontier
Much of today’s global AI race is about pushing models from 90% accuracy to 92%, from 92% to 94%, from 94% to 98%. Each incremental gain now demands exponentially more computing power, capital, and energy. Data centres consume electricity and water at the scale of small towns. Costs rise steeply to squeeze out the last few percentage points of uncertainty.
That may be a race worth running for those building frontier models. It is not the race India needs to prioritise.
Instead of chasing near-perfect autonomy, systems can be designed to assume imperfection and manage it intelligently: models operating within defined boundaries, delivering high-confidence outputs most of the time, and escalating to humans when uncertainty rises. Systems where responsibility, judgment, and moral choice sit clearly with people—not probabilistic engines.
In practice, 95% machine accuracy with human supervision can be more valuable than 98% machine accuracy with no accountability.
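To make the escalation idea concrete, here is a minimal sketch of a confidence-gated decision loop. The model interface, threshold, and human-review hook are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of confidence-gated escalation (illustrative, not a
# reference implementation). The model proposes; a person decides whenever
# the model's own confidence falls below the agreed threshold.

from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class Decision:
    label: str          # proposed or final outcome
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    decided_by: str     # "machine" or "human"


def decide(case: dict,
           predict: Callable[[dict], Tuple[str, float]],   # hypothetical model call
           ask_human: Callable[[dict, str, float], str],   # hypothetical review queue
           threshold: float = 0.95) -> Decision:
    """Act automatically only above the threshold; escalate everything else."""
    label, confidence = predict(case)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="machine")
    # Below threshold: the model's output is advisory; accountability stays human.
    final_label = ask_human(case, label, confidence)
    return Decision(final_label, confidence, decided_by="human")
```

The design choice worth noting is that the threshold is a policy lever, not a model property: a bank, a hospital, and a court can each set it to match the cost of being wrong.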
This approach plays directly to India’s strengths.
India has built systems like this before. We did not invent biometrics; we built Aadhaar. We did not invent digital payments; we built UPI. In both cases, India took existing global technologies and engineered reliable, low-cost, high-volume systems suited to Indian realities—deployed at scale, in the real world, not just demonstrated in labs.
AI presents a similar moment.
Once AI is designed this way, the race changes. The goal is no longer to build ever-larger models chasing marginal gains. It is to build industry-grade systems that combine frontier intelligence with human oversight and reliable escalation—AI that knows when to act, and when to stop and ask.
India does not need to burn vast capital and energy competing in a global parameter race. It can take the most powerful models available and use them as foundations—distilling, adapting, and constraining them into purpose-built systems that are narrower, cheaper, and designed to default to humans when they encounter unfamiliar territory. Such systems may occasionally slow down. They are far less likely to make expensive, irreversible errors.
Seen this way, frontier-scale models—now running into hundreds of billions of parameters and requiring extraordinary computing and energy infrastructure—are not endpoints, but inputs. Their real value may lie in enabling many smaller, task-specific models that can be deployed widely and operated reliably. Framed this way, India’s choices around compute infrastructure and data centre policy begin to look less like a race to replicate global scale, and more like a question of how intelligence is best distributed, governed, and applied.
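One common route from a frontier model to a narrower, cheaper system is knowledge distillation, where a small task-specific student learns from a larger teacher’s soft predictions. The sketch below assumes PyTorch, a classification task, and logits already computed for both models; the temperature and weighting are illustrative.

```python
# Minimal sketch of a distillation loss (illustrative assumptions: PyTorch,
# a classification task, and logits already computed for teacher and student).

import torch.nn.functional as F

TEMPERATURE = 2.0   # softens the teacher's distribution
ALPHA = 0.5         # balance between imitating the teacher and fitting labels


def distillation_loss(student_logits, teacher_logits, labels):
    """Blend soft-target imitation of the teacher with ordinary supervision."""
    soft_targets = F.softmax(teacher_logits / TEMPERATURE, dim=-1)
    soft_student = F.log_softmax(student_logits / TEMPERATURE, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * TEMPERATURE ** 2
    ce = F.cross_entropy(student_logits, labels)
    return ALPHA * kd + (1 - ALPHA) * ce
```

Distillation is only one route, but it captures the broader point: the expensive general model does its work at training time, while the small, supervised system is what runs in production.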
Designing AI this way also creates a new economic layer. Human supervision, validation, and exception handling become integral to system performance. Large numbers of trained professionals can operate as the reliability layer that turns probabilistic intelligence into accountable outcomes—across domains from diagnostics and legal workflows to compliance, engineering, and public administration. This is not low-end labour arbitrage. It is capability amplification at scale.
As AI spreads, trust—not raw intelligence—becomes the binding constraint on adoption.
The world will not adopt the most intelligent AI. It will adopt the AI it can trust.
That is exactly the kind of race India has won before.
A recent pivot by Indian startup Sarvam captures this shift in real time. Like many young AI firms, it initially set out to compete in the global race to build frontier language models. That ambition has since been set aside. The company has moved instead toward applications that can be deployed immediately—voice interfaces, multilingual systems, and enterprise workflows shaped by Indian realities. Competing at the frontier demands capital, compute, and scale available to only a handful of global players. The larger opportunity lies elsewhere: not in building ever-larger models, but in embedding intelligence into systems that are actually used.
Compounding impact: where AI actually moves the needle
None of this requires dramatic breakthroughs. It requires systematic deployment—placing intelligence where friction is highest and where productivity gains compound fastest.
India does not suffer from too much technology. It suffers from too little usable intelligence embedded in everyday systems.
The judiciary is one example. Tens of millions of cases remain pending across courts. Contracts weaken when enforcement takes years. AI will not replace judges—but it can cluster cases, surface precedents, draft routine orders, and optimise scheduling. Faster justice is not just a legal reform; it is an economic multiplier.
Healthcare is another. India has too few doctors and wide variation in diagnostic quality. AI-assisted triage and decision support can extend the reach and consistency of care, allowing one doctor to supervise far more patients.
Small enterprises face similar constraints. Millions operate with little managerial support. AI copilots trained on Indian regulatory and market contexts can translate rules into actionable guidance and reduce dependence on intermediaries.
Then there is the machinery of government itself. India does not lack schemes or spending. It often lacks continuous visibility into execution. AI systems that track progress, flag delays, and detect leakages across thousands of projects simultaneously could transform state capacity without expanding bureaucracy proportionately.
None of these applications will make global headlines. None require India to build the world’s most powerful models.
India does not suffer from too much automation. It suffers from too little
But together, they represent a broad-based lift in national productivity.
These are not glamorous use cases. They are compounding ones.
India does not need AI that dazzles conference audiences. It needs AI that removes friction from daily life.
A narrow window
None of this is automatic.
The global conversation on AI is still fixated on who will build the most powerful systems. The more important question is who will use them best.
India’s advantage has long rested on labour and cost arbitrage. In an AI-enabled world, that advantage may shift toward capability arbitrage—the ability to combine human judgment with machine intelligence to deliver reliable outcomes at scale.
That transition will not happen by default. It will require rapid adoption, widespread access to usable tools, and ecosystems that allow small firms, independent professionals, and public systems to deploy AI responsibly.
India has been here before—choosing deployment over invention, scale over spectacle. If it makes the same choice again, AI may turn out to be less a threat than one of its largest structural opportunities.
The window to act is narrow. It will not remain open indefinitely.