
When Trust Becomes Infrastructure
Part 4 of The Future of Work and Agentic AI series: As agents begin working together, trust shifts from intelligence to standards, governance, and control.


TL;DR

In the last article, we were in the middle of a hiring decision.
Agents and humans, evaluated not just for what they could do, but for how they behaved. Assessment systems—internal and independent—stepping in to make those differences legible.
But that raises a more fundamental question:
If organisations begin to rely on these systems to decide what “better” looks like, how do those systems themselves become trustworthy?
Standardisation as Infrastructure
As agents and humans begin to share workspace, roles, and goals, most organisations start to resemble portfolios: shifting collections of human and machine capability assembled around the work, rather than fixed hierarchies of roles.
A more subtle shift follows when multiple agents from different providers operate within the same firm—and often within the same team. Each arrives with its own operating logic.
The organisation is forced to answer two questions:
- How do external agents plug into internal systems and data?
- How do agents from multiple vendors collaborate as part of one team?
The first is, at one level, a plumbing problem.
An external agent does not know where anything is. It does not know how customer records are organised, where pricing approvals sit, or what the internal shorthand for a “priority account” actually means.
Every firm carries years of accumulated data that makes perfect sense to those who built it—and very little sense to anything arriving from outside.
In the early phase of the AI era, this was solved manually. Someone built connectors. Someone wrote documentation. Someone spent months ensuring that the agent could find what it needed without breaking something else.
This approach does not scale when agents arrive from a marketplace.
A Shared Grammar: MCP
This is where standardisation enters.
A quiet but significant piece of infrastructure has begun to emerge: MCP—the Model Context Protocol.
Introduced by Anthropic in late 2024, MCP defines how AI agents ask an organisation’s systems what is available, and how to use it. It is less like software and more like a shared grammar.
Think of it like a new employee’s first day.
She arrives capable and well-trained, but she does not know where anything is. She does not know how pricing approvals work, who owns which data, or what the organisation means by a “priority account.” Someone has to show her around.
Now imagine that happening every week, across multiple teams, for systems that need to operate immediately.
MCP acts like a standardised induction system for agents. An external agent connects and asks, in a common language: what is available, what can I access, and how should I use it?
The organisation responds in a language the agent already understands. The agent finds what it needs, respects what it cannot access, and begins work.
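The exchange can be sketched concretely. MCP is built on JSON-RPC 2.0, and its discovery call is named `tools/list` in the published specification; the tool definition below, however, is invented purely for illustration.

```python
import json

# A minimal sketch of an MCP-style discovery exchange.
# The "tools/list" method follows the MCP spec; the tool itself is hypothetical.

# The agent asks the organisation's MCP server what is available.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server responds with a catalogue the agent already knows how to read.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "lookup_customer_record",  # hypothetical tool
                "description": "Fetch a customer record by account ID.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"account_id": {"type": "string"}},
                    "required": ["account_id"],
                },
            }
        ]
    },
}

# The agent now knows what exists and how to call it, without bespoke connectors.
for tool in response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```

The point of the shared grammar is that neither side writes custom integration code: the question and the shape of the answer are fixed by the protocol.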
What once took months of integration compresses into days.
However, it is not a complete solution.
What an agent can see is different from what it is allowed to see. Questions of access—who grants permission, how it is audited, and what happens when something goes wrong—remain unresolved in most organisations.
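The gap between visible and permitted can be made concrete with a sketch. Discovery may list a tool, while a separate policy layer decides whether a given agent may actually invoke it. The policy table, agent IDs, and tool names below are all invented for illustration.

```python
# A sketch of "can see" versus "allowed to use": a hypothetical policy layer
# sitting between discovery and invocation. All names here are invented.

POLICY = {
    "lookup_customer_record": {"analyst-agent", "compliance-agent"},
    "approve_pricing_change": {"human-reviewer"},  # never delegated to agents
}

def is_allowed(agent_id: str, tool_name: str) -> bool:
    """Return True only if this agent has been explicitly granted the tool."""
    return agent_id in POLICY.get(tool_name, set())

def invoke(agent_id: str, tool_name: str) -> str:
    if not is_allowed(agent_id, tool_name):
        # Denials should be logged for audit, not silently swallowed.
        return f"DENIED: {agent_id} may not use {tool_name}"
    return f"OK: {agent_id} invoked {tool_name}"

print(invoke("analyst-agent", "lookup_customer_record"))
print(invoke("analyst-agent", "approve_pricing_change"))
```

Who maintains that table, who audits it, and who answers for a wrong entry are exactly the unresolved questions most organisations still face.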
But the direction is clear.
The interface is being standardised.
The question has shifted from:
- Can we connect this agent?
to:
- What should we allow it to touch and under what conditions?
The Harder Problem: Coordination
The second problem is more interesting—and harder to resolve with a protocol.
In a room of human colleagues, coordination emerges through a mix of explicit instruction and implicit understanding. People read the room. They sense ownership. They know when to defer, when to push, and when to step back.
These dynamics are invisible, largely unspoken, and essential.
Consider a loan approval process.
A bank deploys three agents on the same file. One analyses financial history. One checks regulatory compliance. One evaluates risk.
Each is specialised. Each is correct within its domain.
But they do not always agree.
One signals approval.
Another flags regulatory concerns.
A third recommends waiting.
In a human team, this would trigger discussion, escalation, and eventual decision.
In a multi-agent system, the coordination must happen within the system itself.
A First Step: A2A
A standard known as A2A (Agent-to-Agent)—introduced by Google in April 2025 and now stewarded by the Linux Foundation—takes a first step in this direction.
Each agent publishes a capabilities statement:
- What it can do
- What it cannot
- How it prefers to receive tasks
Other agents can discover this, delegate accordingly, and negotiate handoffs.
It is not unlike a new team member introducing themselves—except that the introduction is machine-readable, and the negotiation happens in milliseconds.
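In A2A, that machine-readable introduction is called an Agent Card. The sketch below approximates its shape; the field names follow the published schema loosely, and the specific agent, URL, and skills are invented for illustration.

```python
import json

# A sketch of an A2A-style capabilities statement ("Agent Card").
# Field names approximate the published schema; the agent and skills are invented.
agent_card = {
    "name": "compliance-checker",
    "description": "Checks loan files against regulatory requirements.",
    "url": "https://agents.example.com/compliance",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "regulatory-review",
            "name": "Regulatory review",
            "description": "Flag regulatory concerns on a loan application.",
        }
    ],
}

# Another agent reads this card and decides what it can safely delegate.
skill_ids = [skill["id"] for skill in agent_card["skills"]]
print(json.dumps(skill_ids))
```

Because the card is just structured data, discovery and delegation need no human in between: one agent fetches another's card, matches skills to the task, and hands off.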
Agents pass tasks between each other. They escalate when needed. They signal disagreement explicitly.
The loan is neither approved nor denied multiple times.
It is routed.
Either to the next agent best suited to resolve the issue, or to a human when the question exceeds what any one system should decide.
It is coordination without a meeting.
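The routing logic above can be sketched in a few lines. The verdicts and the routing rule are invented for illustration; the point is that conflicting signals are routed to a decider, not averaged into a verdict.

```python
# A sketch of routing rather than voting: when specialised agents disagree,
# the file moves on, and conflicting signals escalate to a human.
# The agents, signals, and routing rule are all hypothetical.

from dataclasses import dataclass

@dataclass
class Verdict:
    agent: str
    signal: str  # "approve", "flag", or "wait"

def route(verdicts: list[Verdict]) -> str:
    signals = {v.signal for v in verdicts}
    if signals == {"approve"}:
        return "auto-approve"
    if "flag" in signals:
        # Regulatory concerns always escalate to a human reviewer.
        return "escalate-to-human"
    return "queue-for-follow-up"

verdicts = [
    Verdict("financial-history", "approve"),
    Verdict("compliance", "flag"),
    Verdict("risk", "wait"),
]
print(route(verdicts))  # the disagreement is routed, not averaged
```

Notice what the rule encodes: a deliberate choice about which signals a machine may resolve and which must reach a person. That choice is governance, not engineering.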
What remains unresolved is the question of authority.
A protocol can indicate who spoke. It cannot determine who was right.
In human teams, that question is resolved through hierarchy, trust built over time, or the intervention of someone with enough standing to break the deadlock.
In multi-agent systems, that arbiter must still be defined.
In most organisations today, it remains a human—at least for decisions that matter.
The more interesting design question is not how to remove that human from the loop, but how to make their involvement precise.
To call them in at the moment where judgment is required—and not have them constantly manage what the system could have resolved on its own.
What We Choose to Standardise
The trajectory from the previous article becomes clearer here.
If behaviour becomes the basis of selection, and assessment becomes the basis of trust, then standardisation becomes the basis of scale.
What we standardise shapes what we can compose.
What we assess shapes what we allow.
And what we allow—quietly, one agent at a time—becomes the intelligence the organisation is willing to scale.
The firms that will navigate this transition well are not necessarily those with the most agents or the fastest deployments.
They are those that have thought clearly about what they want to become—and have built the governance to hold that shape as the workforce around them changes.
That question does not remain theoretical for long.
It shows up in economics—cost structures, margins, speed, and risk.
In the final article of this series, we turn to those consequences.
Dig Deeper
Previously in The Future of Work and Agentic AI series
Part 3: When Intelligence Is Taken for Granted, Behaviour Is What Matters
Earlier parts:
Part 1: The Knowledge Divide: How AI Will Reorganise Work, Margins and Power
Part 2: When Work Runs on Two Minds
Next in the series
The final part examines the economic consequences of agentic organisations: margins, coordination costs, organisational design, and the changing structure of work itself.
Arjo Basu
Systems thinker & technologist | Entrepreneur
Arjo Basu is a systems thinker, technologist, and entrepreneur working at the intersection of narrative, data, and AI. He believes the future of work and leadership depends on how well we humanize technology while building structures that can scale trust, clarity, and opportunity.
With over 25 years of experience across data strategy, enterprise architecture, and AI-led product innovation, Arjo has spent his career designing systems that bridge people, platforms, and purpose. His work is guided by a simple belief: systems thinking, when paired with the right technology and a clear narrative, leads to sustained impact.
He founded Moksho, an AI-powered interview intelligence platform reimagining how we hire and how we prepare to be hired through simulated scenarios, sharp feedback, and credibility-building certifications.
He is the co-founder and CTO of stotio, an AI-powered Narrative OS built to help businesses distil strategy into connected, clear growth narratives across the moments that shape outcomes, be it fundraising, sales, brand evolution, or leadership reviews. stotio blends structured storytelling frameworks with a context-driven intelligence layer, so organizations build narrative consistency across stakeholders and decisions.
Previously, Arjo served as a Principal Data Architect and Strategist for global financial services firms in the United States, where he led high-performance teams across geographies, built enterprise-grade data platforms on Snowflake and Databricks, and created the Data Maturity Framework, now used by multiple organizations to guide scalable, insight-led transformation.
Alongside his technology work, Arjo writes fiction, poetry, and essays that explore identity, memory, and belonging, often mirroring the same questions he engages with in systems and strategy: how structure shapes behaviour, how silence carries meaning, and how humans navigate complexity.
Across technology, narrative, and design, his work reflects a commitment to building systems with structure, clarity and momentum.
Debleena Majumdar
Entrepreneur & business leader | Author
Debleena Majumdar is an entrepreneur, business leader and author who works at the intersection of narrative, numbers, and AI. She believes that in a world where AI can generate infinite content, the differentiator is not volume but meaning: the ability to connect strategy to a coherent story people can trust, follow, and act on.
She is the co-founder of stotio, an AI-powered Narrative OS built to help businesses distil strategy into connected, clear growth narratives across the moments that shape outcomes, be it fundraising, sales, brand evolution, or leadership reviews. stotio blends structured storytelling frameworks with a context-driven intelligence layer, so organizations build narrative consistency across stakeholders and decisions.
Debleena’s foundation is deeply rooted in finance and investing. Over more than a decade, she worked across investment banking, investment management, and venture capital, with experience spanning firms such as GE, JP Morgan, Prudential, BRIDGEi2i Analytics Solutions, Fidelity, and Unitus Ventures. That grounding in capital and decision-making continues to shape her work today: she is drawn to the point where metrics end and decisions begin and where leaders must translate complexity into conviction.
Alongside business, Debleena is a published author of multiple fiction and non-fiction books. She has contributed data-driven business articles to publications including The Economic Times over several years. She loves singing and often creates her own lyrics when she forgets the real ones. Humour is her forever panacea.
Across roles and mediums, her learning has been to use narrative with numbers, as a clear strategic tool that makes decisions clearer, communication sharper, and growth more aligned.

Founding Fuel is sustained by readers who value depth, context, and independent thinking.
If this essay helped you think more clearly, you may choose to support our work.


