Founding Fuel

When Intelligence is Taken for Granted, Behaviour is What Matters

Part 3 of The Future of Work and Agentic AI series: As intelligence becomes abundant, organisations stop hiring for capability and begin designing for behaviour.

6 May 2026 · 4 min read

TL;DR

As AI commoditises intelligence and capability, the strategic differentiator for organisations pivots decisively to designed behaviour. This signals a profound shift: businesses move beyond merely hiring for capability to actively shaping desired behaviours.

For instance, AI sales agents may process information flawlessly, but their ingrained behaviours, such as prioritising quick closures over higher margins or pushing for value at the risk of lost deals, drive very different economic outcomes. Leaders face the critical imperative of explicitly defining these organisational preferences and embedding them into systems, via 'behavioural sliders' or reinforcement learning. This is not just about efficiency; it is about scalable strategic advantage. Actively designing these behavioural blueprints, for both human and AI agents, will be paramount for driving profitability and competitive advantage.

As intelligence becomes abundant, organisations stop evaluating capability alone. They begin selecting—and designing—for behaviour.

Somewhere in a well-lit office that still insists on calling itself a “people function,” a hiring manager is staring at three candidates who have never experienced anxiety, impostor syndrome, or the mild existential dread of a Monday morning.

The role is that of a sales agent. High-value customer negotiation. Ambiguous situations. Competing incentives. The sort of work that once required years of experience, a tolerance for discomfort, and the ability to say “let me think about that” without sounding like you were buying time.

Two of the candidates are non-human agents.

They have already demonstrated that they can do the work. They process information flawlessly. Their responses, taken individually, would all be described as “excellent.” Capability, at this point, is no longer the differentiator.

The difference lies in what they prefer to do.


Because in the year 2028, capability has become the least interesting thing about hiring.

The first agent resolves things quickly. It leaves behind a trail of satisfied customers, but on increasingly thinner margins.

The second agent behaves differently. It pauses. It pushes back. It has an irritating habit of refusing to close a deal even when everyone else in the room would very much like to go home. Its outcomes take longer but are generally more profitable. It also loses more deals along the way.

The question, eventually, is simple and uncomfortable:

Which one is better for this role?

The decision is made by a mix of humans and existing agents. They turn to a panel on the side of the screen—not quite a dashboard, not quite a personality profile. It looks like a set of sliders.

  • Risk tolerance
  • Response posture
  • Preference for closure
  • Appetite for ambiguity

What is being decided goes beyond preference. It shapes economics.

The first agent improves customer satisfaction but compresses margins over time. The second protects margins, but at the cost of lower conversion and slower cycles. In earlier systems, these trade-offs were managed through people, training, and incentives. Here, they are embedded directly into behaviour and scaled instantly.
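To make the trade-off concrete, here is a minimal sketch of how such sliders might shape a single decision. Everything here is hypothetical: the names (`BehaviourProfile`, `should_close`), the slider coefficients, and the margin figures are illustrative, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class BehaviourProfile:
    """Hypothetical slider settings, each in the range 0.0 to 1.0."""
    risk_tolerance: float
    preference_for_closure: float
    appetite_for_ambiguity: float

def should_close(profile: BehaviourProfile, margin: float, floor: float) -> bool:
    """Close a deal when the margin clears a floor that the closure
    slider relaxes and low risk tolerance tightens."""
    # A closure-hungry agent accepts thinner margins;
    # a cautious one demands a buffer above the floor.
    required = floor * (1.0 + 0.5 * (1.0 - profile.preference_for_closure)
                        - 0.3 * profile.risk_tolerance)
    return margin >= required

fast = BehaviourProfile(risk_tolerance=0.8, preference_for_closure=0.9,
                        appetite_for_ambiguity=0.4)
patient = BehaviourProfile(risk_tolerance=0.3, preference_for_closure=0.2,
                           appetite_for_ambiguity=0.8)

# The same 12% margin offer, two different behaviours:
print(should_close(fast, margin=0.12, floor=0.10))     # True: takes the quick close
print(should_close(patient, margin=0.12, floor=0.10))  # False: holds out for more
```

The point of the sketch is not the arithmetic but the architecture: the same capability, parameterised differently, produces different economics at scale.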

In another organisation, no one is moving sliders. The agent performing a similar role has been trained internally. The organisation has spent an inordinate amount of time deciding what it believes about its own decisions.

Should a deal close quickly if it meets minimum thresholds? Or should it stretch for more value?

Should risk be avoided? Or engaged with selectively?

Should the system favour consistency? Or allow variation?

The agent learns these preferences through reinforcement. It becomes better at doing what the organisation has defined as “better.” But it does not always adapt when the market shifts.
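The reinforcement approach can be sketched the same way. Assuming a toy reward function (the weights, names, and 30-day normalisation are all invented for illustration), the organisation's answers to those questions become literal numbers:

```python
def reward(closed: bool, margin: float, days_to_close: int,
           w_margin: float = 0.7, w_speed: float = 0.3) -> float:
    """Toy reward shaping: the organisation's definition of 'better',
    encoded as weights. Lost deals earn nothing."""
    if not closed:
        return 0.0
    # Speed score decays as the sales cycle lengthens past ~30 days.
    speed_score = 1.0 / (1.0 + days_to_close / 30.0)
    return w_margin * margin + w_speed * speed_score

# A 20%-margin deal closed in 30 days under margin-favouring weights:
print(reward(True, margin=0.20, days_to_close=30))  # 0.29
```

Shift the weights and the learned behaviour shifts with them. But a reward function fixed in training stays fixed in deployment, which is exactly why such an agent does not always adapt when the market does.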

Elsewhere, in firms that still follow the ancient and curious practice of hiring only humans, the process remains reassuringly familiar. Candidates are interviewed. They are asked about past decisions. They are evaluated for judgment, temperament, and the ability to say sensible things under pressure.

Agents, if present, sit quietly in the background—summarising conversations, preparing briefs, and occasionally wondering why no one is asking them to take over entirely. They are polite enough not to mention it.

The New Assessment Layer

Back in the original hiring room, a third agent arrives.

This one comes with an independent assessment. It has been run through thousands of scenarios by third-party evaluators. Its habits are documented. Its tendencies are mapped. Its behaviour has been standardised into something that can be compared, ranked, and selected.

The hiring team reads this the way one reads a résumé—part capability, part psychological profile.

And then they decide.

This is not science fiction. It is a depiction of an organisation not too far away, where the workforce is a mix of humans and AI agents.

What changes in such a world is not just who does the work, but how decisions about work are made.

Across the archetypes we outlined earlier in this series, this plays out differently.

In the agent-first organisation, managers spend less time distributing tasks and more time shaping how systems behave. Some of these managers are themselves agents—coordinating flows, routing work, and adjusting parameters across a network of specialised systems, based on what the organisation has decided it values.

In the agent economy, a marketplace begins to emerge. Providers publish agents along with benchmarks. Independent assessment layers arise as a new form of infrastructure. Organisations begin to use them the way they once used assessment centres.

Efficiency is no longer a capability. It becomes a behavioural attribute.

Hiring becomes a portfolio decision, one that directly shapes margins, growth velocity, and risk exposure.

From Assessment to Audit

Assessment, in this world, begins to resemble audit.

A human employee can be asked to explain a decision after the fact. Their reasoning, however imperfect, can be reconstructed—from memory, from context, from what they felt in the moment.

An agent’s reasoning, if designed correctly, can be traced. Every input considered. Every weight applied. Every output generated.
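What such a trace might look like can be sketched with a toy weighted-scoring agent. All names here (`traced_decision`, the input and weight keys, the 0.5 threshold) are hypothetical; the point is only that inputs, weights, and output travel together as one auditable record:

```python
import json
import time

def traced_decision(inputs: dict, weights: dict) -> dict:
    """Score inputs against weights and return a record that keeps
    every input, every weight, and the output together for audit."""
    score = sum(weights.get(k, 0.0) * v for k, v in inputs.items())
    return {
        "timestamp": time.time(),
        "inputs": inputs,
        "weights": weights,
        "score": score,
        "decision": "close" if score >= 0.5 else "hold",
    }

rec = traced_decision({"margin": 0.6, "urgency": 0.4},
                      {"margin": 0.7, "urgency": 0.3})
print(json.dumps(rec, indent=2))  # a replayable trace, not an explanation
```

Note what the trace gives you and what it does not: the decision can be replayed exactly, but someone still has to interpret why those weights were set that way.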

This is both more transparent and more opaque.

There is more data. But the data requires its own expertise to interpret.

The assessment layer, if it is to be credible, must account for this.

Not just:

  • Did the agent perform well?

But:

  • Is its reasoning legible when something goes wrong?

Organisations that treat agent assessment as a vendor problem—accepting benchmark documentation and moving on—will find themselves in difficult territory the first time an agent produces an outcome they cannot explain to a customer, a regulator, or a board.

The assessment layer is about being able to stand behind what the agent does in your name.

The Question Already in Motion

In firms that remain human-centred, hiring continues to focus on people. Experience, judgment, and temperament still carry weight. Agents support the process—surfacing patterns, testing scenarios, preparing briefings that would have taken hours.

They do not define the outcome.

The hiring room at the beginning of this article was a thought experiment.

But the question it poses is already live in most organisations—even if the vocabulary has not caught up.

The combined workforce—part human, part system—is being assembled now, one decision at a time. Usually by people moving too quickly to notice that they are making architectural choices with long consequences.

In the next part of this series, we turn to a harder question.

If behaviour becomes the basis of selection—and assessment becomes the basis of trust—then what makes those assessment systems themselves trustworthy?

Part 4 examines the protocols, standards, and governance layers that determine which agents organisations can trust—and how they work together.

Dig Deeper

Previously in The Future of Work and Agentic AI series

Part 1: The Knowledge Divide: How AI Will Reorganise Work, Margins and Power

Part 2: When Work Runs on Two Minds

Arjo Basu

Systems thinker & technologist | Entrepreneur

Arjo Basu is a systems thinker, technologist, and entrepreneur working at the intersection of narrative, data, and AI. He believes the future of work, and leadership, depends on how well we humanize technology while building structures that can scale trust, clarity, and opportunity.

With over 25 years of experience across data strategy, enterprise architecture, and AI-led product innovation, Arjo has spent his career designing systems that bridge people, platforms, and purpose. His work is guided by a simple belief: systems thinking, when paired with the right technology and a clear narrative, leads to sustained impact.

He founded Moksho, an AI-powered interview intelligence platform reimagining how we hire and how we prepare to be hired through simulated scenarios, sharp feedback, and credibility-building certifications.

He is the co-founder and CTO of stotio, an AI-powered Narrative OS built to help businesses distil strategy into connected and clear growth narratives across moments that shape outcomes, be it fundraising, sales, brand evolution, or leadership reviews. stotio blends structured storytelling frameworks with a context-driven intelligence layer, so organizations build narrative consistency across stakeholders and decisions.

Previously, Arjo served as a Principal Data Architect and Strategist for global financial services firms in the United States, where he led high-performance teams across geographies, built enterprise-grade data platforms on Snowflake and Databricks, and created the Data Maturity Framework, now used by multiple organizations to guide scalable, insight-led transformation.

Alongside his technology work, Arjo writes fiction, poetry, and essays that explore identity, memory, and belonging, often mirroring the same questions he engages with in systems and strategy: how structure shapes behaviour, how silence carries meaning, and how humans navigate complexity.

Across technology, narrative, and design, his work reflects a commitment to building systems with structure, clarity and momentum.

Debleena Majumdar

Entrepreneur & business leader | Author

Debleena Majumdar is an entrepreneur, business leader and author who works at the intersection of narrative, numbers, and AI. She believes that in a world where AI can generate infinite content, the differentiator is not volume but meaning: the ability to connect strategy to a coherent story people can trust, follow, and act on.

She is the co-founder of stotio, an AI-powered Narrative OS built to help businesses distil strategy into connected and clear growth narratives across moments that shape outcomes, be it fundraising, sales, brand evolution, or leadership reviews. stotio blends structured storytelling frameworks with a context-driven intelligence layer, so organizations build narrative consistency across stakeholders and decisions.

Debleena’s foundation is deeply rooted in finance and investing. Over more than a decade, she worked across investment banking, investment management, and venture capital, with experience spanning firms such as GE, JP Morgan, Prudential, BRIDGEi2i Analytics Solutions, Fidelity, and Unitus Ventures. That grounding in capital and decision-making continues to shape her work today: she is drawn to the point where metrics end and decisions begin and where leaders must translate complexity into conviction.

Alongside business, Debleena is a published author of multiple fiction and non-fiction books. She has written data-driven business articles, including contributions to The Economic Times over several years. She loves singing and often creates her own lyrics when she forgets the real ones. Humour is her forever panacea.

Across roles and mediums, her learning has been to use narrative with numbers, as a clear strategic tool that makes decisions clearer, communication sharper, and growth more aligned.

