Don’t be naive, Agentic AI won’t eliminate agency costs
The misalignment between a principal’s goals and an agent’s actions, whether it stems from perverse incentives, poor information or simple opportunism, gives rise to ‘agency costs.’ These costs extend beyond direct losses to include spending on supervision, control and contract design, all intended to narrow the behavioural gap.
In a corporate setting, for example, shareholders (the principals) entrust executives (their agents) to steward their capital. Yet these executives might chase vanity acquisitions, entrench themselves in power or inflate their own compensation rather than maximize shareholder value.
Corporate board oversight and elaborate incentive schemes have evolved to mitigate such tendencies.
But what if the agent is no longer human? Increasingly, tasks once executed by human agents are being delegated to artificial ones—systems powered by advanced machine learning, capable not merely of following commands, but of evaluating inputs and initiating actions autonomously.
This phenomenon has acquired a name: Agentic AI. Few have embraced it as ardently as Salesforce CEO Marc Benioff, who envisions a world in which digital agents are not assistants but quasi-employees: systems that manage customer interactions, initiate procurement processes, adjust workflows and operate enterprise software.
These agents are programmable, tireless and, crucially, far less costly than salaried workers. If Jensen and Meckling were concerned with agents acting in their own self-interest, what happens when the agent lacks interests altogether? Algorithms do not scheme or self-promote.
They do not negotiate bonuses or conceal incompetence. Surely, this should obviate agency costs.
Yet replacing organic agency with synthetic agency does not dissolve the problem; it only reconstitutes it. Where human agents may act in bad faith, artificial agents are susceptible to malfunction, misjudgement and malevolent interference.
In 2023, an airline’s AI-driven pricing engine misread demand signals and began offering long-haul business class seats at economy prices. Thousands of customers seized the opportunity before the system was corrected, costing the airline millions.
In finance, trading bots have misfired spectacularly, executing vast loss-generating trades in minutes, sometimes triggered by spurious data or misinterpreted sentiment scraped off public forums like Reddit.
The problem is not intention but opacity. With human agents, principals have recourse to discourse, judgement and punishment. With AI agents, especially those built on opaque neural networks, the rationale behind a decision is often untraceable even to the engineers who built the model (the ‘black box’ problem).
Yet, market appetite for Agentic AI remains unabated because the financial calculus is compelling. Humans demand salaries, benefits, ergonomic chairs and holidays. They introduce unpredictability via fatigue, emotion or bias. AI agents are low-maintenance, infinitely scalable and ostensibly rational.
They can ingest terabytes of data, execute decisions in microseconds and do so without ever filing a leave request.
They also promise something seductive: uniformity. Human discretion is noisy. One service representative might waive a fee out of sympathy; another might escalate the matter.
AI, by contrast, is consistent, provided its internal logic holds. It offers the kind of docile fidelity to policy that Jensen and Meckling would have deemed utopian.
But AI agents do not understand context: they parse inputs, optimize against objectives and act on learnt correlations. If those inputs are flawed, the objectives poorly specified or the correlations spurious, the results can be calamitous.
Worse, since these systems learn from data rather than being explicitly programmed, their internal reasoning can be both technically correct and operationally catastrophic. Machines can act with rigorous logic and still violate common sense.
Their inability to recognize the limits of their own scope poses an under-theorized form of agency risk.
Security compounds the hazard. A human agent, even if compromised, can only inflict damage locally. An AI agent, if hacked, can propagate harmful decisions across an enterprise in a flash.
It might reorder inventories, unlock restricted files, manipulate pricing models or approve fraudulent transactions. By vesting authority in AI, firms may be unwittingly amplifying systemic risk.
Jensen and Meckling taught us that agency entails a cost. The essence of the principal-agent problem is not merely bad behaviour but structural misalignment—an inevitable by-product of delegation. Agentic AI reveals that alignment, even when theoretically perfect, does not eliminate the burden.
It shifts the problem from motive to mechanism, from self-interest to system design and from watchfulness to interpretability. The notion that Agentic AI will vanquish agency costs is naïve. Agency has not been abolished.
It has mutated. It now resides in lines of code, probabilistic models and behaviours no one fully understands. In delegating judgement to algorithms, we are not escaping the agency dilemma but deepening it.