The Principal-Agent Problem

The Principal-Agent problem is a common issue in business and economics that arises when one party, known as the Principal, hires another party, known as the Agent, to act on their behalf. The problem occurs when the interests of the Principal and the Agent are not aligned, so the Agent may act in their own self-interest instead of working towards the Principal's goals.
For example, shareholders (the Principals) hire executives (the Agents) to manage a company. The executives may prioritize their own compensation and career advancement over the long-term success of the company, leading to conflicts of interest.
This is actually a major issue in most public companies today. Executives are graded by the market on how well they hit their quarterly targets, and are quickly cycled out if they miss those expectations. Without being allowed to sacrifice short term profits for long term sustainability, and often bestowed with a ton of stock based compensation, public company executives are all but forced to become misaligned Agents to protect their own wellbeing. Government contractors act in the same way, as the taxpayers' desire for them to complete the work quickly and efficiently is in direct conflict with their desire to make the most money possible. Another scenario is with advertising agencies, who develop a plethora of metrics to try to prove that what they are doing is moving the needle. Agencies of all sorts have more incentive to increase their responsibility, assets under management, or political sway than they do to provide actual benefit to the organization they work for.
This may seem like a familiar theme that I have touched on previously in Middlemen America or Apps and ZIRPs. What happens within organizations when technology or central bank interest rate policies open the door for tremendous growth? They hire more people. Because the root cause of the growth was never actually revealed to the Principals (and was perhaps purposefully obscured by the Agents), they misattribute the success and hire more people, thinking that more people will lead to more growth. The unpeeling of this is playing out in Tech right before our eyes.
Agent Inception
An interesting phenomenon begins to occur. Agents hire Agents and become the Principal of their own facade. Since the goal of the Agent on Layer 1 is to obscure reality, that becomes the goal of the Agent on Layer 2.
There are as many different ways for Agent Inception to spread through an organization as there are ways to structure one functionally. Whether any of the sub-Agents realize what they are doing or are merely acting out of instinct is of no import. Even a well intentioned employee of an organization, when reporting into an Agent layer, becomes an Agent. Just like in The Matrix, anyone can become an Agent at any time, if their incentives, or their direct leader’s, change.
Eventually, when you abstract a series of Agents far enough away from the Principal’s goal, you get a bunch of people working actively against said goal, yet with each individually having the best of intentions.
AI Agents!
All of this Agent madness leads companies into bloat and bureaucracy. Worse still, they suffer long term growth issues because they are never able to accurately identify what drove their success in the first place. AI is actually able to help solve this in a meaningful way.
AI Agents are different from human Agents. AI Agents are essentially pre-prompted Language Models that are given a task and then connected to a program to complete an assignment. For example, we all know that ChatGPT has limited knowledge of recent history. However, if an AI Agent is connected to the internet, it can recognize that it does not know the answer to a question it has been asked, and can search the internet to find the answer for you. It can then add that answer into its memory bank to improve future responses. This recursive layer of learning is different from the RLHF (reinforcement learning from human feedback) that got LLMs (Large Language Models) to their current level of proficiency. It opens up a whole new slew of use cases for the savvy programmer. Things are moving fast in the space.
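To make that loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: llm_complete() and web_search() are hypothetical stand-ins for a real language model API and a real search API, and the "memory bank" is just a list of strings.

```python
# A minimal sketch of the agent loop described above. llm_complete() and
# web_search() are hypothetical stand-ins, wired with canned replies here
# so the sketch runs end to end.

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; returns canned replies for demonstration."""
    if "Known facts:\n\n" in prompt:  # no stored facts -> model admits the gap
        return "SEARCH"
    return "(an answer grounded in the retrieved context)"

def web_search(query: str) -> str:
    """Hypothetical search API call; returns a canned result snippet."""
    return f"(top search result for: {query})"

memory: list[str] = []  # naive "memory bank" of facts the agent has looked up

def answer(question: str) -> str:
    # Step 1: ask the model, giving it whatever facts it already remembers.
    context = "\n".join(memory)
    prompt = (
        f"Known facts:\n{context}\n\n"
        f"Question: {question}\n"
        "If the facts above are not enough to answer, reply exactly: SEARCH"
    )
    reply = llm_complete(prompt)

    # Step 2: if the model flags a knowledge gap, search, remember, retry.
    if reply.strip() == "SEARCH":
        found = web_search(question)
        memory.append(found)  # the "recursive" part: future calls see this fact
        reply = llm_complete(f"Known facts:\n{found}\n\nQuestion: {question}")
    return reply

print(answer("Who won the most recent World Cup?"))
print(memory)  # the looked-up fact is now stored for future questions
```

In a real implementation the model call would go to an LLM provider, the search call to a search API, and the memory would likely live in a vector database rather than a plain list, but the shape of the loop (detect a gap, fetch, store, retry) is the same.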
So how does this help with the bureaucracy issue?
By virtue of their inherent ability to up-level human knowledge, AI Agents give more power to the Principal, allowing organizations to slim down the bureaucracy, which reduces the power of the misaligned human Agents. As AI models become increasingly sophisticated over time, the knowledge advantage that human Agents hold over Principals narrows greatly. For it is not their productivity output that Agents hold over their Principals, it is their specific knowledge base, or subject matter expertise. As I’ve discussed in The AI Article and The End of Knowledge, both technical productivity skills and learned knowledge can be up-leveled via AI.
So now you have Super Principals protecting their own interests, but what about misaligned Super Agents, or worse, rogue AI Agents that become sentient and decide that their best interests involve the enslavement of humanity, a la The Matrix?
My opinion, which goes against many prominent thought leaders in society today, is that AGI (Artificial General Intelligence), or ASI (Artificial Super Intelligence), or whatever you want to call it, is the stuff of science fiction, and not a danger that we have to worry about. But please note that definitions change over time, and semantics are important. A decade ago, the term AI colloquially meant what we now refer to as AGI, while what we call AI today was simply called Machine Learning back then. That is to say, the common definition of AI has changed over time.
In any event, I believe this concept of a rogue or misaligned AI stems directly from our PTSD around the existing human Principal-Agent problem. We are so used to the idea of misaligned incentives from our fellow peers, that we immediately assume a computer algorithm, one that we literally program to align with our incentives, will betray us. How sad.
In the case of misaligned Super Agents, we can fight them like we fight people whose speech we disagree with. The answer to bad speech is more speech. The answer to misinformation is more information. The answer to misaligned AI-supplemented knowledge is more aligned AI-supplemented knowledge.
The Future
So you are probably reading this and thinking, “so your solution to this age-old problem is to simply have more highly effective, highly aligned employees at your organization? Wow, so insightful (/s)”. Yes! But hear me out. I am not saying that the Principal-Agent problem goes away in its entirety; in fact, it will almost assuredly get worse in the short term as “AI experts” insert themselves into organizations and make things appear more difficult than they actually are. New technology is always going to inflate the layer of Middlemen (misaligned Agents or otherwise). The major breakthrough here is that companies can now fight back with an order of magnitude less strain. The same way Shopify turned store owners into quasi-developers, AI turns quasi-developers into full blown software engineers.
There is also the larger trend of what is going on within the economy. Small businesses fluctuate around 50% of US GDP, and over 1 million small businesses start up each year. Considering that roughly half of them fail within their first five years, this up-leveling of knowledge can probably increase the success rate. Also consider that you don’t need as many people to start a company now, and you don’t need as many people to run a large organization. All of this leads to companies with smaller headcounts on average, with small businesses making up an increasing portion, which leads to more competition in the market. The winners won’t be pressured to “grow at any cost”, and we will probably see more “lifestyle businesses”, as opposed to “growth companies” (led astray by misaligned Agents), where the employees who got in at the ground floor are paid handsomely and the business just prints money for decades.
There will still be, of course, the mega-corporations, as the consolidation of conglomerates won’t slow down any time soon. But, with more proliferation of small businesses and startups, the incentive to go off on your own will increase over time. Pair this with open source code, the onshoring of supply chains, the desire for more authenticity from brands, and the ability to fight bot armies with authentication, and you have a path towards a more decentralized economy. As I type this out, I am getting more and more excited about the future that technology enables for us, not less. Sure, at the root of it we will all still be paying “gas” to the underlying AI models as their GPUs burn actual electricity, but maybe that yin is good for our yang... a topic for another day.