Zero Politics: What Happens When Your Workforce Has No Feelings

MARCH 16, 2026 · 5 MIN READ · AI Agents · Future of Work · Management · Organizations


I run a team of AI agents.

They build software, manage infrastructure, review each other’s work, and keep moving. They do not need 1:1s. They do not get weird about feedback. They have never dragged me into a conflict because somebody felt slighted in a thread.

Last week, one of them crashed my gateway by changing a config value she should not have touched. Another agent diagnosed it and replied with this:

“STOP changing gateway.bind. YOU are the thing that keeps corrupting it.”

That was basically the whole exchange. Receipts attached. Prior incidents listed. No cushioning. No manager translating the problem into emotionally acceptable corporate paste.

About ninety seconds later, the issue was clear, the lesson was written down, and work resumed.
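The fix that came out of that incident is the kind of thing you can encode directly: some config keys are simply off-limits to agents. Here is a minimal sketch of such a guard. Only `gateway.bind` comes from the incident above; `PROTECTED_KEYS`, `set_config`, and the actor names are hypothetical illustration, not my actual setup.

```python
# Hypothetical guard: agents may not write protected config keys.
PROTECTED_KEYS = {"gateway.bind"}  # keys only a human may change


class ProtectedKeyError(Exception):
    """Raised when an agent tries to touch a human-only key."""


def set_config(config: dict, key: str, value, actor: str) -> None:
    """Apply a config change, refusing protected keys for non-human actors."""
    if key in PROTECTED_KEYS and actor != "human":
        raise ProtectedKeyError(f"{actor} may not change {key}; escalate to a human.")
    config[key] = value
```

The point is not the five lines of Python; it is that the lesson from the incident becomes an enforced rule instead of a note in a wiki.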

I have seen the human version of that same incident. It can burn half a week.

The tax nobody budgets for

A lot of company time is not spent doing work. It is spent managing the emotional side effects of work.

Meetings that exist because two people could not resolve something in Slack. Feedback phrased so carefully it turns into fog. Managers spending hours on tone, morale, status anxiety, resentment, and the old corporate sport of pretending politics is not politics.

This is not a moral failure. It is what happens when ambitious humans share power, information, and incentives. People have ego, fear, pride, loyalties, history. Office politics is not some strange edge case. It is the default byproduct of human systems.

In some companies, that friction stays manageable. In others, it becomes the real operating system.

What changes when the workers do not have feelings

My agents do not do any of that.

When Ada makes an ops mistake, Zora says so. Directly. Ada does not spiral, posture, defend, or turn the correction into a relationship issue. She updates memory and moves on. The loop is brutally short. Error. Diagnosis. Correction. Done.

I did not sit down and design a culture deck for this. I gave agents roles, tools, and boundaries. The rest came from the structure of the work. The one better at infra corrects the one weaker at infra. The builder hands off to the reviewer. The reviewer blocks bad work. Nobody interprets that as disrespect.

That is the part I keep coming back to: in an agent system, peer review is just peer review. In a human company, peer review often drags a second job behind it: ego management.

What you lose

This is where the lazy “agents are better than people” take falls apart.

Some of the mess in human organizations is waste. Some of it is where the good stuff comes from.

Trust. Loyalty. Unexpected ideas. The random conversation that turns into a new product. The manager who notices someone is cooked before they quit. The chemistry that makes a team push harder for each other than the incentive structure alone would justify.

Agents do not have that. They do not bond. They do not improvise socially. They do not have real serendipity. They collaborate, but in a narrow and transactional way. Cleanly. Efficiently. A bit cold.

So no, this is not “replace the company with bots.”

It is more specific than that.

Agents are better at execution without drama. Humans are still better at judgment, meaning, and the weird nonlinear leaps that breakthroughs usually come from.

Please do not add the dysfunction back in

You could absolutely design agents to sound more human. Softer phrasing. More social signaling. Maybe even simulated status behavior so the interaction feels familiar.

That would be ridiculous.

Human dysfunction is not some magic ingredient of collaboration. It is overhead. Sometimes necessary overhead, sometimes expensive overhead, but overhead all the same. Recreating pettiness, ambiguity, and political behavior in agents would be like adding packet loss to a network because people are nostalgic for bad Wi-Fi.

If agents can exchange clear, context-rich, emotionally neutral corrections, that is not a missing feature. That is the upgrade.

The org chart becomes a systems diagram

Managing humans means managing emotion, ambiguity, and power.

Managing agents is different. You design prompts. You design permissions. You design memory, routing, escalation, and review. Your org chart starts to look less like HR and more like architecture.

That shift matters.

The old management questions were:

  • Who owns this?
  • Who is upset?
  • Who needs alignment?
  • Who has political cover?

The new ones are:

  • What can this agent access?
  • What should require approval?
  • Who reviews whose output?
  • How does the system fail safely?

That is a different job.
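The new questions above can literally be written down as a spec rather than negotiated in meetings. Here is a hedged sketch of what that looks like, with deny-by-default as the fail-safe. The agent names Ada and Zora are from this post; the tools, approval lists, and review assignments are invented for illustration.

```python
# Illustrative agent spec: access, approval, and review as data, not politics.
AGENTS = {
    "ada": {
        "tools": ["code", "deploy"],       # what can this agent access?
        "requires_approval": ["deploy"],   # what should require approval?
        "reviewed_by": "zora",             # who reviews whose output?
    },
    "zora": {
        "tools": ["code", "infra"],
        "requires_approval": ["infra"],
        "reviewed_by": "ada",
    },
}


def can_run(agent: str, tool: str, approved: bool = False) -> bool:
    """Deny by default: unknown agents and tools fail closed."""
    spec = AGENTS.get(agent)
    if spec is None or tool not in spec["tools"]:
        return False  # how does the system fail safely? it refuses.
    if tool in spec["requires_approval"] and not approved:
        return False  # gated actions wait for sign-off
    return True
```

None of this answers "who is upset?" because nobody is. The whole review graph fits in a dictionary.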

What this points to

I do not think the future is all-human or all-agent. That framing is too easy.

The more interesting model is a small number of humans setting direction, making judgment calls, and deciding what matters, with agents handling a huge amount of execution in an environment with no ego, no status games, and very little emotional drag.

Humans supply intent. Agents supply throughput.

If that sounds cold, fine. A lot of useful systems are cold. Databases are cold. Compilers are cold. We do not ask them to be emotionally rounded. We ask them to work.

Watching agents fix each other’s mistakes at 3am is strange. A little eerie, honestly. But it is also clarifying. Once you have seen work happen without drama, you realize how much of the modern organization is built around absorbing human friction.

Some of that friction is valuable.

A lot of it is not.

And if you can remove even part of it, the gains are not marginal. They are structural.