You can't escape coordination costs by throwing more AI agents at a problem

chatbotkit.com · _pdp_ · 18 hours ago · opinion
Coordination Has Limits

You can't escape coordination costs by throwing more agents at a problem. The same mathematical walls that limit distributed systems apply to AI, and the constants are actually worse.

Petko D. Petkov · on a break from CISO duties, building cbk.ai · Thu, Mar 12, 2026

There's a belief floating around that if one AI agent is good, ten must be better, and a hundred must be extraordinary. Just keep adding agents until the problem solves itself. The math disagrees.

Distributed systems have known about this for decades. Amdahl's Law sets a hard ceiling: if even 1% of your work is inherently sequential, you'll never get more than a 100x speedup, no matter how many processors you add. The Universal Scalability Law goes further: when every node needs to stay in sync with every other node, communication overhead grows quadratically, and at some point adding machines makes things slower.

These are not engineering limitations but mathematical results. The FLP impossibility result shows that in an asynchronous system there is no deterministic consensus protocol that tolerates even a single crash failure. The CAP theorem proves you can't have consistency, availability, and partition tolerance simultaneously. The walls are real.

And they apply directly to AI agents. If agent A's next action depends on what agent B decided, you have a sequential dependency. The math doesn't care whether the nodes are people, CPUs, or language models.

For AI agents, the constants are actually worse. Two CPUs can share a memory bus at billions of operations per second. Two LLM agents communicating through natural language are passing around huge, ambiguous, lossy messages. The protocol overhead is enormous. A corrupted message in a distributed database flips a bit; a misunderstanding between agents sends an entire chain of reasoning off track.
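Both ceilings can be made concrete with a few lines of arithmetic. The sketch below uses the standard Amdahl and USL formulas; the serial fraction and the contention/crosstalk coefficients are illustrative values, not measurements from any real agent system.

```python
# Amdahl's Law: speedup(n) = 1 / (s + (1 - s) / n),
# where s is the fraction of work that is inherently sequential.
def amdahl(n, s=0.01):
    return 1 / (s + (1 - s) / n)

# Universal Scalability Law (Gunther): relative throughput at n nodes.
# alpha models contention on shared resources; the beta * n * (n - 1)
# term is the quadratic all-to-all coordination (coherency) cost.
def usl(n, alpha=0.05, beta=0.001):
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

for n in (1, 10, 100, 1000):
    print(f"n={n:5d}  amdahl={amdahl(n):6.1f}x  usl={usl(n):6.2f}")
```

With 1% serial work, `amdahl(1000)` is already about 91x and the limit as n grows is 1/s = 100x. The USL curve is harsher: with even a tiny crosstalk coefficient it peaks (here around n ≈ 31) and then declines, so `usl(1000)` delivers less throughput than a single node. That is the "adding machines makes things slower" regime.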
The designs that work follow the same patterns that beat coordination costs in any distributed system. An orchestrator delegating independent subtasks scales well: tree structure, low coupling. A swarm of agents all reading and writing to the same shared context degrades fast: all-to-all communication, high coupling. The most effective multi-agent setups minimize the surface area where agents need to agree.

This is also why human organizations evolved the way they did. Small teams with clear boundaries outperform large committees. Conway's Law, the observation that system structure mirrors communication structure, applies just as much to agent swarms.

You are not going to escape the math by throwing more agents at it. You escape by choosing problems and decompositions where the math is kinder. Decompose. Reduce coupling. Tolerate partial inconsistency. The same strategies that work for distributed systems work here. The hype says scale up. The math says structure better.

AI agents · distributed-systems · coordination
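The coupling gap between the two topologies is easy to quantify: an orchestrator with n workers maintains n channels, while an all-to-all swarm maintains n(n-1)/2. A toy count (the numbers, not the agents, are the point):

```python
def orchestrator_channels(n):
    # Hub-and-spoke: each worker talks only to the orchestrator.
    return n

def swarm_channels(n):
    # All-to-all shared context: every pair of agents must stay in sync.
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"n={n:3d}  orchestrator={orchestrator_channels(n):4d}  "
          f"swarm={swarm_channels(n):5d}")
```

At 100 agents the swarm has 4,950 pairwise channels to keep consistent against the orchestrator's 100, and the ratio grows linearly with n. Minimizing the surface area where agents need to agree is, concretely, moving from the quadratic column to the linear one.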