Why Rational Companies Are Making a Collective Mistake They Cannot Stop
In March 2026, two economists published a working paper with a finding that should stop cold every executive signing a layoff memo. Brett Hemenway Falk of the University of Pennsylvania and Gerry Tsoukalas of Boston University built a mathematical model of what happens when competing companies replace workers with AI. Their conclusion was not that automation is bad. It was that competition itself guarantees companies will automate far beyond the point that is good for any of them, and that knowing this in advance is not enough to stop it.
The paper is called "The AI Layoff Trap." It is available on arXiv and SSRN. It has not yet completed formal peer review. The mathematics, however, are not contested, and the mechanism they describe is already visible in the labor market data.
This is not a story about AI being dangerous. It is a story about market structure — about how competition between rational, fully informed actors can produce collective outcomes that harm everyone involved, including the companies doing the cutting. That distinction matters, because it changes what kind of problem this is and which interventions can actually address it.
The Mechanism
The model is deliberately simple. Several competing firms operate in the same sector. A new technology arrives — the paper uses agentic AI as its example — that can perform workers' tasks at lower cost than human labor. Each firm decides how much of its workforce to replace.
The individual incentive is straightforward: automated tasks cost less, so automating saves money and improves competitive position. What the model formalizes is the feedback loop that makes this rational individual decision collectively destructive.
Workers are also consumers. When their income disappears, their spending disappears with it. That spending loss hits every firm in the sector roughly equally — not just the firm that did the firing. So the company that automates captures 100 percent of its cost savings and absorbs only a fraction of the demand destruction it creates. The rest lands on its competitors. The paper calls this a demand externality, and it has a precise economic meaning: a cost that one actor imposes on others without bearing it themselves.
The result is a Prisoner's Dilemma. If your competitor automates and you don't, you absorb your share of their demand destruction without capturing any cost savings. You're worse off than if you had automated too. So you automate. Everyone does. At equilibrium, every firm in the sector has automated at the same rate, the competitive advantages have canceled out completely, and the only thing that has changed is that more workers have lost income, more consumer demand has been destroyed, and the collective damage is larger than before.
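The payoff asymmetry is easy to see with made-up numbers. The sketch below is not the paper's model; the firm count, savings, and demand-loss figures are arbitrary assumptions chosen only to show the shape of the dilemma.

```python
# A minimal numerical sketch of the payoff structure described above. The
# parameter values (number of firms, savings, demand loss) are illustrative
# assumptions, not figures from the paper.

N_FIRMS = 4       # competing firms in the sector (assumed)
SAVINGS = 30      # cost savings a firm captures if it automates (assumed units)
DEMAND_LOSS = 50  # consumer demand one firm's automation destroys, spread
                  # evenly across every firm in the sector (assumed)

def payoff(my_choice, all_choices):
    """Profit change for one firm: it keeps all of its own cost savings but
    bears only 1/N of the demand destruction caused by every firm combined."""
    shared_loss = DEMAND_LOSS * sum(all_choices) / N_FIRMS
    return SAVINGS * my_choice - shared_loss

# Automating is a dominant strategy: whatever rivals do, cutting pays more.
for rivals_cutting in range(N_FIRMS):
    rivals = (1,) * rivals_cutting + (0,) * (N_FIRMS - 1 - rivals_cutting)
    hold = payoff(0, (0,) + rivals)
    cut = payoff(1, (1,) + rivals)
    print(f"{rivals_cutting} rivals automate: hold {hold:+.1f}, automate {cut:+.1f}")

# Yet the all-automate equilibrium leaves every firm worse off than mutual
# restraint -- the deadweight loss the authors describe.
print("all hold:", payoff(0, (0,) * N_FIRMS), " all automate:", payoff(1, (1,) * N_FIRMS))
```

In this toy setup the dominance holds whenever a firm's private savings exceed its one-Nth share of the demand loss it creates, even though the total loss it creates exceeds the savings. That gap is the externality wedge the paper describes.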
Crucially, the paper assumes full transparency. Every company in the model can see exactly how automation translates to lost income and lost demand. The cliff is visible. The model shows they race toward it anyway, because restraint is not a viable individual strategy. A company that holds back loses ground to competitors who don't.
The paper states this consequence plainly: the resulting surplus loss is not a transfer from workers to shareholders. It is a deadweight loss — economic value that disappears entirely, harming both workers and firm owners. "At the limit," the authors write, "this becomes self-destructive: firms automate their way to boundless productivity and zero demand."
The Red Queen
The intuitive response to this analysis is that better AI should help — that as automation becomes more capable and less costly, the economic disruption will be temporary and the long-run gains will outweigh it. The paper's extension on AI productivity directly addresses this, and the conclusion is the opposite of reassuring.
When AI becomes more capable, each firm perceives a stronger individual incentive to automate faster than its rivals and capture market share. But because every firm is making the same calculation simultaneously, at the symmetric equilibrium these gains cancel. Nobody gets ahead. The only thing that changes is that the automation rate at equilibrium rises and the demand externality widens.
The authors call this the Red Queen effect — a reference to the scene in Through the Looking Glass where the Red Queen tells Alice it takes all the running she can do just to stay in the same place. In evolutionary biology, the term describes arms races where predators get faster, so prey gets faster, so predators get faster again, with neither side gaining ground. Applied here: better AI doesn't resolve the trap. It deepens it. Every improvement in the technology intensifies the race without changing the destination.
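The dynamic can be made concrete with a small extension of the same kind of toy model. Everything below is an illustrative assumption rather than the paper's actual formulation: the competitive gain from out-automating rivals and the demand loss are both assumed to scale with AI capability, and the parameters are arbitrary.

```python
# A toy Red Queen sketch (illustrative assumptions, not the paper's model).
# Each firm's incentive to out-automate rivals scales with AI capability, but
# at the symmetric equilibrium the relative gains cancel and only the higher
# automation rate and wider demand loss remain.

import numpy as np

N = 4      # competing firms (assumed)
S = 1.0    # direct savings per unit of automation (assumed)
K = 3.0    # competitive gain from automating faster than rivals (assumed)
D = 4.0    # demand destroyed per unit of automation, shared across firms (assumed)
C = 1.0    # convex cost of automating quickly (assumed)

def best_response(ai_capability):
    """Each firm's profit-maximizing automation rate, from the first-order
    condition of: capability * (S*a + K*(a - rivals' mean)) - C*a^2 - shared loss."""
    return ai_capability * (S + K - D / N) / (2 * C)

def equilibrium_profit(ai_capability):
    """Per-firm profit at the symmetric equilibrium, where every firm picks the
    same rate and the relative market-share term cancels to zero."""
    a = best_response(ai_capability)
    return ai_capability * S * a - C * a**2 - D * ai_capability * a

for g in np.linspace(0.1, 0.6, 6):
    print(f"capability {g:.1f}: automation rate {best_response(g):.2f}, "
          f"per-firm profit change {equilibrium_profit(g):+.3f}")
```

In this setup every increase in capability raises the equilibrium automation rate, leaves no firm with a relative advantage, and makes each firm's position at equilibrium worse, which is the Red Queen pattern in miniature.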
What Doesn't Work
The paper systematically evaluates every proposed solution that has gained policy traction. The results are clarifying because each failure tells you precisely where the intervention missed.
Universal Basic Income addresses the floor on living standards for displaced workers, but it doesn't alter the per-task calculation a firm makes when deciding whether to automate. Each firm still captures the full cost savings and still bears only a fraction of the demand destruction it creates. The automation incentive is unchanged.
Capital income taxation — taxing the profits of automating firms — operates on profit levels rather than the per-task margin where the externality lives. Scaling the profit function by a constant doesn't change which automation rate maximizes profit. The race continues.
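The argument is a one-line piece of arithmetic. The profit curve below is invented purely for illustration; the only point is that multiplying a profit function by a constant does not move the automation rate that maximizes it.

```python
# Why a flat tax on profits leaves the automation decision unchanged:
# scaling the profit curve by (1 - tax rate) does not move its peak.

import numpy as np

rates = np.linspace(0, 1, 101)            # candidate automation rates
profit = 30 * rates - 40 * rates**2       # an assumed, purely illustrative curve

best_untaxed = rates[np.argmax(profit)]
best_taxed = rates[np.argmax(0.7 * profit)]   # after a 30 percent profit tax
print(best_untaxed, best_taxed)               # identical rates: the race continues
```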
Worker equity participation — giving displaced workers shares in the companies that fire them — narrows the externality wedge somewhat but cannot eliminate it. Workers' spending still falls when their wage income disappears; equity distributions are less liquid and more concentrated than wage income.
Voluntary agreements between competing firms to collectively restrain automation fail because automation is a dominant strategy. Even if every company in a room agrees that collective restraint would benefit everyone, each firm's individually rational move is still to defect. No voluntary agreement is self-enforcing when the incentive to break it survives the handshake.
Coasian bargaining — the economic theory that private negotiation can internalize externalities if property rights are clear — fails here for the same reason voluntary agreements fail. Automation is the dominant strategy regardless, so no negotiated outcome can hold.
The only instrument the paper identifies that actually corrects the distortion is a Pigouvian automation tax: a per-task levy set equal to the uninternalized demand loss each firm imposes on its competitors. This operates directly on the per-task margin where the externality lives, changing the calculation each firm makes about each additional automated task. The revenue can fund worker retraining and upskilling, which raises the reabsorption rate for displaced workers, which shrinks the demand loss, which makes the tax smaller over time.
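Returning to the earlier toy figures gives a feel for how the levy works. The numbers remain illustrative assumptions, not the paper's; the point is that a tax equal to the demand loss a firm imposes on its competitors makes its private per-task margin coincide with the sector-wide one.

```python
# A sketch of the Pigouvian levy using the same illustrative figures as before
# (assumptions, not the paper's numbers). The tax is set equal to the demand
# loss a firm's automation imposes on its competitors.

N_FIRMS = 4
SAVINGS = 30        # private cost savings per automated task (assumed)
DEMAND_LOSS = 50    # total consumer demand destroyed per automated task (assumed)

own_share = DEMAND_LOSS / N_FIRMS              # the loss the firm already bears
externality = DEMAND_LOSS - own_share          # the loss it dumps on competitors
pigouvian_tax = externality                    # per-task levy equal to that loss

private_margin = SAVINGS - own_share                   # what the firm sees today
taxed_margin = SAVINGS - own_share - pigouvian_tax     # what it sees under the levy
social_margin = SAVINGS - DEMAND_LOSS                  # what the whole sector gains

print(f"untaxed per-task margin: {private_margin:+.1f}  (automate)")
print(f"taxed per-task margin:   {taxed_margin:+.1f}  (hold)")
print(f"true sector-wide margin: {social_margin:+.1f}")
```

With the levy in place, the firm's per-task margin matches the sector-wide margin, so it automates a task only when doing so creates more value than the demand it destroys.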
The paper acknowledges that this is politically difficult and that a unilateral national tax risks pushing automation investment offshore. It does not claim the solution is easy. It claims it is the only instrument that addresses the actual distortion — and that everything currently being discussed in policy circles does not.
What the Data Already Shows
The paper's model is formal theory, but the real-world trends are running on the same track.
The scale of displacement is already large and accelerating. The paper cites over 100,000 tech worker layoffs in 2025, with AI cited as the primary driver in more than half the cases, concentrated in customer support, operations, and middle management. Salesforce replaced 4,000 customer support agents with agentic AI. Cognition's Devin, deployed at Goldman Sachs and Infosys, enables one senior engineer to perform work that previously required a team of five. Block CEO Jack Dorsey cut nearly half of the company's 10,000-person workforce in February 2026, stating publicly that AI had made those roles unnecessary and that within a year, most companies would reach the same conclusion.
The boomerang is also already visible, though it is happening quietly. Forrester Research's "Predictions 2026: The Future of Work" report found that 55 percent of employers regret layoffs they attributed to AI. Klarna replaced approximately 700 customer service workers with an AI chatbot, watched satisfaction scores drop and complex interactions overwhelm the system, and began rehiring. Forrester predicts that half of AI-attributed layoffs will be quietly reversed, though not at the same pay rates, and often offshore. The corrections are not announced at press conferences.
The demand externality is also not purely theoretical. Challenger, Gray & Christmas found that AI is now the fifth most commonly cited reason for job cuts in 2026, behind familiar explanations such as market conditions, restructuring, and closures. That means the majority of people losing their jobs are losing them for the reasons companies have always cut headcount, with AI serving as the press release. The paper's mechanism does not depend on whether companies are sincere or cynical about this. The competitive structure produces the same outcome either way.
The Pipeline Problem
The longest-range consequence of the current displacement wave may be the one that is least discussed in quarterly earnings calls.
The National Student Clearinghouse's 2025 Final Fall Enrollment report documented that undergraduate enrollment in computer and information science programs at four-year universities dropped 8.1 percent in the 2025-2026 school year — the steepest single-year decline of any field of study since at least 2020. Within the CS category, the computer science major specifically fell 11.2 percent. Graduate CS enrollment dropped 14 percent. The Computing Research Association found that 62 percent of universities surveyed reported declining computing enrollment. This is not a marginal shift. It is a structural reorientation of where a generation of students is choosing to invest their educational capital.
The shift is going somewhere identifiable. Undergraduate engineering enrollment grew 7.3 percent in the same period. Electrical engineering rose 13.8 percent. Mechanical engineering rose 11.4 percent. Healthcare, skilled trades, and government fields — sectors where physical presence is required and AI substitution is constrained — are absorbing students who are making rational decisions about where stable career paths exist.
The companies announcing the loudest AI-driven headcount reductions right now will need skilled human engineers and developers in five years to maintain, extend, and fix the systems they are currently building. Those engineers are currently choosing mechanical engineering. The pipeline does not refill automatically, and the lead times on workforce development are measured in years, not quarters.
The Trust Deficit
A parallel form of damage is accumulating, one that is harder to quantify but shows up clearly in workforce surveys.
Forrester tracks a category of workers it calls "coasters" — employees who have concluded that their employer does not deserve their discretionary effort and are performing at the minimum required level. That group represented 27 percent of the workforce in 2024, 25 percent in 2025, and is projected to reach 28 percent in 2026. The pattern makes sense. Employees watching colleagues lose jobs to AI systems that aren't yet capable of doing those jobs, seeing entry-level positions eliminated so that new talent cannot enter, and observing offshore arbitrage dressed up as innovation are drawing rational conclusions about where to put their effort.
The compact between employer and employee — the understanding that sustained contribution to a company earns sustained investment from that company in return — is being dismantled at scale, and the workforce is noticing. When the companies currently making the loudest noise about replacing humans with AI decide they need skilled human judgment again, those humans will remember how this period went. Trust at scale, once broken, does not appear as a line item that can be fixed in the next fiscal year.
The Structure of the Problem
The AI Layoff Trap paper is careful to distinguish the kind of failure it is describing. This is not a story about corporate malice. The companies making these decisions are doing what the incentive structure rewards them for doing. Block stock jumped more than 20 percent the day 4,000 people were fired. Microsoft shares fell nearly 4 percent when the company offered a gentler voluntary retirement option. The market is telling companies precisely what to do, and companies are doing it.
The paper's contribution is to show that this outcome — individually rational, collectively damaging, and persistent in the face of full information — is what economists call a market failure. Not a moral failure. Not a failure of foresight. A structural failure that requires structural correction to address.
The workers watching it happen are rational actors too. Students are looking at entry-level tech employment data and choosing mechanical engineering. Experienced developers are moving toward fields with more durable demand signals. People are redirecting their career capital away from industries that have signaled they view human workers primarily as a cost center to be optimized away. That is the same logic companies are applying, running in the opposite direction.
The paper's final note is that its models and extensions consistently point in the same direction: the problem is likely larger than what the formal framework can capture, not smaller. The demand externality is a floor, not a ceiling.
The question is not whether the correction will come. It will. The boomerang is already in motion. The pipeline is already narrowing. The question is whether it comes from deliberate policy that addresses the per-task incentive structure, or from the market correcting itself after enough damage has accumulated to make the problem impossible to ignore. The paper's model is clear on what kind of correction the second option looks like.
Sources: Brett Hemenway Falk and Gerry Tsoukalas, "The AI Layoff Trap," arXiv:2603.20617v1, March 21, 2026 (also available via SSRN, abstract ID 6448898); National Student Clearinghouse, Final Fall Enrollment Trends 2025, February 2026; Computing Research Association, CERP Pulse Survey 2025; Forrester Research, "Predictions 2026: The Future of Work"; Gartner, February 2026 workforce report; Challenger, Gray & Christmas, 2026 layoff data; Built In, "Computer Science Degrees Are Losing Popularity in the AI Era," March 12, 2026; BusinessToday coverage of the Falk/Tsoukalas paper, April 2026; Olive Badger on YouTube
Jonathan Brown (A.A.Sc., B.Sc.) writes about cybersecurity infrastructure, privacy systems, the politics of AI development, and many other topics at bordercybergroup.com and aetheriumarcana.org. Border Cyber Group maintains a cybersecurity resource portal at borderelliptic.com. He works from a custom-built Linux platform (SableLinux™) which is currently under development and fully documented at https://github.com/black-vajra/sablelinux.
If you would like to support our work, which provides useful, well-researched, and detailed evaluations of current cybersecurity topics at no cost, feel free to buy us a coffee! https://bordercybergroup.com/#/portal/support