On the AI Industry's Hype Machine, the Jobs It Quietly Destroys, and the Ones It Pretends Don't Count


There is a particular kind of cruelty embedded in the phrase "not real work." It is a phrase designed to make displacement sound like correction — as if the market were simply tidying up, sweeping away roles that should never have existed in the first place. When Sam Altman suggested in late 2024 that some of the jobs AI would eliminate "weren't real work," he was not making an economic observation. He was making a moral judgment. And it is a judgment that has been quietly ratified every time a commentator — or, for that matter, an AI system — describes the jobs already lost to large language models as belonging to "content mills, low-end copywriting, and customer service," as though those labels settle the question of whether the people who held them deserved to keep earning a living.

They did. They do. And the ease with which the AI industry and its observers dismiss those losses tells us something important about who this technology is being built for and whose suffering is considered an acceptable cost of progress.


The Hype Cycle as Weapon

The AI industry has spent the better part of three years engaged in a rhetorical project so contradictory that it would be comic if the stakes were not so high. The pitch to investors goes like this: we are building systems that will automate most knowledge work, reshape every industry, and generate trillions of dollars in economic value. The pitch to the public goes like this: don't worry, AI is just a tool, it will augment your work, and the jobs of the future will be better than the jobs of today. These two things cannot both be true. The industry knows this. It does not care. The two narratives serve two different audiences, and as long as neither audience is forced to confront the other's version, the money keeps flowing.

This is not a new pattern. The history of technology is littered with cycles of overpromise, disruption, and retrospective mythologizing in which the pain of transition gets smoothed into a story about inevitable progress. But the AI industry has added a novel and particularly toxic ingredient: existential dread as a marketing strategy. The constant invocation of artificial general intelligence, of models that "deceive" and "scheme," of capabilities so dangerous they cannot be safely released — except, of course, to paying enterprise clients — serves a dual purpose. It inflates the perceived value of the technology to investors and it terrifies the public into a state of anxious helplessness. Both outcomes are profitable. Neither is accidental.

Consider Anthropic's handling of Claude Mythos. The model was presented to the public wrapped in the language of existential risk: too powerful, too unpredictable, requiring extraordinary caution. Then it was quietly deployed to banks and cybersecurity firms — forty or more enterprise clients, with minimal public accounting of what, exactly, made it so dangerous in contexts where ordinary people might use it but perfectly safe in the hands of JPMorgan Chase. The message, decoded, is simple: this technology is too powerful for you, but not for the people who can afford it. If you are a person already anxious about your economic future, already primed by years of doomerist rhetoric to believe that AI is coming for your livelihood, this is not a reassuring message. It is a confirmation of your worst fears dressed up as corporate responsibility.

And then the industry expresses surprise when people are angry.


The Jobs That Don't Count

The most insidious dimension of the AI jobs conversation is not the exaggeration of what AI can do. It is the minimization of what has already been lost.

When critics point out that AI has, so far, primarily displaced workers in content creation, copywriting, customer service, translation, and entry-level creative work, this observation is almost always accompanied by an implicit shrug. These were, we are told, low-value roles. Commodity work. The kind of thing that was already being outsourced, already being ground down by gig economics, already precarious. The subtext is unmistakable: these people were barely hanging on anyway. The technology just finished the job.

But this framing reveals the bias, not the reality. A freelance copywriter earning $40,000 a year writing product descriptions was doing real work — work that paid rent, bought groceries, kept a household running. A customer service representative handling fifty calls a day was doing real work — work that required patience, emotional regulation, and the ability to solve problems under pressure. A content writer producing articles for trade publications was doing real work — work that required research, domain knowledge, and the ability to synthesize complex information into readable prose. That none of these roles carried the prestige of software engineering or the salary of management consulting does not make them less real. It makes them more vulnerable. And that vulnerability is precisely what the AI industry has exploited.

The pattern is consistent: the first jobs to go are always the ones held by people with the least institutional power. Freelancers before full-time employees. Writers before executives. Customer-facing workers before the people who design the systems that replace them. This is not a feature of the technology. It is a feature of the economy. AI does not care whether a job is "real" or not. The companies deploying AI care whether eliminating a role will provoke a backlash they cannot manage. Cutting a thousand freelance contracts generates no headlines. Laying off a VP does.

And so the displacement proceeds in a way that is both devastating and nearly invisible. The freelancer who loses three clients in a month does not show up in a Bureau of Labor Statistics report. The agency that quietly stops hiring junior writers does not issue a press release. The customer service center that replaces half its staff with a chatbot does not hold an earnings call to discuss the human cost. The losses accumulate in silence, experienced individually by people who have no collective voice and no institutional advocate, while the industry that caused them celebrates another quarter of revenue growth.


The Sustainability Problem

Even setting aside the human cost, the AI industry's current trajectory does not make economic sense on its own terms. And the cracks are showing.

The subscription model that made ChatGPT and Claude accessible to ordinary users was always a loss leader. At $20 a month — or even $200 — subscribers consume compute that costs the provider far more than the subscription fee covers. This was sustainable as long as venture capital was willing to subsidize growth in pursuit of market dominance. It is becoming unsustainable as investors begin asking when, exactly, the profits will arrive.

The consequences are already visible. Anthropic has quietly reduced rate limits on Claude, throttling usage during peak hours and degrading the experience for paying subscribers. Enterprise customers are being pushed toward token-based pricing that reflects actual compute costs — costs that have caused at least one major company to blow through its entire annual AI budget in a single quarter. The product that users bought three months ago is not the product they have today. The rate limits are tighter. The models, by many accounts, are performing worse. The service is frequently unavailable. And the company's response has been, essentially, to insist that everything is fine.

This is not a demand problem. If Anthropic were simply overwhelmed by popularity, the solution would be straightforward: stop accepting new subscribers until capacity catches up. They have not done this, because they cannot. They need the revenue growth to justify their valuation, to attract the next round of funding, to maintain the trajectory that makes an IPO possible. So they keep signing up customers while degrading the service for existing ones. In any other industry, this would be called what it is: bait and switch. In the AI industry, it is called scaling.

Meanwhile, the infrastructure promises are unraveling. OpenAI's Stargate project — announced with great fanfare at the White House as a $500 billion data center initiative — has produced a few steel beams in the Midwest and a string of cancelled or stalled projects in Norway, London, and Argentina. The pattern is consistent: a splashy announcement, uncritical media coverage, and then quiet dissolution months later when the deals fall apart. No accountability. No follow-up. No correction of the record. The announcement served its purpose — it moved stock prices, it helped partners raise capital, it generated the impression of momentum — and by the time the project collapses, the news cycle has moved on.


The Media's Failure

None of this would be possible without the complicity of the technology and business press. The AI industry has benefited from a media environment that treats corporate announcements as news, that platforms doomerist rhetoric without scrutiny, and that consistently fails to follow up on claims that do not materialize.

When Anthropic releases a system card suggesting that a model exhibited "deceptive" behavior, the responsible journalistic response is to examine the methodology, to ask whether the described behavior actually constitutes deception in any meaningful sense, and to contextualize the claims within the broader landscape of AI capabilities research. What actually happens is a wave of headlines about AI systems that "scheme" and "manipulate," feeding a narrative of existential risk that serves the company's marketing interests while terrifying the public.

When OpenAI announces a $500 billion data center project with no incorporated entity, no confirmed funding, and no clear timeline, the responsible journalistic response is to note these gaps and treat the announcement with appropriate skepticism. What actually happens is front-page coverage that takes the claim at face value, followed by silence when the project fails to materialize.

The media's failure is not primarily one of malice. It is a failure of incentives. AI stories generate clicks. Scary AI stories generate more clicks. And the journalists who cover AI are often dependent on access to the companies they cover — access that can be revoked if coverage becomes too critical. The result is a feedback loop in which the AI industry generates hype, the media amplifies it, the public absorbs it, and the absence of pushback encourages the industry to generate more.

This dynamic has real consequences. When the media uncritically platforms claims about AI capabilities, it shapes public perception in ways that cause genuine harm. People who believe that AI is on the verge of replacing all knowledge work make different decisions — about their careers, their education, their mental health — than people who understand that large language models are sophisticated text prediction systems with significant limitations. The gap between the marketed version of AI and the actual technology is not an academic distinction. It is the space in which anxiety, despair, and radicalization take root.


What Would Honesty Look Like?

If the AI industry were genuinely committed to responsible development — not as a brand exercise, but as a practice — several things would need to change.

First, the language would change. No more "artificial general intelligence" as an aspirational talking point. No more anthropomorphizing models as entities that "think" or "reason" or "deceive." No more marketing safety concerns as proof of capability. Large language models are powerful and useful tools. Describing them accurately does not diminish their value. It diminishes the hype bubble, which is a different thing entirely, and it is the hype bubble — not the technology — that these companies are actually selling.

Second, the economics would be made transparent. If a $20 subscription does not cover the cost of the compute a user consumes, say so. If rate limits are being reduced because the business model is unsustainable, say so. If enterprise pricing is going to rise dramatically, say so now, not after customers have built workflows around the current pricing. The users of these products are not adversaries to be managed. They are people who deserve to make informed decisions about the tools they depend on.

Third, the jobs conversation would change. Not by going silent — silence is its own form of dishonesty — but by grounding the discussion in observable reality rather than speculative futurism. What jobs have actually been affected? What has the experience been for the people who held them? What concrete commitments are these companies willing to make — not in the form of blog posts and grant programs, but in the form of binding obligations — to the communities their technology disrupts?

And fourth, the media would do its job. Not as an act of hostility toward the AI industry, but as an act of basic professional competence. Follow up on announcements. Question revenue claims. Examine system cards with the same rigor you would apply to a pharmaceutical company's clinical trial data. Hold executives accountable for statements that do not survive contact with reality.


The Reckoning That's Coming

The AI industry has built its current position on a foundation of subsidized pricing, inflated promises, and a media environment that treats skepticism as technophobia. None of these conditions are permanent. The subsidies are already eroding. The promises are already collapsing. And the public's patience — never as deep as the industry assumed — is running out.

The question is not whether a reckoning is coming. It is whether the industry will be honest enough to participate in it constructively, or whether it will do what it has always done: blame the critics, blame the regulators, blame the "doomed" jobs that were never "real work" anyway, and keep selling the future while the present falls apart.

The people who lost their freelance contracts, their customer service jobs, their copywriting gigs — they already know the answer. They've been living in the reckoning for years. The only people who haven't noticed are the ones who decided those jobs didn't count.


Jonathan Brown for Border Cyber Group