What People Actually Mean When They Say They Hate AI


The same week Google's AI Overviews told users to put glue on their pizza, an AI system derived from DeepMind research was being cited in clinical oncology papers at twice the rate of comparable structural biology work. These are not the same technology in any meaningful sense. Treating them as the same technology is where the conversation goes wrong — not because the anger isn't justified, but because anger aimed at the wrong target is a form of paralysis.

I'm not here to defend the AI industry. The AI industry, as currently constituted, deserves substantial criticism, and I'll get to that. But there's a significant difference between the field of artificial intelligence — a branch of applied mathematics and computer science that has been developing since the 1950s — and the commercial ecosystem that has grown up around it in the last five years. Conflating the two produces a fog in which legitimate grievances become untethered from their actual causes, and anger gets spent on the wrong targets. So before I talk about what's worth being angry about, I want to spend some time on what AI actually is, and what it has actually produced.

The Field Versus the Feed

If your primary exposure to artificial intelligence is the generated slop flooding social media, the chatbot that confidently invents legal citations, or the corporate co-pilot nobody asked for embedded in your word processor, you could be forgiven for concluding that AI is a sophisticated machine for producing confident nonsense at scale. That impression is not entirely unfair. But it is incomplete in ways that matter.

AlphaFold, developed by Google DeepMind, solved what structural biologists had spent fifty years failing to crack: the protein folding problem. Given an amino acid sequence, the system can predict the three-dimensional structure of the resulting protein with accuracy that, for many proteins, rivals experimental methods. Before AlphaFold, determining a single protein's structure could take a research team more than a year. The system generated predictions for essentially every known protein (roughly 200 million structures) and released them as a public database. The downstream implications for drug discovery, vaccine development, and our basic understanding of molecular biology are genuinely enormous. Research citing AlphaFold findings appears in clinical and patent literature at rates significantly higher than comparable structural biology work. The 2024 Nobel Prize in Chemistry went, in part, to the researchers behind it. This is artificial intelligence. It has nothing to do with generating marketing copy or producing uncanny images of hands with too many fingers.

In oncology and medical imaging more broadly, AI diagnostic systems have repeatedly matched or exceeded clinician-level accuracy in detecting disease — breast tumors, skin cancers, diabetic retinopathy — from radiological and photographic data. MIT's work on long-range breast cancer risk prediction demonstrated that AI analysis of mammography images can identify patterns indicating elevated cancer risk up to five years before a diagnosis would otherwise be made, giving clinicians an earlier window to intervene. In earthquake early warning, climate modeling, agricultural yield forecasting under shifting weather conditions, and real-time speech transcription for deaf and hard-of-hearing users, the field is doing consequential work that rarely gets mentioned because it doesn't fit the narrative frame — either the utopian one or the dystopian one — that media and tech companies have found most useful.

None of this is an argument that AI is purely beneficial or that the technology is being deployed wisely. It's an argument for precision. You can believe that Google's AI Overviews are degrading the quality of web search and simultaneously believe that AlphaFold represents one of the more significant scientific achievements of the last decade. These are not contradictory positions; held together, they are the only accurate one.

The Economy Into Which AI Arrived

To understand why public reception of AI has been so freighted with anxiety, you have to understand the world it landed in. AI did not arrive into a society of people who felt economically secure, capable of absorbing disruption, and trusting of the institutions that would govern its deployment. It arrived into the opposite of that.

Since the mid-1980s, U.S. median household income has risen substantially in nominal terms. Home prices have risen faster, dramatically so. The ratio of median home price to median household income — a basic affordability index — has moved from roughly three to roughly five. The median age of a first-time homebuyer, once in the late twenties, now sits near forty. A majority of American households cannot afford a newly built home at the median price in most markets; in coastal states, that share exceeds eighty percent. Wages have grown, but not as fast as housing, healthcare, childcare, or education. The margin that separates a working household from genuine crisis has narrowed, in many cases, to almost nothing.
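
For concreteness, here is that affordability index as a minimal sketch, using round illustrative figures rather than official statistics:

```python
# Price-to-income ratio: years of gross household income the median
# home costs. Round illustrative figures, not official statistics.
median_income = 80_000

for ratio in (3, 5):
    price = ratio * median_income
    print(f"price-to-income ratio {ratio}: median home costs ${price:,}")
```

At the same income, the move from three to five is the difference between a $240,000 home and a $400,000 one, before mortgage rates enter the picture.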

This is not background context. This is the primary variable. When a population is economically secure — when people believe that a layoff will not immediately threaten their housing, that healthcare will be there if they need it, that retraining is a viable option rather than a fantasy — technological disruption reads as opportunity. The iPhone launched in 2007, and while it genuinely upended multiple industries, contributed to the collapse of print journalism, and eliminated entire categories of work, the dominant public reaction was curiosity. People lined up outside Apple Stores. The technology was disruptive; the disruption felt manageable because the floor beneath most people still felt solid enough to stand on while the world shifted.

AI is arriving when the floor does not feel solid. Every credible report about AI's effect on white-collar employment — the lawyers, the writers, the coders, the radiologists — lands not as a prompt for curiosity but as a potential final blow. Not because AI will necessarily eliminate those jobs, or eliminate them on the timescales being suggested, or because the jobs that replace them will necessarily be worse. But because in an environment of genuine economic precarity, the rational response to any significant disruption is not curiosity. It is dread.

The people who tell you that AI is going to free workers from drudgery and create space for more meaningful labor are not necessarily wrong about the long-run potential. They are making an argument that is functionally inaccessible to anyone who doesn't have the savings, the safety net, or the institutional support to survive the transition. Long-run potential is a luxury for people who have a short-run floor to stand on.

What Corporations Have Actually Done

Now I can get to the legitimate critique that often gets mislabeled as hatred of AI: what specific companies have done with the technology, largely without meaningful public consent or regulatory constraint.

Microsoft has embedded generative AI into essentially every consumer and enterprise product it sells, frequently without giving users a genuine ability to opt out. Copilot has been integrated into Windows, into Office 365, into GitHub; some of these integrations are useful features, many are not, and the determination of which is which was made unilaterally by Microsoft's product teams, not by the users now paying for access to them. Google's AI-generated summaries at the top of search results have repeatedly surfaced confidently stated misinformation, redirected user attention away from the actual web pages that generated the underlying information, and degraded the experience for millions of people who relied on Google Search as a functional research tool. The publishers whose traffic and revenue were decimated by this change were not consulted.

Adobe's decision to train generative image and video models on the creative work of its users — under terms of service that were updated quietly and that no one who clicked through consented to in any meaningful sense — provoked a furious response from professional creatives, and the fury was entirely warranted. Whether a company can unilaterally appropriate years of professional creative output to train a model that will then compete against those same professionals is not a trivial question, and it has not been resolved by the legal and regulatory frameworks that currently exist, because those frameworks were written decades before the question was conceivable.

In workplaces across every sector, employees are being directed to use AI tools — directed, not asked — regardless of whether those tools are accurate, appropriate, or useful for the task at hand. The motivation is rarely efficiency, or at least efficiency is not the whole story. The motivation is often that adoption metrics look good in board presentations, that the company wants to appear current, or that middle management has decided that AI enthusiasm is the right career signal to send. The result is that workers are being asked to trust their professional output to systems that frequently fail in ways that are difficult to detect until the damage is done, with no meaningful recourse when that failure occurs.

These are not objections to artificial intelligence. They are objections to specific decisions made by specific companies that could have decided otherwise. That distinction matters because "AI is being forced into every surface of our digital lives by corporations that have decided we don't need to be asked" is a sentence that has a set of possible responses — regulation, litigation, consumer pressure, labor organizing. "I hate AI" points at a field of research that is not going to stop existing because people are angry about it.

The Real Technical Problems

There are also technical and ethical objections to current AI systems that deserve serious treatment rather than either dismissal or hyperbole.

The accuracy problem is real. Large language models generate text by predicting, one token at a time, what is statistically likely to come next, based on patterns learned from their training data. They don't reason in the way humans reason. They don't know when they don't know something, and in the absence of that knowledge, they tend to produce plausible-sounding answers rather than admissions of uncertainty. In low-stakes contexts — drafting an email, generating ideas for a marketing campaign — this limitation is manageable. In high-stakes contexts — legal filings, medical triage, financial analysis, investigative journalism — a system that produces confident nonsense with no reliable internal alarm is genuinely dangerous. The bar for deploying these systems where errors have serious consequences should be substantially higher than it currently is, and in many cases it is not being met.
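
To make that mechanism concrete, here is a toy sketch of next-token sampling. The tokens and scores are invented for illustration; this is not the internals of any real model. The structural point is that the sampler always returns some fluent continuation, because nothing in the mechanism represents "I don't know."

```python
import math
import random

def softmax(scores):
    # Convert raw scores into a probability distribution over tokens.
    peak = max(scores.values())
    exps = {tok: math.exp(s - peak) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the next token after a question the model has
# no grounded knowledge of. The numbers are invented for illustration.
scores = {"Paris": 2.3, "Lyon": 1.1, "Atlantis": 0.4}

probs = softmax(scores)
# The sampler must emit *something*; there is no built-in abstain option.
token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("model says:", token)
```

Real products layer instruction tuning, retrieval, and refusal training on top of this, but the underlying generator remains a probability distribution over continuations, not a fact-checker.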

The consent problem is real. The training data for essentially every major language model, image generator, and coding assistant was assembled from the internet at large, from books, from code repositories, from creative platforms, without the knowledge or consent of the people who produced that content. The legal frameworks governing copyright, fair use, and derivative works were not designed to address a situation in which an entity ingests the creative output of millions of people and produces a system that can replicate the statistical patterns of their work. Courts are beginning to grapple with these questions, and the outcomes will matter enormously, but they have not been settled.

The labor displacement problem is real, though its shape is more complicated than popular discourse suggests. AI is not, in the short run, eliminating jobs wholesale. It is changing which skills are valued, which tasks can be automated, and what the floor wage for a given type of work looks like. A freelance writer who could previously charge a reasonable rate for basic content work is now competing with a client's ability to generate a passable first draft for nearly nothing. Whether that writer can find higher-value work, whether they can differentiate on quality and judgment in ways that matter to clients, and whether the market will compensate that differentiation adequately — these are genuinely open questions, and the answers are not uniformly encouraging.

The power concentration problem is real, and arguably the least discussed. A small number of companies — Google, Microsoft, Meta, Amazon, Anthropic, OpenAI — have accumulated an extraordinary proportion of the compute, data, and research talent required to build frontier AI systems. This is not an accident. It is the predictable result of economics: training large models is enormously expensive, which means frontier development is accessible only to entities with enormous capital, which means the people making foundational decisions about how the most powerful AI systems work, what they are optimized for, and what they will and will not do are not accountable to the public in any meaningful way. The concentration of that much leverage in that few hands, without adequate regulatory oversight, is a structural problem regardless of the technology involved.
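
A back-of-envelope calculation shows why. A common rule of thumb in the scaling-law literature estimates training compute at roughly six floating-point operations per parameter per training token. The sketch below applies that rule with assumed, illustrative figures for model scale and hardware cost; none of the numbers are any company's actual ones.

```python
# Rough training-cost estimate using the common ~6 * params * tokens
# FLOP rule of thumb. Every figure below is an assumption for illustration.
params = 1e12                    # a trillion-parameter frontier-scale model
tokens = 10e12                   # ten trillion training tokens
total_flops = 6 * params * tokens

flops_per_gpu_second = 3e14      # ~300 TFLOP/s sustained per accelerator
dollars_per_gpu_hour = 2.0       # assumed cloud rental rate, USD

gpu_hours = total_flops / flops_per_gpu_second / 3600
cost = gpu_hours * dollars_per_gpu_hour
print(f"{total_flops:.1e} FLOPs = {gpu_hours:,.0f} GPU-hours = ${cost:,.0f}")
```

Even if every assumption is off by a factor of two, a single training run stays in the tens of millions of dollars, before staff, data acquisition, and the many runs that fail. That cost structure is the moat.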

The Distinction That Changes Everything

"I hate AI" is a sentence that accurately describes how a lot of people feel but does not point anywhere useful. The field of AI research — the mathematics of neural networks, the engineering of training pipelines, the science of making systems that can recognize patterns in complex data — is not going to reverse course because consumer sentiment is negative. It has been developing continuously for seventy years. It will continue.

But the conditions around AI are not fixed. The decision by a technology company to appropriate creative work without consent and deploy the results in ways that undercut the people whose work was taken is a choice, not a law of nature. The decision to integrate AI tools into workplaces without worker input, at a pace driven by capital markets rather than genuine utility, is a choice. The absence of regulatory frameworks governing training data consent, accuracy standards for high-stakes deployments, and competitive practices in AI infrastructure is a policy failure — which means it is a failure that policy can address.

None of those changes are easy. Corporate accountability is not achieved quickly or without sustained effort. Regulatory frameworks for complex technologies take years to develop and decades to mature. Economic precarity on the scale currently afflicting most households in wealthy countries is not addressed by any single policy intervention. But they are achievable in ways that "make the technology not exist" is not.

The fear that many people feel about AI is, in the last analysis, less about the technology than about what it represents: another major force being applied to a situation that already feels unmanageable, deployed by institutions that have demonstrated repeatedly that they will not put the interests of the people they affect above the interests of their shareholders. That fear is rational. The institutions have earned the distrust. But the response to that distrust is not to oppose the technology. It's to insist on the accountability structures that should have been built before the technology arrived, and to build them now, imperfect and late, rather than not at all.

Toward a More Precise Anger

The anger people feel right now is not wasted energy. It is a signal. The question is where to aim it. Not at protein folding research, or cancer detection algorithms, or speech transcription tools for people who cannot hear. At the specific companies that made the specific decisions to force products onto users without consent, to appropriate creative work without compensation, to deploy systems in high-stakes environments without adequate accuracy standards, to pour billions into compute infrastructure without regard for the communities that host it. At the specific policy failures that allowed that to happen without consequence. At the economic conditions that make any disruption feel like a death sentence for anyone without a meaningful cushion.

I hate that there is no regulatory framework for how AI products use creative work. I hate that employers are replacing functional tools with half-baked AI alternatives and calling it innovation. I hate that we exist in an economy so precarious that any disruption — regardless of its actual merits — feels like the thing that might finally break you. Those are sentences that point somewhere. They point toward regulation, labor protections, corporate accountability, economic reform. Things that can be demanded, organized around, and voted for.

"I hate AI" points at the weather.


Statistical figures on housing and income are derived from publicly available U.S. Census Bureau and Federal Reserve data. Academic citations for AlphaFold and cancer imaging research are available on request.


Jonathan Brown is a cybersecurity researcher and investigative journalist at bordercybergroup.com.

If you would like to support our work, providing useful, well-researched, and detailed evaluations of current cybersecurity topics at no cost, feel free to buy us a coffee! https://bordercybergroup.com/#/portal/support