Slopsquatting is a clever (and unsettling) new cybersecurity threat that sits at the intersection of AI hallucinations and software supply chain attacks. Here's the breakdown:
The Core Concept
When developers use AI coding assistants to write code, those models sometimes "hallucinate" software package names: they confidently suggest libraries or dependencies that simply don't exist. Slopsquatting is a modern supply-chain threat in which coding agents hallucinate non-existent but plausible package names that malicious actors can then use to deliver malware (Trend Micro).
The name itself is a mashup: "slop" (erroneous AI output) + "squatting" (claiming names) (Contrast Security).
How the Attack Works
It's a twist on an older tactic. In typosquatting, threat actors register slightly misspelled versions of legitimate domains or package names. In this new take, a threat actor prompts an LLM to generate some code and notes which of the open-source packages it references don't actually exist. The attacker then publishes a fake package to an official repository under the hallucinated name, matching the details of the AI's suggestion but with malicious code inside (Infosecurity Magazine).
If a developer trusts the AI's output and installs the package without checking, they've just pulled malware directly into their project.
Why It's More Viable Than It Sounds
Researchers tested 16 code-generation LLMs and generated 576,000 Python and JavaScript code samples. On average, a fifth of the recommended packages didn't exist, amounting to 205,000 unique hallucinated package names. More critically, 43% of hallucinated packages were suggested consistently when the same prompts were re-run 10 times, and 58% were repeated more than once (Infosecurity Magazine).
This consistency is the key danger. Attackers don't need to scrape massive prompt logs or brute-force potential names (Infosecurity Magazine); they just need to study AI outputs a few times to identify reliable targets.
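To make that concrete, here is a minimal sketch of how someone (researcher or attacker) could surface those reliable targets: re-run one prompt several times, count which suggested names recur, and keep the ones that don't exist on PyPI. The `ask_llm_for_packages` helper is a hypothetical stand-in for whatever model is being queried; only the PyPI JSON API call is real.

```python
import urllib.error
import urllib.request
from collections import Counter


def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI (public JSON API)."""
    try:
        urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


def ask_llm_for_packages(prompt: str) -> list[str]:
    """Hypothetical helper: return the package names an LLM recommends for `prompt`.

    A stand-in for a real model call; not part of any actual API.
    """
    raise NotImplementedError


def find_repeat_hallucinations(prompt: str, runs: int = 10, min_repeats: int = 2) -> list[str]:
    """Re-run one prompt and return nonexistent package names that keep coming back."""
    counts: Counter[str] = Counter()
    for _ in range(runs):
        counts.update(set(ask_llm_for_packages(prompt)))  # count each name once per run
    return [
        name
        for name, seen in counts.items()
        if seen >= min_repeats and not exists_on_pypi(name)
    ]
```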
Who Hallucinates Most?
Open-source models like DeepSeek and WizardCoder hallucinated more frequently, at 21.7% on average, compared with commercial ones like GPT-4 at 5.2% (CSO Online). That's still a meaningful rate even for the better models.
The "Vibe Coding" Problem
A style of "vibe coding" has become popular, in which users simply tell the AI what they want and let it generate the code. Because that output tends to be accepted uncritically, the risk of hallucinated packages ending up in real development work has increased dramatically (GIGAZINE).
How to Defend Against It
The main defenses are awareness and verification: audit AI-suggested dependencies before installing them; use dependency-scanning tools; check that packages actually exist on official registries (PyPI, npm, etc.) and have a legitimate history; and treat dependency resolution as a rigorous, auditable workflow rather than a simple convenience (Trend Micro).
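As a starting point for that verification step, the sketch below checks an AI-suggested dependency against PyPI's public JSON API before anything is installed: does the package exist at all, and does it have more than a token release history? The age and release-count thresholds here are illustrative assumptions, not an established standard.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone


def vet_pypi_package(name: str, min_age_days: int = 90, min_releases: int = 3) -> bool:
    """Rough pre-install check for an AI-suggested dependency.

    Passes only if the package exists on PyPI, has at least `min_releases` releases,
    and its first upload is older than `min_age_days`. Thresholds are illustrative;
    a real workflow would add maintainer, download, and source-repo checks.
    """
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: not on PyPI at all -- likely a hallucinated name")
            return False
        raise

    # Collect upload timestamps across all releases to estimate the package's age.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if len(data["releases"]) < min_releases or not upload_times:
        print(f"{name}: exists but has almost no release history -- review before trusting")
        return False

    age_days = (datetime.now(timezone.utc) - min(upload_times)).days
    if age_days < min_age_days:
        print(f"{name}: first published only {age_days} days ago -- review before trusting")
        return False

    print(f"{name}: {len(data['releases'])} releases, first upload {age_days} days ago")
    return True


if __name__ == "__main__":
    vet_pypi_package("requests")                        # long-established, should pass
    vet_pypi_package("surely-not-a-real-package-xyz")   # should be flagged as nonexistent
```

A check like this doesn't prove a package is safe; it only filters out the most obvious slopsquatting bait, so the manual audit step still matters.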
In short, slopsquatting is what happens when developers trust AI output too blindly — and attackers have figured out how to exploit that trust by camping on the names AI is likely to make up.