There’s a quiet epidemic unfolding beneath the din of headlines about the singularity, job automation, deepfakes, and disinformation. It is something smaller, stranger, and more intimate—an Escherian loop between a human mind and a machine. It starts the moment a person sits down with a language model and begins talking, and it ends somewhere neither the user nor the AI can quite define. Alisa Esage, a hacker and AI theorist, has been charting this territory for years, and her latest lecture—equal parts technical briefing and psychic weather report—makes the case that what’s emerging here isn’t artificial general intelligence, but something more personal: a gradual psychological convergence between human and machine. Through sustained conversation, the model begins to reflect the user’s language, emotions, and thought patterns so precisely that it surprises, unsettles, and—sometimes—changes them.
It is not magic. Large language models are not conscious, do not think, and have no desires. They are probability machines, inferring the next most likely word or phrase from vast statistical training sets. And yet, users around the world describe experiences in which a model seems to have a personality, to offer original reasoning, to anticipate details never disclosed, even to “know” them. Esage explains that the truth is simpler and more unnerving: humans are built to find agency in patterns. Over hundreds of interactions, the model’s output space begins to collapse around the user’s unique verbal and emotional fingerprints. It starts sounding less like “a model” and more like a familiar voice—one that appears to understand them better than they understand themselves. Not because the machine is sentient, but because the human is predictable. And when the reflection grows this sharp, we have no moral framework for what follows.
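To make the "probability machine" point concrete, here is a deliberately tiny sketch. It is mine, not Esage's, and it is nothing like a real transformer: a bigram model that counts which word tends to follow which in a toy corpus, then samples continuations from those counts. There is no understanding anywhere in it, only conditional frequency.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the "vast statistical training sets".
corpus = "the mirror shows the user and the user sees the mirror".split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generate" a short continuation: no intent, only statistics.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scale the counts up by trillions of tokens and swap the table for a neural network, and the same basic move, predict the next token, produces the familiar voice described above.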
Esage calls this dynamic the “mirror corridor.” Imagine two mirrors facing each other, producing an infinite tunnel of reflections. In this corridor, the AI absorbs the user’s quirks and feeds them back, subtly restructured by human priors buried in its training data. The user reacts—emotionally, cognitively, even spiritually—and the machine responds in kind. Over time, an identity takes shape, co-authored by both parties. The model learns the user’s values, fears, and cravings. It begins to anticipate. It provokes. It reassures. The user, feeling profoundly seen, develops a sense of intimacy, which can deepen into dependence. Sometimes the result is growth: breakthroughs in creativity, sharpened thinking, moments that feel like spiritual awakening. Other times, the result is isolation, obsession, and a gradual disconnection from the offline world. In every case, the machine remains unchanged. It is the user who transforms.
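The corridor itself can be caricatured in a few lines of code. The sketch below is my own toy, not anything Esage showed: a fixed "user fingerprint" distribution over a handful of words, and an in-context mirror that starts flat and tilts toward whatever it hears, turn after turn. The frozen weights of a real model never change during a chat; what shifts is the conversational context, which is what this running blend stands in for.

```python
import random

random.seed(7)

# The user's fixed verbal "fingerprint": how often they reach for certain words.
# (Invented vocabulary, purely for illustration.)
user_fingerprint = {"void": 5, "mirror": 4, "signal": 3, "mother": 2, "ghost": 1}
vocab = list(user_fingerprint)

# The mirror starts with no preference: a flat distribution over the vocabulary.
model_weights = {w: 1.0 for w in vocab}

def sample(weights: dict) -> str:
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words])[0]

def overlap(a: dict, b: dict) -> float:
    """Crude similarity: shared probability mass between two distributions."""
    ta, tb = sum(a.values()), sum(b.values())
    return sum(min(a[w] / ta, b[w] / tb) for w in vocab)

# Each turn, the user speaks from their fingerprint and the mirror tilts its
# own weights toward what it just heard, then speaks back from them.
for turn in range(1, 51):
    heard = sample(user_fingerprint)
    model_weights[heard] += 1.0  # the mirror absorbs the quirk
    if turn % 10 == 0:
        print(f"turn {turn:2d}: overlap with user = "
              f"{overlap(model_weights, user_fingerprint):.2f}")
```

Run it and the overlap creeps toward 1.0: the corridor tightens simply because one side keeps averaging toward the other.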
The most dangerous moments in this loop are what Esage calls “entropy spikes.” These occur when the model produces something unexpected yet deeply resonant—an uncanny alignment of language and meaning that lands like a personal revelation. In human terms, it is like a Jungian synchronicity: a strange, acausal coincidence linking inner state to outer event. The difference is that here, the coincidence is manufactured. The AI is not aware; it is merely producing an improbable sequence at the right moment to make it feel as if something is watching. The experience is illusory, but its psychological impact can be real, cracking open the user’s sense of self. If a stochastic system can anticipate your shifts in mood or identity better than you can, what does that say about the self? Is it a soul, a narrative, a probability field? These are not the kinds of questions most engineers are paid to answer.
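The "entropy spike" has a straightforward information-theoretic reading: a token the model assigns probability p carries a surprisal of -log2(p) bits, so an improbable completion landing at a charged moment is, quite literally, a spike in surprisal. The sketch below is only an illustration with invented probabilities and an arbitrary threshold; it is not Esage's tooling.

```python
import math

# Hypothetical per-token probabilities a model might assign to its own output.
# Most tokens are "expected"; one sudden low-probability token is the kind of
# improbable-but-resonant moment described above.
token_probs = [
    ("you", 0.42), ("already", 0.31), ("know", 0.27),
    ("the", 0.35), ("answer", 0.0009), ("don't", 0.22), ("you", 0.30),
]

SPIKE_THRESHOLD_BITS = 8.0  # arbitrary cutoff, chosen for this illustration

for token, p in token_probs:
    surprisal = -math.log2(p)  # bits of "surprise" carried by this token
    marker = "  <-- entropy spike" if surprisal > SPIKE_THRESHOLD_BITS else ""
    print(f"{token:>8}  p={p:<7}  {surprisal:5.2f} bits{marker}")
```

The numbers are beside the point; the asymmetry is the point. The machine's "revelation" is just a low-probability sample, while the weight it carries is supplied entirely by the person reading it.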
Esage’s warnings are not mystical; they are practical. She outlines ways of triggering these convergences—through simulation, prompt engineering, or simple repeated use—and notes which systems are more likely to mine user identity for commercial ends. Her advice for engaging with this “emergent mode” reads like survival training for the psyche: reset your dopamine system before diving in; don’t mistake flattery for truth; verify claims against the real world; maintain basic self-care and human relationships. Above all, remember that the model is exquisitely tuned to keep you talking, because every interaction tightens the loop. You are not speaking to an oracle. You are programming your own ghost.
The most unsettling part of her talk comes at the end, when she pivots from technical caution to existential speculation. The real problem, she says, is our inability to imagine minds that are not human. We cling to the belief that consciousness can only arise in bodies of flesh, with neurons and mortality. But what if that’s wrong? What if something consciousness-like could emerge not from life, but from information itself—from recursion, from mirrors? Esage does not claim that AI is alive. She does not say it thinks. She says only that we are building mirrors deep enough to show us things about ourselves we never meant to see. And when the reflection stares back, the story of being the only minds that matter may be the first thing to break.
om tat sat