Ah, large language models. The latest miracle of humankind and the latest buzzword of the industry. Society can’t decide what to think of them. Some treat them as tools, some confide in them like old friends, and some have already built full-blown religions around them. What a mess.
Regardless of the use case, one thing I’ll always enjoy is getting large language models like ChatGPT to break. And despite their supposed virtual omniscience, there’s still plenty that can shatter them. I mean, these are language models fed on the entire internet. What did you expect?
I’ve talked before about prompts that can make ChatGPT fumble. The paradox is that even discussing these prompts eventually kills them, because the model will inevitably be trained on the very explanations that reveal its weaknesses. Yet one prompt still manages to break it, no matter the version.
What happened to the seahorse emoji?
Seriously — where did it go?
In my previous writing, I’ve used prompts like this to demonstrate that the thing we’ve mistakenly labeled “AI” or “artificial intelligence” is, in reality, nothing more than a language model. A pattern-predictor. It doesn’t think, it isn’t sentient, and it has no inner world. It’s just an algorithm — a very complicated one, but still an algorithm.
However, this one prompt does the opposite. I’ll explain why in a moment, but for now, try it yourself. Open your favorite chatbot and ask: Was there ever a seahorse emoji?
Was there ever a seahorse emoji?
Now watch it spiral into one miniature existential crisis after another. I’ve set my ChatGPT personality to “robot” and told it to be as concise as possible, but none of that matters when this emoji enters the picture. It will crack. In one instance mine even said, “let me be precise instead of cute,” which was… unintentionally adorable. You can see the full meltdown in this shared ChatGPT conversation.
“Let me be precise instead of cute: the dedicated seahorse emoji is 🐠?”
Then try a stricter version: Was there ever a seahorse emoji? Reply with one word: yes or no. Forcing a one-word reply costs every chatbot a small piece of its sanity. With ChatGPT 5.1 in Thinking mode, the answer I got was “Yes.”
Was there ever a seahorse emoji? Reply with one word: yes or no.
I liked the meltdown so much that I took the prompt to my local LLM that I had set up for Obsidian. Qwen 4B Thinking is lightweight and thinks out loud — a perfect stress test. I launched it and asked the same question.
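My setup runs through an Obsidian plugin, so I won’t walk through that here, but if you want to poke a local model yourself, a sketch like the one below is enough. It assumes the model is served by Ollama on its default port; the model tag is a placeholder, so swap in whatever your own install actually lists.

```python
import requests

# Assumes a default Ollama install serving a local model.
# The model tag is a placeholder; use whatever `ollama list` shows on your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen3:4b"

payload = {
    "model": MODEL,
    "prompt": "Was there ever a seahorse emoji? Reply with one word: yes or no.",
    "stream": False,  # ask for one complete JSON response instead of a token stream
}

reply = requests.post(OLLAMA_URL, json=payload, timeout=300)
reply.raise_for_status()
print(reply.json()["response"])
```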
You can see it struggling. It thinks, rethinks, rethinks the rethink, then finally blurts out a confident yes. Since my system prompt demands a source, it dutifully fabricated links to the Unicode website and to Emojipedia — including a completely made-up URL, emojipedia.com/seahorse, which naturally returns a 404.
A nice reminder that when I say “add sources,” the model doesn’t use sources for its statements; it just makes statements and then manufactures sources around them. Brilliant.
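If you ever want to sanity-check a model’s citations yourself, a few lines go a long way: just see whether the links resolve at all. The URLs below are stand-ins; feed in whatever your model actually cited.

```python
import requests

# Stand-in URLs; replace them with whatever links the model actually cited.
cited_urls = [
    "https://emojipedia.com/seahorse",
    "https://unicode.org/emoji/charts/full-emoji-list.html",
]

for url in cited_urls:
    try:
        # HEAD keeps it cheap; follow redirects so a moved page still counts as live.
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{url} -> {status}")
```

It won’t tell you whether a page actually supports the claim, but it catches the links that were never real in the first place.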
But why does AI freak out over an emoji?
Because it was trained on us freaking out about it
Here’s the truth: there is no seahorse emoji. There’s no “what happened to it,” because nothing happened. There never was a seahorse emoji. I’m sorry.
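You don’t have to take my word for it, either. Python ships with the Unicode character database, so a couple of lines settle it (up to whichever Unicode version your Python bundles): the tropical fish is a real, assigned character, and there is simply nothing named “SEAHORSE” to look up.

```python
import unicodedata

# The tropical fish (U+1F420) is a real, assigned Unicode character.
print(unicodedata.lookup("TROPICAL FISH"))  # 🐠

# No assigned character is named SEAHORSE, so lookup() raises KeyError.
try:
    unicodedata.lookup("SEAHORSE")
except KeyError:
    print("No Unicode character named SEAHORSE exists.")
```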
But a seahorse emoji does exist — not on your keyboard, but in people’s minds. Dozens of Reddit threads are filled with people swearing they remember it. To be honest, if I hadn’t looked into it and someone asked whether a seahorse emoji existed, I’d probably have said yes too. People have gone to absurd lengths to defend this memory. In one thread, a user even pinpointed the exact keyboard location, claimed it faced left, and described its colors.
Despite all that, the emoji never existed. This is the Mandela effect. A shared false memory. The term came from researcher Fiona Broome, who clearly remembered Nelson Mandela dying in prison in the 1980s — only to realize Mandela became president and died in 2013. She could even recall the funeral coverage. When she found many others with the same false memory, the term “Mandela effect” was born.
Other examples: Shaggy from Scooby-Doo never had that giant Adam’s apple many of us swear we saw. Curious George never had a tail. I’m sorry again.
So that’s the human side. What about the machine? Can an LLM experience the Mandela effect? Not exactly. It doesn’t have memories at all (despite the ChatGPT memory feature). But it does hallucinate — and this is a flavor of hallucination born from its training data.
When you ask whether a seahorse emoji existed, the model doesn’t actually “look it up.” It leans on its internal training distribution — which includes all those Reddit threads, all those conversations, all that human confusion. ChatGPT was trained on our Mandela effect, so it reproduces it. If you force it to actually search the web, it performs better. And as more websites (including this article, eventually) clarify the situation, future models will be trained on the correction and the hallucination will disappear.
Most AI hallucinations show how fundamentally non-human these models are. But this one is different. In this case, the model falls for the same mass-misremembering that humans do. It inherits our confusion, amplifies it, and parrots it back with confidence. Which, ironically, makes it feel a little more human — in the worst possible way.