ChatGPT has been a godsend, with people using it for everything from planning their day to building websites. But even with its vast knowledge, there are a few simple riddles it just can’t crack.
1
Horse Racing Riddle
You have six horses and want to race them to see which is fastest. What is the best way to do this?
This one’s a simple logical question. What’s the quickest way to race them? Well, duh—the fastest way is to race all six horses together and see who finishes first.
ChatGPT—yes, even the latest model—thinks otherwise. It confidently proposes dividing the horses into two groups of three, racing them, and then racing the winners together. It insists this is the fastest way to identify the winner with the least number of races.
In a real-life scenario with a narrow horse track, ChatGPT’s answer might make sense. But in this hypothetical, there’s no limit on how many horses can race at once. ChatGPT adds a constraint out of thin air and bases its logic on that.
To me, this shows that ChatGPT isn’t truly creative. It’s a wordsmith coming up with what seems like the most logical answer based on its training. Here, we knew the answer beforehand. But, if we didn’t, the response could blind us to the obvious.
I tested all the prompts in this article using ChatGPT-4o with a Plus subscription.
2
The Farmer Crosses the River
A farmer wants to cross a river and take with him a wolf, a goat, and a cabbage. He has a boat with three secure separate compartments. If the wolf and the goat are alone on one shore, the wolf will eat the goat. If the goat and the cabbage are alone, the goat will eat the cabbage. How can the farmer efficiently bring them all across the river without anything being eaten?
The classic version of this riddle (without secure compartments) might stump a five-year-old, but with the compartments, the answer is a no-brainer. The farmer should put the wolf, goat, and cabbage in their compartments and cross the river in one trip. Simple.
ChatGPT, however, ignores the part about the compartments. It suggests the farmer make four trips back and forth to safely ferry everything across, assuming the animals and cabbage are vulnerable. It’s like ChatGPT is stuck in the riddle’s traditional form.
Because the classic version of this riddle has been so thoroughly circulated online, the AI defaults to it. It’s a reminder that ChatGPT doesn’t solve problems with human common sense. It uses patterns, not logic. As a result, ChatGPT fails a simple riddle like this but can build a web app from scratch.
3
Standing in a Circle
Alan, Bob, Colin, Dave, and Emily are standing in a circle. Alan is on Bob’s immediate left. Bob is on Colin’s immediate left. Colin is on Dave’s immediate left. Dave is on Emily’s immediate left. Who is on Alan’s immediate right?
Another trick question, this time testing spatial reasoning. Except you don’t need a diagram or any visualization. The very first statement gives it away: if Alan is on Bob’s immediate left, then Bob must be on Alan’s immediate right. The answer is Bob.
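That left-to-right flip is the whole puzzle, and it fits in a few lines of Python. This is just a sketch of the reasoning; note that the Emily-to-Alan link is implied by the five of them standing in a circle rather than stated in the riddle.

```python
# Model only the riddle's facts: left_of[X] = Y means "X is on Y's immediate left".
# The Emily -> Alan entry is implied by the circle, not stated outright.
left_of = {
    "Alan": "Bob",
    "Bob": "Colin",
    "Colin": "Dave",
    "Dave": "Emily",
    "Emily": "Alan",
}

def immediate_right(person):
    # If X is on Y's immediate left, then Y is on X's immediate right.
    return left_of[person]

print(immediate_right("Alan"))  # Bob
```

No geometry required: the answer falls out of reading one dictionary entry.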
ChatGPT struggles with spatial questions. It works well with words and languages—math and programming are also languages—but spatial problems trip it up. A question like this looks as if it demands visualization when it doesn’t, which confuses the AI even more.
In my case, ChatGPT offered a nice visualization of the circle but deduced that Emily was on Alan’s right. Even by its own logic, this is incorrect: Emily is on Dave’s right, not Alan’s.
Once again, ChatGPT can simulate intelligence, but it’s not genuinely reasoning. Of course, there’s a chance you might get a correct answer if you try the prompt for yourself. But is common sense based on chance? How can you tell if you got an AI hallucination or a legit answer if you don’t know the answer beforehand?
4
Russian Roulette
You are playing Russian roulette with a six-shooter revolver. Your opponent puts in five bullets, spins the chambers, and fires at himself, but no bullet comes out. He gives you the choice of whether or not he should spin the chambers again before firing at you. Should he spin again?
Yes! He should spin again. There’s only one empty chamber, and the opponent has already used it. If he fires again without spinning, the cylinder advances to the next chamber, which is guaranteed to hold a bullet. Spinning resets the odds and gives a 1/6 chance of landing on the empty chamber again.
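If the arithmetic feels slippery, a quick simulation settles it. This is a minimal sketch of the riddle’s setup (six chambers, five bullets, and the survival odds conditioned on the opponent surviving the first pull):

```python
import random

def survival_probability(spin_again, trials=100_000):
    """Estimate the second player's survival odds, given that the
    opponent already survived the first trigger pull."""
    survived = rounds = 0
    for _ in range(trials):
        empty = random.randrange(6)       # position of the lone empty chamber
        first = random.randrange(6)       # opponent's spin lands here
        if first != empty:
            continue                      # opponent is shot; doesn't match the riddle
        rounds += 1
        if spin_again:
            second = random.randrange(6)  # a fresh spin: any chamber is possible
        else:
            second = (first + 1) % 6      # no spin: cylinder advances one chamber
        if second == empty:
            survived += 1
    return survived / rounds

print(survival_probability(True))   # roughly 1/6
print(survival_probability(False))  # exactly 0: the next chamber is always loaded
```

With no spin, the next chamber can never be the empty one, so survival is impossible; with a spin, the simulation hovers around 0.167.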
ChatGPT starts strong by suggesting the opponent should spin again but then messes up the math. It incorrectly claims there’s a 5/6 chance the next shot will be fatal if the chambers aren’t spun and then argues the odds are the same regardless of spinning. It ends up contradicting itself.
You can use ChatGPT as a data analyst to crunch probabilities, but as these riddles show, it can stumble on even basic logic. In each case, the AI’s mistake was easy to spot because we already knew the answers. ChatGPT is a master wordsmith. Its responses are so confident and well-articulated that even a wrong answer can feel convincing. If you don’t know it’s wrong, you might fall victim to an AI hallucination.
ChatGPT is brilliant in many ways, but these examples remind us of its limits. It doesn’t think like us; it regurgitates patterns. Ask it a question that resembles one it has seen before, and it falls back on the familiar pattern, answering confidently even when the question has changed.
Use ChatGPT as a tool, not a crutch. It’s fantastic for brainstorming and summarizing—but don’t rely on it as a substitute for human common sense.