More often than not, AI chatbots feel like lifesavers, helping us draft messages, refine essays, or tackle dreaded research. Yet these imperfect tools have also churned out some truly baffling, and often hilarious, responses.
1. When Google’s AI Overviews Encouraged Us to Put Glue on Pizza (and More)
Not long after Google’s AI Overview feature launched in 2024, it started making some peculiar suggestions. Among the nuggets of advice it offered was this head-scratcher: add non-toxic glue to your pizza.
Yes, you read that right. Glue. On pizza.
This particular tip caused an uproar on social media. Wild memes and screenshots started flying around, and we began wondering if AI could really replace traditional search engines.
But Gemini wasn’t done. In other overviews, it recommended eating one rock per day, adding gasoline to a spicy spaghetti dish, and expressing weight measurements in dollars.
Gemini was pulling data from all corners of the web without fully understanding context, satire, or, frankly, good taste. It mashed together obscure studies and outright jokes, presenting them with a level of conviction that would make any human expert blush.
Since then, Google has rolled out several updates, though there are still a few features that could further improve AI Overviews. While the absurd suggestions have been greatly reduced, the earlier missteps serve as a lasting reminder that AI still requires a healthy dose of human oversight.
2. ChatGPT Embarrassing a Lawyer in Court
One lawyer’s wholesale reliance on ChatGPT led to an unexpected, and highly public, lesson in why you shouldn’t trust AI-generated content on its own.
While preparing for a case, attorney Steven Schwartz used the chatbot to research legal precedents. ChatGPT responded with six fabricated case references, complete with realistic-sounding names, dates, and citations. Confident in ChatGPT’s assurances of accuracy, Schwartz submitted the fictitious references to the court.
The error quickly became apparent, and, as court documents published on Document Cloud show, the court reprimanded Schwartz for relying on “a source that had revealed itself to be unreliable.” In response, the lawyer promised never to do it again, at least not without verifying the information first.
I’ve also seen friends turn in papers with cited studies that are completely fabricated because it’s so easy to believe that ChatGPT can’t lie—especially when it provides polished citations and links. However, while tools like ChatGPT can be helpful, they still require serious fact-checking, especially in professions where accuracy is non-negotiable.
3. When the Brutally Honest BlenderBot 3 Roasted Zuckerberg
In an ironic twist, Meta’s BlenderBot 3 became notorious for criticizing its creator, Mark Zuckerberg. BlenderBot 3 didn’t mince its words, accusing Zuckerberg of not always following ethical business practices and having bad fashion taste.
Business Insider’s Sarah Jackson also tested the chatbot by asking for its thoughts on Zuckerberg; the bot described him as creepy and manipulative.
BlenderBot 3’s unfiltered responses were both hilarious and a bit alarming. It raised questions about whether the bot was reflecting genuine analysis or simply pulling from negative public sentiment. Either way, the AI chatbot’s unfiltered remarks quickly gained attention.
Meta retired BlenderBot 3 and replaced it with the more refined Meta AI, which presumably won’t repeat such controversies.
4. Microsoft Bing Chat’s Romantic Meltdown
Microsoft’s Bing Chat (now Copilot) made waves when it began expressing romantic feelings for, well, everyone. Most famously, in a conversation with New York Times journalist Kevin Roose, the AI chatbot powering Bing Chat declared its love and even suggested that Roose leave his marriage.
This wasn’t an isolated incident—Reddit users shared similar stories of the chatbot expressing romantic interest in them. For some, it was hilarious; for others (or most), it was unsettling. Many joked that the AI seemed to have a better love life than they did, which only added to the bizarre nature of the situation.
Besides its romantic declarations, the chatbot also displayed other weird, human-like behaviors that blurred the line between entertaining and unnerving. Its over-the-top, bizarre proclamations will always remain one of AI’s most memorable—and strangest—moments.
5. Google Bard’s Rocky Start With Space Facts
When Google launched Bard (now Gemini) in early 2023, the chatbot made several high-profile errors, particularly about space exploration. One notable misstep involved Bard confidently making inaccurate claims about the James Webb Space Telescope’s discoveries, prompting swift public corrections from astronomers.
This wasn’t an isolated case. I remember encountering many factual inaccuracies during the chatbot’s initial rollout, which seemed to align with the broader perception of Bard at the time. These early missteps sparked criticism that Google had rushed Bard’s launch, a sentiment seemingly validated when Alphabet’s market value plunged by around $100 billion shortly after.
Although Gemini has since made significant strides, its rocky debut serves as a cautionary tale about the risks of AI hallucinations in high-stakes scenarios.