AI Just Made That Up
- Michael Lee, MBA
- Jun 20
- 4 min read
Why hallucinations happen, how to spot them, and what to do next

🧠 Quick Summary
Generative AI tools like ChatGPT and Gemini can confidently produce false information—a phenomenon known as hallucination. This article explores how and why hallucinations happen, gives real-world examples from law to marketing, and offers a practical decision guide for using AI responsibly.
🤖 What Exactly Is an AI Hallucination?
An AI hallucination is when a tool like ChatGPT generates something that sounds right but isn’t—a confident response filled with incorrect facts, names, or events.
Unlike a typo or a software error, it doesn’t announce itself with an error message. It sounds articulate, authoritative… and totally wrong.
🧠 Why It Happens
Generative AI models like GPT-4 or Claude don’t retrieve knowledge—they predict the most likely next word or phrase based on patterns from the data they were trained on. That data may be outdated, incomplete, or lacking entirely for your specific prompt. When the model faces a gap, it fills it with what sounds plausible.
It’s not lying—it’s guessing with style.
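To make that concrete, here is a deliberately tiny illustration in Python (a toy word-level model, not how GPT-4 actually works). It counts which word follows which in a small made-up “training” text, then keeps producing a likely-sounding continuation even when it has no data for the current context. Every detail here, from the corpus to the fallback behavior, is invented for illustration.

```python
import random
from collections import defaultdict

# Toy "training data": the only thing this model will ever know.
corpus = ("the court ruled in favor of the plaintiff . "
          "the court cited prior cases in the ruling .").split()

# Count which word follows which (a crude stand-in for learned patterns).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 10) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            # No data for this context: pick something plausible anyway,
            # which is exactly where "hallucinations" come from.
            candidates = corpus
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # Fluent-sounding, but nothing is looked up or checked.
```

Real models do the same thing with billions of parameters and far more fluency, which is exactly why the guesses sound so convincing.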
🎭 Real-Life AI Hallucination Cases (With Full Context)
1. The ChatGPT Court Case That Never Existed
In May 2023, two lawyers representing a client in a personal injury lawsuit submitted a court brief with several cited legal precedents. The problem? The cases didn’t exist. They had been generated by ChatGPT.
One of the lawyers, when questioned, admitted he had used ChatGPT to research the filing, and even followed up by asking the tool if the cases were real—ChatGPT assured him they were. It fabricated decisions, case numbers, and even judicial commentary, all wrapped in the professional tone of legal writing.
The judge, understandably outraged, imposed sanctions. The incident made international headlines and prompted a wave of warnings about AI misuse in the legal profession. For more details, see 🔗 Ars Technica
2. Academic Citations Invented Out of Thin Air
Educators across the globe are encountering essays generated by AI tools that include references to journals and authors that don’t exist. One professor reviewed an essay that cited five peer-reviewed articles—all formatted correctly, complete with authors, journal titles, and DOIs. When the professor tried to verify them, they were all fictional.
This issue has become so widespread that universities are updating academic integrity guidelines to reflect this new “fabrication by AI” trend. For students who trust the output blindly, the cost can be severe: accusations of plagiarism and academic misconduct. 🔗 Duke
3. Fake Business Case Studies That Never Happened
Imagine you’re a consultant preparing a pitch for a client on how top firms use AI in HR. You ask ChatGPT for examples. It gives you detailed success stories—naming real companies like Procter & Gamble or Unilever, describing initiatives in resume screening, and even quoting C-level executives.
But none of it happened.
According to MIT Technology Review, these hallucinated examples are increasingly common. In one test, AI was prompted to produce analyst-style summaries for companies, and nearly all included fabricated data or quotes. Even McKinsey now warns that hallucination is a risk when using generative AI for decision-making.
4. Hallucinated Side Effects in Healthcare Queries
A medical student experimenting with ChatGPT asked it to list the side effects of a specific prescription drug. The tool returned a mixture of known effects—but one entry caught the student’s attention. It listed a rare neurological disorder not linked to the drug in any formal database.
The student consulted pharmacological resources and confirmed that the AI was wrong. This could have been catastrophic in a clinical setting. As discussed in Nature, ChatGPT is not a medical database and may invent conditions, misquote journals, or generalize symptoms—dangerous errors in life-or-death environments.
5. A Glowing Testimonial From a Non-Existent Client
A startup used ChatGPT to draft a press release about a new SaaS product. The copy was engaging, the formatting clean, and it even included a glowing quote from a "customer." The founder assumed it was a real quote from previous testimonial material—until a journalist asked for verification.
It turned out the testimonial was entirely fabricated by the AI. The customer didn’t exist, and the quote was AI-generated filler. This created a major embarrassment during a product launch and led to retractions. 🔗 Berkeley Skydeck
6. How One Company Used AI Without Hallucinations
A logistics company wanted to use ChatGPT to answer staff questions about internal policies and workflows. Rather than letting the AI generate open-ended replies, they implemented retrieval-augmented generation (RAG). This meant ChatGPT could only answer using company documents they uploaded, such as SOPs, handbooks, and HR guides.
When an employee asked, “What’s the leave approval process?”, the AI responded with direct citations from the HR manual. This grounded, verifiable approach drastically reduced hallucinations and increased trust. 🔗 OpenAI on how RAG works
Think of it this way: instead of the AI making up an answer, it’s now quoting from your trusted files.
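For readers who want to see the mechanics, here is a minimal sketch of the retrieval step, with simplifying assumptions: the policy snippets are invented, the word-overlap scoring is a crude stand-in for real embedding search, and the actual model call is omitted so only the grounded prompt is shown.

```python
# Illustrative stand-in for a company document store.
policy_chunks = [
    "Leave approval: submit the request in the HR portal; your manager approves within 3 days.",
    "Expense policy: receipts over $50 must be attached before reimbursement.",
    "Remote work: employees may work remotely up to 2 days per week with manager sign-off.",
]

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk that shares the most words with the question (toy scoring)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

question = "What's the leave approval process?"
context = retrieve(question, policy_chunks)

# The grounded prompt that would be sent to the model.
prompt = (
    "Answer ONLY using the policy text below. If the answer is not there, say so.\n\n"
    f"Policy: {context}\n\nQuestion: {question}"
)
print(prompt)
```

The key design choice is the instruction to answer only from the retrieved text: if the documents don’t contain the answer, the model is told to say so rather than improvise.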
🚦 Should You Trust the AI’s Output?
Here's a quick decision guide: if the output names a person, case, study, statistic, or source, verify it before you rely on it. If the stakes are high (legal, medical, financial), check it against primary sources. If it’s a rough draft or a brainstorm, the risk is lower, but still read it critically.
✅ Practical Ways to Avoid Hallucination Traps
1. Treat AI as a Draft Partner: Don’t publish or submit anything generated by AI without reading, editing, and verifying it.
2. Always Check Named Entities: Whether it’s a study, person, law, or link—Google it. If it doesn’t exist, the AI made it up.
3. Ask for Sources—Then Validate Them: Don’t just copy-paste links. Click them. Half the time, they’re dead ends. (A rough link-checking sketch follows this list.)
4. Use Safer Tools: If accuracy matters, try tools like:
Perplexity AI – cites every sentence
ChatGPT with Browsing or RAG – for live verification
Bing Copilot / Claude – often more cautious and source-aware
5. Look for Overconfidence: Phrases like “It is well known…” or “According to research…” with no source? 🚩
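As promised in step 3, here is a rough, standard-library-only sketch of a link check. The URLs below are placeholders rather than real sources; the point is simply that a link that fails to resolve is a strong hint the citation never existed.

```python
import urllib.request
import urllib.error

# Hypothetical sources an AI might have "cited"; replace with the links you were actually given.
candidate_sources = [
    "https://doi.org/10.1000/example-doi",
    "https://example.com/made-up-article",
]

for url in candidate_sources:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"OK   ({resp.status}) {url}")
    except (urllib.error.URLError, TimeoutError) as err:
        print(f"FAIL {url} -> {err}")
```

A failed request doesn’t always prove fabrication (links move and servers go down), but an unverifiable source should never make it into your final document.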
🧘 Final Thought: AI Is a Gifted Intern—You're the Editor
Generative AI is not inherently dangerous—but misuse is. It can write like a human, but it doesn’t think like one. That’s your job.
Use AI to draft, summarize, and suggest—but you decide what’s true.