AI hallucinations (fake case citations, or real cases cited for propositions they don’t support) are the most-discussed risk in legal AI right now. Across roughly 40 million U.S. court cases filed since January 2023, only about 955 have included a documented AI hallucination. But the lawyers and pro se litigants who do file them keep getting caught, sanctioned, and named in published opinions. Here’s where hallucinations come from, how to keep them out of your own work, and what to do when you find them in an opponent’s brief.
Open any legal news feed in 2026 and you’ll find a fresh story about a lawyer caught citing cases that don’t exist. The coverage has been steady enough that AI hallucinations in law have become the boogeyman of legal AI, the thing that comes up first whenever a partner asks whether the firm should be using these tools at all.
The numbers tell a different story. Hallucinations are real, and they have real consequences, but as a percentage of total filings they’re extraordinarily rare. The lawyers who get caught share a few habits: they use general-purpose chatbots instead of tools built for legal work, they skip the supervision step at the end of drafting, and when they’re caught, they often make things worse by deflecting blame.
Here’s where hallucinations come from, how often they’re actually showing up in court filings, what to do to keep them out of your own work, and how to handle them when you find them in an opponent’s brief.
What AI hallucinations in law actually are
In a legal context, AI hallucinations are one of two things. They’re either citations to cases or statutes that don’t exist, or citations to real authorities for propositions those authorities don’t actually support.
The first kind is the one making headlines. A lawyer or pro se litigant uses a general-purpose chatbot like ChatGPT, Claude, Gemini, Copilot, or Grok to help draft a brief. The model, predicting the statistically likely next word, decides a citation belongs in a particular spot, and produces one. The reporter might be real. The volume number might fall within the right range. The Bluebook formatting is often better than what most associates produce. The case itself just doesn’t exist.
The second kind is older than AI. Lawyers have always occasionally cited a case for a proposition that the case doesn’t stand for. AI has made this kind of error easier to commit and easier to catch.
If you’re hoping the next generation of models will fix this, set that hope aside. Sam Altman has acknowledged that hallucinations aren’t a bug in large language models. They’re a feature of how the technology works, and GPT-5 hallucinates more than GPT-4 did. The hallucinations have gotten more convincing, not rarer. That’s not a reason to swear off AI. It’s a reason to choose your tool wisely and to be disciplined about your workflow. We’ll cover both below.
Why the citations look so convincing
There’s a psychological trap with hallucinated citations. In a brief with 19 citations, an AI tool may produce 18 that are real and one that isn’t. Reviewing the first several and finding them accurate lulls you into trusting the rest. Then citation 14, perfectly Bluebooked and perfectly plausible, points to nothing.
For a generation of lawyers, polished writing has been a proxy for careful lawyering. That proxy is now broken. A motion can be simultaneously flawlessly written and badly lawyered. The perfect Bluebooking is no longer a signal that anyone actually read the case.
That puts the burden of supervision back where it has always belonged: on the supervising lawyer, at the end of the drafting process, before the document goes out. ABA Model Rules 5.1 and 5.3 already require that supervision. Accuracy is also required by Federal Rule of Civil Procedure 11 (and its state-court analogs): your signature on a court filing certifies, after a reasonable inquiry, that its contentions are supported, whether the work behind it came from a paralegal, a first-year associate, or an AI-backed tool. Supervision is one piece of a broader set of ethical duties that apply to AI in legal practice.
Some jurisdictions are responding by adding AI-specific rules. California is considering amendments to its professional conduct rules to address AI directly, and Florida has already done so. Those rules will probably not age well. The duty to supervise people and tools that produce work in your name has existed since the profession’s inception. It applies to AI for the same reason it applies to a typist or a junior associate. We probably don’t need a new rule. We need lawyers to follow existing rules.
How often are AI hallucinations really happening?
Damien Charlotin, a researcher who tracks AI hallucination legal cases worldwide, has documented around 1,400 cases globally where AI-generated errors made it into a filing. Roughly 955 of those are in the United States.
For context, Docket Alarm contains roughly 40 million U.S. cases filed since January 1, 2023, when ChatGPT-style tools entered widespread use. That works out to one documented hallucination per 41,000 cases, or about 0.002 percent. Across the roughly 200 million filings in those cases, the rate is even smaller.
Two caveats. First, that count only includes hallucinations that were caught. The real number is almost certainly higher, since some bad citations slip past both opposing counsel and the court. Second, the denominator includes every filing, not just AI-assisted filings. If only a fraction of lawyers are using generic chatbots in drafting, then the rate within that subset is much higher.
A few other patterns from the data:
- More than 60 percent of the U.S. cases involve pro se litigants, not represented parties.
- The cases that do involve lawyers cut across firm sizes and practice areas. Sullivan & Cromwell was recently called out for hallucinated citations. These AI hallucination stories aren’t just a small-firm problem.
- The lawyers who get caught with hallucinations sometimes double down. They deny that they used AI. They might insist that the cases are real—until they’re proven wrong.
You’re statistically more likely to encounter hallucinated citations in an opponent’s filing than to produce one yourself. Which is exactly why this matters in both directions.
How to keep AI hallucinations out of your own work
Strong AI hallucination guardrails for legal work come down to four things to look for in any AI tool you use.
- It’s trained on real legal authority, not the open internet. A general-purpose chatbot is trained on pablum like Reddit threads and YouTube comments. You wouldn’t do legal research in sources that dubious, so don’t use a research tool that learned from them either. Solutions like Clio Work and Vincent by Clio are grounded in actual case law, statutes, and rules. We’re obviously not unbiased about those products, but the principle stands regardless of which tool you choose: use a tool that uses real law.
- It can be confined to your jurisdiction. A persuasive case from another circuit isn’t the same as binding authority. Your AI tool should let you direct it to the law that actually applies to your matter.
- It produces verifiable output with hyperlinks. There’s a phrase we use a lot inside Clio: “hyperlinks or it didn’t happen.” Citations in AI-generated drafts should link directly to the underlying authority, making each one easy to verify. The absence of a working link is itself a red flag. Before you file, click every link. Trust but verify.
- It produces a defensible record of how you used it. If a court ever asks how AI fits into your workflow, you should be able to show your AI interactions, the output, and your verification steps. Tools built for legal use create that “trust but verify” audit trail. Public chatbots don’t.
Even with all four in place, you still need that end-stage supervision. Read the cases. Click every hyperlink. If a citation doesn’t resolve to a real case that actually says what the brief claims it says, that’s the moment to catch it, before adding your signature.
What to do when you find AI hallucinations in opposing counsel’s brief
You will run into this eventually, either in your own work or in someone else’s. When you do, you have an obligation to catch it. The duty of competence requires you to verify the law cited against you, the same way the supervising lawyer on the other side should have verified it before filing. In Noland v. Land of the Free, L.P., a 2025 California Court of Appeal (Second District) decision, the court sanctioned a party about $10,000 for filing a brief with hallucinated citations. When the opposing party then sought attorney’s fees for the work the hallucinations caused, the court denied the request, finding that its counsel should have caught the errors themselves. Fee awards in these cases tend to track the extra work that bad citations create, not a separate failure to flag the misconduct, but the principle remains the same: courts expect you to read the law cited at you.
You also have a choice about how to handle hallucinations once you’ve found them, and the Model Rules leave room for either approach. Rule 3.3 (duty of candor to the tribunal) and Rule 8.3 (duty to report misconduct) both support raising the issue with the court, and nothing requires you to give opposing counsel a heads-up first.
That said, there’s a strong professional courtesy argument for notifying opposing counsel before the court. We’ve heard an anecdote from a lawyer in a contentious case where opposing counsel had been condescending throughout. He filed a brief with hallucinated citations. She had every reason to drop it on him with the court. Instead, she reached out to him directly, told him what she’d found, and offered him a chance to file an amended brief. His response was to threaten her with sanctions if she was making it up. About a week later, he refiled the brief with the citations corrected, no acknowledgment.
Even in that interaction, courtesy was the right call. The lawyer you’re across from today might refer you a case next year. Zealous advocacy doesn’t require being rude.
Consider giving opposing counsel a chance to fix it if you can. If they decline, or if their response makes you doubt their good faith, report it to the court and consider seeking fees for the time it took you to identify and document the errors. Bring receipts: show the cases that don’t exist or the propositions that aren’t supported. Courts are taking this seriously, and you should ask them to compensate you for the work it takes to clean up someone else’s mess.
What to do if you’re the one who filed the hallucination
If you find a hallucination in something you’ve already filed, or opposing counsel does, take responsibility. That sounds obvious, but judging by how some lawyers handle it in the moment, apparently it isn’t.
The pattern in the catalogued cases is striking. Confronted with a hallucinated citation, lawyers sometimes deny using AI. They blame their associate, their software vendor, or their paralegal. They pivot to attacking opposing counsel’s behavior. Or they insist the cases are real, then quietly correct the brief without explanation a week later. None of this works. Courts can see what happened, and the deflection only makes things worse.
The model for the right response is what Sullivan & Cromwell did when it happened to them: own the mistake, take personal responsibility, apologize, correct the filing, and don’t try to shift the blame. You may still face a sanction, but it will almost always be smaller, and the professional damage almost always less, than what comes from compounding the mistake with denial.
The bottom line on AI legal hallucinations
AI legal hallucination risks are real, but manageable. They can and do happen, but there are a few best practices you can adopt to keep them out of your work and to handle them when they show up in someone else’s.
- Use legal AI for legal work. General chatbots are great for marketing copy, but they’re not built to cite case law. If you’re producing legal work, use a tool grounded in real, current legal authority (yesterday’s case, yesterday’s statute, yesterday’s regulation), with hyperlinked citations and a verification workflow.
- Read the cases. Or at the very least, click the hyperlinks and pull a parenthetical quote from each one. The duty to supervise belongs at the end of the drafting process, on the supervising lawyer, before the document goes out. That has always been true. AI just made it more visible.
- Civility costs nothing. If you find hallucinations in opposing counsel’s filing, give them a chance to fix it before going to the court. If they decline, raise it with the court. If you’re the one who filed the hallucination, take responsibility quickly and cleanly.
Lawyers using purpose-built legal AI tools like Clio Work and Vincent by Clio, where citations are grounded in real law and verification is built into the workflow, will catch most hallucinations, both in their own work before it leaves the office and in the briefs filed against them. Used well, AI is a force multiplier in legal practice. Used carelessly, it’s a sanctions risk. The difference is the supervision step.