Any AI can generate answers. Ours can defend them.
We just launched our first out-of-home campaign across New York. The lines on the billboards are short: “Any AI can generate answers. Ours can defend them.” The argument behind them is one of the most important conversations happening in legal right now.
Lawyers across the industry are running into the same problem: AI tools fabricating citations even with verification features turned on. This is an architecture problem, not a safeguard gap, and it’s the reason we put those lines on a billboard. If you’re in Midtown or the Financial District this month, you may spot them.
In legal work, the gap between sounding right and being right is where careers get made or lost.
The citation looked real. The case didn’t exist
We keep hearing the same story from lawyers at large firms. Someone runs a research query through a legal AI platform with the verification feature turned on, the one specifically marketed to prevent hallucinations and ground results in real case law. The tool returns a citation that looks legitimate. Full reference number, proper formatting, everything you’d expect. The lawyer checks it. The case doesn’t exist, or the reference number belongs to a completely unrelated matter. The AI fabricated the citation while using the very feature designed to stop that from happening.
This isn’t one isolated incident. It’s a pattern. And it points to something structural about how most legal AI is built today.
A verification layer isn’t the same as a verified foundation
The common approach takes a general-purpose language model and attaches a legal research integration to it. The model does the reasoning. The integration is supposed to keep it grounded. But the model generates the answer first, and the research layer checks the work after the fact. When the model produces something plausible enough that the check doesn’t catch it, or when the model generates something confident enough to confuse the verification layer itself, you end up with fabricated citations dressed in real-looking reference numbers.
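To make that wiring concrete, here’s a minimal sketch of a generate-then-verify pipeline. Everything in it is hypothetical (the function names, the model and database objects, the citation pattern); it illustrates the architecture we’re describing, not any particular vendor’s code:

```python
import re

def extract_citations(draft: str) -> list[str]:
    # Pull citation-shaped strings out of text the model already wrote.
    # A fabricated citation with plausible formatting passes this step.
    return re.findall(r"\b\d+\s+[A-Z][\w.]*\s+\d+\b", draft)

def bolt_on_answer(query: str, model, database) -> str:
    # 1. The model answers first, reasoning over general training data.
    draft = model.generate(query)
    # 2. Verification runs after the fact, on the model's own output.
    for cite in extract_citations(draft):
        if not database.lookup(cite):
            draft = draft.replace(cite, f"{cite} [unverified]")
    # If a fabricated citation collides with a real reference number,
    # or a fuzzy lookup accepts a near-miss, the check passes and the
    # fabrication ships looking verified.
    return draft
```

Note where the citation is born: inside the model’s output. Everything downstream can only filter it, never prevent it.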
The issue is structural. You can keep stacking more layers of verification on top, but each one adds cost without solving the underlying problem. If the model is still reasoning over general training data rather than verified legal sources, the failure mode stays the same: you’re paying more to compensate for an architecture that wasn’t built for the work in the first place. The question is what the AI reasons over from the start, and that decision gets made at the architecture level, not at procurement.
That distinction between bolt-on verification and a verified foundation is what our campaign is built around. It’s what we mean when we say “ours can defend them.” Defending an answer requires more than confidence in it. It requires being able to trace it back to the source.
Citations that come from the source, not the model
Clio’s legal AI is built on 25 years of editorially enriched legal data: a corpus of over a billion documents spanning 110 jurisdictions. Our engineering team built Vincent to reason directly over that corpus.
Vincent runs a hybrid pipeline that combines generative AI with rules-based verification. Citations link back to verified primary sources, and lawyers can click straight through to read them in the product. The system doesn’t generate a citation and then run it through a fact-check. The citation originates from the source material itself. That means when a lawyer gets an answer from Vincent, the references trace back to real, editorially maintained case law they can verify in a click, rather than to something the model invented and a verification layer failed to catch.
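As a rough sketch of that inversion, with the same caveat that every name here is illustrative rather than Vincent’s actual implementation: retrieval happens before generation, so citations are carried over from verified sources instead of being produced by the model:

```python
def source_first_answer(query: str, model, corpus) -> dict:
    # 1. Retrieve from the verified, editorially maintained corpus first.
    sources = corpus.search(query)
    # 2. The model reasons over the retrieved text, not over whatever
    #    its general training data happens to suggest.
    answer = model.generate(query, context=[s.text for s in sources])
    # 3. Citations are attached from the retrieval results, each with a
    #    link back to the primary source a lawyer can open and read.
    #    Nothing citation-shaped is invented at this step.
    return {
        "answer": answer,
        "citations": [{"reference": s.reference, "url": s.url}
                      for s in sources],
    }
```

The difference between the two sketches is where the citation originates: in the first, it comes out of the model and has to be caught if it’s wrong; here, it enters the answer already attached to a real document.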
Foundation models are getting cheaper and more capable every quarter. The models themselves are commoditizing. What makes a legal AI platform meaningfully different from its competitors is the quality of the legal knowledge it reasons over and how tightly that knowledge is integrated into the system. You can’t get there by plugging into someone else’s database after the model has already formed its answer.
Clio’s platform is built on editorially maintained, legally verified data across the full stack. The infrastructure is designed for large organizations. And the system connects the practice and business of law so the AI has the operational context to produce answers worth relying on.
The sorting has already started
The firms and legal departments making platform decisions today are asking harder questions than they were a year ago. The initial wave of enthusiasm for legal AI has run into a more grounded reality: these tools are now in the hands of people trained to question, verify, and challenge every claim. As a result, pilots are more rigorous and evaluations more exacting.
The question is no longer “does it produce output?” but “can I defend this output in front of a judge?” or, more bluntly, “will this get me disbarred?”
That is the standard the market is converging on. Over the next several years, it won’t be set by vendors or hype cycles. It will be set by lawyers applying the same scrutiny they always have. Testing the reliability of claims isn’t new. It’s the core of the profession, and it’s the standard our platform is built to meet.