Courts Are Starting to Pick AI Tool Winners: Breaking Down Morgan v. V2X Inc.

AI Summary

A Colorado federal court has set a new standard for AI use in litigation. In Morgan v. V2X Inc., Judge Braswell ruled that pro se litigants can claim work product protection over AI-generated materials, but that no party may upload confidential information into an AI tool unless the provider is contractually prohibited from storing or using inputs to train or improve its model and from disclosing them to third parties beyond what is necessary to deliver the service. The ruling gives lawyers a practical checklist for justifying which AI tools they use in litigation.

A new ruling out of Colorado is doing something we haven’t seen a court do this clearly before. It’s drawing a line between which AI tools are appropriate for handling confidential information in litigation and which ones aren’t. As the court put it, “AI is forcing litigants in courts to confront difficult questions about how and to what extent longstanding protections will apply when parties use AI to assist them in the litigation process.”

Morgan v. V2X Inc. is an employment dispute with a pro se plaintiff and a defense contractor as the former employer. On its face, it’s an alleged racial discrimination case. But the discovery disputes that followed have produced one of the more novel judicial opinions on AI, confidentiality, and work product protection that we’ve seen so far.

The opinion comes from Judge Braswell in Colorado, and that matters. Judge Braswell is one of the leading judges nationwide on artificial intelligence, working alongside Judge Schlegel out of the Louisiana appellate courts on a consortium of dozens of judges who are thinking through how AI will affect courts and litigants. This isn’t a judge encountering AI for the first time. This is someone who has been thinking carefully about these questions.

The facts behind Morgan v. V2X Inc.

The defendant, V2X, sought a protective order before discovery even began. They asked the court to require certain protections over information labeled by parties as confidential. Those protections included deletion requirements and accountability measures for the plaintiff.

The court granted a standard protective order. If information is labeled confidential, treat it with the agreed degree of protection. But then the judge noticed the plaintiff was using AI to assist with litigation preparation. The defendant asked for the protective order to be amended to exclude the plaintiff’s use of AI tools entirely.

The plaintiff pushed back, arguing among other things that disclosing which AI tools they used could itself reveal litigation strategy. Judge Braswell split the difference, and the resulting order addresses three questions that matter for every lawyer using AI in litigation. 

Pro se litigants and AI work product protection

The first question was straightforward. Can a pro se litigant’s use of generative AI qualify for work product protection?

Judge Braswell said yes. She cited the Heppner ruling out of the Southern District of New York and Warner out of Michigan, both of which we’ve discussed before, and carried their reasoning forward into this circuit.

Her reasoning is worth reading closely. The court wrote that “The importance of applying these [work product] protections to pro se litigants is magnified in the context of AI—one of the most powerful knowledge tools ever to become available to the masses. This is because pro se litigants are forced to act as both party and advocate, simultaneously. And for the first time in history, widespread access to powerful technology may make that dual role,” meaning the pro se litigant acting as both party and lawyer, “surmountable.” The court also noted that Rule 26(b)(3) of the Federal Rules of Civil Procedure, which governs work product protection, does not “condition work product protection over AI materials on the involvement of counsel…”

In plainer terms, denying work product protection to self-represented litigants simply because they used AI instead of a lawyer is, in this judge’s view, unconscionable. AI is one of the most powerful knowledge tools ever to become available to the public, and pro se litigants who are forced to act as both party and advocate simultaneously should be able to use it with the same protections.

The court also made an important observation about the nature of AI itself. Unlike a Google Doc or a standard piece of software, AI closely resembles “the kind of confidential, strategy-laden iterative work product that Rule 26(b)(3) was designed to protect.” When you interact with an AI tool, you’re sharing strategy, testing arguments, and developing your case theory in a way that looks a lot more like talking to a lawyer than putting something into a Google Doc.

There’s a useful analogy here. Thousands of lawyers and probably millions of litigants use Gmail as their email provider. We know that Google trains on that information to some degree, but we still extend a reasonable expectation of privacy to email. In dicta, Judge Braswell questions whether it’s time to extend that same expectation to AI tools, at least for pro se litigants, and potentially more broadly in the future.

The new standard for AI and confidential information

This is where the Morgan v. V2X Inc. opinion gets practical. Judge Braswell’s amended protective order sets a clear standard for when confidential information can be used with AI tools.

The order reads, in part: “No party or authorized recipient may input, upload, or submit CONFIDENTIAL Information into any modern artificial intelligence platform, including any generative, analytical, or large language model-based tool (“AI”), unless the AI provider is contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service…”

Anyone who has been following this space has probably heard the saying, “If you’re not paying for a product, you aren’t the customer. You are the product.” Judge Braswell is essentially translating that principle into a legal standard. If you’re using the free version of ChatGPT, Gemini, Claude, or any other tool, the information you provide is likely not kept confidential. A paid version with the right settings may be a different story.

The judge framed this as a natural extension of the original protective order. Don’t share confidential information with third parties where you can’t control its distribution, disclosure, or deletion. Now apply that same logic to AI tools, and look for the same controls. And these obligations carry forward through any AI tool you use. If you switch tools mid-case, the same standard applies to the new one.

There is one thing the court could have done better here. In the pro se litigant summary at the top of the opinion, the judge wrote that “you may not upload, input, or submit Confidential Information into any mainstream AI tool like standard ChatGPT, Claude, Gemini, or similar platforms.” That summary could leave the impression that pro se litigants can’t use these tools at all. That’s not actually what the order says. You may use ChatGPT, Claude, or Gemini if you can demonstrate that the proper safeguards are in place. A paid subscription with the right settings and contractual terms might meet this standard. We wish the pro se summary had made that clearer.

The access to justice question

Judge Braswell acknowledged a tension in footnote five of the opinion. Lawyers are using sophisticated AI tools with strong confidentiality protections, and spending real money to do so. Pro se litigants may not be able to afford those same tools. The judge doesn’t resolve that conflict, but she names it directly: confidentiality protections shouldn’t be so expensive that they’re out of reach for people who can’t afford a lawyer.

This matters because the Morgan v. V2X Inc. case didn’t end with the amended order. Just last week, the defendant V2X filed a notice stating that the plaintiff’s disclosure of their AI use may be insufficient. The plaintiff disclosed “Google” as their AI provider and included Google’s standard terms of service. The defendant argued that this isn’t specific enough. Google has Gemini, NotebookLM, and several other AI products, each with different capabilities and different data handling practices. The defendant wants to know which specific features the plaintiff is using and which confidential documents are part of that use.

This is where a specificity requirement is emerging. It may not be enough to name the vendor. You may need to demonstrate how you’re using the tool and why your reliance on it carries a reasonable expectation of confidentiality protection.

What Morgan v. V2X Inc. means for your firm

Morgan v. V2X Inc. gives lawyers a practical framework for thinking about AI and confidential information in discovery. Here are the takeaways.

  • Know what your AI tool does with your data. Read the terms and conditions. Check the settings. If your tool or its settings allow it to train on confidential data or to share confidential data with third parties beyond what’s necessary to provide the service, it does not meet the standard Judge Braswell set. A paid subscription with data protection guarantees is a different product from a free public chatbot, and courts are now drawing that distinction explicitly.
  • Be ready to show your work. If opposing counsel challenges your use of AI, you need to be able to demonstrate that the tool you used is contractually prohibited from training on confidential data and from disclosing it to third parties beyond what’s necessary to provide the service. Document which tools you use, how you use them, and what settings you have in place; a simple running log, sketched after this list, is one way to do it.
  • Name the specific tool, not just the vendor. The V2X defendant’s challenge to the plaintiff’s disclosure tells us where this is going. “Google” isn’t specific enough. “Gemini with data protection enabled under a paid Workspace account” is closer to what courts will expect.
  • Expect more of these fights. AI and discovery disputes are going to multiply. Until there’s a common understanding of what these tools can do and how they handle data, specificity of use will be litigated again and again.
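
To make “show your work” concrete, here is a minimal sketch of what a running AI-use log might look like. Everything in it is illustrative: the file name, field names, and example values are our assumptions, not a schema the Morgan order prescribes. The point is simply that a dated, tool-specific record (vendor, product, plan, settings, documents involved) is easy to keep as you go and hard to reconstruct after a challenge arrives.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file; nothing in the Morgan order prescribes this format.
LOG_FILE = Path("ai_use_disclosure_log.json")

def log_ai_use(vendor, product, plan, settings, documents, purpose):
    """Append one dated, tool-specific record of AI use to a JSON log."""
    entry = {
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "vendor": vendor,        # "Google" alone was challenged as too vague in V2X
        "product": product,      # the specific tool, e.g., "Gemini" vs. "NotebookLM"
        "plan": plan,            # free tier vs. paid subscription matters here
        "settings": settings,    # the data-handling controls actually in place
        "documents": documents,  # which confidential materials, if any, were involved
        "purpose": purpose,      # how the tool was used
    }
    records = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    records.append(entry)
    LOG_FILE.write_text(json.dumps(records, indent=2))
    return entry

if __name__ == "__main__":
    # Example values only, loosely modeled on the disclosure fight in V2X.
    log_ai_use(
        vendor="Google",
        product="Gemini",
        plan="paid Workspace account",
        settings={
            "training_on_inputs": "contractually prohibited",
            "third_party_disclosure": "limited to service delivery",
        },
        documents=["deposition transcript marked CONFIDENTIAL"],
        purpose="summarizing testimony for case preparation",
    )
```

However it’s kept, the record should answer the two questions the V2X filings keep circling back to: which specific tool touched confidential material, and under what contractual data-handling terms.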

For lawyers who may find pro se litigants’ use of AI frustrating, it’s worth stepping back and considering what Judge Braswell is really saying. People who couldn’t afford legal advice now have access to powerful tools that can help them participate more meaningfully in the justice system. That’s not a problem to solve. That’s a development worth taking seriously, and one that may ultimately make the system work better for everyone.

Lawyers who use purpose-built legal AI tools with appropriate confidentiality controls are well positioned. The question isn’t whether you can use AI in litigation. It’s whether you can demonstrate that your tool meets the standard. For firms already using tools like Clio Work, that answer is yes.
