Writing a good prompt is a skill you already have. You’ve been doing it in one form or another throughout your career. When you hand off a research task to a junior associate, you identify the issue, note the jurisdiction, specify how deep the analysis should go, and describe what the finished work product should look like. That’s prompting.
Most legal professionals don’t write prompts that way, though. They use AI like a search engine, typing in a few keywords or a half-formed question and hoping something useful comes back. The difference comes down to context. A colleague at your firm picks up details from case files, conversations, and the way your firm operates. An AI tool doesn’t always have that, which means the quality of what you put in does most of the work.
Clio’s 2025 Legal Trends Report found that 79% of legal professionals now use AI in their work, but most rely on general-purpose tools rather than legal-specific systems. That puts a lot of weight on the prompt. A handful of repeatable habits can dramatically improve the quality of what comes back, whether you’re conducting legal research, drafting for clients, reviewing contracts, or thinking through strategy. Below, we’ll walk through the legal prompt engineering best practices lawyers need to know, the pitfalls to watch out for, and where legal prompt engineering is heading as AI tools continue to evolve.
What is legal prompt engineering?
Legal prompt engineering is the practice of writing clear, context-rich instructions for AI tools so they produce accurate, relevant, and usable outputs. It involves specifying the role, objective, jurisdiction, format, and any constraints.
A useful way to think about AI is as a new associate starting out at your firm. They’re intelligent and efficient, but they have no familiarity with your clients, your matters, or how your firm tends to approach a problem. Provide that context up front, and the first draft is something you can build on. Skip it, and you spend more time rewriting the output than you would have spent producing it yourself.
You don’t need a technical background for this. It calls for the same attention you already bring to a brief or a closing argument—knowing the audience, the ask, and what a strong answer looks like before you start writing. And increasingly, this kind of competence isn’t optional. The ABA’s Formal Opinion 512 makes clear that lawyers must understand the capabilities and limitations of any AI tool they rely on. That starts with knowing how to use it well.
The anatomy of a great legal prompt
AI prompting for lawyers doesn’t need to follow a rigid formula, but having a framework to fall back on makes the work faster and the results more consistent. The strongest legal prompts tend to share six components, and once you’ve used them a few times, you’ll reach for them without thinking.
- Role. Tell the AI who it should be. “You are a litigation associate who specializes in employment discrimination in California” produces a sharper output than a prompt with no role at all. Naming the role shapes the tone of the response, the depth of the analysis, and how closely the output stays to your practice area.
- Objective. State exactly what you want done, using specific verbs like “draft,” “summarize,” “compare,” “analyze,” or “outline.” Vague verbs like “review” or “check” leave the AI to decide what you meant, and the answer reflects that uncertainty.
- Context. Provide the jurisdiction, relevant facts, case posture, governing law, client’s objectives, and any constraints that matter for the task. The less you give the AI, the more it fills in on its own. That’s where hallucinations and invented citations tend to surface.
- Format. Describe the form you want the answer to take, whether that’s a table, a numbered list, a memo, or a short summary for a client. This is one of the quickest ways to get an answer you can use.
- Constraints. Set boundaries around the response. That might mean a word count, sources to include or exclude, issues to prioritize, or specific questions you want flagged. Telling the AI what not to do is as useful as telling it what to do, because without guardrails it tends to cover everything broadly rather than anything in particular.
- Iteration instruction. Ask the AI to identify gaps, uncertainties, or areas where more information would improve the analysis. This turns a prompt into a conversation rather than a single request.
Not every prompt needs all six pieces. A quick research question might only call for an objective, context, and format. For memos, briefs, or anything where the stakes are higher, working through the full list will consistently give you a stronger first draft to build from.
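To see how the pieces fit together, here’s what a full prompt might look like for a hypothetical wage dispute: “You are an employment litigation associate practicing in California. Draft an internal memo analyzing whether our client’s on-call scheduling policy triggers reporting time pay obligations. The client is a retail chain with roughly 200 hourly employees, and no collective bargaining agreement applies. Structure the memo as a question presented, a short answer, and an analysis section. Keep it under 800 words and rely only on California authority. Close with a list of facts we’d need from the client to tighten the analysis.” Role, objective, context, format, constraints, and an iteration instruction, all in a few sentences.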
5 legal prompt engineering best practices that improve every prompt
The legal professionals who get consistently useful results from AI tools share a habit: their legal prompts read less like questions and more like assignments. A few principles do most of the work.
1. Be specific, and always name the jurisdiction
“Is this non-compete enforceable?” will come back as a generic overview that could apply to any company in any state. Rewrite the question as “analyze the enforceability of the non-compete clause in Section 7.2 under Texas law, factor in the 2024 FTC rule developments, and flag any provisions unlikely to hold up,” and the answer becomes usable.
Jurisdiction is the piece most lawyers skip because it’s obvious to them. You’ve been thinking about Texas employment law all morning. The AI hasn’t. Leave it out and the response tends to blend federal law with a handful of state approaches, none of which may match your client’s situation. A simple gut check: if the same prompt could apply to a range of different matters without changing a word, it’s too broad.
2. Prompt in layers
It’s tempting to pack everything into one prompt and hope for a polished result, but that’s rarely where the best output comes from.
Start broad. Ask the AI to lay out the key issues or map the statutory framework. Follow up with a prompt that carries that earlier context forward, and use a third to refine the output for your specific use, whether that’s a section of a brief or a set of talking points for a call. Think of it less like a single search query and more like a conversation with a colleague who gets smarter about your case with every exchange.
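For a hypothetical contract dispute, that sequence might look like: “Identify the key legal issues raised by this demand letter under New York law,” followed by “For the notice-and-cure issue you identified, analyze whether our client’s March 3 email satisfies the notice requirement in Section 11 of the agreement,” and finally “Turn that analysis into three plain-English talking points for a settlement call with opposing counsel.” Each prompt builds on the last, and the output narrows with every step.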
3. Ask for citations and sources
Tell the AI that any legal conclusion needs to be supported with specific authority, such as case law, statutes, regulatory guidance, or whatever applies. This won’t make the output accurate on its own. AI can hallucinate citations that look authoritative but don’t exist, especially with general-purpose tools, and independent verification of every citation is still essential. It does, however, force the AI to name its sources and give you somewhere to start when you’re checking the work.
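A single sentence in the prompt usually does it, something like: “Support every legal conclusion with a citation to the controlling statute, regulation, or case, and say so explicitly where no direct authority exists.” The last clause matters, because it gives the AI room to admit a gap rather than fill it with an invented cite.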
Some legal-specific AI tools like Clio Work are grounded in the law and surface citations automatically with links to the underlying authority, so you can check what you’re reading and move faster to the analysis.
4. Ask the AI to show its reasoning
Ask the AI to “think step by step” or “walk through the analysis before reaching a conclusion.” The technique has a name—chain-of-thought prompting—and research consistently shows it produces more accurate, better-reasoned output.
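In practice, that can be one added line: “Before stating a conclusion, walk through each element of the claim one at a time and explain whether the facts I’ve provided satisfy it.” There’s a side benefit, too: when the reasoning is written out, a flawed step is far easier to spot than it is in a bare conclusion.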
5. Build reusable prompt templates
The prompts that work for you once will usually work again. When you land on one that produces a strong result, save it. Remove any case-specific details, leave the structure, and you have a template the next matter can plug into.
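A hypothetical deposition-summary template, for example, might read: “You are a [practice area] litigator in [jurisdiction]. Summarize this deposition of [witness], identify statements that contradict [key pleading], and flag admissions relevant to [pending motion]. Output as a table with page and line citations.” Fill in the brackets and the prompt is ready for the next matter.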
This is where individual skill turns into firm capability. A shared prompt library means good output doesn’t depend on whoever happens to be at the keyboard, and newer legal professionals get the benefit of prompts that have already been tested against real work. It’s also where the time savings start to add up, because you stop rebuilding the same prompt from scratch every week.
Common prompting mistakes to avoid
Getting better at legal prompt engineering means catching the habits that undermine your output, and most are easy to correct once you can spot them. The mistakes below are where the majority of the frustration comes from.
Trusting output without verification
AI can produce confident and well-formatted citations that are completely wrong. It can invent case names with plausible reporter references, attribute holdings to courts that never issued them, and present overruled authority as good law, all without any indication that something is off. This is especially true of general-purpose AI tools that aren’t grounded in the law or connected to your matters.
Researchers have documented over 1,300 incidents of AI-generated hallucinations appearing in legal filings worldwide. The pattern has repeated itself hundreds of times since Mata v. Avianca, when two New York lawyers were fined $5,000 for submitting a ChatGPT-generated brief citing cases that didn’t exist.
More recently, attorneys from Ellis George LLP and K&L Gates were ordered to pay $31,100 of opposing counsel’s legal fees after a federal judge found that 9 of 27 citations in their 10-page brief were AI-generated errors, including two entirely fabricated cases. Legal-specific tools reduce this risk, but no AI output is exempt from scrutiny. Every source, case, and statute needs to be independently verified. That verification is non-negotiable, and it sits at the core of your duty of competence under the ethics of AI in law.
Inputting confidential client data into consumer tools
Consumer AI tools like ChatGPT may retain your inputs, use them for model training, or store them in ways that create confidentiality risks your client never consented to. It’s especially risky when you’re using a free version of a consumer AI tool, as seen recently in Morgan v. V2X Inc. Anything involving client information, whether that’s case facts, documents, or communications, belongs in legal-specific AI software with clear data-handling policies and zero data retention commitments.
Overloading a single prompt
Stacking multiple tasks into one request (“summarize this deposition, identify credibility issues, compare it to the plaintiff’s affidavit, and draft follow-up questions”) tends to produce a response that does none of them particularly well. Break complex work into sequential prompts that build on one another. The output is sharper, and you keep more control over what the AI is actually doing at each step.
Treating the first output as the final draft
Even a well-constructed prompt produces a starting point, not a finished product. The legal professionals who get the most out of AI treat the first response as something to challenge, refine, and iterate on, not something to paste into a document. Assuming the first draft is the finished draft is where the biggest AI-related mistakes often originate. The courts haven’t been shy about the consequences, as mentioned above.
Prompt examples by legal task
The clearest way to see what a strong legal AI prompt looks like is to put it next to a weak one. Below are five common legal tasks where the difference shows up immediately in the quality of the output. Each pair applies the same principles from the framework above to work you do each week.
Discovery review
A weak prompt like “summarize this deposition” will get you exactly that: a chronological walk-through that treats every page with roughly equal weight and has no connection to what’s actually at stake in the matter.
The AI doesn’t know you’re preparing a motion for summary judgment, doesn’t know what the plaintiff alleged in the verified complaint, and has no way to separate a throwaway answer from an admission that could decide the case. You end up re-reading the transcript anyway to pull out what matters.
A stronger prompt: “Summarize this 80-page deposition of the plaintiff’s HR manager. Identify statements that contradict the verified complaint, flag admissions relevant to our motion for summary judgment, and note any lines of questioning we should pursue in follow-up. Output as a table with page and line citations.”
This version gives the AI a theory of the case, a clear set of priorities, and a usable format. Instead of a summary you have to mine for insights, you get a working document tied to the strategy. After human review, it’s something you can carry into a summary judgment brief or a deposition follow-up outline.
Legal research
A prompt like “What is the law on wrongful termination?” will likely return a broad, jurisdiction-agnostic overview that reads like the first page of a legal encyclopedia entry. It might be accurate in a general sense, but it won’t help you advise a client or draft a motion.
Compare that to: “Identify the elements required to establish a whistleblower retaliation claim under California Labor Code § 1102.5, including recent appellate decisions from 2023–2026 that have refined the burden of proof. Cite specific cases and note any circuit splits or unresolved questions.”
That prompt names the statute, the jurisdiction, the timeframe, and the type of authority you need. It also asks the AI to flag uncertainty, so you get an honest starting point rather than a confident summary you have to verify from scratch.
Contract review
“Review this contract” is the prompting equivalent of handing a stack of papers to a new associate and walking away. The AI doesn’t know what you’re worried about, so it gives you a little of everything and not enough of anything.
This works better: “Analyze the indemnification clause in Section 9 of this vendor agreement. Identify whether it is mutual or one-sided, whether it includes a liability cap, and how it compares to market-standard SaaS vendor indemnification language. Flag any unusual or missing provisions.” That tells the AI exactly where to focus, what to evaluate against, and what to call out.
Document summarization
“Summarize this document” will get you a summary. It will also usually be too long, organized however the AI sees fit, and built in a way that buries the details you actually care about under background you already know.
Try this instead: “Provide a structured summary of this 47-page lease agreement organized by: (1) key parties, (2) term and renewal provisions, (3) financial obligations, (4) termination conditions, (5) unusual or non-standard clauses. Output as a table.”
Format instructions turn summarization into an output that’s genuinely useful, especially when you’re comparing multiple documents side by side or putting together a clean overview for a client.
Client communication
“Write an email to my client” gives the AI no sense of who the client is, what they need to hear, or how they should feel after reading it. The result tends to be generic, overly formal, and missing the context that makes communication feel personal.
A prompt that works: “Draft a client-facing email explaining the discovery timeline for our employment discrimination case in plain English. Tone should be reassuring but honest about potential delays. Keep it under 200 words.”
Tone, length, audience, and subject are all included, which means the AI delivers a first draft you can edit rather than one you have to rewrite.
Where prompt engineering for lawyers is heading
Everything in this guide rests on one way of working with AI. You write detailed instructions, the AI follows them, and you get a result. That model produces good output when the prompt is well-constructed, but it also puts most of the work on the user. Three developments are reshaping what prompting looks like for lawyers, and each one shifts more of that work onto the system.
Matter-aware AI
Matter-aware AI changes where the conversation begins. When the tool already has access to your case file (meaning the documents, the parties, the jurisdiction, and the timeline), you stop having to rebuild context with every prompt. A 200-word setup becomes a question you could ask a colleague: “What are the strongest arguments for summary judgment based on this matter?” The AI can answer because the context is already there.
Law-grounded AI
General-purpose tools draw on whatever is in their training data, which is why they may fabricate cases, attribute holdings to the wrong courts, or present overruled authority as good law. Law-grounded AI is built differently. It pulls from verified legal sources and ties every conclusion back to authority you can open and read.
Agentic AI
Agentic AI takes a goal and determines the steps on its own, instead of waiting for step-by-step instructions. “Build a defense strategy” or “find everything that could destroy this deal before signing” become workable prompts, because the AI can break a complex objective into a sequence of actions and execute them against your case data.
Our research shows that 84% of queries in Clio Work are already submitted as freeform, goal-based requests rather than structured commands, which suggests lawyers are naturally gravitating toward describing outcomes rather than dictating process.
Clio Work brings all three together in one place. It connects to Clio Manage and pulls directly from the matter it’s working on, including documents, notes, communications, tasks, and deadlines, so you don’t need to paste or re-explain context. With the matter connected, Clio Work can also recommend AI actions based on what’s actually happening in the case. Instead of figuring out what to ask for, you’re choosing from next steps the system has already identified. Much of the prompting falls away as a result.
Those outputs sit on a legal research library of over a billion verified sources and are grounded through Cert, Clio’s proprietary citator that confirms whether authorities remain good law. Clio Work has been independently tested as 3.67 times more accurate than leading general-purpose AI tools, giving firms confidence that AI outputs are reliable enough for client work and court filings. For firms evaluating where to invest, the advantage is that nothing is bolted on. The AI, the matters, the research, and the workflows all live in the same place.
Prompting skills aren’t going away. The lawyers who can name a clear objective, set the constraints that matter, and judge the work that comes back will keep getting the best results. What changes is the form the skill takes. Today, it means writing a careful set of instructions. Soon, it will mean knowing what to ask for, and trusting the system to handle the rest.
Do lawyers need formal prompt engineering training?
Yes, though not in the way the question usually implies. Prompt engineering for lawyers is a skill that improves with practice. The lawyers getting the strongest results today learned by doing. They tested prompts against real legal tasks and adjusted based on what came back.
Formal training still has its place. Programs like Clio’s free Legal AI Fundamentals Certification help when firms are rolling AI out across teams with varying experience levels, because they create a shared baseline to build from. From there, the next step is usually a prompt library, a shared bank of tested prompts for the tasks your team handles repeatedly. It’s how the skills individual lawyers pick up compound into something the whole firm can use.
The best prompt is the one you don’t have to write twice
Getting good results from AI doesn’t take cleverness. The skills that make a great prompt engineer are the same ones you’ve been sharpening since you entered the profession: precision with language, structured thinking, defining clear objectives, and knowing what a good answer looks like before you ask the question. Put them to work on your prompts and the quality of what comes back changes immediately.
The true payoff comes from doing this across a team. Pick the tasks that come up most often in your practice, build legal prompts you can reuse, and share what works with the people around you. Over time, those small investments build firm-wide capability, so good AI output doesn’t depend on whoever happens to be at the keyboard.
To keep learning, explore the rest of Clio’s AI for Lawyers series for practical guides on adoption, ethics, and tools. Or, enroll in the free Legal AI Fundamentals Certification to start building a foundation you can take back to your team.
What is legal prompt engineering?
Legal prompt engineering is the practice of crafting clear, context-rich instructions for AI tools to produce accurate, relevant outputs for legal tasks. It involves specifying roles, objectives, jurisdiction, format, and constraints. These are skills lawyers already use when briefing colleagues or defining case strategy.
How do lawyers write effective AI prompts?
Effective AI prompts include a defined role for the AI, a specific objective with action verbs, jurisdictional context, desired output format, and explicit constraints. Iterative prompting (starting broad, then refining) produces better results than trying to get everything right in a single attempt.
What are the best AI prompts for legal research?
The best AI prompts for legal research specify the legal issue, applicable jurisdiction, relevant time period, and type of authority needed. They ask for cited sources and step-by-step reasoning, and they instruct the AI to flag gaps or uncertainties. Always verify citations independently.
What mistakes should lawyers avoid when prompting AI?
The biggest mistakes when prompting AI include vague instructions (“review this contract” without specifying what to review for), omitting jurisdiction, trusting output without verification, inputting confidential data into consumer tools, and overloading a single prompt with too many tasks.
Do lawyers need to learn prompt engineering?
Yes. Prompt engineering is increasingly viewed as part of the duty of technological competence under Rule 1.1. It improves with practice, not certification alone. Formal training and firm-wide prompt libraries help build consistent capability across a team.
Practice the future of law today
With Clio Work, you go beyond generic chatbots and use AI that understands the context of your matters and delivers precise, cited legal research, analysis, and drafting that moves your cases forward.
Discover Clio Work