Legal professionals are increasingly tapping into AI technology to work smarter and automate tedious tasks, such as legal research and document review.
But as AI becomes embedded in everyday legal work, questions around AI and legal ethics are becoming impossible to ignore. For lawyers, those considerations aren’t abstract; they’re grounded in specific professional obligations that govern competence, confidentiality, supervision, and candor.
In this guide, we’ll cover the professional duties lawyers must consider when using AI in law, the four key ethical risks it introduces, and practical steps for integrating AI into your practice responsibly.
Want AI that actually understands your cases? Clio Work delivers cited research, document analysis, and case strategy grounded in the law. Explore Clio Work and book a demo.
The state of AI ethics in law (2026)
AI adoption in the legal profession is accelerating. According to the latest Legal Trends Report, 79% of legal professionals now use artificial intelligence in their firms. Yet many lawyers remain uncertain about where their professional obligations begin and end when it comes to AI.
Bar associations across the country, such as California’s, are issuing guidance, and courts are beginning to weigh in on the use of AI in legal work. For lawyers, staying ahead of this landscape is a professional responsibility.
Ethics has always been central to the legal industry. The ABA Model Rules of Professional Conduct have informed lawyer conduct since 1983, with the ultimate purpose of protecting the public and maintaining the integrity of the legal profession. Lawyers are also bound by their state bar’s ethics rules, jurisdiction-specific AI guidance, and applicable court rules. Several of those rules speak directly to how lawyers must approach AI, even if they don’t mention it by name.
Legal ethics and AI: How the ABA Model Rules apply to AI use
The ABA Model Rules don’t reference AI explicitly, but several rules speak directly to how lawyers must approach it. Here’s how the most relevant ones apply:
- Rule 1.1 – Competence: Competence requires lawyers to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. In practice, this means lawyers who use AI tools must understand how those tools work, including their limitations, well enough to evaluate their outputs critically.
While most lawyers focus on the risks of technology, the rule explicitly requires them to consider the benefits as well. Benefits of AI for lawyers may include greater efficiency, faster turnaround, and more comprehensive coverage of a topic, all of which are of particular interest to clients. Lawyers can’t ignore these benefits simply to shield themselves from risk. Ultimately, what the rule requires is a cost-benefit analysis.
- Rule 1.6 – Confidentiality: Lawyers must take reasonable precautions to prevent inadvertent or unauthorized disclosure of client information. Before inputting any client data into an AI tool, lawyers must understand how that tool stores, processes, and shares information. Many general-purpose AI tools, including free or consumer-grade products, are not designed to meet legal confidentiality standards.
- Rules 5.1 / 5.3 – Supervision: Partners and supervising lawyers are responsible for ensuring that subordinates, and the non-lawyer tools and systems they use, comply with professional obligations. This means AI outputs must be reviewed, not merely accepted. A lawyer can’t delegate professional judgment to an algorithm. Relying on AI-generated work product without meaningful review likely falls short of this standard.
- Rule 3.3 – Candor toward the tribunal: Lawyers have a duty not to make false statements of law or fact to a court. Submitting AI-generated legal research without verification, including citations that may be fabricated (a phenomenon known as “hallucinations”), can directly violate this rule. Several courts have already sanctioned lawyers for filing briefs containing AI-generated but nonexistent case citations.
For a deeper look at how to stay compliant with these rules as AI evolves, see our companion guide on AI compliance for lawyers.
The four key ethical risks of AI in law
AI introduces meaningful ethical risks across four dimensions. Understanding those risks is the first step toward managing them.
1. Bias and fairness
AI technology relies on algorithms to analyze vast amounts of data and uncover trends. If the data it draws from is biased, the results it produces will be biased too. That matters in any industry, but it is especially damaging in the legal profession, where it can undermine the principles of justice and equal treatment under the law.
When lawyers, or any legal professionals, rely on biased information, the result can be unjust outcomes and compromised legal representation. When a decision has the potential to change the trajectory of people’s lives, such bias is simply unacceptable.
One area where this plays out in practice is recidivism risk assessment in bail and probation hearings. Some courts use algorithmic risk-scoring models to estimate how likely a defendant is to reoffend. These models draw on historical data, and if that data reflects a district with a history of racially biased enforcement, the algorithm can perpetuate those same biases, compounding injustice rather than reducing it. We cover this in more detail in our piece on AI use in courtrooms.
Another type of bias that might be invisible to most users is how large language models weight information. LLMs tend to prioritize recent sources and topics that appear frequently in their training data. This means recent changes in the law may be underrepresented, either absent from the model entirely or given less weight simply because fewer sources have covered them yet. Smaller jurisdictions may face a similar problem: Their legal information can be overshadowed by similar but distinct laws from larger jurisdictions. A lawyer in Montana, for example, may find that ChatGPT serves up California-style legal writing purely because the model was exposed to more California-based examples in training.
Before using AI, lawyers must understand that bias may exist in any AI system and critically examine the resulting work product for it. When evaluating a tool, ask the vendor how the model was trained, what data sources it draws from, and what steps have been taken to reduce bias. Prefer tools built specifically for legal use, where training data and outputs are designed with the profession’s fairness obligations in mind.
2. Accuracy and reliability
Many lawyers remain cautious about AI outputs, and for good reason. AI models can generate confident, well-structured responses that are factually wrong, citing cases that don’t exist, misquoting statutes, or drawing conclusions that don’t hold up to scrutiny. This is especially true when using general-purpose AI tools such as ChatGPT.
The problem is compounded by the fact that AI errors can be difficult to spot. A fabricated citation looks identical to a real one. A mischaracterized holding reads like a legitimate summary. This is why lawyers should treat AI outputs as a draft requiring review, verify all citations against primary sources, and never submit AI-generated research to a court or client without independent confirmation.
Legal-specific AI tools can meaningfully reduce this risk. Unlike general-purpose tools, they disclose which sources their models draw from, so lawyers can confirm a tool is grounded in verified legal sources applicable to their practice area and jurisdiction. In addition, solutions such as Clio Work are matter-aware (meaning they understand the context of your specific cases) and built with verification workflows that link every output back to its cited authority. That combination means fewer errors in the first place and less time spent wondering whether a citation is real. With sources linked directly to every output, verification is a click rather than a research project.
A Legal Professional's Guide to Not Getting Flagged for AI Use
Thinking about using AI in your practice? Learn how to do it responsibly, while staying compliant, protecting your reputation, and avoiding disciplinary risk.
Watch now
3. Privacy and confidentiality
AI systems often rely on vast amounts of data, including highly sensitive and confidential information, and may store personal and conversation data. For lawyers, this creates a direct tension with their confidentiality obligations under Rule 1.6.
Before using any AI tool with client information, lawyers should understand how that tool stores, processes, and shares data. General-purpose tools such as ChatGPT, for instance, aren’t designed to meet legal confidentiality standards. Their terms of service may allow conversation data to be used for model training, and they offer no attorney-client confidentiality protections.
The consequences of getting this wrong are no longer hypothetical. In United States v. Heppner, a defendant’s use of a free public AI chatbot to generate defense strategies was ruled unprotected, in part because the tool offered no meaningful confidentiality protections. While the facts involved a client rather than a lawyer, the case underscores how much turns on the confidentiality controls of the tool being used.
A practical rule of thumb: If you wouldn’t email the information to an unvetted third party without a confidentiality agreement, don’t enter it into an AI tool without first understanding its data handling practices.
4. Responsibility and accountability
When AI produces an error in a legal context, there is rarely ambiguity about who is responsible: The lawyer will be held accountable. Professional accountability can’t be delegated to an algorithm, and regulators are looking to codify that expectation into law.
On January 30, 2026, the California Senate passed SB 574, which would require attorneys to verify the accuracy of any AI-generated material, correct hallucinated outputs, and personally read and verify every citation in court filings. The bill is not yet law, but it reflects a broader shift: The legal profession is moving toward explicit statutory accountability for AI use, not just implied professional obligations.
At the end of the day, lawyers are responsible for their own work product and their clients’ interests. While there are many AI use cases in law, it isn’t a replacement for a lawyer’s training and judgment.
Free AI Course
Build your legal AI expertise with our free Legal AI Fundamentals Certification, where you’ll learn from real AI experts how to write better prompts, identify the best AI tools, and introduce AI into your daily work.
How to ethically integrate AI into legal workflows
Knowing the risks is only half the battle. To get the most out of AI, you have to be intentional about how you implement it. Here’s what that looks like in practice.
- Establish a firm AI policy. Start with a clear policy that documents which tools are approved, what data can be entered into them, and how outputs must be reviewed. It doesn’t need to be complicated, but having it in writing protects your clients and gives your team a consistent foundation to build on. See our law firm AI policy template for a starting point.
- Apply human oversight where it matters. AI gives you a strong head start, but your review is what makes the work yours. AI outputs should never reach a client or a court without a lawyer’s eyes on them, and under the Model Rules, that oversight is likely more than just good practice. Use our AI-generated content checklist for lawyers to make sure nothing slips through.
- Vet tools for confidentiality compliance. Not all AI tools are built with legal confidentiality in mind. Before using any tool with client information, check its data retention and privacy policies, and look for tools designed specifically for legal use with enterprise-grade protections built in.
- Document your AI use. Keep a record of which tools you used, for what tasks, and how you reviewed the output. Some jurisdictions are already requiring disclosure of AI use in filings, and good documentation is your best protection if questions arise later.
- Train your team. Your supervision obligations under Rules 5.1 and 5.3 extend to how your staff and associates use AI. A short onboarding session on your firm’s AI policies and the tools’ limitations goes a long way.
- Stay current on bar guidance. This area is moving fast. Subscribe to updates from your state bar and the ABA’s Center for Professional Responsibility, and plan to revisit your AI policy at least once a year.
Final thoughts on AI ethics in law
As the use of AI in law firms becomes increasingly widespread, it’s important that legal professionals move beyond general awareness and engage with the specific professional duties that govern how AI must be used.
In many ways, these issues aren’t vastly different from those lawyers have faced with earlier emerging technologies. And if the past has taught us anything, it’s that the industry has adapted successfully before and can continue to do so.
Approaching AI ethics as an extension of your existing obligations, rather than a burden on top of them, is what makes responsible adoption sustainable. Stay informed, apply meaningful oversight, protect client confidentiality, and keep your judgment at the center of every decision.
By doing so, lawyers will be able to enjoy AI’s transformative benefits while maintaining an ethical practice.
Put AI to work on your cases with confidence. Clio Work delivers cited research, document analysis, and case strategy grounded in verified law and the full context of your matters. Book a demo today.
