California to Vote on AI Bill Targeting Lawyers’ Use of AI Tools

AI Summary

SB 574 would not ban lawyers from using AI, but would clarify that existing professional duties apply when generative AI tools are used in legal practice, focusing on four key areas:

  • Protecting client confidentiality by prohibiting lawyers from entering confidential or nonpublic information into public AI systems.
  • Requiring attorneys to verify the accuracy of AI-generated content, correct hallucinated or erroneous output, and remain responsible for work produced with AI or by others on their behalf.
  • Preventing discriminatory or biased outcomes resulting from AI use that could unlawfully impact protected groups.
  • Requiring lawyers to consider whether disclosure is appropriate when AI is used to create public-facing content.

The bill reflects growing concern nationwide about the risks of general-purpose AI in legal work and underscores the need for verified, confidential, and well-governed AI tools and firm-wide policies to support responsible adoption.


California’s Senate is set to vote this week on SB 574, legislation that would transform existing bar guidance on AI into enforceable statutory requirements. The bill reflects a growing awareness among lawmakers that the rapid adoption of generative AI, particularly general-purpose tools, poses distinct risks in regulated professions such as law.

If SB 574 passes, lawyers could face sanctions for citing AI-hallucinated cases or mishandling client information through public AI systems.

The bill, introduced by State Senator Tom Umberg, responds to a nationwide concern: attorneys submitting court documents containing fictitious case citations generated by general-purpose AI tools, which lack built-in verification and citation-checking workflows. According to a Bloomberg Law analysis, these AI-generated citation errors are appearing with increasing frequency in court filings across the country.

If the Senate approves SB 574 this week, it would advance to Assembly committees for hearings and a potential vote before lawmakers adjourn in late August. Here’s what you need to know.

What SB 574 requires: four considerations for lawyers using AI

The bill doesn’t ban AI use in legal practice. Instead, it clarifies that existing professional obligations (confidentiality, competence, accuracy, and fairness) still apply when using AI tools.

The bill defines generative artificial intelligence as an “artificial intelligence system that can generate derived synthetic content, including text, images, video, and audio that emulates the structure and characteristics of the system’s training data.”

1. Client confidentiality and AI

SB 574 would prohibit lawyers from entering confidential, personally identifying, or nonpublic information into public AI systems.

What the bill says: Attorneys must ensure that “confidential, personal identifying, or other nonpublic information is not entered into a public generative artificial intelligence system.”

The bill doesn’t define what “public generative AI systems” are, but it does define personal identifying information to include:

  • Driver’s license numbers.
  • Dates of birth.
  • Social Security numbers.
  • National Crime Information and Criminal Identification numbers.
  • Addresses and phone numbers of parties, victims, witnesses, and court personnel.
  • Medical or psychiatric information.
  • Financial information.
  • Account numbers.
  • Any other content sealed by court order or deemed confidential by court rule or statute.

In practice, this means attorneys can’t copy-paste client emails, case facts, or discovery materials into public AI platforms that don’t support attorney-client privilege.
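The categories above can serve as a first-line screen before anything leaves the firm. The following is a minimal sketch, using deliberately simplified regex patterns that are illustrative assumptions only; real PII detection requires far more robust tooling and review:

```python
import re

# Simplified, illustrative patterns for a few of the categories SB 574 lists.
# These are assumptions for demonstration, not production-grade detection.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "date_of_birth": re.compile(r"\b(?:DOB|date of birth)[:\s]+\S+", re.IGNORECASE),
    "account_number": re.compile(r"\b(?:acct|account)\s*#?\s*\d{6,}\b", re.IGNORECASE),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of PII categories detected in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def redact(text: str) -> str:
    """Replace detected PII with category placeholders before any external use."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text
```

A screen like this can block or scrub a prompt before it reaches any external system, but it supplements, rather than replaces, attorney judgment about what is confidential.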

2. AI citation verification

The bill requires attorneys to verify and correct AI-generated content before using it.

What the bill says: Attorneys must take reasonable steps to:

  • “Verify the accuracy of generative artificial intelligence material, including any material prepared on their behalf by others.”
  • “Correct any erroneous or hallucinated output in any material used by the attorney.”
  • “Remove any biased, offensive, or harmful content in any generative artificial intelligence material used, including any material prepared on their behalf by others.”

This means that attorneys must verify factual accuracy, identify hallucinations, maintain quality control over all AI-generated work, and remove problematic content, including work done by others on their behalf. AI may assist with the work, but responsibility remains with the lawyer.
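To make the verification duty concrete, here is a hedged sketch: extract every case citation from a draft and flag any that cannot be confirmed against a verified source. The `VERIFIED_CITATIONS` set and the citation regex are simplified assumptions; a real workflow would query an authoritative citator or research database rather than a hard-coded set:

```python
import re

# Hypothetical citations already confirmed against a verified legal database.
# In practice this lookup would hit a real citator, not a hard-coded set.
VERIFIED_CITATIONS = {
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

# Simplified pattern for "Party v. Party, Vol Reporter Page (Year)" citations.
CITATION_RE = re.compile(r"[A-Z][\w.']+ v\. [\w.' ]+?, \d+ [\w.]+ \d+ \(\d{4}\)")

def flag_unverified_citations(draft: str) -> list[str]:
    """Return citations in the draft that are absent from the verified set."""
    found = CITATION_RE.findall(draft)
    return [citation for citation in found if citation not in VERIFIED_CITATIONS]
```

Anything the function flags still needs a human read of the underlying authority; the point is to surface candidates for review, not to certify accuracy automatically.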

3. Preventing AI bias in legal practice

The bill requires that AI use not result in discrimination against protected groups.

What the bill says: Ensure “the use of generative artificial intelligence does not unlawfully discriminate against or disparately impact individuals or communities based on age, ancestry, color, ethnicity, gender, gender expression, gender identity, genetic information, marital status, medical condition, military or veteran status, national origin, physical or mental disability, political affiliation, race, religion, sex, sexual orientation, socioeconomic status, and any other classification protected by federal or state law.”

This provision addresses concerns that AI systems can perpetuate biases from their underlying training data. Under SB 574, attorneys could be held accountable if they use AI known to produce discriminatory outcomes.

4. AI disclosure considerations

Attorneys must consider whether to disclose AI use when creating content for the public. While disclosure isn’t mandatory in all circumstances, firms may want to develop clear policies on when and how to inform the public about AI-generated content.

What the bill says: “The attorney considers whether to disclose the use of generative artificial intelligence if it is used to create content provided to the public.”

Why California’s AI bill matters for lawyers nationwide

California isn’t alone in wrestling with AI’s role in legal practice. The bill acknowledges what many in the profession already know: not all AI tools are suitable for legal practice. The California Bar has already issued guidance on AI use, but SB 574 would codify those principles into enforceable law, moving from “should” to “must.”

Other states are likely to follow suit. The 2025 Legal Trends Report found that 79% of legal professionals use AI, and nearly half of them are using generic AI tools such as ChatGPT, Gemini, and Claude. As adoption grows, so does the need for practical guardrails that help firms use AI without compromising professional responsibilities to protect both lawyers and clients.

How to use AI responsibly in legal practice

For many firms, the question now is what responsible AI use actually looks like day to day, and how to make intentional choices about where and how AI fits into their legal work.

If you’re evaluating your current AI setup or thinking about adding new tools, the following considerations can help guide safer, more practical adoption, regardless of how the Senate’s vote on SB 574 ultimately unfolds.

Use legal AI grounded in the law

General-purpose AI tools, while helpful for brainstorming or general research, generate responses based on patterns in their training data, not verified legal sources. That’s why hallucinated cases appear in court filings. The AI doesn’t know it’s inventing citations because it doesn’t actually check case law.

Legal AI platforms such as Clio Work are built on a different foundation. They’re grounded in authenticated legal databases, with access to primary and secondary law in relevant jurisdictions. This significantly reduces the risk of hallucinations, making research more efficient and results easier to verify and trust.

Choose AI tools that support professional oversight

Meeting verification obligations requires that lawyers have the means to review and confirm AI-generated work. With general-purpose AI, verification often means first figuring out whether a citation or statement is real at all, tracking down sources from scratch and untangling potential hallucinations. That process is time-consuming and increases the risk that errors slip through.

AI built for legal workflows shortens that gap. By providing direct access to verified source documents and enabling side-by-side comparison, legal AI shifts verification from a hunt for missing sources to a straightforward review, making it easier to meet oversight requirements without slowing down legal work.

Protect client information with AI built for confidentiality

Consumer AI tools, especially free or publicly available versions, operate under standard terms of service. Often, this means your clients’ confidential data will be used to improve these models, as there are no contractual guarantees stipulating that client information is exempt. Ultimately, these platforms weren’t designed for attorney-client privilege because they weren’t designed for attorneys in the first place.

Instead, firms should look for AI platforms that offer contractual guarantees that data won’t be retained or used to train models, encryption at rest and in transit, SOC 2 or similar security certifications, and integration with practice management systems.

When AI is built into legal software, client information stays within an ecosystem under the lawyer’s control, and that data is never retained, shared, or used for training. Confidentiality becomes a foundational feature.

Develop firm-wide AI policies

Selecting the right tools is only part of responsible AI adoption. Firms also need clear policies that define when and how AI can be used.

An effective AI policy should address which tools are approved for different types of work, what information can and can’t be entered into AI systems, and how to verify AI-generated content before use. It should also establish training requirements so everyone on the team understands both the capabilities and limitations of your AI tools.
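One way to make such a policy operational is to encode it as data that intake tooling can check automatically. A minimal sketch follows; the tool names and data classes are illustrative assumptions, not recommendations:

```python
# A firm AI-use policy encoded as data so it can be checked programmatically.
# Tool names and data classes here are illustrative assumptions only.
FIRM_AI_POLICY = {
    "approved_tools": {
        "legal_research_ai": {"allowed_data": {"public", "case_facts"}},
        "general_chatbot": {"allowed_data": {"public"}},
    },
    # Every approved tool's output still requires attorney verification.
    "requires_verification": {"legal_research_ai", "general_chatbot"},
}

def is_use_permitted(tool: str, data_class: str) -> bool:
    """Check a proposed AI use against the firm policy."""
    tool_policy = FIRM_AI_POLICY["approved_tools"].get(tool)
    return tool_policy is not None and data_class in tool_policy["allowed_data"]
```

Encoding the policy this way keeps the rules auditable and makes it easy to tighten them as tools, and regulations like SB 574, evolve.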

Firms that prepare now, by evaluating current AI tools, documenting verification processes, and establishing clear policies, will find themselves better positioned as regulations like SB 574 continue to evolve.

Adopt AI responsibly with legal AI

Even if SB 574 doesn’t become law, this bill reflects a growing recognition among lawmakers that not all AI tools are appropriate for legal practice. The bill highlights the risks lawyers face when using general-purpose AI systems that were not designed with legal ethics, confidentiality, or professional oversight in mind.

While AI offers real benefits, the risks vary significantly depending on the type of tool being used. Public, consumer-focused AI platforms may be useful for general tasks, but they can pose serious challenges for lawyers when client confidentiality and accuracy are at stake.

Legal-specific AI tools like Clio Work, by contrast, are built for the realities of legal practice. They are designed to support attorney-client privilege, rely on verified sources, and support the level of review and accountability required for lawyers.

See how Clio Work helps lawyers use AI responsibly. Explore AI built for legal practice, with verified sources, built-in review tools, and safeguards designed to protect client confidentiality.

Book a Clio demo
