Artificial Intelligence and the Law: Navigating AI in the Legal Industry


When it comes to artificial intelligence (AI) and the law, a common refrain is that the law cannot keep pace with the aggressive growth of technology. AI may be proving the point. It dominates the news, with multiple daily announcements covering its impact on our lives; no aspect seems immune from AI's creep.

In Australia, where regulation is evolving rapidly, similar concerns apply as local lawmakers and professional bodies evaluate how to manage AI responsibly across the legal and corporate sectors.

To learn how lawyers are already adopting AI tools in their daily work, read our guide on AI for Lawyers.

Can the law hope to keep up with the rapid adoption and impact of AI?

And if lawmakers are trying to match the aggressive pace of AI, how can lawyers keep track of a rapidly changing regulatory landscape?

Fortunately, there are reliable resources that can help you track developments in AI regulation and the legal industry. Keep reading to discover which regulators are shaping the future of AI in the law, who is tracking key legislation, and which trackers are worth following.


What are the biggest AI trends in the legal profession?

The legal profession is undergoing rapid transformation as AI advancements change how legal services are delivered, researched, and managed. AI tools like Harvey AI are reshaping how legal professionals manage day-to-day work, with new solutions launching almost daily. AI in law firms is already streamlining tasks such as document drafting, legal research, and contract analysis. Lawyers can now draft contracts and other documents much faster, and predictive analytics help them forecast case outcomes based on historical data. This means less time spent on tedious work and more accurate results. For deeper insights into tools transforming legal research, see AI for Legal Research.

AI is also improving e-discovery by automatically sorting through large datasets to find relevant documents for litigation. Chatbots and virtual assistants are enhancing client interactions, answering common questions instantly, and helping with scheduling. When it comes to contracts, AI quickly spots key clauses and potential issues, making management more efficient.
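At its simplest, the relevance ranking at the heart of e-discovery can be illustrated with plain keyword scoring. The sketch below is a toy illustration only: the documents and query terms are invented, and production e-discovery tools use far more sophisticated language models than term counting.

```python
import re
from collections import Counter

def relevance(doc: str, query_terms: set[str]) -> int:
    """Count occurrences of query terms in a document (case-insensitive)."""
    words = Counter(re.findall(r"[a-z]+", doc.lower()))
    return sum(words[term] for term in query_terms)

# Hypothetical document set for a contract dispute.
documents = [
    "This lease contains an indemnification clause and a termination clause.",
    "Minutes of the quarterly board meeting on marketing strategy.",
    "Email thread about indemnification obligations under the supply contract.",
]
query_terms = {"indemnification", "clause", "contract"}

# Rank documents from most to least relevant to the query.
ranked = sorted(documents, key=lambda d: relevance(d, query_terms), reverse=True)
```

Even this crude scorer surfaces the lease and the supply-contract email ahead of the unrelated board minutes, which is the basic sorting task e-discovery tools automate at scale.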

As AI becomes more common in the legal profession, there is a growing focus on its ethical use, particularly around issues like bias and transparency. Overall, these advancements are making legal services faster, more accurate, and more efficient for law firms and legal professionals.

Ready to take your firm’s operations to the next level? Manage AI (formerly Clio Duo) can help! Discover our dynamic AI-powered partner, transforming the way legal professionals work.

What are the legal issues with artificial intelligence and the law?

With AI being used by government agencies, corporations large and small, educators and students, and even people in your group chat, there is hardly an area of law left untouched.

At a high level, all legal issues with AI revolve around three key criteria.

1. What action is being performed by AI?

If AI is making automated decisions that impact a person's rights, health, or financial well-being, legal issues will arise. Already, unsupervised AI tools have been found legally deficient when making decisions to fire employees.

2. What data underpins AI?

Generative artificial intelligence (genAI) tools rely on a vast amount of scanned data to train their algorithms. Key legal risks and challenges of generative AI include whether the data has been lawfully accessed (e.g. avoiding copyright infringement) and whether it is suitable for its intended use—such as being free from unlawful discriminatory bias.

To mitigate these AI legal issues, legal professionals must understand what data an AI tool was trained on and how that data may affect outputs in practice.
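One basic check a firm can run before relying on a trained model is to compare outcome rates across groups in the underlying training data. The sketch below uses invented loan-decision records and a deliberately simplified disparity measure; a real fairness audit involves proper statistical testing and domain judgment, not a single subtraction.

```python
from collections import Counter

def approval_rates(records):
    """Approval rate per group: a crude first screen for disparate
    outcomes in (group, approved) training records."""
    totals, approved = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Invented training records: (group label, was the application approved?).
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = approval_rates(records)
disparity = max(rates.values()) - min(rates.values())
```

Here group A is approved 75% of the time and group B only 25%, a gap that would prompt further investigation before the data is used to train a credit-decision model.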

3. Who is using the AI technology?

Government agencies and regulated businesses are typically subject to stricter limitations on their use of AI compared to private organisations. For example, police forces using facial recognition AI may face legal and ethical issues, including concerns around human rights and proportionality, that private venue owners may not encounter.

These three points will need to be addressed whenever AI is being adopted. Whether it is credit decisions impacting home mortgage rates, advertising copy and product pricing, or written submissions to agencies and tribunals, each use case will have to weigh how much legal risk comes with using AI.


Who is regulating artificial intelligence and the law in Australia?

Regulation of artificial intelligence is a rapidly evolving area both in Australia and globally.

Here’s an overview:

In Australia

AI regulation in Australia does not sit with a single regulator. Instead, it operates through a distributed, principles-based framework, overseen by the Australian Government and implemented by multiple regulators across different sectors.

At a federal level, Australia’s approach to AI governance is guided by the AI Ethics Principles, developed by the Australian Government. These are voluntary but influential, and focus on: human-centred values, fairness, privacy protection, reliability and safety, transparency, contestability, and accountability.

Key regulators and bodies involved include:

  • Office of the Australian Information Commissioner (OAIC)
    Oversees privacy, data protection and information handling. The OAIC plays a central role in AI governance through enforcement of the Privacy Act and guidance on automated decision-making, data use, and transparency obligations.
  • Australian Competition and Consumer Commission (ACCC)
    Regulates competition and consumer protection. The ACCC examines AI’s impact on market power, algorithmic pricing, transparency, and consumer harm, including through its Digital Platform Services Inquiry and work on data and digital markets.
  • Australian Securities and Investments Commission (ASIC)
    Regulates AI use in financial services and markets, with a focus on fairness, explainability, and accountability in automated decision-making. ASIC has issued guidance on the use of AI and algorithms in credit, advice, and financial products.
  • Australian Prudential Regulation Authority (APRA)
    Oversees risk management in banks, insurers and superannuation funds, including risks arising from AI and advanced analytics. APRA expects regulated entities to maintain strong governance, accountability and controls over AI systems.
  • Australian Communications and Media Authority (ACMA)
    Regulates broadcasting, telecommunications and online content. ACMA is increasingly focused on AI’s role in content moderation, misinformation, online safety, and digital communications.
  • eSafety Commissioner
    Plays a key role in regulating online safety risks, including those amplified by AI systems, such as harmful content, deepfakes and algorithmic amplification.
  • Department of Industry, Science and Resources (DISR)
    Leads whole-of-government AI policy, including the National AI Capability Plan and proposed reforms to strengthen AI governance, transparency and risk management across the economy.
  • Other sector-specific regulators and bodies also apply the government’s AI framework within their domains, including the Australian Law Reform Commission (ALRC), Therapeutic Goods Administration (TGA), Australian Health Practitioner Regulation Agency (AHPRA), Fair Work Ombudsman, and state-based regulators.

For practical ways to document your firm’s AI governance approach, explore our Law Firm AI Policy Template.

Globally

AI regulation globally involves multiple international organisations and national governments, each with its own approach. Some of the key global players include:

European Union (EU): The EU is at the forefront of AI regulation, having passed the comprehensive Artificial Intelligence Act, which aims to manage risks associated with AI systems. The law applies extraterritorially, meaning it will have a global impact on AI beyond the EU.

United Nations (UN): Through various agencies, the UN addresses AI’s impact on areas like human rights, privacy, and international security.

Organisation for Economic Co-operation and Development (OECD): The OECD has established AI Principles that many countries have adopted, promoting innovation while ensuring AI systems are designed in a way that respects human rights and democratic values. The OECD AI Policy Observatory (OECD.AI) tracks more than 1,000 AI policy initiatives published by member states to implement the OECD AI Principles.

International Standards Organisations: Bodies like the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC) work on technical standards for AI systems.

National Governments: Many countries have their own regulatory frameworks and agencies responsible for overseeing AI development and its use within their territories. For instance, China has a governance framework for AI, focusing on promoting AI while ensuring security and ethical use.

Non-Governmental Groups

Non-governmental organisations play an increasingly important role in shaping the governance of artificial intelligence (AI) within the Australian legal sector, alongside government policy and existing legal and professional regulatory frameworks. As AI adoption in legal services accelerates, these organisations help address gaps where formal legislation, professional conduct rules or court guidance may lag behind technological change.

In Australia, organisations such as the Responsible AI Network (RAIN), CSIRO’s Data61, the Gradient Institute, the Australian Human Rights Commission, and the Law Council of Australia contribute to AI governance through research, policy submissions, ethical guidance and practical frameworks. Their work informs how AI tools are developed and used in legal practice, particularly in areas such as client confidentiality, privacy, professional responsibility, bias in automated decision-making, transparency, and explainability.

For law firms, these groups provide valuable guidance on how to adopt AI in a way that aligns with professional conduct rules, duties to the court, client expectations, and risk management obligations, helping practitioners balance innovation with ethical and regulatory compliance.


Are there laws for artificial intelligence?

Australia does not currently have a comprehensive AI-specific law equivalent to the EU AI Act. Instead, the use of artificial intelligence is regulated through a combination of existing laws, sector-based regulation, and voluntary AI governance frameworks.

How can I keep track of emerging AI regulation?

AI regulation can come from many sources. Keeping track of all the potential regulations requires knowing which type of regulator applies to your concerns. For example, if you're a lawyer representing businesses in a particular industry, it makes sense to follow the actions of that industry's regulators at the state and federal level.

Fortunately, there are many groups and organisations tracking and publishing AI-related regulations. These trackers are updated periodically, with each having a particular focus.

To help you find the right AI regulation tracker, here are some trackers listed below by their focus area and coverage.

AI regulation trackers

International Association of Privacy Professionals’ (IAPP) Global AI Law and Policy Tracker

Tracker focus: Global regulation by national and international governments.

Last update at publication: February 2024.

AI's automated decision-making capacity is already regulated by many existing privacy laws. The IAPP's AI Governance Center has been tracking the different frameworks and approaches taken by 24 jurisdictions to date, with more expected.

Brennan Center for Justice’s Artificial Intelligence Legislation Tracker

Tracker focus: U.S. Congress legislation.

Last update at publication: March 8, 2024.

The Brennan Center’s Artificial Intelligence Legislation Tracker looks at U.S. congressional bills introduced during the current 118th Congress that would do at least one of the following:

  • Impose restrictions on AI that is deemed high risk.
  • Require purveyors of AI systems to conduct evaluations of the technology and its uses.
  • Impose transparency, notice, and labeling requirements.
  • Create or designate a regulatory authority to oversee AI.
  • Protect consumers through liability measures.
  • Direct the government to study AI to inform potential regulation.

Data protection bills that significantly impact AI are also included in the tracker.

Currently, there are seventy-six bills listed on this tracker.

National Conference of State Legislatures’ (NCSL) Artificial Intelligence 2024 Legislation

Tracker focus: U.S. state legislature bills with any impact on AI.

Last update at publication: March 19, 2024.

The NCSL maintains many trackers for topics regulated at the state level, and since 2021 it has issued summaries of state legislation related to AI.

The NCSL tracker lists both pending and passed legislation. The list can be filtered to your particular state, with entries also being categorised by the focus of the bills. Categories cover a variety of topics, like election interference, health data, AI provenance of data sources, private rights of action, and more.

In the 2023 legislative session, at least twenty-six states and territories introduced artificial intelligence bills, and eighteen states and Puerto Rico adopted resolutions or enacted legislation. In 2024, forty states and territories have introduced AI bills to date, with eight states and territories having already enacted legislation or resolutions.

Bryan Cave Leighton Paisner (BCLP) LLP’s U.S. State-By-State AI Legislation Snapshot

Tracker focus: U.S. state legislature bills narrowly focused on AI and automated decision-making. Laws addressing biometric data, facial recognition, and sector-specific administrative laws are omitted.

Last update at publication: February 12, 2024, with a quarterly update scheduled.

Law firm practice groups are great sources for information relating to their focus. BCLP’s Goli Mahdavi, Amy de La Lama, and Christian M. Auty (formerly a partner at BCLP and now a partner at Steptoe) are publishing AI-legislation tracking for the United States on behalf of their law firm.

Using a colour-coded data visualisation, you can see at a glance which states have proposed and enacted AI regulations. Further details can be found by expanding each state’s information.

Final thoughts on artificial intelligence and the law

AI may be the fastest-growing technology in history. While it may seem that such growth would outpace the legal system, the opposite is true. AI is facing a host of regulations, both proposed and enacted. Law firms researching AI will need to use legislation and regulation trackers to stay on top of this fast-paced area of law.

For more on how AI is transforming the legal profession, read Will AI Replace Lawyers?.

If you’d like to learn how to craft better AI prompts for legal work, explore ChatGPT for Lawyers: How to Write Better Prompts.

With Manage AI, Clio’s AI-powered partner, lawyers can use AI securely and ethically to increase productivity and efficiency. Embedded within Clio’s practice management system, Manage AI taps into your firm’s specific data, providing relevant, accurate results tailored to your needs. Book a demo today to discover how Manage AI can accelerate the way you work.