Artificial Intelligence and the Law: Navigating AI in the Legal Industry


When it comes to artificial intelligence (AI) and the law, a common refrain is that the law cannot keep pace with the aggressive growth of technology. AI appears to be no exception. AI dominates the news, with new announcements arriving daily about its impact on our lives, and no aspect of society seems immune from its creep.

Can the law and AI evolve together, or will regulation lag behind the technology?

And if lawmakers are trying to match the aggressive pace of AI, how can lawyers keep track of a rapidly changing regulatory landscape?

Fortunately, there are reliable resources that can help you track developments in AI regulation and the legal industry. Keep reading to discover which regulators are shaping the future of AI in the law, who is tracking key legislation, and which trackers are worth following.

What are the biggest AI trends in the legal profession?

The legal profession is undergoing rapid transformation. AI advancements are changing how legal services are delivered, researched, and managed, and tools like Harvey AI are reshaping how legal professionals handle day-to-day work, with new solutions launching almost daily. In law firms, AI is already streamlining tasks such as document drafting, legal research, and contract analysis. Lawyers can now draft contracts and other documents much faster, and predictive analytics help them forecast case outcomes based on historical data (though be aware that AI cannot give legal advice or determine outcomes). The result is less time spent on tedious work and more accurate output.

AI is also improving e-discovery by automatically sorting through large datasets to find relevant documents for litigation. Chatbots and virtual assistants are enhancing client interactions, answering common questions instantly, and helping with scheduling. When it comes to contracts, AI quickly spots key clauses and potential issues, making management more efficient.

As AI becomes more common, there’s a growing focus on ethical use, particularly around issues like bias and transparency. Overall, these advancements are making legal services faster, more accurate, and more efficient.

Ready to take your firm’s operations to the next level? Clio Duo can help! Discover our dynamic AI-powered partner, transforming the way legal professionals work. Learn more here.

What are the legal issues with artificial intelligence and the law?

With AI being used by government departments and agencies, corporations large and small, educators and students, and even people in your group chat, virtually no area of law is untouched.

At a high level, all legal risks and challenges with AI revolve around three key criteria.

1. What action is being performed by AI?

If AI makes automated decisions that affect a person’s rights, health, or financial well-being, legal issues will arise. Unsupervised AI tools have already been found legally deficient when used to make decisions to dismiss employees.

2. What data underpins AI?

Generative artificial intelligence (genAI) tools rely on a vast amount of scanned data to train their algorithms. Key legal risks and challenges of generative AI include whether the data has been lawfully accessed (e.g. avoiding copyright infringement) and whether it is suitable for its intended use—such as being free from unlawful discriminatory bias.

To mitigate these AI legal issues, legal professionals must understand what data an AI tool was trained on and how that data may affect outputs in practice.

3. Who is using the AI technology?

Government agencies and regulated businesses are typically subject to stricter limitations on their use of AI compared to private organisations. For example, police forces using facial recognition AI may face legal and ethical issues, including concerns around human rights and proportionality, that private venue owners may not encounter.

These three points will need to be addressed whenever AI is adopted. Organisations using AI for credit decisions affecting home mortgage rates, for advertising copy and product pricing, for written submissions to agencies and tribunals, and more will each have to weigh how much legal risk comes with using AI.


Who is regulating artificial intelligence and the law?

Regulation of artificial intelligence is a rapidly evolving area both in the UK and globally.

Here’s an overview:

In the UK

AI regulation in the UK does not come from a single entity but from a network of regulators, all working under the watchful eye of the government. The UK regulatory framework is built on five cross-sectoral, non-binding principles: safety, transparency, fairness, accountability, and contestability. The key players include:

  • Information Commissioner’s Office (ICO): Oversees data protection and privacy issues related to AI. It publishes guidance and is currently developing a statutory code of practice on AI.
  • Competition and Markets Authority (CMA): Regulates competition and consumer protection, focusing on AI’s impact on market dominance, transparency, and consumer rights. Publishes reports and guidance on AI foundation models and investigates AI partnerships for competition concerns.
  • Financial Conduct Authority (FCA): Regulates the use of AI in financial services, promoting fairness, transparency and accountability. Publishes guidance on AI-specific risks in financial markets. 
  • Ofcom: Regulates TV, radio, streaming and online safety. Recently published its strategic approach to AI, focusing on risks in media, communications and online platforms.
  • Digital Regulation and Cooperation Forum (DRCF): An umbrella group pulling together the main regulators to harmonise their approaches and share best practices. 
  • Central Function and Steering Committee: Established by the UK government through the Department for Science, Innovation and Technology (DSIT) to support coordination among regulators, provide central guidance and monitor the effectiveness of the regulatory framework.

Other sector-specific regulators interpret the government’s regulatory framework in their respective fields, including the Bank of England, Equality and Human Rights Commission (EHRC), Health and Safety Executive (HSE), Legal Services Board (LSB), Medicines and Healthcare products Regulatory Agency (MHRA), Office for Nuclear Regulation (ONR), Ofsted, Ofgem, and Ofqual.

Globally

AI regulation globally involves multiple international organisations and national governments, each with its own approach. Some of the key global players include:

European Union (EU): The EU is at the forefront of AI regulation, passing the comprehensive AI legislation known as the Artificial Intelligence Act, which aims to manage risks associated with AI systems. The law applies extraterritorially, meaning it will have a global impact on AI beyond the EU.

United Nations (UN): Through various agencies, the UN addresses AI’s impact on areas like human rights, privacy, and international security.

Organisation for Economic Co-operation and Development (OECD): The OECD has established AI Principles that many countries have adopted, promoting innovation while ensuring AI systems are designed in a way that respects human rights and democratic values. The OECD AI Policy Observatory (OECD.AI) reviews the over 1,000 AI policy initiatives that have been published by member states to implement the OECD AI Principles.

International Standards Organisations: Bodies like the International Organisation for Standardization (ISO) and the International Electrotechnical Commission (IEC) work on technical standards for AI systems.

National Governments: Many countries have their own regulatory frameworks and agencies responsible for overseeing AI development and its use within their territories. For instance, China has a governance framework for AI, focusing on promoting AI while ensuring security and ethical use.

Non-Governmental Groups

Non-governmental groups play a significant and growing role in the regulation and governance of artificial intelligence (AI), complementing the efforts of governmental and international regulatory bodies. Their involvement is critical due to the fast-paced nature of technological innovation in AI, where traditional legislative processes may lag behind technological advancements. In the UK, groups like the Ada Lovelace Institute and We and AI play an important role in shaping AI governance, policy and best practices.

To regulate AI usage at your firm, it will be important to draft an AI policy—discover our AI template here.


Are there laws for artificial intelligence?

Yes, there are already laws that apply to artificial intelligence. The most recent and comprehensive is the EU AI Act, an important development for the legal industry and other highly regulated sectors.

What does the European Union AI Act do?

The EU AI Act establishes obligations based on the potential risks and level of impact of AI. Producers and users of AI technology will need to document risk management evaluations for their tools. These reviews look in depth at the three criteria listed above; depending on their outcome, AI tools may require users to take additional actions to reduce the risk from the tool.

The EU AI Act deems certain types of data and actions to always be high risk. Examples of high-risk AI include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, and justice and democratic processes. If AI is used in these areas, users must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.

General-purpose generative AI models (e.g. OpenAI’s ChatGPT) must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. More powerful AI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) must be clearly labelled as such.

EU citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights. These rights can also lead to a right of action in court if complaints are not addressed.

The EU AI Act comes into force twenty days after its publication in the Official Journal of the European Union, and is fully applicable 24 months after its entry into force. This is expected to be in 2026, though certain portions will become applicable earlier.

How can I keep track of emerging AI regulation?

AI regulation can come from many sources, so keeping track of all the potential regulations requires knowing which regulators apply to your concerns. For example, if you’re a lawyer representing businesses in a particular industry, it makes sense to follow the actions of that industry’s regulators, both domestic and international.

Fortunately, there are many groups and organisations tracking and publishing AI-related regulations. These trackers are updated periodically, with each having a particular focus.

To help you find the right AI regulation tracker, the options below are listed by focus area and coverage.

AI regulation trackers

White & Case AI Watch: Global Regulatory Tracker — United Kingdom

Tracker focus: UK’s AI regulatory framework, including recent legislative proposals, government action plans, and sector-specific regulatory updates.

Last update at publication: 4th March 2025

Covers important developments such as the proposed binding measure on powerful AI models, the Digital Information and Smart Data Bill, and the UK’s approach to sector-specific AI regulation.

Herbert Smith Freehills Searchlight AI Legislation Tracker

Tracker focus: Legal, regulatory and policy developments affecting AI in major global markets including the UK.

Last update at publication: 10th April 2025

Tailored for law firms seeking a broad view of AI policy trends and legislative developments. By aggregating and tracking regulatory updates, Searchlight AI helps lawyers to stay compliant and aware of their obligations related to AI technologies.

Thomson Reuters Practical Law: AI Legislation Tracker (UK/EU)

Tracker focus: Selected legislative proposals regarding AI regulation in the UK and EU. 

Last update at publication: Not publicly available.

Useful for law firms needing to keep up with domestic and European developments. Integrated into the broader Practical Law Platform, the tracker is maintained by a team of experienced legal editors.

Bird & Bird AI Regulatory Horizon Tracker

Tracker focus: Comparative overview of AI regulation across 22 jurisdictions including the UK. 

Last update at publication: 27th October 2022

Valuable for firms with cross-border interests. The tracker offers a colour-coded table for a handy overview of regulatory status, highlighting where legally binding rules exist, where discussions are ongoing, and where only non-binding guidance is available.

LexisNexis UK Artificial Intelligence Tracker 

Tracker focus: Key dates and information about legal issues surrounding AI development and regulation in the UK.

Last update at publication: April 2025

Designed to support law firms in monitoring compliance and risk management. Combining essential updates with expert commentary, the tracker serves as a reference for lawyers seeking to understand and adapt to the legal implications of AI in the UK.

Akin Gump AI Law & Regulation Tracker

Tracker focus: Searchable database of AI legal and regulatory developments in major international regions including the UK. 

Last update at publication: 4th April 2025

Curated by a cross-practice team of legal experts, the tracker provides in-depth legal analysis and timely updates. It not only tracks new laws and regulations but also offers insights into industry standards and best practices for safe and reliable AI implementation.

Free AI Course: Build your legal AI expertise with our free Legal AI Fundamentals Certification, where you’ll learn from real AI experts how to write better prompts, identify the best AI tools, and introduce AI into your daily work.

Final thoughts on artificial intelligence and the law

AI may be the fastest-growing technology in history. While it may seem that such growth would outpace the legal system, the opposite is true. AI is facing a host of regulations, both proposed and enacted. Law firms researching AI will need to use legislation and regulation trackers to stay on top of this fast-paced area of law.

With Clio Duo, Clio’s AI-powered partner, lawyers can use AI securely and ethically to increase productivity and efficiency. Embedded within Clio’s practice management system, Clio Duo taps into your firm’s specific data, providing relevant, accurate results tailored to your needs. Book a demo today to discover how Clio Duo can accelerate the way you work.