The EU AI Act will put new obligations on UK law firms that trade with Europe – here’s how to work out what the legislation means for you
Zain Ali | 24 September 2024
The European Union’s Artificial Intelligence Act (EU AI Act) is the world’s first piece of AI safety legislation. Three years in the making, the Act offers a comprehensive framework for regulating the development and use of AI systems within the EU.
The Act will have implications for companies of all stripes, not least law firms: the European legal technology market is projected to grow at a compound annual growth rate (CAGR) of 8.9% between 2024 and 2030.
There is a huge and growing demand among legal service providers for AI solutions that can streamline and optimise their working lives. Firms aren’t just using third-party tools; they’re increasingly choosing to develop their own bespoke, proprietary models. Here in the UK, seven of the top 20 law firms have already done so.
Four years since the UK left the EU, the bigger question of how far its economy should stray from the bloc’s regulatory orbit remains unsettled. Fortunately, the EU AI Act makes British companies’ obligations in the field of AI quite clear.
Pre-order our Full Report on the EU AI Act
The Extraterritorial Scope of the EU AI Act
One of the most important provisions of the EU AI Act concerns its extraterritorial reach. Article 2 states that the Act applies to:
- Providers placing on the market or putting into service AI systems in the Union, regardless of whether those providers are established within the Union or in a third country;
- Deployers of AI systems located within the Union;
- Providers and deployers of AI systems that are located in a third country, where the output produced by the system is used in the Union.
Simply put, if a UK law firm develops an AI system whose output reaches even a single person in the European Union, or uses AI tools whose output is felt there, it will be subject to the Act’s requirements. Being based outside the EU will not serve as a defence.
The Act defines an AI system broadly as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Any firm that leverages an algorithm-driven tool must pay close attention to where its outputs might be felt. If you use such technologies when servicing EU-based clients, you will be subject to the legislation. The same applies if you license your systems for use by a European company.
Even if a firm merely uses a LegalTech AI model developed by an EU-based company, such as Ireland’s Smarter Contracts or the Netherlands’ DoNotSign, it will still have to abide by certain duties.
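For the technically minded, the scope test boils down to a simple decision rule. Here is a minimal sketch in Python; the function and field names are our own illustration of Article 2’s three triggers, not anything defined in the Act itself.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical description of how a firm touches an AI system."""
    places_on_eu_market: bool     # provider placing/putting into service in the Union
    deployer_located_in_eu: bool  # deployer established or located in the Union
    output_used_in_eu: bool       # the system's output is used in the Union

def act_applies(use_case: AIUseCase) -> bool:
    """Rough reading of Article 2: any one trigger brings a system into scope."""
    return (use_case.places_on_eu_market
            or use_case.deployer_located_in_eu
            or use_case.output_used_in_eu)

# A UK firm using a research tool whose output reaches an EU-based client:
uk_firm = AIUseCase(places_on_eu_market=False,
                    deployer_located_in_eu=False,
                    output_used_in_eu=True)
print(act_applies(uk_firm))  # True: being outside the EU is no defence
```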
The nature of your compliance obligations depends on where you sit along the EU AI Act’s designated value chain.
Understanding the EU AI Act Value Chain
The EU AI Act describes a complex value chain of stakeholders in AI systems, each with specific responsibilities. Locating yourself on this value chain will help you determine what you have to do to stay compliant.
Providers
Providers are those who develop AI systems or have them developed with a view to placing them on the market or putting them into service under their own name or trademark. UK law firms that develop their own AI tools for use in the EU market will be considered providers and will be subject to the most stringent obligations, including:
- Establishing and maintaining a risk management system
- Ensuring high-quality datasets
- Preparing technical documentation
- Logging system activity
- Ensuring transparency and providing clear instructions to users
- Implementing human oversight measures
- Achieving required accuracy, robustness, and cybersecurity standards
- Registering ‘high-risk’ AI systems in an EU database
Manufacturers
Manufacturers are those who place on the market or put into service an AI system together with their own product, under their own name or trademark. UK law firms that incorporate AI systems into their own branded products or services for use in the EU will be considered manufacturers and will be subject to obligations similar to those of providers.
Deployers
Deployers are those who use AI systems in the context of their own professional activities. UK law firms that use third-party AI tools in their work with EU clients will be considered deployers and will have reduced obligations compared to providers and manufacturers. These include:
- Ensuring human oversight
- Monitoring system operation for risks
- Informing users when they are interacting with an AI system
- Suspending use of a system if risks are identified
Importers and Distributors
Importers are those who place on the EU market AI systems that bear the name or trademark of a provider established outside the EU. Distributors are those, other than the provider or the importer, that make an AI system available on the EU market without affecting its properties. UK law firms that import or distribute AI systems in the EU will need to ensure that:
- The appropriate conformity assessment procedure has been carried out by the provider
- The provider has drawn up the required technical documentation
- The system bears the required conformity marking and is accompanied by required documentation and instructions
Authorised Representatives
UK providers of AI systems that do not have an establishment in the Single Market will need to appoint an Authorised Representative (AR) tasked with demonstrating their compliance to EU authorities.
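To help with self-classification, here is an illustrative lookup that pairs each value-chain role with the headline duties summarised above. The role names follow the Act, but the one-line duty summaries (and the code itself) are this article’s shorthand, not statutory text.

```python
# Illustrative shorthand only: each value-chain role mapped to the headline
# duties summarised above. These summaries are not statutory text.
OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "risk management system", "high-quality datasets",
        "technical documentation", "activity logging",
        "transparency and instructions for use", "human oversight",
        "accuracy, robustness and cybersecurity",
        "register high-risk systems in the EU database",
    ],
    "manufacturer": ["broadly the same duties as a provider"],
    "deployer": [
        "human oversight", "monitor operation for risks",
        "inform people interacting with the system",
        "suspend use if risks are identified",
    ],
    "importer/distributor": [
        "verify the provider's conformity assessment",
        "verify the technical documentation",
        "check conformity marking and accompanying instructions",
    ],
}

def duties_for(role: str) -> list[str]:
    """Look up the headline duties for a value-chain role."""
    return OBLIGATIONS.get(role.lower(), ["unrecognised role - reassess your classification"])

print(duties_for("Deployer"))
```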
Assessing High-Risk AI Systems
The EU AI Act is built around a four-tier risk framework (unacceptable, high, limited and minimal risk), imposing the most stringent compliance obligations on systems that could pose a threat to people’s health, safety or fundamental rights.
Annex III of the Act identifies eight areas where AI systems are considered high-risk, including law enforcement, migration, asylum and border control management, and administration of justice and democratic processes.
This means that AI systems used by many UK law firms, particularly those working in immigration and criminal defence, are presumed to be high-risk by default.
This might worry firms who use AI primarily to automate manual processes, enhance research and speed up their operations. The productivity gains might not seem worth the extra compliance obligations of running a high-risk system. Thankfully, the Act provides a significant degree of leeway through its exemption criteria.
Article 6(3) of the Act stipulates that a system used in the high-risk contexts listed in Annex III can escape that designation if it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.”
To be exempted from the high-risk classification, an AI system must meet at least one of the following conditions:
- It is intended to perform a narrow procedural task
- It is intended to improve the result of a previously completed human activity
- It is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review
- It is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III
It should be noted that a system that profiles individuals will always be deemed high-risk, even if it meets one of the conditions above.
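The logic of the exemption test composes neatly: at least one of the four conditions must hold, and profiling overrides everything. A minimal sketch, assuming you have already answered each question for your own system (the parameter names are illustrative, not drawn from the Act):

```python
def exempt_from_high_risk(narrow_procedural_task: bool,
                          improves_completed_human_activity: bool,
                          detects_patterns_without_replacing_review: bool,
                          preparatory_task_only: bool,
                          profiles_individuals: bool) -> bool:
    """Rough reading of the derogation: one condition suffices,
    but profiling of individuals overrides everything."""
    if profiles_individuals:
        return False  # always high-risk, whatever else is true
    return any([narrow_procedural_task,
                improves_completed_human_activity,
                detects_patterns_without_replacing_review,
                preparatory_task_only])

# e.g. a tool that only drafts first-pass summaries ahead of a human assessment:
print(exempt_from_high_risk(narrow_procedural_task=False,
                            improves_completed_human_activity=False,
                            detects_patterns_without_replacing_review=False,
                            preparatory_task_only=True,
                            profiles_individuals=False))  # True
```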
UK law firms that trade in Europe will need to assess their systems carefully against these criteria to determine whether they can claim an exemption from the high-risk classification. If not, they will need to comply with the Act’s elevated requirements for high-risk systems.
How to Ensure Ongoing Compliance
Compliance with the EU AI Act is not a one-time exercise but an ongoing process. Providers and manufacturers must regularly review their AI systems in line with evolving guidance from the European Commission. Deployers must continuously monitor their use of AI systems and inform affected individuals that an AI system is in operation.
UK law firms will need to establish robust compliance frameworks to ensure they stay aligned with the Act’s requirements over time. This may involve appointing dedicated personnel to oversee AI governance, conducting regular risk assessments, and maintaining detailed documentation.
UK law firms with exposure to the EU have very little to fear from the new legislation. While the EU AI Act’s extraterritorial reach and complex value chain will add some new compliance burdens, its risk-based approach and exemption criteria provide plenty of flexibility.
The best way to prepare for its implementation is to get to grips with the Act’s stipulations and exemptions. In most cases, firms will be able to continue harnessing AI’s huge productivity benefits without triggering onerous obligations.
Want to understand your compliance obligations in more detail? Sign up here to receive our upcoming report on what the EU AI Act means for legal service providers the moment it’s released.