
AI Regulation – where next for law firms?

A version of this blog first featured as an article in the October 2024 edition of The Law Society’s Legal Compliance magazine.

13 November 2024

On 17 July 2024, in his introduction to the King’s Speech, Prime Minister Sir Keir Starmer signalled the UK’s need to “strengthen safety frameworks” and to “harness the power of artificial intelligence”. This is perhaps an unsurprising position given that, according to the US International Trade Administration, the UK AI market is expected to balloon from £16.8bn to £801.6bn by 2035.

Our outgoing government had adopted an agile and flexible approach to the regulation of AI, focusing on innovation in emerging technologies in order to capture the perceived benefits of AI. Even after the success of the first global AI Safety Summit, hosted by the UK in November 2023, the UK government held back from introducing any substantial AI legislation, instead relying on the UK AI white paper principles and leaving regulatory responsibility to be sector-led.

By contrast, Labour’s manifesto pointed to a move away from the Conservatives’ laissez-faire approach, instead proposing a number of changes, including the creation of a National Data Library and a Regulatory Innovation Office, to address the fact that regulators are “currently ill-equipped to deal with the dramatic development of new technologies”. In a more targeted move, the manifesto also asserted that the safe and responsible use of AI must coincide with the introduction of “binding regulation”, specifically for the developers of the “most powerful AI models”. However, the manifesto gave no detail as to what this regulation would entail.

Following the election, media outlets including the Financial Times ran articles focusing on these elements of the manifesto and forecasting the presence of an official ‘AI Bill’ in the King’s Speech. The speech did directly reinforce Labour’s manifesto promise to introduce “appropriate legislation” to place “requirements on those working to develop the most powerful artificial intelligence models”, but there was no reference to a standalone bill. It may be that existing general legislation is seen as a sufficient launchpad for the onward regulation of AI. As such, the position of the new government remains – for the moment – largely unchanged from that of its predecessor. This is not necessarily a reliable indicator of how the government will develop its approach to the regulation of AI in the coming years, and the pressure to introduce some form of legislative measures will only continue to mount. Political figures such as Sir Tony Blair continue to espouse the potential of AI, with the Tony Blair Institute recently claiming that integrating AI directly into the government’s functions could save up to £40bn annually.

Regulatory Landscape

Solicitors Regulation Authority

The Solicitors Regulation Authority (SRA) published its Risk Outlook Report in November 2023, in which it was noted that 75% of the largest solicitors’ firms were already using AI, and that over 60% of large law firms were, at the very least, exploring the potential of systems powered by generative AI.

The Report provides an overview of the many uses of AI in the legal sector (including profiling, searching and risk identification). It also outlines the various opportunities and benefits that arise from its use, namely increasing speed and capacity for administrative tasks (thereby reducing costs), as well as promoting transparency in decision-making and skills development. Yet these benefits have to be weighed against the very real risks associated with the use of AI. The Report concludes that predicting and mitigating these risks is crucial if law firms want to embed the use of AI into their systems and processes safely and effectively. Key components of success will be up-to-date security measures, practice-wide awareness and accountability, and transparency of usage.

Bar Council

Similarly, at the turn of this year the Bar Council produced a document detailing the key considerations for the use of AI. The guidance explains the key issues associated with the use of AI, including hallucinations (defined in this context as incorrect or misleading results that AI models generate) and information disorder. It also serves as an educational companion to the SRA’s Risk Outlook on how AI functions in practice, while exploring the legal ramifications of AI misuse, namely:

  • breach of contract;
  • breach of confidence;
  • defamation;
  • data protection infringements;
  • infringement of IP rights; and
  • reputational damage.

Commenting on the guidance, Chair Sam Townend KC asserted that “any use of AI must be done carefully to safeguard client confidentiality and maintain trust and confidence, privacy, and compliance with applicable laws”.

Legal Services Board

The statutory guidance issued by the Legal Services Board (LSB) in April 2024 also reflects the importance of a risk-based approach to the implementation of AI in the legal sector. It also sends a strong message that, as part of their focus on innovation, the role of regulators lies in creating an environment that both promotes responsibility and pushes the boundaries of how AI can be used.

In particular, the LSB prioritises the use of AI in improving the diversity and reach of legal services, specifically in relation to cost-cutting and efficiency, thus allowing the legal sector to “widen access to justice and reduce unmet legal need”. The guidance covers the evolving need to adapt the way in which legal professionals are trained, so that they can proactively develop their knowledge of these new technologies, and are open to experimenting with AI in the production of innovative solutions.

Law Society

Finally, the Law Society has recently updated its guidance on the use of generative AI in the legal sector, in response to changes to the regulatory and policy landscape on the technology. The guidance emphasises the importance of balancing the “benefits of deploying AI technologies while establishing a clear delineation of the human role and accountability within the AI lifecycle”. It also provides a useful checklist for lawyers who are considering using generative AI in their practices, which includes the need to ensure that all usage aligns with their professional obligations under the SRA Principles, the Code of Conduct, and wider requirements.

The guidance also explores the legal risks associated with the use of generative AI, including data protection and intellectual property considerations (owing to an ongoing lack of confidentiality in some of the most popular AI tools and systems), as well as cybersecurity and broader ethical issues, such as bias, hallucinations and discriminatory or exclusionary practices.

In mid-September 2024, the Law Society also published its AI Strategy, which outlines three long-term outcomes:

  1. Innovation: AI to be used across the legal sector in ways that benefit both firms and clients in legal service delivery. 
  2. Impact: an effective AI regulatory landscape that has been informed and influenced by the legal sector. 
  3. Integrity: AI is used responsibly and ethically to support the rule of law and access to justice.

Over the course of the next few months, the Law Society plans to continue to influence, lead and shape regulatory and policy positions on AI for the legal sector, and to widen its resource offering to identify and address the risks, challenges and ethics of AI. It will also publish research on the impacts of AI on its members. It is therefore advisable for firms to keep an eye on any new publications by the Law Society and to consider what these mean for the firm and its solicitors.

Other Jurisdictions

United States

The US takes a relatively light-touch approach to AI regulation. Regulatory activity is more pronounced at state and local levels, resulting in a more decentralised system. Efforts to establish a unified federal approach may hinge on the outcome of the 2024 Presidential Election.

United States Senators Edward J. Markey and Mazie Hirono have proposed the Artificial Intelligence Civil Rights Act of 2024 (AI Civil Rights Act), comprehensive legislation aimed at regulating the use of algorithms in decisions that impact civil rights, preventing AI bias, and ensuring transparency and accountability in AI systems.

The AI Civil Rights Act aims to:

  • Regulate AI algorithms used in consequential decision-making, e.g., employment, banking, healthcare, the criminal justice system, public accommodations, and government services.
  • Prohibit the use, sale or promotion of algorithms that discriminate based on protected characteristics.
  • Require that developers, through independent auditors, conduct pre-deployment evaluations and post-deployment impact assessments to identify, address, and mitigate any potential biases or discriminatory outcomes from AI systems.
  • Increase compliance by disclosing to individuals that AI algorithms used in consequential decision-making are audited to comply with the AI Civil Rights Act.
  • Increase transparency by providing individuals with more information about decisions made by AI.
  • Authorise enforcement by empowering the Federal Trade Commission (FTC), state attorneys general, and individuals to enforce the provisions of the AI Civil Rights Act.

It is yet to be seen how the AI Civil Rights Act, if enacted, will interact with pending and existing state and international law, including California's proposed AI Safety Bill, the EU AI Act, and the recently signed Framework Convention on AI. Businesses deploying AI should continue to prepare for enhanced compliance obligations as the technology and the regulatory landscape evolve.

On 24 October 2024, the White House published a Memorandum on Advancing U.S. Leadership in Artificial Intelligence. This document outlines the U.S. Government's strategy for maintaining its leadership in AI development while ensuring national security and protecting democratic values. However, now that the outcome of the recent Presidential Election is known, a new approach may be adopted, potentially reshaping the direction of AI policy in the United States. In the meantime, various professional standards bodies have stepped in to provide sector-specific guidance, addressing the gaps left by the absence of federal AI laws.

American Bar Association

The American Bar Association (ABA) published its first formal opinion on generative AI within the legal sector (Formal Opinion 512) in July 2024. Structured around the ABA’s Model Rules of Professional Conduct – specifically competence, confidentiality, communication and disclosure, ethical responsibilities of honesty towards the court, and supervisory responsibilities – the opinion acknowledges the inevitable difficulty of providing relevant guidance on a topic as fast-moving as generative AI. It points out that regulators must strive to constantly update and rework their guidance in line with an evolving situation.

Formal Opinion 512 provides a number of case studies highlighting the potential ethical issues surrounding the use of generative AI, alongside advice about how to address them, such as:

  • staying informed about how AI tools function;
  • ensuring that all outputs are critically and analytically reviewed for accuracy;
  • gaining informed consent from the client where inputs relate to their representation; and
  • exercising judicious and reasonable judgement in the calculation of fees when services are provided using AI.

Launched in August 2023, the ABA’s Task Force on Law and Artificial Intelligence aims to address the legal challenges of AI, and a year on it published a Year One Report on the Impact of AI on the Practice of Law. The report concluded that one of the principal reasons for the slow adoption of AI in the legal sector is the uncertainty surrounding the rules of professional conduct in respect of AI, and specifically the disciplinary regime, following a number of well-publicised cases demonstrating the improper use of the technology. In response, the authors included references to guidance from a number of different US states, in an attempt to alleviate these concerns and encourage a more informed approach to the use of this technology.

In a similar vein, the SRA has supported LawtechUK’s regulatory sandbox, within which innovative ideas can be tested in a safe space to promote technological innovation while avoiding regulatory contraventions.

European Union

For those with business in the EU, the EU AI Act (which came into force on 1 August 2024) may have considerable implications, since non-compliance will be met with substantial financial penalties: depending on the infringement, fines range from €7.5m (or 1% of worldwide annual turnover) up to €35m (or 7% of worldwide annual turnover).

Regarded by many as the world’s “first comprehensive regulatory framework for AI”, it operates by categorising AI systems according to different levels of risk, establishing rules for general-purpose AI models (GPAI) as well as prohibitions on certain AI practices, including the exploitation of people’s vulnerabilities. The implementation timeline for the AI Act is staggered and, measured from its entry into force, looks as follows:

  • the prohibitions on AI practices come into effect at six months;
  • the rules for GPAI will come into effect at 12 months;
  • the rules for high-risk AI systems will come into effect at 24 months; and
  • the rules for AI systems that are products regulated under specific EU laws will apply at 36 months.

Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (AI Convention)

The EU, UK and US are among the signatories to this new international treaty on AI, which sets a global standard for how both the risks and the opportunities of AI should be handled. The first treaty of its kind in the world, it will govern how AI systems are developed and operated in signatory countries, allowing for innovation while safeguarding fundamental rights and values.

EU AI Pact

On 25 September 2024, the European Commission announced that over 100 companies had signed the EU AI Pact, voluntarily adopting the principles of the EU AI Act ahead of its provisions taking effect. The voluntary pledges call on participating companies to undertake at least three core actions:

  1. Implementing an AI governance strategy to encourage AI adoption within the organisation and work towards future compliance with the EU AI Act. 
  2. Mapping high-risk AI systems to identify AI systems likely to be classified as high-risk under the EU AI Act. 
  3. Promoting AI literacy and awareness amongst staff to ensure ethical and responsible AI development.

In addition to these core actions, more than half of the signatories committed to additional pledges, including ensuring human oversight, mitigating risks, and labelling certain types of AI-generated content, such as deepfakes.

International Bar Association

On 30 September 2024, the IBA and the Centre for AI and Digital Policy jointly published a landmark report titled “The Future is Now: Artificial Intelligence and the Legal Profession”. This report underscores the transformative potential of AI in legal practice, while also emphasising the need for responsible adoption of these technologies. The report calls on the legal community to proactively embrace AI, ensuring that its implementation is guided by principles of fairness, accountability, and transparency.

The IBA has made the following recommendations for AI adoption in the legal profession:

  • Promote AI Adoption: Support smaller firms by providing access to AI tools, training, and financial incentives to bridge the technology gap with larger firms.
  • Enhance Governance: Establish guidelines for AI governance focused on data security and privacy, encouraging law firms to develop aligned AI policies.
  • Support Structural Changes: Offer guidance on organisational adjustments for effective AI integration, including fee structures and hiring practices.
  • Facilitate Training: Develop AI literacy training programs for legal professionals, covering ethical implications and practical tool usage.
  • Encourage Consultation: Include diverse stakeholders in the AI regulatory process to create balanced and informed regulations.
  • Promote Consistency: Work with regulators to create coherent AI regulations that minimise fragmentation and address cross-border issues.
  • Update Ethics: Revise ethical guidelines to include provisions for AI use, such as standards for AI-generated work and disclosure obligations.
  • Foster Collaboration: Encourage international collaboration among legal professionals to establish global AI standards and share best practices.

What does this mean for law firms?

Generative AI remains disruptive and its trajectory continues to be fast-paced; in fact, it seems to be accelerating. The government and sector-led regulators – including those overseeing the regulation of law firms and those working with them – are still trying to navigate how best to strike the right balance in its use. All law firms will have to determine how to balance the opportunities generative AI can create with the ongoing need to address the risks and dangers it can pose if used without effective guard rails in place.

There is so much for firms to get to grips with, especially for those with a global practice, where the various approaches in the US, EU, UK and beyond will all need consideration. AI is already extremely valuable for law firms in process-driven areas of their business, but when it comes to the use of generative AI, both internally and in client-facing work, firms must be alive to the fact that legal professionals have a crucial human verification role in a firm’s use of this technology.

As a result, firms must ensure that they adopt a robust risk management strategy for the use of generative AI, approaching key decisions about its use in the same way that they would approach other key decisions from a risk management perspective. It is helpful to bear in mind the messaging from the SRA’s Enforcement Strategy: be accountable; be able to justify the decisions you make; and be able to demonstrate the processes that underpin those decisions.

Further Information

If you have any questions or concerns about the topics raised in this blog, please contact Jessica Clay or any member of our Regulatory team.

About the Author

Jessica Clay is a partner in the legal services regulatory team. Jessica has a substantial practice advising law firms, including magic circle, global and boutique law firms, as well as partners and others working within firms and lawyers working in-house. Jessica also advises alternative legal services providers.
