The government recently announced that it will be hosting the first global summit on artificial intelligence (AI), with a focus on the safe use of AI. This could not come at a more important time, as the world seeks to unpick the conundrum of AI, both in terms of the challenges it throws up and the opportunities it presents.
It goes without saying that the advancement of AI and its fast-paced trajectory will require agile and progressive leadership across all sectors to protect us from harm while enabling us to reap the benefits it can provide. How law firms start to grapple with this challenge is no exception.
When thinking about how to use AI effectively and safely within a law firm, it is key to identify and manage the associated risks, while not stifling innovation.
The Solicitors Regulation Authority (SRA) is consulting on its business plan for 2023-24 and one of its priorities continues to be supporting innovation and technology, particularly that which improves the delivery of legal services and access to them.
The SRA recognises that fast-moving AI can drive new risks and challenges for regulation and, as part of its work, it will consider the effectiveness of introducing a new regulatory sandbox, enabling those it regulates to test innovative products and initiatives in a safe space.
In terms of the SRA’s regulatory reach, the hook in both codes of conduct is an ongoing requirement on individuals and firms to provide services to clients competently, taking into account their specific needs and circumstances.
For those with management responsibilities, the SRA requires them to ensure the same in respect of those they supervise. Furthermore, law firms must identify, monitor and manage all material risks to their business, among which generative AI is arguably fast becoming one of the most prominent.
In England and Wales, the use of AI would fall within the broad competency requirements; however, it is worth noting that in the US, comment 8 to rule 1.1 of the American Bar Association’s (ABA) Model Rules of Professional Conduct was added almost a decade ago to address technological competency.
This says: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”
As of March 2022, 40 US states had adopted an express duty of technological competence for lawyers, with many of the remaining states taking the view that this duty is already implicit.
The ABA Commission on Ethics 20/20 clarified that the reference was added owing to the “sometimes bewildering pace of technological change”.
We are now a decade on and in a very different space with AI. This raises the question: do regulators, in our case the SRA, need to be doing more? Will the use of AI within law firms eventually be drafted into the codes of conduct?
The government’s recent white paper confirms that there is no plan to give responsibility for AI governance to a new single regulator, but that existing sector-specific regulators will be supported and empowered to produce and implement context-specific approaches that suit the way AI is used in their sector.
The white paper also outlines five principles regulators should have in mind in terms of the safe and innovative use of AI: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Generative AI will prove valuable for law firms in process-driven areas such as due diligence, regulatory compliance (including client onboarding processes) and contract analysis.
However, in terms of managing associated regulatory and reputational risks, firms must not lose sight of the fact that legal professionals have a fundamental oversight role in a firm’s use of this technology. PwC recently referred to this approach as “human led and technology enabled”.
Firms will need to adopt a robust ethical framework to underpin all key decision-making processes, including those relating to the use of AI, heeding key pointers in the SRA’s enforcement strategy: for example, being accountable, being able to justify the decisions reached, and being able to demonstrate the underpinning processes and factors that have been considered.
Finally, the following will stand a firm in good stead when it comes to the use of generative AI:
(i) do not input any confidential or commercially sensitive information into an open (publicly accessible) AI large language model (one illustrative technical safeguard is sketched after this list); and
(ii) scrutinise and verify the information the model generates, as these AI models can produce incorrect content that appears convincingly accurate.
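To make the first pointer concrete, a firm’s technical team might place a simple screen in front of any prompt destined for an external model. The sketch below is a minimal, hypothetical Python example: the client names, the patterns and the screening logic are all assumptions for illustration, not a reference to any particular firm’s controls or to SRA guidance, and a real control would be far more sophisticated.

```python
import re

# Hypothetical client identifiers a firm wants kept out of any prompt
# sent to an external large language model; in practice this list would
# be maintained centrally and cover far more than two names.
CONFIDENTIAL_TERMS = ["Acme Holdings", "Project Falcon"]

# Illustrative patterns that often signal sensitive material.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # e.g. a National Insurance-style number
    re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),   # e.g. a UK bank sort code
]

def screen_prompt(prompt: str) -> str:
    """Reject a prompt that appears to contain confidential material."""
    for term in CONFIDENTIAL_TERMS:
        if term.lower() in prompt.lower():
            raise ValueError(f"Prompt blocked: contains confidential term '{term}'")
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: matches a sensitive-data pattern")
    return prompt  # only a screened prompt would be passed on to the model

# Example: this prompt is rejected before it ever reaches an external model.
try:
    screen_prompt("Summarise the due diligence findings for Acme Holdings")
except ValueError as exc:
    print(exc)
```

A screen of this kind supports, but does not replace, the human oversight described above; it is the lawyer, not the filter, who remains accountable for what is shared and relied upon.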
These pointers, combined with the government’s principles and a risk-based approach that takes on board the SRA’s stance as set out in its enforcement strategy, form a very good starting point.
This article was first published on Legal Futures on 20th June 2023.
If you have any questions regarding the information above, please contact Jessica Clay in our Regulatory team.
Jessica Clay is a partner in the legal services regulatory team. Jessica has a substantial practice advising law firms (including magic circle, global and boutique firms), partners and others working within firms, and lawyers working in-house. Jessica also advises alternative legal services providers. She advises on a range of SRA-related matters, including authorisation and changes to business structures, reporting obligations, compliance and ethical standards, with a focus on matters relating to counter-inclusive behaviours and how these impact law firm culture. Jessica has recently advised firms and individuals in respect of allegations relating to AML and counter-inclusive conduct, including sexual misconduct, which are being investigated by the SRA.
We welcome views and opinions about the issues raised in this blog. Should you require specific advice in relation to personal circumstances, please use the form on the contact page.