
AI and access to justice: Deepening the divide?

18 November 2025

This article was first published in The New Law Journal in August 2025.

Frontline legal services have the most to gain from artificial intelligence, but they also face unique challenges in adopting it.

The judgment of the Divisional Court in R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 (Admin) has generated significant interest within the legal community. Although the court determined that, on the specific facts of the two cases before it, reliance upon ‘fake’ citations did not justify commencing contempt proceedings, its concern was clear. As Dame Victoria Sharp P said in her judgment: ‘There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence [AI] is misused.’

Although the headlines in the legal press about these cases have focused upon fake case citations, the risks posed by AI within litigation are much broader. Generative AI is now embedded in many publicly available search tools, as well as legal research platforms. Fake citations may be linked to fake judgments; inaccuracies in emphasis or omissions of key points emerge from AI-generated summaries; and for many people, the layers of potential inaccuracy are not well understood. Further, where generative AI is readily and publicly available, these errors may be introduced by any member of the legal team, including the client. These issues have played out in courts all over the world, and the Divisional Court highlighted some of these incidents in an appendix to the judgment.

The Ayinde case, on which we advised, involved a local law centre representing a homeless client in judicial review proceedings. For law centres providing legal support to the most vulnerable in our community, the potential advantages provided by AI are significant. As are the risks. Can these ‘frontline legal services’ harness the benefits? And what are the challenges?

This highlights a broader issue—as the sector rushes to deal with the opportunities and risks of AI, do we risk deepening the divide between the ‘haves’ and the ‘have-nots’ of the legal sector?

The approach of the regulators

The UK government has taken a firmly ‘pro-innovation’ approach to AI and considers it best regulated by specialist regulators, given their sector-specific knowledge. That puts the onus on the Solicitors Regulation Authority (SRA) and the Bar Standards Board (BSB) to issue further guidance to regulate the use of AI in the sector.

For some years, legal regulators have seen technological innovation as one means to address the sizeable unmet need for legal support. The SRA has identified the opportunities to use technology to address ‘access deserts’—for example, developing pro bono platforms and bespoke case management systems within university legal advice services. In statutory guidance issued in April 2024, the Legal Services Board (LSB) specified that ‘regulators should adopt an approach to the promotion of technology and innovation for improving access to justice and addressing unmet legal need that puts the public interest and the interests of consumers first’.

The Master of the Rolls also acknowledged earlier this year that generative AI should be used to promote access to justice and to improve the quality of decision-making. Further, at a conference in 2024 he stated:

‘Many fear, as I have said, that they pose threats to the way things have always been done. And they really do. But the simple fact is that we would not be properly serving either the interests of justice or access to justice if we did not embrace the use of new technologies for the benefit of those we serve.’

The Ministry of Justice’s commitment to the growth of legal tech is also evident through its recent funding for LawTechUK to encourage the greater use of AI and technology in legal services.

There is therefore clearly an appetite to harness the potential of AI to deliver access to justice.

 

Innovation within the legal industry

The legal technology industry has not stood still. Generative AI has been swiftly integrated into traditional legal platforms such as LexisNexis and Thomson Reuters. Further, significant numbers of tools and platforms are being developed independently to address broader and more bespoke legal needs.

A kaleidoscope of options is available, at varying stages of development and maturity. For any provider of legal services, this is a difficult landscape to navigate. But with adequate resources, time and (increasingly) in-house technology and innovation specialists, the problem is surmountable for large organisations. Particularly where established industry knowledge platforms rely upon reputable data sources and mature information governance, it is possible to establish appropriate confidence in the safety and accuracy of these tools. Big Law therefore stands to benefit from the efficiencies of AI while being able to manage the risks.

However, the solutions are less obvious for frontline legal services. These are usually publicly funded organisations and/or charities operating with limited and uncertain resources. Licensing and subscription fees put many of the platforms on the market beyond the reach of those on the frontline. And the odds of such organisations being able to hire specialists to capitalise on the opportunities of AI are low.

The limited guidance so far has favoured a ‘one size fits all’ approach. There is, in our view, a need for more direct support and innovation with an eye on the challenges unique to frontline legal service providers.

 

Challenges on the frontline

These challenges fall broadly into the following categories. While not all are unique to frontline service providers, they are often heightened in that environment:

  1. Literacy and training

At every level of an organisation, from senior leaders to new starters, technological literacy is required even to discuss the opportunities and risks of generative AI. Informed training on specific tools needs to be delivered in a safe and effective way. Whether frontline service providers have the resources to provide such training on an ongoing basis depends entirely on the state of their funding.

A number of these organisations also operate with a mix of legally trained (and regulated) staff and other individuals, such as paralegals, case workers and advisers, who work on a range of issues but are not legally qualified. It is essential, therefore, that their resources stretch to training these individuals too.

  2. Information governance and bias

Most generative AI tools are built on large language models trained on vast datasets. Data processing at every stage of the lifecycle should be understood, to ensure that confidential and privileged information is used in line with professional obligations and the UK General Data Protection Regulation.

Further, bias in outputs is rooted in these datasets, which may be significantly weighted towards certain sectors of the community, practice areas or public views. This may be a particular issue when serving underrepresented sectors of the community.

  3. Assessing and onboarding

Given the vast array of independent tools available, selecting the right tool to meet the needs of the organisation becomes challenging, especially when considering questions of cost, compatibility and complexity.

  4. Accuracy

Hallucinations are an inherent risk of all generative AI. While tools are available to check ‘fake’ citations, it is more challenging to ensure that summaries or responses have a sound legal basis without actually undertaking the legal research. Where resources are tight, there is always the risk of over-reliance upon seemingly sensible legal outputs.

  5. Lack of regulatory guidance

Ayinde contains some observations on the obligations on lawyers to check case citations and, more generally, to ensure the accuracy of material put before the court, whether or not they originally produced it. In that context, the emergence of AI citation-checker tools is helpful, although these tools inherently carry risks in terms of processing confidential client information.

This serves as a starting point, but the scope of professional obligations and duties remains unclear in the absence of fuller regulatory guidance.


 

Meeting the challenge

It is clear that frontline legal services operate in a far riskier environment than Big Law. Generative AI may be unseen and unchecked, or implemented without a co-ordinated strategy, adequate information governance or training. This creates the risk that these frontline services, which could arguably benefit significantly from generative AI, fall further behind.

Law firms and chambers with deeper pockets are better placed to harness technology while also managing the risks, by investing in best-in-class AI tools and specialist staff.

In response to the Ayinde case, the Law Centres Network has distributed guidance and webinars on the safe use of AI to its members, updated its legal supervision and governance guidelines, and organised training on compliance and professional ethics. But to fully bridge the divide, the support of the entire legal ecosystem is needed: law firms, regulators, academics, the judiciary and government, as well as charities, NGOs, not-for-profit organisations and suppliers with technological expertise.

The growing focus on ‘Justice Tech’ is encouraging, and positive steps are being taken within the legal tech industry to develop generative AI tools that address issues specific to frontline services: for example, tools to identify legal rights, provide chatbot-style family law advice, and complete forms and applications. The Nuffield Foundation has a research project to develop a portal which provides predictive feedback for self-representing individuals involved in summary judgment proceedings.

 

Conclusion
 

If the potential of generative AI can be unlocked safely, and within a proper regulatory framework, for use by frontline legal services, a swathe of services could be delivered effectively: the swift identification of legal issues, assessment of legal grounds, analysis of precedent and accurate advice on prospects of success. However, the entire legal community needs to play a role in making this possible. The message from the fake cases judgment is clear: AI cannot be a substitute for human oversight and intervention. But that is not to say AI cannot be an invaluable tool for the provision of frontline legal support to some of the most vulnerable in society.


About the authors

Emily is a partner within the Public Law team specialising in information law, inquests, inquiries and internal investigations. Her background in criminal and regulatory proceedings, both defending and prosecuting, equips her to fully support clients involved in complex investigative processes. She is described as “precisely the kind of solicitor a client wants when the going gets tough” (Legal 500 UK 2021).

Sahil is a senior associate in the Public Law team. His practice covers all aspects of public law, from judicial reviews to public inquiries, with particular expertise in environmental and climate change judicial reviews, planning challenges, human rights-based challenges, and public procurement litigation.

 
