The use of generative and agentic AI in audit is increasing rapidly as accountancy firms seek to improve efficiency in audit engagements. Regulatory guidance has, however, largely trailed the pace of innovation, with little formal guidance on this topic issued since last July, when the FRC published its “landmark” guidance on AI in audit. That guidance was an important first step in providing a “coherent approach” to AI deployment, and gave insight into the documentation the FRC expected to see for AI tool development.
Late last month, however, the FRC published further guidance focused on identifying and mitigating risks arising from the use of generative and agentic AI in audit engagements (“the March Guidance”). Described by the FRC as “the first from any audit regulator globally on generative and agentic AI”, the March Guidance sets clearer expectations about where responsibility and regulatory scrutiny will sit as AI adoption increases.
Three Risks to Audit Quality
The March Guidance categorises AI‑related risks to audit quality into three areas:
- First, the FRC identifies the risk of deficient outputs, where the output of an AI tool itself might be flawed, incomplete or inappropriate, but is subsequently relied on in the audit. This could be due to an issue with the inputs, or with the performance of the system itself. Categories of deficient outputs include fabricated material (hallucinations), missing information that should have been included (omissions), misrepresentations of information (distortions), and faulty reasoning.
- Second, the FRC addresses the risk of misuse of outputs, where an AI tool produces an appropriate output, but it is misinterpreted or misunderstood by the user, leading to the output being inappropriately relied on during the audit. The guidance recognises that explainability will vary by tool and use case, but stresses that auditors must understand outputs sufficiently to evaluate them critically.
- Third, the FRC highlights the risk of non‑compliant methodology, which arises where an audit firm’s methodology permits approaches that might not meet established auditing standards. This is especially likely where the methodology involves new forms of audit procedure, or a different approach to the audit, as a result of reliance on a new AI tool.
Mitigating AI Risks
The March Guidance sets out possible mitigations across all three risk categories. Although it provides two illustrative examples of how a hypothetical audit firm might assess the risks that an AI tool poses to audit quality, the nature and extent of the mitigations implemented in each category remain a matter of professional judgement. The overall goal for firms should be to achieve an appropriate level of confidence in the quality of any AI output, such that overall audit quality is maintained throughout the engagement.
The mitigations outlined in the March Guidance (set out below) certainly serve as a useful checklist for all firms seeking to deploy generative and agentic AI in audit engagements.
- Firms should have safeguards in system design and development. These could include designing a more detailed workflow for an AI system: what tasks will the system be set, and how will it approach them? What are the right points in the process to build in review activities (whether by humans, other Large Language Models (LLMs), or rules-based protocols)? Can the risk of deficient outputs be mitigated by splitting a complex task into less complex ones, distributing the cognitive load across multiple steps or components? Can one LLM review the work of another, or the work product of multiple LLMs be synthesised? Considering such questions will help ensure that the risks of deficient system outputs are appropriately mitigated (a sketch of one such workflow follows this list).
- Firms also need appropriate approval and certification processes. These might include testing that an AI tool consistently produces outputs appropriate for its intended purpose, and implementing a process to monitor the tool’s performance, including identifying any unexpected behaviours and assessing how these might affect audit quality (a sketch of a simple consistency check also follows the list).
- Firms should have robust training and governance arrangements. Employees must know how and when to use an AI tool. Training on the quality of prompts (both in terms of task specification and the provision of supporting information), and on how to review and oversee the tool’s work and identify deficiencies, should be prioritised to ensure that audit quality is maintained. Moreover, firms should have clear policies in place governing when and how such AI tools should be used.
- Finally, the need to have human review and oversight cannot be overlooked. Staff members who will be reviewing outputs need to have the appropriate competence to identify potential deficiencies in outputs. They must also be mindful of the need to apply professional scepticism, have knowledge of the main risks of output deficiency for the particular tool and its use case, and be alive to their own risk of automation bias.
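The FRC does not prescribe any particular implementation, but the design questions in the first bullet can be made concrete. The following minimal Python sketch, in which every function name, check and subtask is an illustrative assumption of ours rather than anything drawn from the March Guidance, shows one way a workflow might split a complex task into simpler subtasks and gate each output through a second-model review and a rules-based check before human sign-off:

```python
from dataclasses import dataclass


@dataclass
class StepResult:
    task: str
    output: str
    approved: bool
    notes: str


def drafting_model(task: str) -> str:
    # Hypothetical stand-in for a call to the firm's generative model.
    return f"Draft output for: {task}"


def reviewing_model(draft: str) -> tuple[bool, str]:
    # Hypothetical stand-in for a second LLM reviewing the first's work,
    # e.g. checking the draft against the underlying audit evidence.
    return bool(draft.strip()), "No issues identified by reviewer model."


def rules_based_checks(draft: str) -> tuple[bool, str]:
    # Deterministic, rules-based protocol: here, a simple screen for
    # placeholder text left in the output.
    banned = ["[todo]", "lorem ipsum"]
    ok = not any(marker in draft.lower() for marker in banned)
    return ok, ("Rules-based checks passed." if ok else "Placeholder text found.")


def run_subtask(task: str) -> StepResult:
    # Each subtask passes through draft -> model review -> rules check.
    draft = drafting_model(task)
    review_ok, review_notes = reviewing_model(draft)
    rules_ok, rules_notes = rules_based_checks(draft)
    return StepResult(task, draft, review_ok and rules_ok,
                      f"{review_notes} {rules_notes}")


if __name__ == "__main__":
    # Splitting one complex task into smaller ones distributes the
    # cognitive load across steps, as the guidance suggests.
    subtasks = [
        "Summarise the revenue recognition policy from extract A",
        "List journal entries above the testing threshold",
        "Flag inconsistencies between extracts A and B",
    ]
    for result in (run_subtask(t) for t in subtasks):
        status = "ready for human review" if result.approved else "escalate"
        print(f"{result.task}: {status}")
    # Automated gates only route work; final evaluation and sign-off
    # remain with the engagement team.
```

The point of the structure is that automated gates route work to human reviewers; they do not replace them.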
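In the same spirit, a pre-approval certification test might probe output consistency. This sketch, again using a placeholder tool call, threshold and benchmark prompt of our own invention, runs the tool repeatedly on a fixed input and flags unexpected variability as a monitoring finding:

```python
from collections import Counter


def ai_tool(prompt: str) -> str:
    # Hypothetical stand-in for the tool under certification; a real
    # harness would call the deployed tool's API.
    return "risk classification: low"


def consistency_test(prompt: str, runs: int = 20, threshold: float = 0.9) -> bool:
    """Run the tool repeatedly on the same benchmark input and check that
    it produces the same output often enough for its intended purpose."""
    outputs = Counter(ai_tool(prompt) for _ in range(runs))
    top_share = outputs.most_common(1)[0][1] / runs
    if top_share < threshold:
        # Unexpected variability is itself a monitoring finding to be
        # assessed for its effect on audit quality.
        print(f"Inconsistent outputs observed: {dict(outputs)}")
    return top_share >= threshold


if __name__ == "__main__":
    ok = consistency_test("Classify this journal entry for audit risk")
    print("Certification check passed" if ok else "Investigate before approval")
```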
Conclusion
As compared with last year’s guidance, the March Guidance presents a more detailed and evolved framework for firms seeking to deploy AI tools in audit engagements. While the March Guidance is characterised as a codification of good practice, the FRC notes that it will also provide a “conceptual foundation for future FRC work in this area”. Firms would therefore be well advised to consider carefully whether their current practices align with the guardrails it sets out.
Ultimately, however, what is clear from the March Guidance is that accountability remains unchanged: in line with ISQM (UK) 1 and ISA (UK) 220, firms and the engagement partner remain fully responsible for audit quality, regardless of how advanced or autonomous any AI tool becomes.
About the Authors
Ian Ko is a Senior Associate in the Regulatory team at Kingsley Napley LLP. He specialises in advising professional services firms and individuals, particularly in the accountancy and audit sector, who are subject to investigations and enforcement proceedings, as well as those seeking advice on regulatory compliance, including in the use of AI.
Ananta Singh is a Trainee Solicitor in the Regulatory team at Kingsley Napley LLP.