Technology in the accountancy and legal sectors – what are the regulators doing? The long read…
AI has been a hot topic in technology trends for the last few years, with regular news features on its increased use – whether that is driverless cars, robots that do your cleaning, or the more run-of-the-mill automation tools developed and rolled out by businesses to analyse large data sets and thereby standardise and speed up routine processes.
Over the last five years or so, while the use of AI has increased across both the legal and accountancy sectors, this has not necessarily been as ground-breaking on the innovation front as first anticipated. The adoption of automation in legal services has been driven predominantly by cost and competition, and the associated drive for efficiency and consistency. AI is now often used, for example, for data and document review in litigation, and it also features in predictive technologies and in due diligence work. It is, unsurprisingly, often the first choice for repetitive, process-intensive legal work. In other areas of practice, such as corporate and transactional work, document assembly tools are increasingly used for contract drafting, with a number of larger law firms using Kira Systems – a machine learning contract review software. The 2020 ILTA-Blickstein Group Legal AI Tool Usage Survey found that the AI tools most relied upon are those considered to be ‘integral to daily operations’. And these are being rolled out by leading legal technology brands, which have re-worked their existing resources to feature elements of AI capability. Products that immediately come to mind, and ones with which we are familiar, are Relativity and the HighQ portal developed by Thomson Reuters.
In the accountancy sector, AI is most often used in processes aimed at detecting fraudulent activity and money laundering and is certainly becoming a trend in audit processes. This comes as no surprise given that AI lends itself to automated processes and provides a layer of consistency and efficiency to any such process.
Recognising the increased use of AI, earlier this year the Financial Conduct Authority (FCA) announced that it was establishing, with the Bank of England, the Artificial Intelligence Public-Private Forum (AIPPF), which held its first meeting in October. Its purpose is to develop a constructive dialogue between the public and private sectors in order to better understand the use and impact of AI in the financial services sector. The Forum aims to share information and understand the challenges of using AI, as well as the barriers to deployment and any potential risks or trade-offs.
Key to this, in our view, is breaking through the divisive line, where on either side sit two rather different schools of thought. On one side, AI can be seen as a way to inject agility into a process: it can be used to switch rapidly between tasks, keep track of progress made and highlight or predict any challenges or hurdles that might arise. On the other side of the line is the firmly held view that the most certain way to achieve the desired outcome – in this context, the delivery of professional services – is to rely upon the advanced cognitive functions of the human brain, its analytical and problem-solving capabilities and emotional intelligence, plus the fount of knowledge and insight a subject matter expert is likely to be able to draw upon.
But surely a compromise can be found in all of this? There must be a position whereby AI can build upon extant human intelligence, by recognising patterns and anomalies in large amounts of data – which is key, for example, in detecting fraud. AI can also scale and automate repetitive tasks in a more predictable way than several different human brains undertaking that same task – including complex calculations, for example, to help recognise and manage risk.
Customer due diligence and fraud detection represent areas of risk for any firm operating in the accountancy or legal sector. This is an area of increasing focus for AI technology and could help firms to reduce and manage their risks more efficiently. Onfido is one example of a company offering AI solutions for customer ID checks and verification, which is a key element to on-boarding clients and minimising the risks of money laundering. The product uses image recognition in order to verify a customer’s identity using a photo and identity documents. The aim is to simplify the process for the customer, save time and enable firms to comply with their regulatory obligations in terms of customer due diligence processes in a more cost-effective and consistent way.
Tailored anti-money laundering (AML) solutions are also popping up in the sector. AML compliance processes often involve high levels of manual, repetitive, data-intensive tasks. Banks and companies are increasingly using automated processes to perform analytics and identify risk scores, based on algorithms. It was announced last year, for example, that HSBC was trialling a new AI system to spot “odd behaviours” in order to tackle financial crime.
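To make the idea of algorithm-based risk scores concrete, the following is a minimal, purely illustrative sketch of a rule-based approach – the indicator names, weights and threshold are hypothetical and do not reflect HSBC's system or any real AML product:

```python
# Hypothetical risk indicators and weights (illustrative only)
WEIGHTS = {
    "high_risk_jurisdiction": 40,
    "cash_intensive": 25,
    "rapid_movement_of_funds": 20,
    "politically_exposed_person": 35,
}

REVIEW_THRESHOLD = 50  # assumed cut-off for manual review


def risk_score(indicators):
    """Sum the weights of the indicators present, capped at 100."""
    score = sum(WEIGHTS[name] for name in indicators if name in WEIGHTS)
    return min(score, 100)


def needs_review(indicators):
    """Flag a customer or transaction for human review."""
    return risk_score(indicators) >= REVIEW_THRESHOLD


# Example: a cash-intensive business in a high-risk jurisdiction
flags = ["high_risk_jurisdiction", "cash_intensive"]
print(risk_score(flags))    # 65
print(needs_review(flags))  # True
```

Real systems combine far richer data and machine-learned models, but the core design is the same: the algorithm scores and triages, and anything above the threshold is escalated to a human investigator.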
In addition to customer due diligence and AML processes, firms are now thinking about how AI and technology can assist them in other areas of work – and this is perhaps where increasingly, innovation comes into play.
For example, there have been recent AI developments for firms looking to tackle sexual misconduct in the workplace. Law firm culture has been in the spotlight of late – firms are expected to take more responsibility for the actions of their employees, and allegations of misconduct must be taken seriously.
Programmers are developing “bots” that are able to identify digital bullying and sexual harassment. Known as "#MeTooBots", the bots can monitor and flag communications between colleagues. The bot uses an algorithm trained to identify possible instances of bullying or other inappropriate conduct, including sexual harassment, in company documents, emails and chat/messaging functions. Data is analysed for various indicators that determine how likely it is to be a problem, with anything the bot reads as being potentially problematic then sent to a lawyer or Human Resources to investigate. The bot is programmed to look for anomalies in the language, frequency and timing of communications. AI firm NexLP’s platform is already used by more than 50 corporate clients, including law firms in London.
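To illustrate the kind of flagging described above, here is a minimal sketch – not NexLP's actual method – of routing messages for human review based on their language and timing. The watch-list terms and "out of hours" window are invented for the example:

```python
from datetime import datetime

# Hypothetical watch-list and out-of-hours window (illustrative only)
WATCH_TERMS = {"quiet", "secret", "alone"}
LATE_NIGHT_START, EARLY_MORNING_END = 22, 6


def flag_message(text, sent_at):
    """Return the reasons, if any, a message should go to HR or a lawyer."""
    reasons = []
    words = set(text.lower().split())
    if words & WATCH_TERMS:                    # anomaly in language
        reasons.append("language")
    if sent_at.hour >= LATE_NIGHT_START or sent_at.hour < EARLY_MORNING_END:
        reasons.append("timing")               # anomaly in timing
    return reasons


# A late-night message containing a watch-list term trips both checks
print(flag_message("keep this quiet", datetime(2020, 11, 3, 23, 15)))
# ['language', 'timing']
```

Production tools rely on trained models rather than fixed word lists, but the escalation pattern is the same: the software only triages, and a person investigates whatever is flagged.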
Another chatbot aimed at dealing with issues arising in the workplace is Spot. Spot has created a chatbot that allows employees to anonymously report issues they believe amount to misconduct, including sexual harassment allegations. Recognising that often sexual harassment goes unreported, Spot allows employees to record their accounts in their own time and report on an anonymous basis. Spot is based upon a basic AI model that uses natural language processing to interact with the user. This enables it to give advice and ask questions, which should help to further an investigation into the alleged harassment. Spot already has numerous companies listed as its customers, including online bank Monzo, which suggests this is a growing market.
Clearly, there are benefits to these initiatives for firms – any attempt to reduce, tackle or prevent sexual misconduct or other harassment in the workplace is fundamental in seeking to promote a good working culture. Having such initiatives in place may also assist firms in working with their regulators should any such issues arise. However, there is likely to be a risk in an employer relying too heavily on such initiatives, and there could always be workplace bullying or sexual harassment occurring which cannot be captured by a reporting tool or a bot. How can, for example, a #MeTooBot assist with instances of sexual harassment which take place entirely offline, say at work events? There is still scope for these issues to be captured in the aftermath, when those involved might share what has happened via online platforms, be this via email or messaging apps. Equally, in the current COVID-19 world we live and work in, face-to-face social gatherings could be a thing of the past or, at the very least, kicked into the long grass. So perhaps this creates an opportunity for AI to really develop and come into its own in this area?
Commentators have also noted that there remain limitations to the technology itself: for example, AI is not yet capable of a broader cultural understanding and will not display the level of insight that a human expert in their field would. There is also a risk that the technology might miss the harassment altogether, prove oversensitive, or be duped by those in the know who learn how the software works. However, as artificial emotional intelligence develops, some of these limitations may become moot points. Nonetheless, it is still very early days, and the extent to which firms will benefit from, or will be able – or, more importantly, choose – to rely upon this technology remains to be seen.
If you have any questions or concerns about the content covered in this blog, please contact a member of the Regulatory team.
Jessica is a Senior Associate with extensive experience specialising in legal services regulation. Jessica’s work in this sector focuses on advising her clients on complying with regulatory obligations, better understanding the importance of legal ethics within regulation, regulatory investigations and public law matters, including reviewing regulatory frameworks and decision-making processes. Outside the legal services sector, she acts both for and against the regulators of the accountancy and actuarial professions.