In this blog series, we will review the key proposals for reform of data protection law within the Government’s consultation paper ‘Data: A New Direction’. We will consider how far the Government will stray from the current path and signpost some potential pitfalls and practicalities for consideration along the way.
The GDPR makes no explicit reference to Artificial Intelligence. The omission is perhaps unsurprising given the elusiveness of an agreed definition and the intention for the legislation to be technology neutral. ‘Data: A New Direction’ contemplates reforming the UK’s data protection regime to deal head on with AI and machine learning. Given that the government’s 10 Tech priorities include both “unlocking the power of data” and “unleashing the transformational power of tech and AI”, it is no surprise that the government is looking to put an AI slant on data protection reforms.
Whilst the GDPR does not mention AI, it does deal with automated processing, profiling and decision-making involving personal data, and it sets out a series of steps that must be followed when such processing is carried out.
The GDPR puts a specific transparency obligation on controllers when they are carrying out automated decision-making. Article 22 restricts decision-making based solely on automated processing that produces legal or similarly significant effects. The GDPR only permits such decision-making in certain circumstances. The Data Protection Act 2018 (section 14) permits UK controllers to make those decisions in a wider range of circumstances, provided that additional safeguards are met. The UK reserved the right to introduce secondary legislation to further protect data subjects affected by automated decision-making but has not yet done so. When the UK incorporated the GDPR into UK law after it left the EU, no changes were made to the provisions dealing with automated processing and decision-making.
Simplification of Regulations
‘Data: A New Direction’ describes navigating and applying relevant data protection provisions as a “complex exercise” for organisations looking to develop or deploy AI tools. The consultation contemplates that the data protection regime generates confusion and impedes the uptake of AI tools. There is a particular focus on whether the concept of fairness is clear in the data protection context or needs further clarification. The consultation paper then takes a deeper dive into certain provisions of the data protection regime affecting AI systems.
Monitoring Bias in AI Systems
The government has invited views on whether organisations should be permitted to use personal data more freely, subject to appropriate safeguards, for the purpose of training and testing AI responsibly. The government is exploring the possibility of allowing further use of special category data (which includes data relating to race and sexuality) and criminal convictions data to allow organisations to monitor, detect and correct bias in their systems. As the consultation acknowledges, such monitoring is arguably already permitted by schedule 1, paragraph 8 of the Data Protection Act 2018. The consultation has invited views on whether a new condition would create clarity, and whether organisations should be able to process individuals’ data for the purpose of bias monitoring, detection and correction in relation to AI systems without balancing legitimate interests against the rights of data subjects.
Automated Decision-Making and Article 22 GDPR
As explained above, Article 22 GDPR restricts decision-making based solely on automated processing that produces legal or similarly significant effects. The government has sought evidence on whether this provision needs clarification, and whether it is “future-proof”. It has also sought responses to the recommendation of the Taskforce on Innovation, Growth and Regulatory Reform that the Article should be removed altogether. The removal of a key safeguard set out in the GDPR may appease some businesses, but the resulting divergence from the EU data protection regime could put data flows between the UK and member states at risk.
Public Trust in AI
The government recognises that “public trust is critical to the adoption of AI and consequently the growth of the AI sector”. The consultation looks into whether reform is needed to encourage those dealing with automated decision-making to improve transparency and demonstrate accountability.
There is a tension within the government’s consultation over whether AI should be approached carefully or “unleashed”. On the one hand, it acknowledges that there are issues with bias developing in systems, with the concept of fairness in data protection, and with public trust in automated decision-making systems. On the other, it asks for views on a significant reduction in safeguards where automated processing is used to make decisions.
‘Data: A New Direction’ recognises that AI may not involve personal data at all. Reform of data protection legislation will therefore not produce a comprehensive set of rules for those operating AI systems; that would require a much broader reform package dealing with the way organisations collate, utilise and deploy all types of data. Organisations whose AI systems use personal data are unlikely to see radical change, but may see some moderate reforms to how those systems can be tested and deployed.
If you have any questions or concerns about the topics raised in this blog, please contact our Public Law team.
ABOUT THE AUTHOR
Fred is a senior associate within the Public Law Department and International Crime Group. His clients have included businesses, trade associations, religious institutions, schools, education providers, charities, and private clients including high net worth individuals, and senior political and business figures.