Blog
The EU AI Act and the UK AI Bill: comparing two approaches to regulating artificial intelligence
Mark Fallmann
The Regulatory Regimes
The European Union's AI Act and the UK's AI Bill, currently before the House of Lords, represent two significant legislative efforts to regulate artificial intelligence (AI) within their respective jurisdictions. Both frameworks seek to ensure ethical AI development and use while fostering innovation, yet they diverge markedly in their regulatory approaches and implications.
The EU's AI Act, proposed in April 2021, is a comprehensive regulatory framework that categorises AI systems by risk level: unacceptable risk, high risk, limited risk, and minimal risk. The Act bans AI applications deemed to pose an unacceptable risk to fundamental rights and safety, such as social scoring by governments, and imposes stringent requirements on high-risk AI systems. These requirements include rigorous risk management, robust data governance, transparency mandates, and provisions for human oversight. Non-compliance can result in substantial fines of up to 6% of a company's global annual turnover, reflecting the EU's precautionary regulatory philosophy and its emphasis on thorough protections.
In contrast, the UK's AI Bill adopts a more flexible, principles-based approach. It seeks to establish the UK as a global AI leader by promoting innovation while ensuring ethical standards are maintained. The Bill is less prescriptive than the EU’s AI Act, emphasising adaptability to rapid technological advancements. It proposes the creation of a national AI ethics committee to offer guidelines and oversight, rather than enforcing detailed rules. This approach provides businesses with more leeway to innovate, reducing the regulatory burden compared to the EU’s framework.
Impact on Businesses Operating in One or Both Jurisdictions
Businesses looking to operate in the EU face significant regulatory challenges due to the stringent requirements of the AI Act. Companies must invest heavily in compliance, particularly if their AI systems are classified as high-risk. This involves implementing comprehensive risk management systems, maintaining rigorous data governance practices, ensuring transparency, and establishing human oversight mechanisms. The cost of compliance can be substantial, creating barriers to entry, especially for smaller firms and startups. However, adherence to the AI Act can also enhance a company's reputation and marketability within the EU, signalling a commitment to high safety and ethical standards.
Operating in the UK under the AI Bill offers a more business-friendly environment, with lower compliance costs and fewer regulatory hurdles. The principles-based approach allows more agile development and deployment of AI technologies. Businesses can focus on aligning their operations with the ethical guidelines issued by the national AI ethics committee, retaining flexibility in their AI practices. However, the less prescriptive nature of the UK's approach may create uncertainty about compliance standards, so businesses must stay vigilant and adapt to evolving guidance.
Businesses aiming to operate in both the EU and the UK must develop a strategic approach to navigate the differing regulatory landscapes. A dual compliance strategy is essential. This involves establishing comprehensive compliance programmes to meet the EU's stringent requirements while adopting flexible, ethical AI practices to align with the UK's principles-based approach.
Key strategies include:
- building a compliance programme that satisfies the EU AI Act's requirements for risk management, data governance, transparency, and human oversight, particularly where AI systems are classified as high-risk;
- aligning UK operations with the ethical guidelines issued by the national AI ethics committee and documenting how those principles are applied in practice;
- monitoring regulatory developments in both jurisdictions and adapting AI governance practices as legislation and guidance evolve.
By adopting these strategies, businesses can effectively manage the impact of both AI regulatory regimes, ensuring compliance whilst fostering innovation and maintaining a competitive advantage in both markets.
If you have any questions regarding this blog, please contact our Corporate, Commercial and Finance team.