Insights

Regulation of AI Systems in the UK and EU


Digital Speaks Series

Aug 11, 2023

Artificial Intelligence (AI) is one of the most transformative technologies ever created. At this early stage, the opportunities to implement AI solutions in business and everyday life seem endless.

AI presents both opportunities and challenges. Deployed well, it can drive rapid, sustainable growth and marked gains in efficiency. However, deployment raises legal and ethical questions that organisations are struggling to solve, while national regulators (and governments) are adopting vastly differing strategies to address these issues.

In this instalment of BCLP’s Digital Speaks, we discuss the diverging approaches to the regulation of AI systems in the UK and the EU, and provide some tips for businesses aiming to navigate the complex regulatory landscape.


Drawing from a multi-disciplinary team, our artificial intelligence lawyers advise clients about the application, legal risks and regulatory implications of these complex, cutting-edge technologies.

Hi, Jack.

Hi, Sasha.

Thanks for coming today to talk to us about the regulation of AI. Why don't you tell us a bit about yourself?

My name's Jack. I'm a third year associate in the data privacy team at Bryan Cave Leighton Paisner, and I'm also increasingly working on issues to do with the regulation of artificial intelligence. And what about you?

I'm Sasha. I'm an associate in the tech and commercial team. I focus on advising on commercial contracts, but also intellectual property. Can you tell me a little bit about how AI is being regulated in the EU?

Well, the EU AI Act is a regulation that is currently making its way through the EU's legislative process. As a regulation, it will apply directly in all 27 EU member states. It ascribes a risk-based categorisation to AI systems depending on how much of a threat they pose to the rights and freedoms of individuals. At the top, you have prohibited systems, such as those that use subliminal techniques to distort the behaviour of individuals in harmful ways. Under that, you have high-risk systems, which are subject to quite extensive obligations in relation to governance and accountability. Below that, you have limited-risk systems, for which transparency obligations will be the main regulatory requirement; chatbots would be a good example of a limited-risk system. And below that, you have minimal or no-risk systems, to which no regulatory obligations are set to apply under the Act.

So what approach is being taken in the UK to AI regulation?

Well, in stark contrast to the EU approach, the UK has unveiled a white paper that details five cross-sectoral principles to be enforced by existing sectoral regulators. For the moment, there's no intention to place those principles on a statutory footing, but the government intends to monitor the effectiveness of the rollout of its regulatory regime, and this may change. The principles are quite broad, covering things like safety, security and robustness, and fairness, so we're hoping that more clarity will be provided as to what they really mean in practice.

What practical tips should businesses be thinking about?

Well, firstly, if your business is also likely to be subject to the EU's AI Act, given that it's a more prescriptive set of regulatory obligations, we would advise using that as your gold standard, due to the uncertainty for the moment about what the UK's principles actually entail. The second tip would be to take data privacy very seriously from the outset: data privacy issues crop up throughout the development, training and even deployment of AI systems. So our biggest recommendation would be to check and consult existing regulatory guidance. For instance, the ICO and the CNIL in France have published detailed guidance on how you can achieve regulatory compliance in that area.

So what about timelines?

Well, the EU AI Act is scheduled to come into effect around late 2023, maybe early 2024. There's still some uncertainty around that, but it will then be followed by a two-year transition period before organisations are required to fully comply with every obligation in the regulation. On the UK side, the public consultation on the UK's white paper on AI closed in late June, so we're keenly awaiting any further news on how that regulatory framework evolves.

Thanks very much.

Thanks very much, Sasha.

Meet The Team


This material is not comprehensive, is for informational purposes only, and is not legal advice. Your use or receipt of this material does not create an attorney-client relationship between us. If you require legal advice, you should consult an attorney regarding your particular circumstances. The choice of a lawyer is an important decision and should not be based solely upon advertisements. This material may be “Attorney Advertising” under the ethics and professional rules of certain jurisdictions. For advertising purposes, St. Louis, Missouri, is designated BCLP’s principal office and Kathrine Dixon (kathrine.dixon@bclplaw.com) as the responsible attorney.