In 2021, the European Commission submitted a draft regulation on the use of artificial intelligence ('AI'). On 6 December 2022, agreement was reached on this new European 'AI Regulation'. The legislation is expected to be introduced in autumn 2023. All European Union (EU) member states will then be required to comply with the AI Regulation.

High time to dive deeper into this upcoming regulation. In this blog, you will read about the purpose, intent and implications of the new regulations.

Definition of AI

The preliminary definition used by the European Commission can be freely translated as follows: an application of AI is software that, for a given set of human-defined objectives, can generate output such as content, predictions, recommendations or decisions affecting the environment it interacts with.

Think of the software behind the personal assistant Siri, self-driving cars, facial recognition to unlock your phone or the personalised suggestions shown by Netflix or Spotify. Due to the broad wording of the definition, the AI Regulation will apply not only to software products, but also to products that only partially rely on AI. The new regulations will therefore have a major impact.

This is partly because every provider and user of applications of AI within the EU has to comply with these regulations. If a provider or user is established outside the EU, the regulations will still apply if the output of the application of AI is used in the Union.

What is the purpose of the new regulation?

The aim of the regulation, in short, is to ensure that applications of AI are safe and do not violate existing (EU) legislation. Think of preventing discrimination and protecting fundamental rights. At the same time, the regulation should also provide legal certainty to facilitate investment and innovation in AI.

What is the intent of the regulation?

To achieve these goals, a risk-based approach has been chosen, based on the risk posed by the application of AI. This distinguishes between applications of AI that:

  1. pose an unacceptable risk;
  2. involve high risk;
  3. involve a specific manipulation risk; or
  4. involve low or minimal risk.

AI at unacceptable risk

Applications of AI that fall into the first category (unacceptable risk) are prohibited. In simplified terms, the main applications that fall under this are applications of AI that 'materially distort' or exploit the behaviour of individuals. Consider the social scoring system used in China, where the government gives each citizen a score that rises or falls depending on that citizen's behaviour.

High-risk AI

Applications of AI from the second category are subject to mandatory regulations, including the implementation of a risk management system throughout the life cycle of the application. Thus, this is not a one-off obligation. In addition, high-risk applications of AI are subject to conformity assessment before use is allowed and human supervision of the systems must remain possible.

Besides a general formulation of 'high-risk' applications of AI, the annex to the AI Regulation lists eight sectors/areas where high risk is assumed for certain processes. These include AI in the areas of:

  1. biometric identification and categorisation of natural persons;
  2. management and operation of critical infrastructure;
  3. education and vocational training;
  4. employment and workers' management;
  5. access to essential private and public services;
  6. law enforcement;
  7. migration, asylum and border control; and
  8. administration of justice and democratic processes.

Furthermore, the obligations differ for each organisation involved in the application of AI. For instance, different obligations apply to providers, product manufacturers, importers, distributors, users and other third parties involved in the AI value chain.

AI with specific manipulation risk

A transparency requirement applies to applications of AI that do not involve a high risk but do involve a specific risk of manipulation. Examples include deepfakes, where images of people (e.g. their emotions or facial expressions) are created or modified by means of AI. If such applications are used, this will have to be made known to users.

AI with low or minimal risk

Applications of AI that involve low or minimal risk are in principle allowed. However, the AI Regulation does encourage the voluntary application of the mandatory rules for the second category (high-risk applications of AI) to low or minimal-risk applications of AI.

Supervisory body

The new regulations also reveal that a European regulator will be created. Among other things, this regulator will be charged with ensuring compliance with the AI Regulation and will cooperate with national regulators. A comprehensive system of penalties for regulatory violations will also be put in place. Fines can amount to up to 6% of global annual turnover.

What are the implications of the AI regulation?

The broad definition of applications of AI will have a major impact in almost all sectors (and not only within digital services). For healthcare institutions, non-profit organisations and government agencies, the AI Regulation will also raise the necessary questions and force action. Getting the classification of applications of AI right is crucial: classification into categories two, three or four has important implications for the measures to be taken. It is therefore advisable to assess in good time whether the new regulations will have an impact on your organisation and whether your organisation is ready for them. We will of course be happy to help you with this.

Questions or contact

If you need help setting up your organisation or have questions about the new AI regulation, contact one of our lawyers from Team IP/ICT/Privacy.

This article was written by