New EU Regulation on Artificial Intelligence is on its way

Back in April 2021, the European Commission presented a proposal for what could become the first targeted EU legislation in the field of Artificial Intelligence. This so-called AI (Artificial Intelligence) Regulation aims to ensure EU citizens' trust in the use of AI systems.

Structure and scope of the AI Regulation

Overall, the AI Regulation sets out four different types of rules, which can be characterised in general as follows:

  1. Prohibition of certain AI systems
  2. Special requirements for the use of certain AI systems
  3. Transparency requirements for certain selected AI systems that interact with humans
  4. So-called “codes of conduct” for systems that are not characterised as high-risk systems

According to Article 3 and Annex I of the Regulation, an AI system is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. The definition is thus quite broad and leaves some uncertainty as to which systems will be covered by the Regulation. Such uncertainty can arise even where a system or software solution uses AI techniques only to a very limited extent.

However, the European Commission intends for the definition of AI systems to be updated continuously so that it keeps pace with technological developments. Article 4 of the Regulation therefore provides for this possibility.

Certain specific areas are expressly excluded from the scope of the AI Regulation. This applies, for example, to AI systems designed or intended exclusively for military purposes, and to self-driving cars. The intention is for such areas to be regulated separately and more specifically than under the general rules of the AI Regulation.

Strict requirements for high-risk systems

The main focus of the AI Regulation is to regulate the so-called high-risk systems listed in Annex III to the Regulation. These systems are further divided into eight categories: biometric identification, critical infrastructure management, HR, law enforcement, migration and asylum, education, certain “essential benefits” (including welfare benefits, credit assessment, etc.) and, finally, systems used in the administration of justice.

The Regulation contains several special requirements for (ongoing) quality assurance and transparency for high-risk systems. Systems that fall into the category of high-risk systems must also be registered in a special EU database and be CE-marked.

In general, all systems that are in some way intended to interact with (natural) persons must be designed in such a way that the persons are informed that they are dealing with an AI system, e.g. in cases where a “chatbot” or similar automated chat function is used. Moreover, there is a special duty to provide information if the system manipulates images, sound, or video, including the use of “deep fake” videos.

The Regulation also contains an outright prohibition on certain types of AI systems. The banned AI systems include, for example, systems that are manipulative or physically or mentally harmful to humans, especially if the system is intended for use by children, systems that use “social scoring” based on surveillance, and certain forms of facial and personal recognition.

Geographical scope of the Regulation

According to Article 2, the AI Regulation is intended to apply to all suppliers who in some way deal with an AI system within the EU. It is thus irrelevant whether the supplier in question is domiciled outside the EU. In addition, the Regulation also covers all users of AI systems covered by the Regulation if they are domiciled in the EU.

This broad group of addressees is intended to include suppliers, importers, distributors, and users, so that all links in the chain dealing with AI systems are regulated. Collectively, this broad circle is referred to as “operators” of AI systems. Again, stricter requirements apply to the use of high-risk systems, and suppliers in particular are subject to special obligations. Chapter 2 of the Regulation lists transparency and security as examples of these stricter requirements.

It follows from this delimitation that the Regulation is intended to govern the marketing, sale, and use of AI systems; the actual development of AI systems is not covered. Consequently, nothing prevents an AI system from being developed within the EU in violation of the Regulation, as long as the system is not marketed, sold, or used within the EU.

Regarding GDPR

This Regulation is intended to apply alongside the General Data Protection Regulation (GDPR). The requirements of the GDPR must thus (also) be complied with when using AI systems.

In addition to the general principles in Article 5 of the GDPR, a number of GDPR provisions will be of particular relevance to AI systems, including the requirements of transparency and the duty of disclosure (Articles 13-15), the restrictions on automated individual decision-making, including profiling (Article 22), the requirements for privacy by design and privacy by default (Article 25), and the requirement to prepare a data protection impact assessment (DPIA) when implementing systems posing a high risk to data subjects (Article 35).

Prospects of high fines

According to the draft, breaches of the rules on prohibited AI systems can be punished with a fine of up to 6 % of a company’s global turnover or EUR 30,000,000, whichever is higher.

The planned level of fines is thus even higher than what is known from the GDPR.

Expected commencement

No date has yet been set for the entry into force of the AI Regulation. On 21 June 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint statement on the AI Regulation, welcoming the proposal to regulate AI systems, including the prohibition on the use of certain high-risk systems, but at the same time pointing out certain concerns with the proposal, e.g. related to the interaction with the GDPR. The full statement can be read here.

Lund Elmer Sandager’s comment

It is to be expected that it will take some time to adjust the proposal before a final version of the Regulation is ready to be implemented. However, there is little doubt that the Regulation will be adopted, and in a version that is not far from the current draft.

For that reason, it is a good idea to familiarize yourself with the upcoming rules already at this point if your company works with AI systems.

At Lund Elmer Sandager, we naturally follow further developments closely.

If you have any questions regarding the proposal, please do not hesitate to contact our legal expert in this field, Attorney Anders Linde Reislev.