The European Parliament adopts the Artificial Intelligence Act
Insights | April 29, 2024
The European Parliament approved the Artificial Intelligence Act (AI Act) on 13 March 2024, providing the world’s first comprehensive set of rules regulating AI. As part of a broader set of policy measures, the AI Act aims to ensure the safe use of AI applications and the protection of fundamental rights while still leaving scope for innovation (see e.g., the related AI Innovation Package). The AI Act imposes obligations based on the risks that the use of artificial intelligence systems poses.
As promised in our article on the European Union’s Digital Decade Strategy (available here), we will provide a series of Roschier Insights Articles on the coming avalanche of EU legislation regulating digitalization and data. In this article, we provide a brief overview of the adopted version of the AI Act. For more details on the background to the AI Act, please refer to the EU Data Economy & Digitalization section on our website (Artificial Intelligence Act).
A risk-based approach
Unacceptable risk systems
The rules regulate artificial intelligence (AI) according to the risks it poses to society: the greater the risk, the stricter the rules laid down for the use of the respective AI system.
Further, pursuant to Article 5 of the AI Act, the use of certain AI systems that pose risks to citizens’ rights is prohibited. Such prohibited AI systems are referred to in the preamble of the AI Act as “unacceptable AI practices”. The prohibition covers, inter alia, the following systems:
- AI systems that, inter alia, exploit vulnerabilities of a natural person due to their age or economic situation with the objective of materially distorting that person’s behavior in a manner that causes them significant harm;
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; and
- AI systems that infer the emotions of a natural person, e.g., in the workplace, except where the AI system is intended to be placed on the market or put into service for medical or safety reasons.
High-risk systems
In addition to the prohibitions laid down in the AI Act against unacceptable AI practices, the AI Act also defines obligations under Chapter III for so-called “high-risk AI systems”, whose classification stems from their potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. Examples of high-risk AI applications include systems relating, inter alia, to:
- critical infrastructure, employment, and education;
- essential private and public services such as healthcare and banking; and
- justice and democratic processes like influencing elections.
High-risk AI systems must comply with a number of requirements and obligations, such as conducting risk assessments, mitigating risks, maintaining usage logs and quality management systems, upholding transparency and accuracy standards, and ensuring human oversight. Among other things, these obligations are necessary to effectively mitigate the risks to health, safety, and fundamental rights. Further, affected persons will have the right to lodge complaints concerning AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.
Finally, high-risk AI systems may bear a CE marking indicating that the AI system complies with the relevant requirements. Such a marking should be affixed visibly, legibly, and indelibly; where that is not possible due to the nature of the high-risk AI system, the CE marking should be affixed to the packaging or to the accompanying documentation, as appropriate.
Limited-risk systems
If an AI system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, it will not be considered a “high-risk system”. Whether an AI system poses a significant risk of harm, or whether it instead qualifies as a “limited-risk system”, must be assessed against the classification conditions laid down in the AI Act. Further, providers and deployers of AI systems posing limited risks are subject to transparency obligations (e.g., ‘deep fakes’ require disclosure that the content has been artificially generated or manipulated).
In conclusion, the AI Act emphasizes the regulatory focus on high-risk AI systems, where stringent obligations are placed on developers to ensure safety, transparency, and accountability. A smaller segment addresses limited-risk AI systems, imposing lighter transparency requirements, particularly concerning user awareness.
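By way of illustration only, the tiered logic described above can be expressed as a simple classification structure of the kind a compliance team might use in an internal AI inventory. The tier names follow the Act as summarized above; the one-line summaries of their consequences are simplified assumptions made for this sketch, not a statement of the rules.

```python
# Minimal sketch (not legal advice): the AI Act's risk tiers as a simple
# lookup. Tier names follow the Act; the summaries of consequences are
# deliberately simplified for illustration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited outright (Article 5)",
    RiskTier.HIGH: "risk management, logging, transparency, human oversight (Chapter III)",
    RiskTier.LIMITED: "transparency duties, e.g., disclosing manipulated content",
}

def obligations_for(tier: RiskTier) -> str:
    """Return a simplified summary of the consequences for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```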
General-purpose AI (GPAI) systems
The AI Act also sets forth provisions for GPAI models, meaning models that are trained at scale using self-supervision, display broad applicability and proficiency across diverse tasks, and can be integrated into a variety of downstream applications. GPAI models may in turn be used to build GPAI systems, which may then be capable of serving a variety of purposes, both for direct use and for integration in other AI systems. Such GPAI systems can function (a) as high-risk AI systems in their own right, or (b) as components within other high-risk AI systems. To ensure an equitable distribution of responsibilities across the AI value chain, providers of GPAI models must collaborate closely with the relevant high-risk AI system providers.
GPAI models are subject to obligations covering, inter alia, transparency, compliance with EU copyright law, and the maintenance of technical documentation, including detailed summaries of the content used for training.
In addition, the AI Act addresses the potential systemic risks posed by GPAI models, including large generative AI models, which serve as the foundation for numerous AI systems in the EU and may therefore have a significant impact on the EU internal market due to their reach. Such powerful GPAI models are subject to additional obligations, including model evaluations, risk assessments, and incident reporting, to mitigate the systemic risks they may pose.
Currently, as noted by the European Commission, GPAI models trained with a total computing power exceeding 10^25 FLOPs are presumed to entail systemic risks, reflecting their greater capabilities. The AI Office, established within the Commission, has the authority to revise this threshold as technology evolves, and may also designate other models as posing systemic risks based on additional criteria, such as the number of users or the model’s level of autonomy. The current threshold captures the most advanced models, such as OpenAI’s GPT-4 and Google DeepMind’s Gemini. Due to their advanced capabilities, these models warrant additional scrutiny and obligations for their providers.
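As a rough illustration of how this compute-based presumption might be operationalized in an internal compliance inventory, consider the sketch below. The 10^25 FLOP threshold is taken from the Act as described above; the model names and training-compute figures are hypothetical placeholders, not official designations, and the AI Office may in any event designate models on other criteria.

```python
# Minimal sketch (not legal advice): flagging GPAI models against the
# AI Act's compute-based presumption of systemic risk. The threshold is
# from the Act; the models and FLOP figures below are hypothetical.

SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training-compute threshold

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model meets the compute-based presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Hypothetical model inventory for a compliance review
models = {
    "frontier-model-a": 3e25,  # above threshold -> presumed systemic risk
    "domain-model-b": 8e23,    # below threshold -> standard GPAI obligations
}

for name, flops in models.items():
    status = "systemic risk" if presumed_systemic_risk(flops) else "standard GPAI"
    print(f"{name}: {status}")
```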
Copyright compliance
While the AI Act provides comprehensive rules on AI, it primarily takes a product-safety approach rather than providing explicit rules on IP rights related to AI. A general provision under Article 53(1)(c) states that providers of GPAI models must put in place a policy to comply with EU law on copyright and related rights, in particular Article 4(3) of Directive (EU) 2019/790. Recital 106 of the AI Act takes the matter further by stating that providers of GPAI models should identify and comply with the reservations of rights expressed by rightsholders pursuant to Article 4(3) of the Directive regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those GPAI models take place. The Recital also provides that such compliance is necessary to ensure a level playing field among providers of GPAI models, where no provider should be able to gain a competitive advantage in the EU internal market by applying lower copyright standards than those applied in the EU. The description in Recital 106 thus goes further than the relevant provisions under Article 53 of the AI Act.
The core question therefore relates to Recital 106 extending the geographical scope of copyright compliance, and whether such an extension can be deemed binding and enforceable, as Article 53(1)(c) provides no further clarification on the presumed extraterritorial nature of copyright compliance. It is thus justified to consider whether the rules of EU copyright law may extend abroad as well. As GPAI models have already been developed both within and outside the EU, a practical matter also warrants consideration: how can providers of GPAI models retrospectively ensure EU copyright compliance of the data used for training those models?
Naturally, the question around the application of Article 53(1)(c) and Recital 106 of the AI Act has initiated further discussion. It remains to be seen (i) whether any changes to the existing legislation will be introduced, (ii) how the application of the Recital will influence decision-making in practice, if at all, and (iii) what kinds of sanctions this may lead to.
Timeline
The AI Act is still subject to a final lawyer-linguist check, after which it also needs to be formally endorsed by the Council of the European Union. Following these steps, the AI Act will enter into force twenty (20) days after its publication in the Official Journal and become fully applicable twenty-four (24) months after its entry into force. However, certain provisions follow different timelines; for example, the prohibitions on practices posing unacceptable risks will apply six (6) months after the AI Act has entered into force.
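By way of simple illustration of this timeline arithmetic, the sketch below computes the key dates from a publication date. The publication date is a hypothetical assumption (publication had not yet taken place at the time of writing), and the month arithmetic is a simplification rather than an exact application of the Act’s counting rules.

```python
# Minimal sketch (not legal advice): deriving the AI Act's key dates from
# a hypothetical Official Journal publication date.
from datetime import date, timedelta

publication = date(2024, 7, 12)  # hypothetical OJ publication date
entry_into_force = publication + timedelta(days=20)

def add_months(d: date, months: int) -> date:
    """Approximate month arithmetic; assumes the day exists in the target month."""
    month = d.month - 1 + months
    year = d.year + month // 12
    return d.replace(year=year, month=month % 12 + 1)

print("Entry into force:  ", entry_into_force)
print("Prohibitions apply:", add_months(entry_into_force, 6))
print("Fully applicable:  ", add_months(entry_into_force, 24))
```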
Sanctions
Non-compliance with the obligations laid down in the AI Act may result in penalties or other enforcement measures, which may also include warnings and non-monetary measures. For instance, non-compliance with the rules on prohibited AI practices (Article 5 of the AI Act) will be subject to administrative fines of up to 35 million euros or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. As further stated in Article 99 of the AI Act, the equivalent thresholds for non-compliance with the rules on AI systems other than prohibited AI practices (e.g., high-risk AI systems) are 15 million euros and 3%.
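To illustrate the “whichever is higher” mechanic, the sketch below computes the two fine caps for a hypothetical undertaking; the turnover figure is an assumption chosen purely for illustration.

```python
# Minimal sketch (not legal advice): the "whichever is higher" fine caps
# under Article 99 of the AI Act. The turnover figure is hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of the administrative fine for an undertaking."""
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover (EUR)

# Prohibited AI practices (Article 5): up to EUR 35M or 7% of turnover
print(max_fine(turnover, 35_000_000, 0.07))  # -> 140,000,000.0

# Other violations (e.g., high-risk obligations): up to EUR 15M or 3%
print(max_fine(turnover, 15_000_000, 0.03))  # -> 60,000,000.0
```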
Being prepared for the new set of rules on AI systems
To avoid the possible sanctions listed above, every company should consider, at a minimum, the following matters before the AI Act becomes applicable:
- how AI is utilized within the company;
- whether the AI systems used fall, e.g., into the prohibited or high-risk categories; and
- how the operation models and policies of the company should be aligned with the new set of rules.
As different AI systems will be subject to different sets of new rules, it is particularly important to be aware of all the potential impacts of the AI Act on the company’s business operations.
Roschier continues to follow the final development phases of the AI Act closely, and we are happy to help you with any questions you may have regarding its potential impacts on your business or the AI Act in general.