Artificial Intelligence Act
The purpose of the EU Regulation laying down harmonized rules on artificial intelligence (AI Act) is to regulate the development, placing on the market, and use of artificial intelligence (AI) technologies. The AI Act introduces certain technology-neutral requirements that apply to the design and development of AI systems before they can be placed on the market.
What does this mean?
- Certain AI practices are to be prohibited to ensure that all products placed on the EU market are safe.
- Both providers and deployers of AI systems in the EU will need to comply with the obligations laid down in the rules, regardless of where the provider is established. Further, importers and distributors will be subject to obligations relating to product safety.
- The AI Act establishes a risk-based framework that classifies AI systems into unacceptable-risk, high-risk and more limited-risk categories (summarised in the sketch after this list). In addition, the AI Act lays down various obligations for general-purpose AI (GPAI) models.
- AI practices posing an unacceptable level of risk will be prohibited from being placed on or used in the European market. These are regarded as harmful uses of AI that contravene EU values, e.g., AI systems that manipulate individuals through subliminal techniques, exploit the vulnerabilities of a specific group of individuals, or are used for social scoring.
- High-risk AI systems that may adversely impact safety or fundamental rights will be required to undergo a conformity assessment and meet certain requirements throughout their life cycle, such as record keeping and mandatory incident reporting to the market surveillance authorities. High-risk AI systems will bear a CE marking indicating that the AI system complies with the applicable requirements. The marking should be affixed visibly, legibly and indelibly; where that is not possible due to the nature of the high-risk AI system, the CE marking should be affixed to the packaging or to the accompanying documentation, as appropriate.
- Providers and deployers of certain other AI systems posing limited risks will be made subject to transparency obligations (e.g., ‘deep fakes’ require disclosure that the content has been manipulated).
- Providers of GPAI models will be subject to certain transparency requirements, EU copyright compliance, and the maintenance of technical documentation, including detailed summaries of the content used for training. In addition, the more powerful GPAI models posing systemic risks, for example because they have a significant impact on the EU internal market due to their reach, will be subject to additional obligations such as model evaluations, risk assessments, and incident reporting.
- The rules will apply to AI systems offered by providers established in the EU, as well as to providers established in third countries whose systems affect users within the EU.
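Purely as an orientation aid, the risk tiers described in the list above can be summarised as a simple mapping from tier to headline obligations. The tier names and one-line summaries in the Python sketch below are an illustrative simplification, not a legal classification under the Act.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"    # prohibited practices
    HIGH = "high"                    # conformity assessment, CE marking, etc.
    LIMITED = "limited"              # transparency obligations (e.g. deep fakes)
    GPAI = "general-purpose"         # obligations on GPAI model providers


# Illustrative summary of the headline obligations per tier (simplified, not legal advice).
HEADLINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before placing on the market",
        "record keeping and incident reporting",
        "CE marking",
    ],
    RiskTier.LIMITED: ["disclosure that content is AI-generated or manipulated"],
    RiskTier.GPAI: [
        "transparency and EU copyright compliance",
        "technical documentation incl. training data summaries",
        "extra obligations for models posing systemic risk",
    ],
}

if __name__ == "__main__":
    for tier, obligations in HEADLINE_OBLIGATIONS.items():
        print(tier.value, "->", "; ".join(obligations))
```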
Consequences
- Non-compliance with the prohibitions on certain AI practices may lead to fines of up to 35 million euros or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. The equivalent thresholds for non-compliance with rules on AI systems other than the prohibited practices (e.g., high-risk AI systems) are 15 million euros and 3% (see the sketch after this list).
- Monitoring and enforcement are the responsibility of the national authorities designated in each Member State.
- Additionally, the Commission has established the AI Office, which will ensure harmonized implementation of the Act through collaboration with Member States and by issuing guidance. The AI Office will also directly enforce the rules for general-purpose AI models.
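The "whichever is higher" rule for fines amounts to taking the maximum of a fixed ceiling and a percentage of turnover. The sketch below illustrates that arithmetic; the function name and its inputs are hypothetical, and only the thresholds cited above are taken from the text.

```python
def max_fine_ceiling(annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Illustrative ceiling: a fixed amount or a percentage of turnover, whichever is higher."""
    if prohibited_practice:
        fixed, pct = 35_000_000, 0.07   # prohibited AI practices
    else:
        fixed, pct = 15_000_000, 0.03   # e.g. high-risk AI system obligations
    return max(fixed, pct * annual_turnover_eur)


# Example: an undertaking with a 1 billion euro turnover breaching a prohibition
print(max_fine_ceiling(1_000_000_000, prohibited_practice=True))  # 70000000.0
```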
Timeline
- The AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024. The Act will be fully applicable twenty-four (24) months after entry into force, i.e. on 2 August 2026.
- However, certain provisions apply earlier: the prohibited practices (posing unacceptable risks) will apply six (6) months after entry into force, i.e. on 2 February 2025, and the governance rules and the obligations for GPAI models will apply twelve (12) months after entry into force, i.e. on 2 August 2025 (see the date sketch below).
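The milestone dates above follow a simple pattern: each set of provisions applies from the day after the relevant month anniversary of the entry-into-force date (1 August 2024). The helper below is a minimal sketch of that date arithmetic, assuming this "day after" convention; it merely reproduces the dates cited and is not a legal computation.

```python
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)  # day the AI Act entered into force


def applicability_date(months_after: int) -> date:
    """Day after the month anniversary of entry into force (illustrative convention)."""
    total_months = ENTRY_INTO_FORCE.month - 1 + months_after
    year = ENTRY_INTO_FORCE.year + total_months // 12
    month = total_months % 12 + 1
    return date(year, month, ENTRY_INTO_FORCE.day) + timedelta(days=1)


print(applicability_date(6))   # 2025-02-02: prohibited practices
print(applicability_date(12))  # 2025-08-02: governance rules and GPAI obligations
print(applicability_date(24))  # 2026-08-02: full applicability
```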