The EU Council has approved the Artificial Intelligence Act (AI Act), the world's first comprehensive regulation on AI.
This legislation establishes uniform rules for the use of AI systems in the Union, using a risk-based approach.
The legislative process of the AI Act stalled in December 2022 following the launch of ChatGPT.
As a result, the draft text was revised to include specific rules for generative AI. The law is expected to be finalized and voted on during a plenary session scheduled for April 10 and 11, 2024.
Here are the highlights:
1. Extended Definition of AI: Focus on Autonomy and Adaptability in Physical and Virtual Environments
The Act defines AI systems broadly, with a focus on autonomy and adaptability.
The definition covers any machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, which can influence physical or virtual environments.
2. The Act contains a closed list of prohibited AI practices:
- Use of subliminal or manipulative techniques to significantly distort behavior.
- Exploiting vulnerabilities of individuals or groups, such as those related to age, disability, or social or economic situation, in a way that causes or is likely to cause significant harm.
- Biometric categorization systems that classify people based on sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation, with a narrow exception in the area of law enforcement.
- Social scoring systems.
- Real-time remote biometric identification systems used in publicly accessible spaces by law enforcement, subject to narrowly defined exceptions.
- Predictive policing based solely on profiling or personality traits, unless it supports human assessments based on objective crime-related facts.
- Facial recognition databases created through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
3. Dual definition of high-risk artificial intelligence systems
A central part of the AI Act is the strict and extensive regulation of high-risk AI systems.
In practice, it will be essential for any company active in the AI sector to determine whether the AI system it develops, imports, distributes, or deploys qualifies as high-risk.
The Act identifies two categories of high-risk AI systems:
AI intended to be used as a product, or as a safety component of a product, governed by specific EU legislation, covering areas such as civil aviation, vehicle safety, marine equipment, toys, elevators, pressure equipment, and personal protective equipment.
AI systems listed in Annex III, such as remote biometric identification systems, AI used as a safety component in critical infrastructure, and AI used in education, employment, credit scoring, law enforcement, migration, and democratic processes.
Guidelines on the practical implementation of this classification, together with a comprehensive list of practical examples of high-risk and non-high-risk use cases, are expected within 18 months of the Act entering into force.
4. There is an important exception to the second (Annex III) category of high-risk AI systems.
A system will not be considered high-risk if it is intended only to perform a narrow procedural task, to improve the result of a previously completed human activity, to detect decision-making patterns or deviations without replacing or influencing human assessment, or to perform a preparatory task for an assessment.
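To make the two-pronged test concrete, here is a minimal, purely illustrative Python sketch of the classification logic described above. It is not legal advice, and all names (AISystem, is_high_risk, and the boolean flags) are hypothetical simplifications of the Act's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, highly simplified model of an AI system's regulatory profile."""
    # First prong: product (or safety component) under specific EU product legislation
    regulated_product_or_safety_component: bool
    # Second prong: use case listed in Annex III
    annex_iii_use_case: bool
    # Exception flags: narrow roles that remove the high-risk label
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_patterns_without_replacing_human_assessment: bool = False
    preparatory_task_only: bool = False

def is_high_risk(system: AISystem) -> bool:
    """Return True if the system falls into either high-risk category."""
    if system.regulated_product_or_safety_component:
        return True  # first category: regulated products and safety components
    if system.annex_iii_use_case:
        # Annex III systems escape the high-risk label only if they are
        # limited to one of the narrow roles below.
        exempt = (system.narrow_procedural_task
                  or system.improves_prior_human_activity
                  or system.detects_patterns_without_replacing_human_assessment
                  or system.preparatory_task_only)
        return not exempt
    return False

# Example: a CV-screening tool used in recruitment (an Annex III use case)
# that replaces human assessment would be classified as high-risk.
cv_screener = AISystem(regulated_product_or_safety_component=False,
                       annex_iii_use_case=True)
print(is_high_risk(cv_screener))  # True
```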
5. Providers of high-risk AI systems must ensure that their systems are reliable, transparent, and accountable.
They are responsible for conducting risk assessments, using high-quality data, documenting their choices, keeping system records, informing users, allowing human oversight, and ensuring accuracy, robustness, and cybersecurity.
6. Obligations extend throughout the value chain, covering importers, distributors, and users (deployers) of high-risk AI systems.
Importers and distributors must verify that these systems comply with the Act, while users must follow the provider's instructions for use and ensure human oversight and intervention.
7. The Need for Fundamental Rights Impact Assessments for Implementing High-Risk AI Systems in the Public and Private Sectors
Public sector entities and private entities providing public services are required to conduct a fundamental rights impact assessment before implementing high-risk AI systems.
The assessment covers the risks to the people affected, the human oversight and mitigation measures in place, the categories of persons and groups concerned, and the frequency and duration of use.
8. Responsibility along the value chain may shift.
If a party other than the provider modifies an AI system or changes its intended purpose in a way that makes it high-risk, that party may itself be considered a provider and become subject to the corresponding obligations.
9. Right to an Explanation in Decision-Making Involving High-Risk AI Systems
For high-risk AI systems, individuals have the right to an explanation of the system's role in decision-making processes.
This right may conflict with providers' interest in protecting trade secrets.
10. Rights of individuals
Individuals have a broad right to report suspected violations of the AI Act to market surveillance authorities, without having to demonstrate a legitimate interest or that they are personally affected.
11. General Purpose AI (GPAI) models are regulated separately.
GPAI models presenting systemic risks entail additional obligations, such as conducting model evaluations, performing adversarial testing, reporting serious incidents, and ensuring adequate cybersecurity.
12. Transparency and Labeling Obligations for AI Systems and GPAI Models in Human-Machine Interaction and Synthetic Content Generation
Certain AI systems and GPAI models are subject to specific transparency obligations, including systems intended to interact directly with humans and those generating synthetic content.
In these cases, the people concerned must be informed, and the content may need to be labeled as artificially generated.
13. Governance Structure and Roles in AI Law Enforcement: AI Office, National Authorities, and Market Surveillance Authorities
The AI Act establishes a complex governance structure involving multiple bodies, such as the European AI Office, competent national authorities, and market surveillance authorities.
These bodies will enforce the rules, review complaints, and impose sanctions.
14. Heavy Penalties for AI Non-Compliance: Fines up to 35 Million Euros or 7% of Global Turnover
The penalties for non-compliance are significant: fines for engaging in prohibited AI practices can reach up to 35 million euros or 7% of the company's annual global turnover, whichever is higher. Violations of the obligations for high-risk AI systems can cost up to 15 million euros or 3% of turnover.
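For a rough sense of scale (with hypothetical figures): a company with an annual global turnover of 2 billion euros that engages in a prohibited practice could face a fine of up to 140 million euros, since 7% of 2 billion euros exceeds the fixed ceiling of 35 million euros.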
15. The AI Act and Its Integration with Existing Laws on Intellectual Property, Data Protection, and Cybersecurity
The AI Act will be applied alongside other laws and regulations governing AI, including those related to intellectual property, data protection, contracts, and cybersecurity.
The AI Act is expected to be published in mid-2024, with staggered compliance deadlines.
The rules on prohibited AI practices will apply six months after entry into force, while most other obligations will apply after 24 months.
Rules for high-risk AI systems will follow after 36 months, and those for high-risk systems used by public authorities after 48 months.