AI Act leaked

  • Legal Development 23 January 2024
  • UK & Europe

  • Technology risk

6 key points to know about the upcoming new law

Following extensive negotiations, the EU AI Act – the world’s first comprehensive law regulating artificial intelligence – has now been finalized, and the details of the Act were leaked yesterday. Upon closer examination, here are the 6 crucial points you should be aware of.

 

1. Scope (Article 2 AI Act)

The AI Act applies to entities involved in the development, deployment, and use of AI systems within the EU. It covers providers, deployers, importers, distributors, manufacturers, and affected persons. The new law regulates “AI systems”, i.e., machine-based systems designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infer from the input they receive how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Exclusions apply to areas outside the scope of EU law, national security, and military AI. Notably, specific exemptions are in place for research, personal non-professional activities, and free and open-source licences. The Act underscores compliance with existing European Union laws on data protection, privacy, intellectual property, and communications. Furthermore, Member States and the European Union remain empowered to maintain or introduce laws that are more favourable to workers in the context of AI system usage by employers.

2. Prohibited AI Practices (Article 5 AI Act)

The proposed regulations aim to safeguard individuals from potential harm by outlining several prohibitions on AI practices. These include preventing the deployment of AI systems that utilize subliminal techniques or purposefully manipulative strategies to distort behaviour, leading to significant harm. The regulations also prohibit the use of AI systems that exploit vulnerabilities based on age, disability, or specific social and economic situations. Deployment of biometric categorization systems inferring sensitive information is also barred. Additionally, the placement, use, or service of AI systems for social behaviour evaluation or classification, resulting in detrimental treatment or unjustified consequences, is forbidden. Strict limitations are imposed on the real-time use of remote biometric identification systems for law enforcement, with necessary safeguards, conditions, and prior authorizations required to ensure proportionality and protection of fundamental rights.

3. High-Risk AI System (Article 6 et seq. AI Act)

High-risk AI systems are classified based on safety components of a product, European Union harmonization legislation, and the criteria in Annex III. Exemptions exist for AI systems that do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. High-risk AI systems must comply with the established requirements, considering their intended purpose and the state of the art in AI and related technologies. The risk management system outlined in Article 9 is crucial for ensuring compliance. Providers of products containing AI systems are responsible for full compliance with European Union harmonization legislation. The risk management system involves continuous iterative processes, including risk identification, evaluation, and the adoption of appropriate measures. Testing of high-risk AI systems against specific criteria is essential for identifying suitable risk management measures.

Compliance includes risk management, testing, technical documentation, record-keeping, transparency, human oversight, and cybersecurity. Providers of high-risk AI systems must implement a quality management system, proportionate to the size of the provider’s organization, to ensure compliance.

Registration in the EU database is required before placing a high-risk AI system on the market. Additionally, deployers of high-risk AI systems must perform a so-called fundamental rights impact assessment. The Commission may issue standardization requests and adopt common specifications for the training and testing of high-risk AI systems; compliance with these provides a presumption of conformity. Conformity assessment procedures are outlined, including certification, derogation, and CE marking.

4. Transparency Obligations for Providers and Deployers of Certain AI Systems and General-Purpose AI Models (Article 52 AI Act)

Transparency rules mandate clear communication for AI system interactions, labelling of synthetic content, and data regulation compliance. Disclosure obligations exist for deep fakes and AI-generated text. High-impact AI models with systemic risk have detailed procedures for notification, assessment, and designation. Technical documentation, information sharing, and copyright compliance are required for general-purpose AI models. Non-EU providers must appoint a European Union-based representative, and extra obligations apply to providers of high-risk general-purpose AI models.

5. Measures in Support of Innovation (Article 53 et seq. AI Act)

The regulation establishes AI regulatory sandboxes for controlled innovation. Competent authorities provide guidance, supervision, and support. Testing in real-world conditions outside the sandbox is allowed with regulatory oversight. SMEs receive priority access, tailored awareness activities, and reduced conformity assessment fees. Microenterprises may fulfil certain quality management system elements in a simplified manner. The regulation balances innovation promotion with regulatory oversight and protection of fundamental rights.

6. Penalties (Article 71 AI Act)

The AI Act sets out tiered administrative fines for different categories of violations.

Non-compliance with the prohibited AI practices under Article 5 can result in administrative fines of up to EUR 35,000,000 or 7% of a company's total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliance of an AI system with any of the provisions listed in Article 71 No. 4 relating to operators or notified bodies, other than those laid down in Article 5, is subject to administrative fines of up to EUR 15,000,000 or, if the offender is a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

The supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request is subject to administrative fines of up to EUR 7,500,000 or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
