The European Union’s Artificial Intelligence Act aims to rein in a technology whose influence grows by the day.
The new rules are now on track to be approved in April and may enter into force later this year, following a lengthy debate on whether the EU should ban certain AI practices and constrain advanced AI.
Here’s your cheat sheet on what made it into the final text.
What it covers
The European Commission proposed the AI Act in 2021 to establish rules not merely for the technology, but for its use in scenarios where it could have grave consequences for the public.
In other words, an AI tool that adds bunny ears to your Instagram shots does not need regulation, while AI systems used to screen candidates for university admission definitely do. This is what’s known as the “risk-based” approach.
The AI Act’s Article 5 outlines use cases that are outright banned. The EU also created rules for so-called general-purpose AI to address the rapid growth of systems like OpenAI’s ChatGPT, which have no single defined use and can generate anything from recipes to propaganda.
What’s forbidden
Article 5 lays out which AI uses are forbidden.
These include uses aimed at influencing behavior, such as subliminal, manipulative or deceptive AI-aided techniques, and AI deployed to exploit a person’s or group’s vulnerabilities.
Using biometric information to ascertain a person’s race, sexual orientation, beliefs or trade union membership isn’t allowed. Nor is social scoring, which involves tracking a person’s behavior in a way that could result in their unfavorable treatment in an unrelated situation — for instance, being denied access to a public service because of past drug use.
The use of real-time facial recognition (also known as “remote biometric identification,” or RBI) in public places is banned. There are exceptions: Law enforcement will still be able to use the technology when investigating serious crimes or searching for missing people, as long as they have a judge’s authorization.
Using AI to estimate a person’s likelihood of committing a crime based solely on personal characteristics, a practice known as predictive policing, isn’t allowed either.
AI tools cannot be used to create databases of facial images by scraping the internet or CCTV videos. An obvious example: the controversial U.S. company Clearview AI.
Finally, using AI tools to infer a person’s emotions in the workplace or an educational environment is forbidden.
Addressing high-risk AI …
Creators of AI systems used in “high-risk” scenarios are required to follow data governance practices, including collecting training datasets ethically and ensuring they are representative and as free of bias as possible.
They must also draft technical documentation of the AI’s functionality and its risk-management measures, keep a record of the AI’s use to help monitor incidents, ensure that the AI’s use can be overseen by actual people, and guarantee appropriate levels of accuracy, robustness and cybersecurity.
Anyone rolling out high-risk AI systems will also have to conduct an assessment of how the tool might affect fundamental rights under EU law.
In practice, high-risk AI systems will likely be presumed compliant if they follow standards and specifications set by the Commission and standard-setting organizations.
The areas considered high-risk, according to Annex III, include biometrics and facial recognition, when not already banned under Article 5; critical infrastructure components; education and the workplace; and access to public services, benefits or essential private services like banking and insurance.
Uses related to law enforcement, migration, justice and elections also qualify as high-risk.
… and general-purpose AI
The AI Act’s rules target general-purpose “models” that underpin AI tools — not the customer-facing apps, but the software architecture that is integrated into different providers’ products.
Developers of these models — such as those powering ChatGPT or Google’s Bard — will have to keep detailed technical documentation; help the companies or people deploying their models understand the tools’ functionality and limits; provide a summary of the copyrighted material (such as texts or images) used to train the models; and cooperate with the European Commission and national enforcement authorities on compliance with the rulebook.
Some general-purpose models are deemed to pose a “systemic risk” because of their power and reach, which gives them the capacity to precipitate catastrophic events. Developers of these systems will also have to put mitigation strategies in place and pass details of any incident to the Commission’s newly created “AI Office,” which will police the rules.