EU’s AI Law Passed, Set to Roll Out Starting Later This Year

Mar 18, 2024

The EU’s AI law has cleared its last major hurdle in the European Parliament with overwhelming support and, pending some final formalities in the coming weeks, is about to go into force. The requirements will be staggered, however, with the first set introduced in about six months and some not taking effect until three years out.

What to look for in the new EU AI law

Much of the AI law focuses on regulating the developers of AI systems, but those putting them to use in their business will at minimum need to be aware of some new transparency and disclosure requirements that will eventually be introduced.

The AI law will be fully in force by mid-2026, but individual components for systems considered more “high-risk” will come online before then, some as soon as late 2024. Each EU nation will establish its own watchdog organization specializing in AI issues (and with which data subjects can file complaints), with everything centered on an AI Office in Brussels that will focus on larger AI systems with international impact (such as ChatGPT).

In terms of the complaint process, things look to be similar to how General Data Protection Regulation (GDPR) complaints are currently handled. Steeper fines are in place for AI violations, however, reaching up to 7% of annual turnover at the maximum.

It is also important to note that while each of the AI law’s terms will be enforceable when they activate, the final shape of the regulation will likely take years to settle. Terms and enforcement actions will inevitably be challenged in court, and those decisions could ultimately change the direction of the law.

EU’s AI law rollout schedule

The first action coming online is a ban of AI systems that are considered of the highest level of risk, something that happens six months after the AI law goes into force (most likely sometime in October or November). The codes of practice come online at nine months, and transparency requirements for “general purpose” AI systems are in effect at 12 months.

General purpose generative AI systems like ChatGPT will not be considered “high risk,” but may very well be labeled “high impact.” That means they will eventually be subject to regular safety evaluations and new reporting requirements.

What can get a system banned in the EU under the new AI law? Real-time biometric identification, such as face scanning, will be broadly banned with limited exceptions for law enforcement purposes. AI vendors will also have to be careful about systems that sort people by demographic or personal identity qualities, for example in the case of screening software. And systems intended to manipulate the behavior of children or other vulnerable groups can also expect a ban.

One final note is that there is a broad range of systems that can fall into the “high risk” category, but some of these will have up to 36 months to fully come into compliance. It appears that anything that works in tandem with products that are already in more highly regulated sectors (such as health care or automobiles) will likely be considered high risk, as will most public and government services and critical infrastructure systems.
