
Europe Hits the Brakes on Its Toughest AI Laws – and the Backlash Has Begun

EU officials have agreed to water down parts of the AI Act, including delaying the implementation of rules covering many high-risk applications until December 2027, instead of the previously set deadline of August 2026.

The deal comes after many companies argued that the EU was hamstringing itself with unnecessary legislation, leaving it trailing rivals in the US and Asia.

The agreement was reached after nine hours of negotiations – about average for Brussels. It still needs approval from EU leaders and the European Parliament, so don't expect the changes to be final just yet. But the bottom line is clear: Europe still wants to regulate AI, just more slowly.

The final agreement means that high-risk, stand-alone AI systems will have to comply by December 2, 2027, while high-risk systems embedded in high-risk products, such as cars or medical devices, will have until August 2, 2028 to comply.

The Council said this is meant to “simplify” the AI Act, including by preventing overlap with other sector legislation. In other words, if a machine or medical device is already covered by sector-specific regulation, companies won't need to produce duplicate documentation just to comply with the AI Act.

That said, the deal isn't a golden ticket for big AI companies: it would introduce a ban on non-consensual, sexually graphic AI images and videos, including so-called “harassment” apps and child sexual abuse material.

The ban is scheduled to come into force on December 2, 2026, the same date watermarking requirements for AI-generated content are due to take effect – giving industry players a clear timetable.

The European Parliament said the AI Act simplification package “achieves a careful balance between simplifying the rules, maintaining the risk-based approach of the AI Act and adding protections against so-called 'abusive apps'.”

It's an important point – few would argue against tackling image-based abuse, especially after women, young people, and politicians alike have found themselves the targets of non-consensual synthetic images that are both harmful and humiliating.

The main point of contention is timing. Civil society and digital rights activists argue that delaying tougher rules on high-risk AI leaves people exposed across a range of areas, from employment and education to biometrics, critical infrastructure and policing.

On the other hand, the business community argues that vague requirements and their associated obligations would strangle the European AI industry before it gets off the ground. Both could be true, which makes this a minefield.

The original law came into effect in August 2024, when the European Commission announced it as the world's first comprehensive AI regulatory framework. The law is risk-based: certain uses of AI are prohibited, high-risk uses face stricter requirements, and low-risk uses carry lighter obligations. That structure remains the same under the new agreement, which delays the timing and narrows the scope of some of the stricter obligations.

It all sounds like a political clash. Europe has for years positioned itself as the responsible adult in the AI debate: one that prioritizes rights and safety over hype.

Now, under enormous pressure from industry and big tech, it's stepping back. Pragmatism? Yes. Surrender? Plenty of people will say so. My guess is that the truth lies somewhere in the murky gray area in between.

Siemens and ASML have been lobbying over how the AI Act treats industrial applications, with Reuters reporting that AI Act rules will not apply where industry-specific rules already exist.

For manufacturers worried about compliance headaches, especially in some of Europe's industrial powerhouses, that is a welcome development. It also poses a simple question: when does simplification become hollowing-out?

The European Commission praised the agreement, saying the revised AI Act is intended to encourage innovation while protecting citizens from AI's harmful effects. “Innovation and safety,” “speed and security,” “less paperwork and more human rights” – everyone wants all of it; the hard part is whether all of it can be true at once.

To begin with, the postponement offers some relief. Building AI in the European Union has become a regulatory minefield, and smaller companies lack Google-scale resources – namely, a standing team of compliance experts.

If the AI Act takes longer to take effect, European developers may get more room to compete instead of burning their seed funding on law firms from day one.

But the compromise looks less rosy from civil society's perspective. High-risk AI systems are labeled “high risk” for a reason – they can affect who gets hired, how governments provide services, how police use their tools, and even how critical infrastructure operates. Delaying enforcement may ease the industry's concerns, but it also delays the day when citizens get full protection. It is an uncomfortable trade-off that Brussels has not resolved.

Europe wants to be the region that sets the rules for the AI age. But it also wants to be a place where AI companies build real-world products. Both goals are achievable, but holding them together creates friction, and this week's agreement is designed to release some of that pressure before it escalates.

The final agreement now moves to the next stage of the legislative process. If approved, it will set the course for the first few years of the AI Act's implementation – and signal to countries outside the EU that even the world's most ambitious AI regulator is adjusting its plans in response to the speed, costs and political realities of the AI race.

Now, the real question is: does Europe still want to enforce strict AI rules? It clearly does. But can it make them workable without weakening them so much that the safety shield starts to leak?
