The European Union is recognised as the first political actor to create a comprehensive regulatory framework for artificial intelligence, with the adoption of the EU AI Act in mid-2024.
Bans on the riskiest AI practices, from algorithmic profiling of citizens to systems that predict "risky" groups based on socio-demographic data, took effect on 2 February 2025.
However, after the first few months of implementation, the harmonisation and evaluation phases began. The European Commission announced "targeted changes" to strike a balance between strict regulation and the need for technological competitiveness.
Behind the regulation lies the aim of protecting citizens and the market from potential abuse, but also the warning that excessive bureaucracy discourages capital and talent and pushes them towards countries with more flexible rules.
At the POLITICO AI & Tech Summit in Brussels on 13 May, Kilian Gross, head of the European Commission's AI policy unit, stressed that the Commission would analyse feedback from business and civil society before proposing changes to key articles of the Act, particularly those concerning "high-risk" systems.
This step, barely three months after the introduction of the bans and transparency obligations, represents a politically necessary adjustment but also a test of consistency in the implementation of the first ambitious European law on AI.
Unclear regulations
In the first wave of implementation, the definition of "AI systems" and the list of prohibited practices proved somewhat unclear: some companies do not know whether their generative models qualify as "general-purpose" or "high-risk" systems, while regulators in different member states apply the standards inconsistently.
The Commission received hundreds of comments on the definitions and prohibitions, resulting in non-binding guidance, published on 4 February 2025, that clarified terms such as "total cost control" and the "human oversight" obligation.
Although greater transparency helps in these early stages, the true picture of harmonisation will not emerge until the Commission amends and simplifies the provisions that most hinder implementation.
From an industry perspective, the loudest criticism comes from the makers of large language models and generative AI tools.
Startups and tech giants complain that the cost of certifying "high-risk" systems reaches tens of millions of euros, seriously threatening the financial viability of projects in the early stages of development.
In the US, companies such as OpenAI and Google can rely on informal, voluntary mechanisms of self-regulation, whereas China's legislation favours stability and state control yet follows a model that allows for faster innovation.
The EU is looking for a balance
Europe, which uses regulation as a tool to set the rules in the global AI race, is now weighing the need to maintain standards against the risk of overburdening the domestic sector.
From a strategic perspective, it is clear that the EU AI Act will become a benchmark for other legal systems. Canada's "Digital Charter", the UK's "Digital Pact", and the American ACT (Artificial Intelligence Coordination and Trust) cite the EU as an example of strict regulation that should be matched by a commitment to implementation, not unnecessary rigidity.
If the EU gives in to pressure from lobbies, it risks undermining regulatory harmonisation with key partners and seeing its standards remain merely "non-binding guidelines" rather than norms for the global market.
In discussions at Council and Parliament level, some member states are seeking closer alignment with the US approach, replacing mandatory bans with rules on liability and post-market control of products, while others warn that this would open "loopholes" through which powerful players could circumvent the barriers.
Green MEPs argue that any reduction in obligations for risky systems would be "extremely dangerous", warning that without strict rules, deepfake technologies could develop into a political tool. This raises a key question: does Europe want to be a bastion of ethical AI or an ambassador of business flexibility?
Easier certification
The Commission will face two challenges in the coming months. Firstly, there is the harmonisation of the definitions of general-purpose models, where a clearer distinction is sought between "general-purpose" and "high-risk" systems.
Secondly, regarding the certification procedure, instead of implementing a three-stage test over the next four years, a simpler "testing-by-dialogue" model is being considered, in which the regulator and manufacturer jointly agree on the necessary actions rather than requiring the company to undergo expensive and time-consuming certification processes.
If the proposals are given the green light, the Commission says, "high-risk" systems will still need to meet transparency and impact-assessment requirements, but without prior mandatory certification for each new version.
This would allow the market to react more quickly to risks, while supervisors, provided they are adequately funded and equipped, could review the implications of the technology in real time.
For European start-ups, this means lower market-entry costs, greater investor interest, and the ability to validate developed models more quickly in the face of global competition.
However, when a public debate on the final draft of the amendments is launched after the summer, it will become clearer how far lobbyists for the largest technology companies can shape the outcome in favour of their own interests.
It would be a major blow to the EU's credibility if the provision banning biometric surveillance were reframed around "user consent" rather than the principle of a complete ban, losing its fundamental ethical objective.
Lifting the bans entirely would go further still, distancing the public from the legal framework and jeopardising the EU's commitment to the consistent protection of fundamental rights.
The EU can have a globally acceptable model
If the EU succeeds in systematically eliminating the procedural defects, the European model could become a template for harmonising standards among NATO allies, and even Tokyo could adopt similar guidelines for the ethical application of artificial intelligence in the defence sector.
Otherwise, Europe would stand as an example of an ambitious legal mechanism that fails to bridge the gap between regulatory principles and the practical needs of industry.
In short, none of the changes in the coming months will be purely administrative: the targeted amendments to the EU AI Act will show whether the Union can adapt the framework to market realities without abandoning the protection of fundamental rights.
The true indicator of success will be the level of investment in European AI companies in the second half of 2025 compared with the US and China, as well as the willingness of international partners to accept European standards as a global model. If Europe avoids the trap of rigidity, it will be the first major victory of regulatory pragmatism over regulatory inertia.