The European Commission confirmed in early July that a key code of practice for compliance with the AI Act may not be available before the end of 2025. This postponement means a delay of more than six months compared to the originally planned date of 2 May.
The code is intended to detail the obligations of providers of general-purpose AI models, such as the systems behind ChatGPT; without it, companies face considerable legal uncertainty.
Large technology companies are calling for a two-year suspension of the code's application precisely because of this legal vacuum. Among the initiators of the push are Google and Meta, which fear unpredictable compliance costs.
European companies such as ASML and Mistral are also calling for a delay, arguing that the lack of guidance could stifle innovation and disproportionately affect smaller companies without strong in-house legal teams.
Despite industry pressure on Brussels, the Commission said the code will be voluntary and will provide legal certainty to those who sign it. Providers that do not sign will not be barred from operating, but they will have no assurance that they fully comply with the latest guidance.
The code's voluntary status does not, however, delay the application of the binding rules themselves, the first of which take effect in August this year.
The AI code is part of a broader effort to create a single market for safe and trustworthy AI systems in the EU. The first rules for general-purpose AI models take effect on 2 August.
However, without clear operational guidelines, many providers are unsure how to meet the requirements for transparency, bias checks, and energy consumption reporting.
A pause for legal clarity
Beyond the tech giants, pressure to delay is also coming from political circles. Swedish Prime Minister Ulf Kristersson and the lobby group CCIA Europe are pushing for a short pause, stressing the need for legal clarity before fines or other regulatory penalties for non-compliance can be imposed.
These voices reflect growing concern that accelerated implementation of the new regime could undermine Europe's competitiveness in the global market for AI solutions.
An uncertain regulatory framework hits hardest the startups and smaller companies that are only beginning to commercialise complex machine learning models.
Without guidelines for risk testing and compliance verification, they risk investing significant resources in products that later require drastic changes or withdrawal from the market.
The delay of the code itself, however, offers no relief for high-risk AI applications, which remain subject to stricter impact-assessment requirements and mandatory security measures.
This means that biometric identification systems and AI tools for critical infrastructure are not exempt from oversight, even if there are no detailed guidelines for general models.
The European Commission said it is considering additional measures to simplify the overall framework and that revisions to the legal text are possible if the need for corrections becomes apparent.
In response to the industry's open letter, the Commission said that its aim "will continue to be to establish harmonised and risk-based rules across the Union."
This attitude shows Brussels' willingness to find a balance between the security of citizens and the pressure for a more flexible approach to innovation.
Avoiding a collision
The coming months will be decisive for the tone of the negotiations. If the code is published by the end of the year, it will provide a degree of legal certainty but could also trigger a new wave of requests to suspend the obligations for another two years.
If that happens, the European Parliament's bodies and the relevant committees will face major debates over the document's timeframe and its degree of binding force.
Looking ahead to next year, the EU will have to decide whether to extend the deadlines for implementing some provisions or to introduce overlapping implementation phases.
Alternatively, special advisory bodies could be set up to monitor technological developments and regularly update the guidelines to keep pace with the rapid innovations in the field of machine learning.
On the other hand, there is cause for concern that a prolonged delay could undermine public trust. Citizens and privacy advocacy organisations around the world are watching the implementation of AI laws in Europe as a benchmark for ambition and consistency on the regulatory front.
Any delay perceived as a concession to big tech risks being read as evidence of lobbying practices that prioritise profit over the public interest.
In the face of global tensions and strong competition from the United States and China in the field of artificial intelligence, the EU must maintain its credibility as a leader in ethical regulation.
If the delay is interpreted as a capitulation to industry pressure, it could weaken the Union's negotiating position in bilateral and multilateral agreements on digital governance.
However, if Brussels compromises on a temporary delay paired with a firm deadline for finalising the code and a review mechanism, it could satisfy both industry and NGOs, and avert a head-on collision between technological progress and regulatory responsibility.
Minimising regulatory risk
In the coming period, attention will focus on European Commission meetings and the first evaluation reports on the implementation of the Artificial Intelligence Act.
Some analysts expect the Commission to present a draft code before the December meeting, giving companies enough time to prepare for formal signatures in early 2026.
Forecasts suggest that if the rules are clearly defined by the end of 2025, companies could accelerate their investment in AI systems because regulatory risk would be minimised.
This would create a more favourable climate for innovation and help the European economy stay competitive, despite the current advantages enjoyed by big tech companies in the US and Asia.
Ultimately, the balance between the speed of technological development and regulatory protection will determine the EU's reputation as a global standardising leader.
The choice of publication date and the degree to which the code of conduct is binding could determine whether the European model serves as an example for other regions or is remembered as a complex bureaucratic exercise that slows progress.