First thoughts on the Digital Omnibus
It is quite fitting that I finally decided to start my own Substack on the day of the Digital Omnibus, because a LinkedIn post just won’t do.
I could go on and on about this, starting with the oddity of simplifying a Regulation that currently applies only in part, already assuming it won’t work, just a little over a year after it was announced with big fanfare by the very EU leaders who now seem to dismiss it.
I will just limit myself to breaking down the different elements of the proposal and their implications, starting with the most relevant.
REASONS/FRAMING: the Commission mentions two main reasons why it came up with this proposal: “the slow designation of national competent authorities and conformity assessment bodies, as well as a lack of harmonised standards for the AI Act’s high-risk requirements, guidance, and compliance tools”. This is important, as much of the debate tends to “blame” only the standards delays. Yet the problem also lies with the Member States themselves. Moreover, standards are very important for a smooth implementation of the Act, but not legally necessary for it to apply (they remain voluntary, as compliance can be shown in other ways). What is legally necessary is the enforcement structure within the Member States, which is not yet fully in place. So Member State delays are actually an even more substantial factor.
TIMELINE
Including the famous “stop the clock”, i.e. a postponement of the application of the high-risk section of the Act, in the Omnibus is very tricky. The Commission had two options here:
It could have gone for a targeted amendment of Article 113 under a separate procedure (a “quick fix”), which could have been approved in as little as a couple of months, given all the necessary legal passages. That would have secured the postponement first, giving the Omnibus time to run its course.
Choosing to integrate the postponement within the Omnibus puts a lot of pressure on the co-legislators (Council and Parliament): if the overall simplification proposal is not approved by August 2, 2026, we risk a situation whereby the original rules apply, but only for a few months, creating even more problems for legal certainty.
Looking at the sustainability Omnibus, the draft proposal was made on February 26, 2025. It is mid-November (nine months later) and the Parliament only adopted its official mandate last week. This means that the negotiations with the Council (the “trilogues”) still need to happen before a final agreement can be found.
The AI Act requirements and obligations for high-risk AI systems will start applying in a little over eight months. I will let you do the calculations as to what can possibly go wrong in this case, keeping the example of the sustainability package in mind.
A final consideration: while the Council is clearly interested in a speedy procedure for this simplification package, the Parliament (at least part of it) might not react that well to this amount of pressure. The strong reactions by Renew Europe, the Socialists and Democrats (S&D) and the Greens to the leaked versions of the text promise anything but a smooth procedure. Expect it to be messy, full of amendments and furious lobbying. It can go south very easily.
We now know that the Commission is proposing an interesting solution: postponement until December 2027 for Annex III high-risk AI systems and August 2028 for Annex I, UNLESS the Commission itself decides to make the provisions applicable earlier, should standards and other tools be ready, giving six months’ notice for the former and one year for the latter. I was surprised to learn that making a Regulation applicable earlier by a Commission Decision is even possible; at first it struck me as a sort of legal abomination. And if the argument is that standards need to be ready, were I a company I would do my best to delay standards even further, to make sure we get to the final stated dates.
This also raises another question, though: we say all the time that AI moves too quickly for regulators to follow. EVP Virkkunen repeated it once more just today. So how do we know that the framework will still be relevant two years (yes, two full years) from now?
TRAINING ON SENSITIVE DATA
Now, this is going to be very tricky. There are two main components to this:
The Commission basically proposes to turn Article 10, paragraph 5 into its own article. This makes it very clear that the provision (originally specifying the additional safeguards required to process sensitive personal data on top of what Article 9 GDPR already foresees, when allowed) is now to be considered an additional legal basis allowing such processing (one of the conditions allowing it under Article 9 GDPR). This is exactly what we (the Parliament) aimed to avoid during negotiations, with the invaluable help of the LIBE team and of the Parliament’s legal service. The Commission, instead, kept trying to insert those words (“additional legal basis”) until the very end of the negotiations, when we were just ironing out the recitals in January 2024. Until the very end. Reopening this will be extremely controversial in and of itself. And I won’t even go into reminiscing about how that paragraph almost didn’t make it into the Regulation at all.
To this, we now add the second paragraph of this new article, which will allow ALL providers of AI systems (regardless of risk level) to collect and process sensitive personal data for the purpose of detecting bias. As worthy as the goal is, this provision is particularly dangerous because it opens the door to massive collection of protected personal data that was until now forbidden by EU law (with limited, targeted exceptions), for the sake of training AI systems of all kinds. Also, we always say the AI Act is risk-based: if you’re not a high-risk provider (or developing chatbots or deepfakes needing transparency), you don’t even need to read the AI Act. Well, now you do. I wonder whether this will end up creating even more confusion for industry. I am also limiting myself to the AI Act amending proposal, knowing that the one amending the GDPR is actually even more substantial in this sense.
AI LITERACY
Turning the obligation for providers and deployers to possess a sufficient level of AI literacy into an obligation for the Commission and the Member States to merely encourage AI literacy among AI operators sends a concerning message, even beyond its concrete impact. After months of saying “AI is not a toaster” and that you need to know what you’re doing when using AI in a high-risk situation, we are now doing a full 180 in the name of the AI imperative, basically admitting that it’s not that fundamental after all (even providers no longer have the AI literacy obligation: building a toaster without knowing how it works!).
At the time, I remember that Member States did not want any specific obligation imposed on them to provide training, resources, awareness campaigns and so on (or whatever that “shall encourage” might entail). Well, now they will have it. I am curious to see how they react.
REGISTRATION OBLIGATION FOR NON-HIGH RISK AI SYSTEMS
This promises to be another extremely contentious point. Per the AI Act, AI systems intended to be used in one of the high-risk use cases in Annex III can be exempted from the high-risk requirements if they are only meant to perform basic tasks that do not influence the final decision (on a loan, on hiring, etc.). To ensure a minimum level of traceability and avoid abuses, providers of such systems are only required to register in a specific section of the EU Database. The Commission’s proposal deletes this registration obligation. To be frank, I would love to hear arguments from industry to understand why registering (with only nine points of basic information on the system) would be so complicated in this case, and whether deleting this will really save the European AI industry. Because so far, it rather seems like one less safeguard preventing companies from circumventing the Regulation without being traceable. Moreover, the political cost of this proposal seems quite steep, if even Renew Europe explicitly mentions it in its letter to the Commission on the Omnibus. I would expect massive backlash and amendments on this particular point.
GRANDFATHERING CLAUSE FOR ARTICLE 50
The rumors about a grace period for the transparency obligations for chatbots and deepfakes are confirmed: the obligation to label content as AI-generated would normally start applying in August 2026. With this proposal, the Commission is simply adding, as was already the case for high-risk AI systems and for general-purpose AI models, that the rules are not immediately retroactive, i.e. they only apply to systems placed on the market after August 2026, while older systems have one more year to comply. To me, this appears to be one of the least controversial changes, despite the heated debate over AI-generated content, disinformation, etc.
Interestingly, with the other new changes in the timeline (proposed postponement to December 2027-August 2028 or earlier if the Commission so decides), the transparency obligation would end up being the only provision left to apply from August 2026, while the rest would be delayed.
A particular consideration needs to be made about the explanatory statement, where it says:
“The amendments put forward in the proposal are technical in nature. They are designed to ensure a more efficient implementation of rules that were already agreed at political level. There are no policy options that could meaningfully be tested and compared in an impact assessment report”.
Some of the described amendments are certainly not technical at all. They were the subject of a very delicate political negotiation. Reopening them means dismissing the very “rules that were agreed at political level”.
Also, regarding the impact assessment: of course it couldn’t be done properly, as there was no time and no proper data to collect. One can fully appreciate that. But it sits oddly with the usual outcry from companies, which always call for an impact assessment for every single proposal, no matter how minor. I haven’t seen such an outcry this time. Some liberal MEPs even go as far as proposing impact assessments for individual amendments during negotiations, which always seemed to me as unrealistic as some of the proposals coming from the most radical parties.
DO THESE AMENDMENTS REALLY DO THE JOB?
Now, apart from the data provision (to be seen in conjunction with the draft amendments to the GDPR), which gives much more leeway to companies building AI products, the other amendments do not seem to ease companies’ compliance burden to a substantial extent.
Extending the special provisions for SMEs to small mid-caps can be framed as outrageous, as we are basically extending them from the 98-ish% of EU companies to a staggering 99-something%. Yet, the provisions are not “exemptions” but rather simplified documentation, a simplified quality management system (here, we go from micro-SMEs to all SMEs potentially benefiting), priority access to sandboxes, reduced certification fees and penalties, and dedicated channels and resources.
Sure, further clarifying the provisions on sectoral legislation, or on the DSA, is helpful. One can also argue the intention was already there in the original text.
Expanding enforcement powers for the Commission might also be a sign that Member States finally acknowledge they do not have the resources to supervise everything.
Last but not least, the Regulation now mentions agentic AI as one of the challenges the AI Office will need to tackle, and includes it among the competences of the conformity assessment bodies.
These are just a few, immediate reactions. Possibly more to come.
