In the early hours of Thursday 7 May 2026, after roughly nine hours of negotiation in Strasbourg, the European Parliament and the Council of the European Union reached a provisional political agreement on the Digital Omnibus on AI (COM(2025) 836, procedure 2025/0359(COD)). The deal closed under the Cypriot Council Presidency at the third political trilogue, nine days after the second trilogue had collapsed on 28 April over the conformity assessment architecture for AI embedded in regulated products. The institutions have stated their intention to complete formal adoption before 2 August 2026, the date on which the AI Act's high-risk obligations would otherwise begin to apply under the regulation as enacted.
For an NYC company reading the file today, the practical consequence is a meaningful but not radical shift. The risk-based architecture of Regulation (EU) 2024/1689 is intact. Scope under Article 2 is unchanged. The substantive obligations on high-risk providers and deployers — Articles 9, 10, 11, 13, 14, 15, 26 — survive the trilogue without softening. What changed is the timing, the addition of one new prohibition under Article 5, and a handful of targeted clarifications. This note walks through what was decided, what was left unsettled, and what NYC compliance teams should plan against now.
The core deal in one table
The headline elements of the political agreement, as published by the Council, Parliament and European Commission press services on 7 and 8 May 2026:
| Provision | Original date (Reg. 2024/1689) | New date (post-Omnibus) | Status |
|---|---|---|---|
| Annex III high-risk obligations (stand-alone) | 2 August 2026 | 2 December 2027 | Fixed date · standards readiness condition rejected |
| Annex I high-risk obligations (embedded in regulated products) | 2 August 2027 | 2 August 2028 | Fixed date · with Annex I compromise (see below) |
| New Article 5 prohibition · NCII / CSAM generation | — | 2 December 2026 | New prohibited practice |
| Article 50(2) watermarking · grandfathering for systems on the market before 2 August 2026 | 2 August 2026 | 2 December 2026 | Four-month transitional period |
| Article 50(2) watermarking · new systems placed on market on or after 2 August 2026 | 2 August 2026 | 2 August 2026 | Unchanged |
| Article 4 (AI literacy) | 2 February 2025 | 2 February 2025 | In application · unchanged |
| Article 5 (existing prohibitions) | 2 February 2025 | 2 February 2025 | In application · unchanged |
| GPAI obligations (Chapter V) | 2 August 2025 | 2 August 2025 | In application · unchanged |
| National AI regulatory sandboxes | 2 August 2026 | 2 August 2027 | One-year extension |
The fixed-date settlement on high-risk obligations
The Commission's original proposal of 19 November 2025 had introduced a conditional mechanism: high-risk obligations would apply only after the Commission confirmed by decision that compliance support measures — harmonised standards, common specifications, guidelines — were available. Six months for Annex III, twelve months for Annex I, with absolute backstop dates of 2 December 2027 and 2 August 2028 respectively.
That conditional structure did not survive the trilogue. Both Council (general approach, 13 March 2026) and Parliament (plenary position, 26 March 2026 by 569 votes to 45) had converged on rejecting the standards-availability trigger and replacing it with hard dates. The trilogue confirms that convergence. The new Article 113 reads as fixed: 2 December 2027 for Annex III stand-alone systems, 2 August 2028 for Annex I embedded systems, regardless of standards readiness.
For NYC companies, this matters in two practical ways. First, the planning horizon is genuinely longer — sixteen months beyond the original date for Annex III, one year for Annex I — but the obligations themselves are not lighter. Article 9 risk management, Article 10 data governance, Article 11 and Annex IV technical documentation, Article 13 transparency, Article 14 human oversight, Article 15 accuracy and robustness — all remain in their current form. Second, the absence of a standards-availability buffer means that companies cannot expect a further pause if harmonised standards are still incomplete by late 2027. The window has been extended once. A second extension is improbable.
The Annex I compromise — what changed for embedded AI
The 28 April trilogue collapsed on a single technical question: whether AI systems already covered by EU sectoral product safety legislation should remain under the AI Act's direct scope (Section A of Annex I) or be governed primarily by their sectoral frameworks (Section B). Parliament had pushed for moving all Section A products to Section B; Council had resisted.
The 7 May compromise lands in the middle. The Machinery Regulation (EU) 2023/1230 has been moved from Section A to Section B of Annex I — AI systems falling under it are now addressed primarily through that sectoral framework rather than through the AI Act's direct scope. The Medical Devices Regulation, In Vitro Diagnostics Regulation, and Radio Equipment Directive remain unchanged, with AI-specific overlay obligations preserved. A new mechanism allows the Commission, through implementing acts, to resolve situations where sectoral law contains AI-specific requirements equivalent to those of the AI Act, by limiting the direct application of AI Act high-risk requirements where the sectoral regime delivers an equivalent level of protection.
The agreement also narrows the definition of "safety component" — products with AI functions that only assist users or optimise performance are no longer automatically captured. The relevant test is whether failure or malfunction creates health or safety risks. For NYC companies whose products fall under EU sectoral safety law, the primary effect is that the conformity assessment route is clarified, not that the substantive obligations disappear.
The new Article 5 prohibition · NCII and CSAM
The single most operationally significant addition is the new prohibition introduced into Article 5. The agreed text targets AI systems used to generate non-consensual intimate imagery (NCII) of identifiable natural persons, and AI systems used to generate child sexual abuse material (CSAM). The prohibition applies in three configurations: placing such AI systems on the EU market with the purpose of generating that content; placing them on the market without reasonable safety measures to prevent such generation; and deployers using them for that purpose.
Three structural points matter. First, the prohibition reaches not only intentional designs (the so-called "nudifier" applications) but also general-purpose generative image, video and audio models that fail to put reasonable safeguards in place against this misuse. Second, the safeguards exception is genuine — providers who implement effective technical measures (training data filtering, prompt and output classifiers, output-stage content moderation) are within the safe harbour. Third, the application date is 2 December 2026, which gives providers approximately seven months from formal adoption to bring affected systems into line.
For NYC companies developing or deploying generative AI capable of producing realistic images, video or audio of identifiable persons, the operational implication is straightforward: technical safeguards must be production-ready by December 2026, and the documentation showing that those safeguards are reasonable, proportionate and effective must be ready alongside.
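The safeguards pattern the agreed text contemplates — prompt and output classifiers with an evidence trail of refusals — can be sketched as a two-stage gate at the generation boundary. The sketch below is illustrative only: the function names, the `ModerationResult` shape, and the in-memory refusal log are invented for this note, and the classifiers are stubs; a production system would plug in real models and durable, tamper-evident audit logging.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical in-memory evidence trail; real deployments would use durable,
# tamper-evident audit logging that a market surveillance authority could review.
REFUSAL_LOG: list[tuple[str, str]] = []

def log_refusal(stage: str, reason: str) -> None:
    REFUSAL_LOG.append((stage, reason))

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def moderated_generate(
    prompt: str,
    generate: Callable[[str], bytes],
    prompt_check: Callable[[str], ModerationResult],
    output_check: Callable[[bytes], ModerationResult],
) -> Optional[bytes]:
    """Two-stage safeguard gate: vet the prompt, generate, then vet the
    output before anything is released to the user."""
    pre = prompt_check(prompt)
    if not pre.allowed:
        log_refusal("prompt", pre.reason)
        return None
    candidate = generate(prompt)
    post = output_check(candidate)
    if not post.allowed:
        log_refusal("output", post.reason)
        return None
    return candidate
```

The design point is that refusals are recorded, not just enforced: the documentation showing that safeguards are "reasonable, proportionate and effective" is easier to assemble when every blocked prompt and blocked output leaves a dated record.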
Article 50(2) watermarking — the deadline that comes first
Article 50(2), which requires providers of generative AI systems to ensure that synthetic audio, image, video and text content is marked in a machine-readable format and detectable as artificially generated, was not postponed. Its application date remains 2 August 2026 for AI systems placed on the market on or after that date. What the Omnibus does add is a four-month transitional period for AI systems already on the market before 2 August 2026 — those systems must comply by 2 December 2026.
This is the closest live deadline coming out of the deal. The Commission had originally proposed a six-month grace period running to 2 February 2027. The Parliament had pushed for three months, ending on 2 November 2026. The trilogue settled on four months. For providers of generative AI features in production today, the engineering work — UI labelling, machine-readable metadata embedding at the output layer, and detection capability — needs to be ready within roughly seven months. This is not a documentation task.
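As a rough illustration of what output-layer marking involves, the sketch below attaches a signed, machine-readable provenance record to each generated artefact and verifies it on the detection side. Everything here is hypothetical — the field names, record format and key handling are invented for this note; Article 50(2) does not mandate any particular scheme, and production deployments would more plausibly adopt an interoperable standard such as C2PA Content Credentials with managed signing keys.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; a real deployment would use a
# managed key service, not a constant in source code.
SIGNING_KEY = b"example-key"

def provenance_record(content: bytes, model_id: str) -> dict:
    """Build a machine-readable record asserting the content is AI-generated,
    bound to the content by its hash and signed for tamper evidence."""
    record = {
        "ai_generated": True,
        "model": model_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(content: bytes, record: dict) -> bool:
    """Detection side: check the signature and that the record matches
    this exact content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(record.get("signature", ""), expected)
        and unsigned.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

The point of the sketch is the shape of the work, not the scheme: marking happens where outputs are produced, detection must be possible downstream, and both sides need to agree on the format — which is why this is an engineering task with a seven-month runway rather than a documentation exercise.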
What did not change
The provisions that remain entirely outside the Omnibus deserve explicit attention, because some industry communications since 7 May have been imprecise on this point.
Article 4 (AI literacy) remains in its current direct-obligation form. The Commission had proposed transforming it into a duty on Member States and the Commission to "foster" AI literacy. That transformation was rejected in the Council and Parliament negotiations. Article 4 continues to oblige providers and deployers — including NYC companies operating in the EU — to ensure their staff have a sufficient level of AI literacy. Training programs should continue.
The eight existing Article 5 prohibitions remain in application. Subliminal techniques beyond a person's consciousness, exploitation of vulnerabilities, social scoring, predictive policing based solely on profiling, untargeted scraping of facial images, emotion recognition in workplaces and education, biometric categorisation by sensitive attributes, real-time remote biometric identification — all eight have been in force since 2 February 2025 and are unaffected.
GPAI obligations under Chapter V (Articles 51 to 56) continue on their current schedule. They have applied since 2 August 2025 and are not amended by the Omnibus.
Article 49 registration was retained. The Commission had proposed eliminating the registration obligation for AI systems that providers self-classify as non-high-risk under Article 6(3). Both Council and Parliament rejected that elimination. The registration obligation survives in a streamlined form, with reduced content requirements but preserved in principle.
"Strict necessity" standard for bias detection survived. The Council and Parliament resisted the Commission's broader formulation for processing special categories of personal data (GDPR Article 9) for the purpose of bias detection and correction. The "strictly necessary" threshold remains the standard, narrower than the unconditional necessity language the Commission had pushed for.
De lege lata versus de lege ferenda — the formal adoption gap
The 7 May agreement is provisional. It must still go through legal-linguistic revision (a technical exercise, but one that occasionally produces small substantive shifts), formal endorsement by the Council and the European Parliament, and publication in the Official Journal of the European Union. Until publication, Regulation (EU) 2024/1689 remains in force in its original wording. The 2 August 2026 application date for high-risk obligations is the legally binding one as a matter of lex lata.
For most operational planning this distinction is academic — both institutions have publicly committed to completing formal adoption before 2 August 2026, and political reversal of the deal is unlikely. But there are two scenarios where the distinction matters. First, if formal adoption slips past 2 August 2026 for any procedural reason, the high-risk obligations enter into application that day on the original timeline. Second, the consolidated text after legal-linguistic review may contain small modifications relative to the political text — recital drafting in particular tends to evolve. Companies that have built their compliance posture on a specific recital interpretation should re-read the consolidated text when published.
For risk-conscious NYC counsel, the operational posture is: plan against the new dates for resource allocation and documentation roadmaps, treat the original dates as the legal floor, and hold a contingency review for the period August through October 2026 in case the consolidated text shifts material content.
What NYC companies should do this quarter
The deal does not change the fundamental compliance architecture. It changes the timing, adds one prohibition, and clarifies the sectoral interaction. The action list for NYC companies follows from that:
Confirm scope and classification work is current. Article 2 territorial scope, Article 3 role classification (provider versus deployer), Annex III high-risk classification — none of these are affected by the Omnibus. If your scope assessment has been waiting for trilogue clarity, that wait is over. The picture is now stable for at least the next eighteen months. Run or refresh the scope self-assessment.
Begin synthetic-content disclosure engineering for the 2 December 2026 watermarking deadline. If you provide generative AI to EU users — directly or through any product surface that reaches them — UI labelling, output-layer metadata embedding, and detection capability must be production-ready by December. Seven months is not a long runway for engineering work that touches model output paths, content delivery infrastructure, and user-facing UI.
Assess Article 5 NCII/CSAM exposure for any generative image, video or audio capability. The prohibition reaches general-purpose generative AI tools that fail to put reasonable safeguards in place against this misuse — not only intentional nudifier applications. Documentation of training data filtering, prompt and output classifiers, and output-stage moderation should be drafted alongside the technical implementation, in the register an EU market surveillance authority would expect to read.
Use the extended Annex III runway to strengthen documentation, not to defer it. The 2 December 2027 application date is more than a calendar shift — it is the window in which Annex IV technical documentation, Article 9 risk management, Article 10 data governance evidence, and Article 26 deployer records can be built to a quality standard that survives inspection by an EU national market surveillance authority. Companies that treat the extension as a reason to pause are the companies whose first enforcement encounter, whenever it arrives, will find them unprepared.
For dual-regime NYC companies, integrate LL144 and EU AI Act work now. The Annex III runway extension does not change the LL144 annual bias audit cadence. Most AEDTs used in NYC recruitment are simultaneously Annex III high-risk AI systems. A single integrated workstream — bias methodology aligned with Article 10, candidate notice infrastructure aligned with Article 26(11), audit cadence aligned with Article 9 iterative risk management — produces evidence usable by both DCWP and EU market surveillance. See dual compliance for HR tech in NYC.
Article 4 AI literacy programs continue. The Omnibus did not transform Article 4 into a Member State foster duty in the form initially proposed. Direct provider and deployer obligations remain. Continue staff training programs and document them.
Watch the consolidated text closely. Once legal-linguistic revision concludes and the consolidated text is published, run a re-read against your current compliance plan. Recitals matter for interpretation, particularly for the safeguards exception under the new Article 5 prohibition and for the equivalence mechanism under Annex I. Small drafting shifts can have outsized practical consequences.
The argument for a second delay is exhausted
A reasonable question, particularly from companies that paused work in late 2025 expecting Brussels to keep moving the deadline, is whether a further delay might come. The honest reading is that it will not.
The political argument for delay was used in this round and accepted. The Cypriot Presidency has framed the deal as the first deliverable under the One Europe, One Market roadmap. The Annex I compromise was face-saving rather than structural. A second postponement would forfeit the leverage that makes the AI Act globally relevant — the Brussels effect depends on a credible application date. The institutions have spent political capital on this deal and are unlikely to re-open it.
For NYC compliance teams, the operative posture is clear: the new dates are the planning baseline, formal adoption is a matter of weeks rather than months, and the work between now and 2 December 2026 — watermarking engineering, NCII/CSAM safeguards, scope confirmation — is non-negotiable.
Primary sources. Council of the EU, Press release "Artificial Intelligence: Council and Parliament agree to simplify and streamline rules", 7 May 2026. European Parliament, Press release ref. 20260427IPR42011, 7 May 2026. European Commission, COM(2025) 836 final — Proposal for a Regulation amending Regulations (EU) 2024/1689 and (EU) 2018/1139 (Digital Omnibus on AI), procedure 2025/0359(COD), 19 November 2025. Council general approach, Press release 189/26, 13 March 2026. European Parliament Report A10-0073/2026 (joint IMCO-LIBE), 18 March 2026; plenary vote 26 March 2026 (569 in favour, 45 against, 23 abstentions). Joint press conference of co-rapporteurs Arba Kokalari (EPP, Sweden) for IMCO and Michael McNamara (Renew, Ireland) for LIBE, Strasbourg, 7 May 2026. Regulation (EU) 2024/1689: Articles 4, 5, 6, 9, 10, 11, 13, 14, 15, 26, 49, 50(2), 60, 75, 99, 113; Annex I; Annex III; Annex IV. Earlier Lexara Advisory analysis: The EU AI Act Digital Omnibus: what changes for NYC companies (19 April 2026).