Three and a half months before the AI Act's originally planned general application date of 2 August 2026, the regulatory landscape is in active motion. The Digital Omnibus on AI (COM(2025) 836), tabled on 19 November 2025 and now in trilogue, proposes to amend the application dates. The Council adopted its position on 13 March 2026; the Parliament followed on 26 March 2026 with 569 votes in favour. Harmonised standards are in public enquiry but not yet OJEU-published. National competent authorities continue to be designated at an uneven pace. This note summarises where things stand for NYC companies preparing for EU AI Act compliance, drawing directly on Commission, Council, and Parliament sources.

This is a point-in-time update as of April 2026. The regulatory situation will continue to evolve through the trilogue conclusion (expected May-June 2026) and the entry into force of any amended text.

Status of the Digital Omnibus on AI — trilogue ongoing

The Commission proposed the Digital Omnibus on AI on 19 November 2025 as COM(2025) 836. The key amendments: making high-risk application conditional on the availability of harmonised standards; transforming the Article 4 AI literacy obligation into a Member State duty; centralising supervision of AI systems built on GPAI models at the AI Office; and extending SME regulatory privileges to small mid-cap enterprises (SMCs).

The Council adopted its general approach on 13 March 2026 under the Cypriot Presidency, converting the conditional trigger to fixed dates: 2 December 2027 for Annex III high-risk systems, 2 August 2028 for Annex I high-risk systems embedded in products. The Council also added a new prohibited practice under Article 5 covering AI generating non-consensual sexually explicit content (including child sexual abuse material), reinstated the Article 49 registration for Article 6(3) exempt systems, and limited the AI Office's exclusive competence over GPAI-system supervision where specific sectoral authorities apply (law enforcement, border management, financial services).

The Parliament adopted its negotiating position in plenary on 26 March 2026 with 569 votes in favour, 45 against, and 23 abstentions. The Parliament aligned with the Council on fixed dates and on the new Article 5 prohibition (the "nudifier" ban), though with a broader consent formulation. It also proposed a shorter Article 50(2) transitional period: until 2 November 2026 rather than the Commission's 2 February 2027. For detail, see our Digital Omnibus deep dive.

Trilogue negotiations launched on 26 March 2026, immediately after the Parliament plenary vote. The Cypriot Presidency has stated the objective of reaching final agreement before 2 August 2026. Whether they succeed depends on the speed of compromise on open divergences — particularly on AI Office competence scope and on the precise Article 5 nudifier formulation.

What this means for 2 August 2026 as currently scheduled

The AI Act as currently in force (before any Omnibus amendment) provides that the main body of obligations, including Annex III high-risk obligations, Article 26 deployer duties, Article 49 registration, and Article 50 transparency, applies from 2 August 2026. High-risk systems embedded in products under Article 6(1) follow from 2 August 2027.

If the Omnibus concludes with Council-Parliament agreement before 2 August 2026 and is adopted by the EU legislator, the new fixed dates become binding: 2 December 2027 for Annex III, 2 August 2028 for Annex I. If the Omnibus does not conclude by 2 August 2026, the original dates apply as baseline and NYC companies must meet them.

Our working assumption as of April 2026: the Omnibus will likely conclude and enter into force before 2 August 2026, given the Cypriot Presidency's stated priority. But this is not certain.

Harmonised standards — first AI-specific standard in public enquiry

On 30 October 2025, prEN 18286, "Artificial Intelligence - Quality Management System for EU AI Act Regulatory Purposes", entered public enquiry as the first candidate harmonised standard developed specifically for the AI Act. It supports the Article 17 quality management system requirements and is the first AI Act standard to progress toward OJEU publication.

Until harmonised standards are OJEU-published, providers of high-risk AI systems cannot rely on the Article 40 presumption-of-conformity shortcut. They must instead document compliance against Articles 9–15 directly, articulating their own methodology. This increases the documentation burden but does not change the underlying requirements.

The Commission has also stated it will request during 2026 a standardisation deliverable on resource performance (energy, material consumption) of high-risk AI systems, extending the standards portfolio beyond QMS.

AI Office Service Desk launched — official compliance resources available

The Commission's AI Act Service Desk and Single Information Platform are now live at ai-act-service-desk.ec.europa.eu. The platform provides an interactive Compliance Checker, the authoritative AI Act Explorer, FAQs compiled from stakeholder submissions, and a channel for submitting questions directly to the AI Office expert team. During 2026 the platform is expanding to all 24 EU official languages.

This is the authoritative resource for NYC companies seeking Commission interpretation of specific questions. For systematic analysis, see our official resources guide.

AI Office's GPAI enforcement powers enter into application on 2 August 2026

The AI Act's Chapter V obligations on GPAI providers have applied since 2 August 2025. The Commission's enforcement powers against GPAI providers — powers to request documentation under Article 91, conduct evaluations under Article 92, request measures under Article 93, and impose fines under Article 101 — enter application on 2 August 2026.

Before that date, the AI Office monitors compliance but does not yet have the enforcement hammer. Providers of GPAI models already on the market before 2 August 2025 have until 2 August 2027 to reach compliance.

The General-Purpose AI Code of Practice, effective 2 August 2025, offers a voluntary compliance route. The Commission has stated in its guidelines that for providers adhering to an adequate Code of Practice, the AI Office will focus its supervision on adherence to the Code rather than on a broader article-by-article assessment, meaning the Code is operationally attractive even as a voluntary instrument.

National competent authorities — designation continues

The AI Act requires each Member State to designate at least one market surveillance authority for AI Act enforcement. Designation progress as of April 2026 is uneven: Spain has designated AESIA; France has tasked ANSSI with certain AI Act competences; Ireland has not yet designated a single MSA; other Member States remain in consultation or drafting phase.

For NYC companies with EU market exposure, the practical implication is that you will not know definitively which authority will scrutinise your AI system deployments until national designations are complete. The Member States furthest along in designation also tend to be taking the most active early enforcement posture.

Commission guidelines pipeline — extensive output expected in 2026

The Commission has announced in the Digital Omnibus Explanatory Memorandum that it will publish during 2026 approximately a dozen distinct guidelines, covering:

practical application of the high-risk classification;
practical application of Article 50 transparency requirements;
reporting of serious incidents by high-risk AI providers;
practical application of the high-risk requirements (Arts. 9–15);
practical application of obligations for providers and deployers (Arts. 16, 26);
the FRIA template under Article 27;
responsibilities along the AI value chain;
substantial modification analysis;
post-market monitoring plans;
simplified QMS elements for SMEs and SMCs;
interplay with the GDPR, the Cyber Resilience Act, and the Machinery Regulation; and
competences of conformity assessment bodies.

The Code of Practice on marking and labelling of AI-generated content is expected in Q2 2026 (May-June) in support of Article 50(2) transparency obligations.

What NYC companies should do now

With regulatory flux on specific dates but the core Regulation unchanged, the defensible strategy for NYC companies in April 2026 is:

Continue scope and classification work. Article 2 scope, Article 3 roles, Article 5 prohibitions, Annex III classification — none of this is affected by the Omnibus. This work has to happen regardless of whether application is 2 August 2026 or 2 December 2027.

Maintain documentation audit-readiness. Whether the deadline is August 2026 or December 2027, documentation must be current and retrievable when requested. Audit-ready documentation is the best protection against Tier 3 penalty exposure under Article 99(5) (see our penalties note).

Structure Article 5 compliance assuming the nudifier prohibition passes. If your product involves generative AI for images or video, assume the additional prohibition will be in force before 2 August 2026. Implement technical safeguards that can later be documented as "effective."

If eligible, prepare to claim SMC privileges. NYC companies with 50-500 employees and between €10M and €100M turnover may fall within the proposed SMC definition (Commission Recommendation 2025/1099). Document the qualification now so it can be invoked if and when the Omnibus passes.
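As a first-pass screen, the thresholds cited in this note can be expressed as a simple check. This is a hypothetical sketch using only the figures stated above (50-500 employees, €10M-€100M turnover); the binding definition is the one in Commission Recommendation (EU) 2025/1099, which should be verified against the actual text before any claim is made.

```python
# Hypothetical SMC eligibility screen using the thresholds cited in this note.
# Not a legal determination: the binding definition is in Commission
# Recommendation (EU) 2025/1099.

def may_qualify_as_smc(employees: int, turnover_eur: float) -> bool:
    """Rough first-pass screen against the note's stated SMC thresholds."""
    return 50 <= employees <= 500 and 10_000_000 <= turnover_eur <= 100_000_000

# Example: a 220-person company with EUR 45M turnover passes the screen.
print(may_qualify_as_smc(220, 45_000_000))   # True
print(may_qualify_as_smc(30, 45_000_000))    # False: below the employee floor
```

The point of encoding the screen is less the arithmetic than the documentation: run it against audited headcount and turnover figures and file the result, so the qualification can be invoked immediately if the Omnibus passes.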

Monitor trilogue weekly. The most consequential date for NYC compliance planning is when the trilogue concludes. Subscribing to the Commission AI Act Service Desk updates and the European Parliament Legislative Train feed is the most efficient way to track this.

Article 4 AI literacy — maintain training programs. Even if Article 4 transforms from an operator obligation to a Member State duty, the Article 26(2) deployer obligation to ensure appropriate training remains. And AI literacy is good governance regardless.

The underestimated risk — documentation without retrievability

From our advisory practice, the single most underestimated compliance risk is not missing the deadline (most companies preparing seriously will meet it). It is producing incomplete or disorganised documentation that generates Article 99(5) Tier 3 exposure when authorities request information.

Tier 3 fines of up to €7.5M or 1% of worldwide annual turnover, whichever is higher, attach to providing "incorrect, incomplete or misleading information" in response to authority requests. This tier is triggered most often not by bad faith but by documentation that cannot be retrieved under deadline pressure.
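The exposure arithmetic is worth making concrete. A minimal sketch, assuming the Article 99 structure of a fixed cap versus a turnover-based cap (with the lower of the two applying to SMEs):

```python
# Back-of-envelope Article 99(5) exposure: up to EUR 7.5M or 1% of total
# worldwide annual turnover, whichever is higher (for SMEs, whichever is lower).
# Illustrative only; actual fines are set case by case by authorities.

def tier3_max_fine(turnover_eur: float, is_sme: bool = False) -> float:
    fixed_cap = 7_500_000.0
    turnover_cap = 0.01 * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(tier3_max_fine(2_000_000_000))             # 20000000.0 (1% of EUR 2bn)
print(tier3_max_fine(200_000_000))               # 7500000.0  (fixed cap is higher)
print(tier3_max_fine(200_000_000, is_sme=True))  # 2000000.0  (SME: lower of the two)
```

For a large group, the 1% turnover prong dominates quickly, which is why Tier 3 exposure scales with the whole undertaking rather than the EU-facing product line.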

The operational answer is unglamorous: maintain a single indexed file, current at all times, covering Annex IV technical documentation, Article 9 risk management records, Article 10 data governance documentation, Article 14 human oversight design, Article 17 QMS records, and Article 72 post-market monitoring logs. This is substantially cheaper to build and maintain as you go than to reconstruct under deadline pressure.
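One way to operationalise "indexed and retrievable" is a machine-readable registry with a staleness check. The sketch below is illustrative: the storage paths, dates, and 90-day review window are hypothetical placeholders, not prescribed by the Regulation; only the documentation categories come from the list above.

```python
# Minimal sketch of a compliance documentation index: one registry mapping each
# AI Act documentation duty to its storage location and last review date, so
# every item can be produced on request. Paths and dates are placeholders.
from dataclasses import dataclass
from datetime import date

@dataclass
class DocEntry:
    requirement: str    # AI Act provision the record satisfies
    location: str       # where the current version lives
    last_reviewed: date

INDEX = [
    DocEntry("Annex IV technical documentation", "s3://compliance/annex-iv/", date(2026, 4, 1)),
    DocEntry("Article 9 risk management records", "s3://compliance/art9/", date(2026, 3, 15)),
    DocEntry("Article 10 data governance", "s3://compliance/art10/", date(2026, 2, 20)),
    DocEntry("Article 14 human oversight design", "s3://compliance/art14/", date(2026, 4, 1)),
    DocEntry("Article 17 QMS records", "s3://compliance/art17/", date(2026, 1, 10)),
    DocEntry("Article 72 post-market monitoring logs", "s3://compliance/art72/", date(2026, 4, 5)),
]

def stale_entries(index: list[DocEntry], as_of: date, max_age_days: int = 90) -> list[str]:
    """Flag records not reviewed within the freshness window."""
    return [e.requirement for e in index if (as_of - e.last_reviewed).days > max_age_days]

print(stale_entries(INDEX, date(2026, 4, 15)))   # ['Article 17 QMS records']
```

Run against an as-of date, the check surfaces exactly the records that would be hardest to defend under an authority deadline, which is the failure mode the Tier 3 provision punishes.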


Primary sources. European Commission, COM(2025) 836 final, "Proposal for a Regulation amending Regulations (EU) 2024/1689 and (EU) 2018/1139 (Digital Omnibus on AI)", procedure 2025/0359(COD), 19 November 2025. Council of the EU, Press release 189/26, "Council agrees position to streamline rules on Artificial Intelligence", 13 March 2026. European Parliament plenary vote, 26 March 2026 (569–45–23); Press release "Artificial Intelligence Act: delayed application, ban on nudifier apps", 26 March 2026; Report A10-0073/2026 (IMCO-LIBE joint report). prEN 18286 public enquiry entry, 30 October 2025. Commission AI Act Service Desk (ai-act-service-desk.ec.europa.eu). Commission Guidelines on Prohibited AI Practices (4 February 2025). Regulation (EU) 2024/1689: Articles 2, 4, 5, 6, 9–15, 16, 17, 26, 27, 40, 49, 50, 62, 72, 73, 91–93, 99, 101, 111, 113. Commission Recommendation (EU) 2025/1099 (SMC definition).