Status update — April 2026: AI Office enforcement powers against GPAI providers under Articles 91–93 and 101 enter application on 2 August 2026. The Digital Omnibus on AI (COM(2025) 836) proposes expanding AI Office supervisory competence under Article 75 to AI systems built on GPAI models where the model and system come from the same provider, as well as to AI systems embedded in VLOPs/VLOSEs under the Digital Services Act. Both Council and Parliament positions include exceptions retaining national authority competence in law enforcement, border management, and financial services. See our Digital Omnibus note.

A common compliance anxiety among NYC tech teams in early 2026 concerns general-purpose AI models — the foundation models accessed via API from OpenAI, Anthropic, Google, and others. The anxiety is typically framed as: "we use GPT in our product, which makes us a provider of a GPAI model, which pulls us into Chapter V of the EU AI Act." This framing is usually wrong. The actual obligation landscape is narrower than the anxiety suggests, but the parts that do apply require attention.

What Chapter V actually covers

Chapter V of Regulation (EU) 2024/1689 (Articles 51-56) covers general-purpose AI models. Article 3(63) defines a general-purpose AI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications".

The obligations in Chapter V apply to providers of general-purpose AI models. Article 53 sets obligations for all GPAI model providers: maintaining technical documentation (with the content listed in Annex XI), providing information and documentation to downstream providers integrating the model (Annex XII), putting in place a policy to comply with Union copyright law, and publishing a sufficiently detailed summary of the content used for training.

Article 55 adds obligations for providers of GPAI models with systemic risk: model evaluation, systemic risk assessment and mitigation, serious incident tracking and reporting, and an adequate level of cybersecurity protection.

The provider/deployer distinction is the key point

A NYC company that calls OpenAI's API to use GPT in a product is not, in the usual case, a provider of the GPAI model. OpenAI is. Anthropic is for Claude. Google is for Gemini. The NYC company is a deployer of an AI system that integrates a GPAI model — and its obligations follow the AI system classification, not the GPAI model classification.

The distinction matters because Chapter V obligations (Articles 53-55) apply to GPAI model providers. They do not apply to deployers who merely integrate an existing GPAI model into their own AI system. For the NYC deployer, the applicable obligations are in Chapter III Section 3 (deployer obligations for high-risk AI systems, Article 26) or the transparency obligations in Article 50, depending on what the integrated AI system does.

When does a NYC company become a GPAI model provider?

Article 3(63), read together with the definition of "provider" in Article 3(3), establishes when a company becomes a provider of a GPAI model. The main triggers are:

Training a foundation model from scratch. If the NYC company trains a large language model on a substantial dataset and places it on the market, the company is a GPAI model provider.

Substantially modifying an existing GPAI model. Fine-tuning alone, on a modest dataset for a specific downstream application, typically does not convert the fine-tuner into a new GPAI model provider. The Regulation's approach here draws the line where the modification produces a new model that itself displays significant generality. Light fine-tuning for customer-service chat does not cross that line; retraining a base model with billions of new tokens and releasing it as a distinct product does.

Open-sourcing or redistributing a modified model. If the modified model is made available to third parties, this can trigger provider obligations for the modification.

In practice, very few NYC companies building on GPAI APIs become GPAI model providers themselves. The label applies to the foundation-model lab, not the application developer.

What a NYC deployer using GPT-class models should actually document

Even though Chapter V does not apply, the NYC deployer still has obligations. These depend on what the integrated AI system does:

If the integrated AI system is used for Annex III high-risk purposes (recruitment, credit scoring, law enforcement, critical infrastructure, etc.), Article 26 deployer obligations apply. This includes: assigning human oversight to natural persons with the necessary competence, training, and authority; monitoring the system's operation in accordance with the provider's instructions for use; keeping the logs automatically generated by the system, to the extent they are under the deployer's control, for at least six months; informing workers and their representatives before putting a high-risk system into service in the workplace; and ensuring, to the extent the deployer exercises control over input data, that the input data is relevant and sufficiently representative in view of the intended purpose.
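
In operational terms, the logging obligation reduces to keeping a durable, timestamped record of what the system was asked, what it produced, and who oversaw it. A minimal sketch of what that might look like, assuming a JSONL file store and illustrative field names (the Act prescribes no schema, so everything here is an assumption):

    # Minimal deployer-side logging sketch for Article 26(6).
    # The storage format and every field name are illustrative assumptions.
    import json
    import time
    import uuid

    LOG_PATH = "ai_system_logs.jsonl"  # hypothetical retention store

    def log_interaction(model_id: str, prompt: str, output: str,
                        reviewer: str | None = None) -> None:
        """Append one interaction record to an append-only log.

        Article 26(6) requires keeping automatically generated logs for a
        period appropriate to the intended purpose, at least six months.
        """
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "model_id": model_id,        # the GPAI model named in the inventory
            "prompt": prompt,
            "output": output,
            "human_reviewer": reviewer,  # who exercised oversight, if anyone
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

Whatever store is actually used, it should be append-only and should survive the six-month minimum retention period.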

If the integrated AI system is a chatbot or generates synthetic content, Article 50 transparency obligations apply. This means disclosing to users that they are interacting with an AI system (unless that is obvious to a reasonably well-informed person in the circumstances) and marking generated content, in a machine-readable format, as artificially generated or manipulated.
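
Article 50 fixes the goal of machine-readable marking but does not mandate a technique; watermarking, provenance metadata such as C2PA, and fingerprinting are all candidates. A minimal sketch of one possible approach, a JSON envelope around generated text, offered as an assumption rather than a sanctioned format:

    # Sketch of a machine-readable "AI generated" marker for text output.
    # The envelope shape is an illustrative assumption, not a standard.
    import json
    from datetime import datetime, timezone

    def mark_as_generated(text: str, model_id: str) -> str:
        """Wrap model output in an envelope declaring it AI-generated."""
        return json.dumps({
            "content": text,
            "ai_generated": True,  # the machine-readable flag
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        })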

Crucially, the NYC deployer relies on the GPAI model provider's Article 53 documentation to satisfy parts of its own obligations. When OpenAI, Anthropic, or Google publishes model documentation (training data summaries under Article 53(1)(d), downstream-provider information under Annex XII), that documentation becomes part of the NYC deployer's evidence base.

The practical compliance package for a NYC GPAI deployer

A minimal defensible compliance package for a NYC company deploying GPT-class models via API, in an Annex III context (let's say a recruitment chatbot), includes:

An inventory entry naming the GPAI model used, its provider, and the date of the information relied on from the provider's Article 53 documentation (a structured sketch appears after this list).

An Article 50 disclosure shown to users interacting with the chatbot, stating that they are interacting with an AI system. One sentence suffices, but it must appear no later than the first interaction.

An Article 26 human oversight design defining which human is responsible for the AI system's outputs, how they can interrupt or override, and how their oversight is logged.

An Article 13 instructions-for-use document reproducing the limitations and intended use disclosed by the GPAI model provider, adapted to the deployer's specific integration.

A risk management file entry (Article 9) covering known GPAI-specific risks: hallucination, prompt injection, jailbreaks, and bias propagated from training data, together with the mitigations the deployer has implemented (guardrails, output validation, input sanitization; see the sketch after this list).
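
Two of these items, the inventory entry and the input/output mitigations, translate directly into artifacts. A minimal sketch, in which every field name, pattern, and value is an illustrative assumption rather than anything prescribed by the Regulation:

    # Illustrative compliance-package artifacts for a GPAI deployer.
    # All identifiers, dates, and patterns below are assumptions.
    import re

    # Inventory entry naming the GPAI model relied on (first item above).
    MODEL_INVENTORY_ENTRY = {
        "model_id": "gpt-class-model",         # hypothetical identifier
        "gpai_provider": "ExampleLab",         # the upstream Article 53 provider
        "art53_docs_retrieved": "2026-01-15",  # date of documentation relied on
        "intended_use": "recruitment chatbot (Annex III)",
    }

    # Input sanitization and output validation (last item above).
    INJECTION_PATTERNS = [
        r"ignore (all )?previous instructions",  # naive injection heuristic
        r"reveal your system prompt",
    ]

    def sanitize_input(user_text: str) -> str:
        """Reject inputs matching known prompt-injection heuristics."""
        for pattern in INJECTION_PATTERNS:
            if re.search(pattern, user_text, re.IGNORECASE):
                raise ValueError("input rejected by injection filter")
        return user_text

    def validate_output(model_text: str) -> str:
        """Block outputs containing obviously sensitive tokens (toy check)."""
        if re.search(r"\b\d{3}-\d{2}-\d{4}\b", model_text):  # US SSN shape
            raise ValueError("output failed validation")
        return model_text

A production deployment would use maintained guardrail tooling rather than hand-rolled regexes; the sketch only shows where the controls sit around the model call.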

Systemic risk GPAI — what changes

Article 51 introduces the concept of GPAI models with systemic risk — determined by aggregate compute used for training (the threshold set in Article 51(2) is 10^25 FLOPs) or by Commission designation based on criteria in Annex XIII. For these models, Article 55 adds provider obligations: model evaluations, systemic risk assessment and mitigation, serious incident reporting, and cybersecurity.
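
For orientation, the threshold can be sanity-checked with the widely used approximation that training compute is roughly 6 FLOPs per parameter per training token. This is a heuristic, not the measurement methodology used for designation, and the figures below are hypothetical:

    # Back-of-envelope check against the Article 51(2) threshold.
    # The 6 * params * tokens approximation is a common heuristic,
    # not the Commission's designation methodology.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2)

    def estimated_training_flops(n_params: float, n_tokens: float) -> float:
        return 6 * n_params * n_tokens

    # Hypothetical example: a 70B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(70e9, 15e12)  # about 6.3e24 FLOPs
    print(f"{flops:.2e} FLOPs -> presumption triggered: "
          f"{flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")

At that hypothetical scale the presumption is not triggered; a training run roughly 1.6 times larger would cross the line.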

For NYC deployers, this does not change the analysis materially. The deployer is still not the GPAI provider. But the deployer should be aware of which GPAI models are classified as having systemic risk (as of early 2026, the most recent frontier models from OpenAI, Anthropic, Google, and others typically cross the threshold), because those are the models whose provider documentation is most rigorous and whose incident reporting creates downstream signals the deployer should monitor.


For GPAI deployer compliance documentation support, see Lexara Advisory.

Primary sources. Regulation (EU) 2024/1689: Articles 3(3), 3(63), 9, 13, 26, 50, 51, 53, 55; Annexes XI, XII, XIII.