One of the most consistent misunderstandings among US counsel and in-house compliance teams, in our experience during early 2026, is the assumption that the EU AI Act is a European law for European companies. The Regulation itself disabuses that reading in its scope provision. Article 2 of Regulation (EU) 2024/1689 extends the territorial reach of the EU AI Act to providers and deployers established outside the Union under specific conditions. For a NYC company building or using AI systems, those conditions are met more often than the incorporation paperwork suggests.
This piece walks through Article 2 as it actually reads, identifies the three scope triggers that most commonly capture NYC companies, and names the factual tests that decide each. It is written for compliance leads and general counsel who need to answer a single binary question: does the EU AI Act apply to us, or not?
The text of Article 2(1)
Article 2(1) of the Regulation lists the persons and activities to which the Regulation applies. For the NYC-company inquiry, three of the grounds matter:
Article 2(1)(a) captures providers that place on the market or put into service AI systems, or place on the market general-purpose AI models, in the Union, "irrespective of whether those providers are established or located within the Union or in a third country". That closing clause is the extraterritorial hook. A US-incorporated company that supplies an AI system to EU users is a provider within the meaning of the Regulation.
Article 2(1)(b) captures deployers of AI systems that have their place of establishment or are located within the Union. This is the intra-Union ground — not directly relevant to a NYC company's US operations.
Article 2(1)(c) captures providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union. This is the output-used-in-Union ground. It is the most commonly triggered and most often overlooked.
Scope trigger one — you place an AI system on the EU market
A NYC company that sells, licenses, or otherwise makes available an AI system to customers in the European Union is a provider under Article 2(1)(a), regardless of where the company is incorporated. "Placing on the market" is defined in Article 3(9) as the first making available of an AI system on the Union market. "Making available on the market" is defined in Article 3(10) as any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.
For a NYC SaaS company with a pricing page that accepts EU-issued credit cards, EU-registered company entities as customers, or physical delivery to EU addresses, the test is typically met. For a NYC-based B2B company selling to US multinationals that then deploy the AI tool across their EU subsidiaries, the question is more subtle — the NYC company may still be the provider if the contract explicitly contemplates EU deployment, or if the NYC company has designed the tool for Union deployment.
The factual questions that resolve this scope trigger are: does the sales motion actively target EU customers, does the contractual framework explicitly contemplate EU use, and is the tool technically configured for Union market use (language localization, data residency options, GDPR-compliant data flows). A yes to any of these supports provider status.
Scope trigger two — you put an AI system into service through EU operations
A NYC company that operates an EU subsidiary, an EU branch office, or a regulated EU entity and deploys an AI system through that entity is a deployer under Article 2(1)(b) with respect to that deployment. This is a less common scenario for pure-play NYC startups but a very common one for established NYC financial institutions, law firms, professional services firms, and multinationals with EU-facing business lines.
The factual test here is whether the EU entity is genuinely using the AI system — as distinct from the US parent company technically operating the system with EU personnel merely consuming outputs. The Regulation does not distinguish based on corporate form; it looks at deployment reality. An AI system accessed by EU employees over a VPN from US infrastructure, in the course of work performed within the Union, is put into service in the Union even if the technical stack is US-hosted.
Scope trigger three — output used in the Union
Article 2(1)(c) is the most aggressive scope ground and the most commonly missed. It captures a US-established deployer when the output of the AI system is used in the Union — regardless of where the system itself runs, regardless of where the deployer is established, regardless of whether the deployer has any EU corporate presence.
The text says what it says: where the output of the AI system is used in the Union. Recital 22 of the Regulation confirms the intent — the scope extends to providers and deployers in third countries where, for example, an operator established in a third country uses the output of an AI system to perform operations relating to persons located in the Union.
Three common NYC-company scenarios trigger this ground:
A NYC company runs a content-generation model whose output is delivered to EU users. The model can be entirely US-hosted, trained on US data, and operated by US personnel. If the generated content reaches a user in Paris or Berlin, the output is used in the Union.
A NYC company runs a recommender or ranking model that orders content for EU consumers. Identical logic. The recommendations are outputs used in the Union.
A NYC company runs a decision-support tool used by internal EU staff. If an EU-based employee consults the AI system's output to make a decision affecting a counterparty, customer, or other operation, the output is used in the Union.
The factual test is deceptively simple: trace where the AI system's output lands. If any landing point is in the Union, Article 2(1)(c) applies. There is no minimum-volume threshold in the text. One EU user can be enough.
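The three triggers can be read as a simple decision procedure: check each ground independently, and any one hit puts the system in scope. The sketch below models that logic as an illustrative checklist. It is our own framing, not statutory language; the field names and the boolean simplification of each factual test are assumptions made for clarity, and it is not legal advice.

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Illustrative facts about one AI system. Field names are ours, not the Act's."""
    supplied_to_eu_market: bool          # sold, licensed, or made available to EU customers
    deployed_via_eu_establishment: bool  # used through an EU subsidiary, branch, or entity
    output_used_in_union: bool           # any output lands with users or operations in the EU

def article_2_scope(facts: AISystemFacts) -> list:
    """Return the Article 2(1) grounds plausibly triggered on these facts."""
    grounds = []
    if facts.supplied_to_eu_market:
        grounds.append("Art. 2(1)(a): provider placing on the Union market")
    if facts.deployed_via_eu_establishment:
        grounds.append("Art. 2(1)(b): deployer established or located in the Union")
    if facts.output_used_in_union:
        grounds.append("Art. 2(1)(c): output produced by the system is used in the Union")
    return grounds

# A US-hosted model whose generated content reaches users in Paris:
facts = AISystemFacts(supplied_to_eu_market=False,
                      deployed_via_eu_establishment=False,
                      output_used_in_union=True)
print(article_2_scope(facts))  # one triggered ground is enough for scope
```

The deliberate design choice is that the grounds are evaluated independently: a company can be out of scope on (a) and (b) and still be captured on (c) alone, which is exactly the pattern the three scenarios above illustrate.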
What does NOT trigger scope
Some features commonly assumed to trigger scope actually do not, read strictly:
Having EU shareholders does not trigger scope on its own. The Regulation is about operational deployment, not capital structure.
Having an EU-based software vendor in the supply chain does not automatically make the NYC company a provider or deployer under the AI Act — the AI Act is about who places the AI system on the market and who deploys it, not about the identity of third-party vendors in the stack. The vendor may have its own AI Act exposure, which propagates contractually, but the NYC company's own status is determined by its own activities.
Having a website accessible from the EU does not, by itself, trigger scope. Mere accessibility has not been treated as sufficient under related Union extraterritoriality regimes (GDPR Article 3(2), for instance, requires something more — targeting behaviour). The AI Act's scope grounds follow the same logic: there must be a real deployment or real output-in-Union use, not merely passive technical accessibility.
Exclusions under Article 2(3) and Article 2(12)
Article 2 also carves out exclusions. Article 2(3) removes AI systems placed on the market, put into service, or used exclusively for military, defence or national security purposes. Article 2(12) removes AI systems released under free and open-source licenses from the Regulation's scope, unless they are placed on the market or put into service as high-risk AI systems or fall under the prohibited practices in Article 5 or the transparency obligations in Article 50; Chapter V contains its own, narrower exemptions for open-source general-purpose AI models.
For a typical NYC commercial company, neither exclusion will normally be relevant. The defence carve-out is narrow and strictly construed. The open-source carve-out has significant limits in practice: it applies only to non-high-risk systems, and most high-risk deployments involve commercial productization that falls outside the open-source shelter.
What to do with this analysis
A defensible scope determination, at the level that would hold up in an EU market surveillance authority inquiry, produces a short written memo covering four points:
First, an inventory of the company's AI systems — both those built in-house and those deployed through third-party providers — with a brief technical description of each.
Second, for each system, an Article 2 scope analysis addressing the three trigger grounds (2(1)(a), 2(1)(b), 2(1)(c)). The analysis should record the factual basis for the conclusion, not merely the conclusion itself.
Third, for any system that is in scope, a preliminary classification under Article 5 (prohibited), Annex III (high-risk), Article 50 (transparency obligations for limited-risk systems), or outside any specific category.
Fourth, a gap assessment mapping current compliance state to the obligations that apply for the classification determined in step three.
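The four steps above amount to one structured record per AI system. A minimal sketch of that record is below; the structure and every field name are our own illustration of the memo's contents (including the hypothetical example system), not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeMemoEntry:
    """One AI system's entry in the four-part scope memo (illustrative structure)."""
    system_name: str
    technical_description: str                            # step one: inventory
    triggered_grounds: list = field(default_factory=list) # step two: e.g. ["2(1)(c)"]
    factual_basis: str = ""                               # step two: record facts, not
                                                          # merely conclusions
    classification: str = "out of scope"                  # step three: "Art. 5",
                                                          # "Annex III high-risk",
                                                          # "Art. 50 transparency", or
                                                          # "no specific category"
    compliance_gaps: list = field(default_factory=list)   # step four: obligations not yet met

# Hypothetical example entry:
entry = ScopeMemoEntry(
    system_name="support-ticket summarizer",
    technical_description="Third-party LLM API; summaries shown to EU support staff",
    triggered_grounds=["2(1)(c)"],
    factual_basis="Output consulted by EU-based employees in the course of Union operations",
    classification="no specific category",
)
```

Keeping the factual basis as a separate field from the conclusion reflects the point made in step two: a market surveillance authority will probe the facts, not the label.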
This memo is the starting point for everything downstream — technical documentation under Annex IV, risk management under Article 9, data governance under Article 10, human oversight under Article 14, registration under Article 49 where applicable, and the deployer obligations in Article 26.
The cost of doing this memo now, with six months or more before the 2 August 2026 application date, is substantially lower than the cost of doing it in response to an enforcement inquiry in late 2026 or 2027.
For a written scope assessment or dual compliance engagement, see Lexara Advisory.
Primary sources referenced. Regulation (EU) 2024/1689 of 13 June 2024 (EU AI Act): Article 2 (scope), Article 3 (definitions), Article 5 (prohibited practices), Article 49 (EU database registration), Article 50 (transparency obligations), Article 113 (entry into force and transitional provisions); Annex III (high-risk AI systems). Recital 22 (extraterritorial application rationale).