Status update — April 2026: Application dates referenced below (2 August 2026 for Annex III high-risk obligations) are subject to the Digital Omnibus on AI (COM(2025) 836), currently in trilogue. Both Council (13 March 2026) and Parliament (26 March 2026) support fixed dates of 2 December 2027 for Annex III and 2 August 2028 for Annex I. Scope, roles, and classification analysis are unaffected — Article 2, Article 3, and Article 6 remain as enacted. See our Digital Omnibus note.
Most NYC companies we talk to in 2026 cannot answer the basic question: "Does the EU AI Act apply to us, and if yes, how?" This self-assessment walks through the key decision points in five minutes. It is not a substitute for a formal scope memo — for that, Lexara Advisory runs a free EU AI Act readiness assessment for NYC companies — but it is a structured first-pass check that helps you decide whether you need a formal review at all.
Work through the four questions in order; each yields a concrete outcome. If at any step you reach "in scope," you likely need a formal memo; the subsequent questions then refine the classification.
Step 1 — Do you use, build, or distribute AI systems at all?
The Regulation defines AI systems in Article 3(1): "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
In practice this includes: machine learning models, large language models deployed in your products, recommender systems, ranking algorithms, computer vision systems, voice recognition, natural language processing tools, automated decision systems, and most modern AI-labelled features.
Not covered: pure deterministic software, rule-based systems without inference, basic data analytics, spreadsheet formulas, traditional statistical models where human judgment dominates.
If your company uses none of the covered technologies, the EU AI Act does not apply to you. Proceed no further. If your company uses at least one covered AI system, go to Step 2.
Step 2 — Is there any EU nexus in your AI deployments?
Under Article 2 of the Regulation, the EU AI Act reaches your company if any of three grounds apply (see our Article 2 scope pillar for the full analysis). The binary test:
Ground 1 — EU market presence. Do you sell, license, or provide your AI system to customers in the European Union? If yes (even one EU customer on a pricing page that accepts EU-issued cards), you are a provider under Article 2(1)(a).
Ground 2 — EU deployment. Do you operate an EU subsidiary, branch office, or affiliate that uses an AI system in the ordinary course of its operations? If yes, that entity is a deployer under Article 2(1)(b).
Ground 3 — Output used in Union. Does the output of your AI system reach persons located in the EU — content delivered to EU users, recommendations consumed by EU audiences, decisions affecting EU counterparties, signals used by EU-located staff? If yes, you are captured under Article 2(1)(c).
If all three grounds are negative, the EU AI Act does not apply. If any ground is positive, proceed to Step 3. If you cannot answer a ground with confidence, that uncertainty is itself a reason for a formal assessment.
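For readers who think in code, the three grounds reduce to a simple any-of check. The sketch below is illustrative only — the field and function names are ours, not the Regulation's, and it is no substitute for the legal analysis above:

```python
from dataclasses import dataclass

@dataclass
class NexusAnswers:
    # Ground 1 - Article 2(1)(a): EU market presence
    sells_or_licenses_to_eu: bool
    # Ground 2 - Article 2(1)(b): an EU entity deploys the system
    eu_entity_uses_system: bool
    # Ground 3 - Article 2(1)(c): output reaches persons in the Union
    output_used_in_union: bool

def has_eu_nexus(answers: NexusAnswers) -> bool:
    """A single positive ground is enough to bring you in scope."""
    return any([answers.sells_or_licenses_to_eu,
                answers.eu_entity_uses_system,
                answers.output_used_in_union])
```

If `has_eu_nexus` returns `False` on honest answers, the analysis stops at Step 2; any single `True` input sends you on to Step 3.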
Step 3 — Is the AI system at issue prohibited, high-risk, or limited-risk?
Once in scope, the next question is classification. The Regulation uses a four-tier risk structure: prohibited (Article 5), high-risk (Article 6 + Annex I or Annex III), limited-risk (Article 50 transparency), and minimal-risk (no specific AI Act obligations beyond general duties).
Prohibited — Article 5. Is your AI system within one of the eight categories under Article 5(1) — subliminal manipulation causing significant harm, exploitation of vulnerabilities, social scoring, predictive policing by pure profiling, untargeted facial image scraping, emotion inference in workplace/education, biometric categorization for sensitive attributes, or real-time remote biometric ID in public spaces for law enforcement? If yes, the practice is prohibited, and the prohibition has applied since 2 February 2025. See our Article 5 deep dive. If your deployment crosses any of these lines, the only compliance path is to remove the feature or withdraw from the EU market.
High-risk — Annex III. Is your AI system used for any of the eight Annex III categories: biometrics, critical infrastructure, education and vocational training, employment and workers management, essential services and credit, law enforcement, migration/asylum/border, or administration of justice and democratic processes? If yes, and Article 6(3) does not exempt you (narrow procedural task without profiling), the system is high-risk as of 2 August 2026. See our Annex III categories explained.
Limited-risk — Article 50. Is your AI system a chatbot interacting with natural persons, a system generating synthetic media, or an emotion-recognition or biometric-categorization system not within Article 5 prohibitions? If yes, transparency obligations under Article 50 apply from 2 August 2026.
Minimal-risk. If your AI system is none of the above and is not a general-purpose AI model, only baseline obligations apply (Article 4 AI literacy for all providers and deployers, already in force since February 2025).
Step 4 — Are you a provider, a deployer, or both?
Obligations attach to specific roles. Article 3 defines the main roles; Chapter III specifies the obligations by role.
Provider — Article 3(3). You develop an AI system or have it developed and place it on the market or put it into service under your own name or trademark. Provider obligations are the heaviest: Annex IV technical documentation, Article 9 risk management, Article 10 data governance, Article 43 conformity assessment, Article 49 database registration, Article 22 authorised representative if established in third country. See our authorised representative note.
Deployer — Article 3(4). You use an AI system under your authority in the course of a professional activity. Deployer obligations concentrate in Article 26: use in accordance with provider instructions, human oversight as designed, input data relevance, monitoring operation, logging, cooperation with competent authorities, and notice to natural persons subjected to the system under Article 26(11).
Both. Many NYC companies are simultaneously deployers (of tools they license from third parties) and providers (of their own-branded or own-developed features). In this case each capacity has its own obligation set for the relevant AI systems.
Importer / distributor. If you place AI systems on the EU market that were developed by a third-country provider, you may have importer or distributor obligations under Articles 23 and 24. Less common for pure-play NYC startups.
Outcomes — what to do with your answers
Combine the four steps:
No EU nexus → you are out of scope. Keep a short memo documenting this conclusion. Revisit if your business changes (new EU customers, new EU operations, AI outputs reaching EU users).
EU nexus + prohibited practice → immediate action needed. The practice is already in violation: Article 5 has applied since 2 February 2025, and the top penalty tier (€35M or 7% of worldwide annual turnover, whichever is higher) has been enforceable since 2 August 2025. Remove the feature, restructure the deployment, or exit the EU market.
EU nexus + high-risk → compliance preparation required before 2 August 2026. For providers: Annex IV documentation, Article 9 risk management, Article 43 conformity assessment, Article 49 registration, Article 22 authorised representative. For deployers: Article 26 operational implementation, Article 14 human oversight, and Article 26(11) notices to affected natural persons (for example, job candidates or users).
EU nexus + limited-risk → Article 50 transparency obligations from 2 August 2026. User disclosure of AI interaction, marking of synthetic content. Lighter but still substantive.
EU nexus + minimal-risk → Article 4 AI literacy only. Ensure staff using AI systems have sufficient AI literacy (already applicable since February 2025). See our Article 4 guide.
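The outcome table above, together with the penalty ceiling for prohibited practices (the greater of €35 million or 7% of worldwide annual turnover, under Article 99), can be sketched as follows. The function and key names are ours, and the action strings paraphrase this article, not the Regulation's text:

```python
# Illustrative outcome map for the four classification tiers.
OUTCOMES = {
    "prohibited": "remove the feature, restructure, or exit the EU market",
    "high-risk": "prepare provider/deployer obligations before the application date",
    "limited-risk": "implement Article 50 transparency measures",
    "minimal-risk": "ensure Article 4 AI literacy; no further AI Act duties",
}

def next_action(tier: str) -> str:
    """Map a classification tier to the headline compliance action."""
    return OUTCOMES[tier]

def article5_penalty_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Top penalty tier: the GREATER of EUR 35M or 7% of turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)
```

For a company with €1bn worldwide turnover the ceiling is €70m; below €500m turnover, the flat €35m floor governs.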
When to engage a formal assessment
This self-assessment is a first pass. You should engage a formal scope memo with a qualified advisor if:
Any step above was hard to answer cleanly, especially Step 2 (scope) or Step 3 (classification). Ambiguity in scope or classification is exactly what a market surveillance authority will probe in an inspection.
You conclude "in scope" and "high-risk." The compliance obligations are substantial and the documentation requires specific expertise.
You are an AI provider planning EU market entry. The Article 22 authorised representative requirement is a structural prerequisite to entering the market.
You face an LL144 bias audit cycle and want to integrate with EU AI Act documentation. See our dual compliance roadmap.
You are a deployer of GPAI models (GPT, Claude, Gemini via API) and want to understand the precise documentation boundary with the upstream provider. See our GPAI deployer note.
Next step
If the self-assessment has flagged likely scope, Lexara Advisory runs a free EU AI Act readiness assessment for NYC companies. It is delivered as a structured written memo produced by an EU-trained lawyer based in New York, drafted in the register EU market surveillance authorities expect. No commitment, no generic checklist.
Primary sources. Regulation (EU) 2024/1689: Article 2 (scope), Article 3 (definitions), Article 4 (AI literacy), Article 5 (prohibited practices), Article 6 (high-risk classification), Articles 9-15 (provider requirements), Article 22 (authorised representative), Article 26 (deployer obligations), Article 43 (conformity assessment), Article 49 (registration), Article 50 (transparency), Article 113 (application dates); Annex I, Annex III, Annex IV.