Status update — April 2026: The Digital Omnibus on AI (COM(2025) 836), now in trilogue after the Council (13 March 2026) and the Parliament (26 March 2026) adopted their positions, proposes fixed application dates for the high-risk obligations: 2 December 2027 for stand-alone Annex III high-risk systems and 2 August 2028 for Annex I high-risk systems embedded in regulated products. The Regulation's original 2 August 2026 date applies unless the Omnibus is adopted. See our Digital Omnibus note for the trilogue state.

Annex III of Regulation (EU) 2024/1689 lists AI systems that are classified as high-risk — with the full weight of Chapter III obligations attaching: Annex IV technical documentation, Article 9 risk management, Article 10 data governance, Article 14 human oversight, Article 26 deployer duties, Article 49 database registration. For NYC companies the central question is which of their AI systems fall into Annex III categories. This note walks through all eight, with realistic NYC deployment examples for each.

Written by an EU-trained lawyer who reads Annex III the way a European market surveillance authority will read it, not through translations calibrated to a US audience.

Article 6 and the Annex III mechanism

Before the categories themselves, the classification rule. Under Article 6(1), an AI system is high-risk if it is used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I and required to undergo third-party conformity assessment under that legislation. Under Article 6(2), an AI system is high-risk if it falls within the categories listed in Annex III.

Article 6(3) adds an important carve-out: an AI system listed in Annex III is not high-risk where it performs a narrow procedural task, is intended to improve the result of a previously completed human activity, is intended to detect decision-making patterns or deviations from them without replacing or influencing human assessment, or is intended to perform a preparatory task to an assessment. But the carve-out never applies where the system performs profiling of natural persons: an Annex III-listed system that profiles is always high-risk.

Providers who conclude their Annex III-listed system is not high-risk under Article 6(3) must document that assessment before placing the system on the market (Article 6(4)) and register the system in the EU database under Article 49(2). If the classification is disputed by a national authority under Article 80, the provider must be able to defend it from the documented record.
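The order of operations matters, and a compact sketch can make it concrete. The following Python sketch is our own illustrative framing of the Article 6(1)-(2) routing; the function and parameter names are hypothetical, not terms from the Regulation, and real classification turns on legal analysis, not booleans:

```python
def high_risk_route(annex_i_safety_component_or_product: bool,
                    third_party_conformity_assessment_required: bool,
                    listed_in_annex_iii: bool) -> str:
    """Illustrative routing of the two Article 6 entry points to high-risk status."""
    # Article 6(1): Annex I product route -- the two conditions are cumulative.
    if annex_i_safety_component_or_product and third_party_conformity_assessment_required:
        return "high-risk via Article 6(1) (Annex I product route)"
    # Article 6(2): an Annex III listing creates a presumption of high risk,
    # subject to the Article 6(3) carve-out assessed next.
    if listed_in_annex_iii:
        return "presumptively high-risk via Article 6(2); apply the Article 6(3) test"
    return "not high-risk under Article 6 (other AI Act duties may still apply)"
```

Nothing turns on the code itself; the point is that the Annex III route yields only a presumption, which Article 6(3) can rebut.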

Annex III point 1 — biometrics

AI systems intended to be used for remote biometric identification of natural persons (other than biometric verification whose sole purpose is to confirm that a specific person is who they claim to be), AI systems for biometric categorization based on sensitive or protected attributes or characteristics, and AI systems for emotion recognition.

Note the interaction with Article 5: real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited (5(1)(h)); emotion inference in the workplace and in education is prohibited (5(1)(f)); biometric categorization inferring the sensitive attributes listed there is prohibited (5(1)(g)). What remains high-risk here is remote biometric identification outside the prohibited scenarios (retrospective identification, private-sector uses), emotion recognition outside workplace and education contexts, and biometric categorization based on sensitive or protected attributes that falls outside the 5(1)(g) prohibition.

NYC deployment examples: building access control with facial recognition (retail, office), sports-stadium biometric entry systems, AI-based customer identification in banking, voice authentication in call centres (to the extent it goes beyond pure verification of a claimed identity, which the carve-out removes from the category).

Annex III point 2 — critical infrastructure

AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity.

This targets operational AI in utilities and in the digital backbone. The "safety component" framing narrows it: AI that ranks electricity demand forecasts is not a safety component; AI that triggers a grid shutdown based on anomaly detection is.

NYC deployment examples: ConEdison or National Grid operational AI would fall here if deployed in the EU; digital-infrastructure companies (CDNs, DNS, major cloud) with EU operations would be in scope for AI used as safety components in the management and operation of that infrastructure.

Annex III point 3 — education and vocational training

AI systems for determining access, admission, or assignment of natural persons to educational and vocational training institutions; for evaluating learning outcomes, including those used to steer the learning process; for assessing the appropriate level of education; and for monitoring and detecting prohibited behaviour of students during tests.

This is the education-facing category. NYC exposure is primarily through edtech companies with EU customer bases (universities, corporate training platforms, K-12 products reaching EU families).

NYC deployment examples: edtech platforms that score essays, admissions prediction AI, automated exam proctoring, adaptive learning systems that place students into tracks based on AI assessment.

Annex III point 4 — employment, workers management and access to self-employment

AI systems for the recruitment or selection of natural persons, including placement of targeted job advertisements, analysis and filtering of applications, and evaluation of candidates; and AI systems intended to make decisions affecting terms of work-related relationships, promotion or termination, task allocation based on individual behaviour or personal traits, and monitoring and evaluating performance and behaviour.

This is the category most relevant to NYC employers. It covers AEDTs already regulated under NYC Local Law 144 — creating direct dual-compliance exposure. It also extends to performance monitoring, task allocation systems, and promotion/termination tools.

NYC deployment examples: ATS platforms with AI ranking (Greenhouse, Lever, Workday Recruiting), video-interview analysis, AI-driven targeted job ads on LinkedIn or similar, workforce management AI that assigns shifts based on behavioural patterns, AI performance-review tools. See our dual compliance roadmap.

Annex III point 5 — essential services

AI systems intended to be used by public authorities or on their behalf to evaluate eligibility for public assistance benefits and services; AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score (other than AI systems used to detect financial fraud); AI systems intended to be used for risk assessment and pricing in the case of life and health insurance; and AI systems for classifying and prioritizing emergency calls.

Two sub-categories matter most for private NYC companies: credit scoring and insurance pricing. Both capture core financial services AI.

NYC deployment examples: NYC fintechs offering consumer credit, point-of-sale installment plans, BNPL with AI underwriting, AI-driven life insurance underwriting for EU customers, AI health insurance pricing. Notable exception: AI for financial fraud detection is excluded from this category (though it may still fall under other AI Act provisions).

Annex III point 6 — law enforcement

AI systems used by or on behalf of law enforcement authorities: for assessing the risk of a natural person becoming the victim of criminal offences; as polygraphs or similar tools; for evaluating the reliability of evidence; for assessing the risk of a natural person offending or re-offending; and for profiling natural persons in the course of the detection, investigation, or prosecution of criminal offences.

Public-sector category. Private NYC companies are primarily exposed through B2G contracts — selling to law enforcement or offering services used by it.

NYC deployment examples: NYC-based vendors selling predictive policing, forensic AI, court-use risk-assessment tools, AI evidence-analysis products to EU police forces, prosecution offices, or courts.

Annex III point 7 — migration, asylum and border control management

AI systems as polygraphs at borders; AI to assess risk posed by third-country nationals entering a Member State; AI for examining asylum, visa, or residence permit applications and associated complaints; AI for detecting, recognizing, or identifying natural persons in the context of border control management (other than verification of travel documents).

Public-sector category. Private NYC exposure is via B2G — vendors selling border management AI, travel-tech companies with identity verification that crosses into border control.

NYC deployment examples: travel tech with EU border agency contracts, AI document-verification systems, AI risk-scoring platforms for visa processing.

Annex III point 8 — administration of justice and democratic processes

AI systems intended to be used by a judicial authority or on its behalf to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or used in a similar way in alternative dispute resolution; and AI systems used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote (other than AI systems whose output natural persons are not directly exposed to, such as tools for organizing, optimizing, or structuring political campaigns from an administrative or logistical point of view).

The judicial-AI branch captures legal research and decision-support tools used by or on behalf of courts. The electoral-AI branch captures AI targeting voters, with an important carve-out for internal campaign tools whose output voters never see.

NYC deployment examples: legal research AI used by EU judges; AI political-advertising tools that target individual voters; micro-targeting political-campaign AI. Note: a general-purpose assistant such as Claude, GPT, or Gemini used by an EU judge for drafting is in scope; it is the specific deployment context, not the lab behind the model, that makes it a high-risk use.

The Article 6(3) exception — when Annex III does not make you high-risk

Article 6(3) is the valve that lets providers de-classify Annex III systems that do not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. One of the following conditions must be fulfilled:

The AI system performs a narrow procedural task (not substantive decision-making).

The AI system is intended to improve the result of a previously completed human activity (the human did the work; AI polishes).

The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence a previously completed human assessment without proper human review.

The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the Annex III use cases.

Critical limitation: if the AI system performs profiling of natural persons, Article 6(3) does not apply. Profiling systems are always high-risk if listed in Annex III.

In practice: a spellchecker that refines a recruiter's manually written feedback is outside Annex III point 4 via 6(3), since it improves the result of a completed human activity. An AI that ranks or scores candidates remains high-risk because it profiles.
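Putting the valve and the profiling trap in order, here is a minimal decision-logic sketch, continuing the illustrative Python framing above. The Condition enum paraphrases the four Article 6(3) conditions; the names and outcome strings are ours, not the Regulation's:

```python
from enum import Enum, auto

class Condition(Enum):
    NARROW_PROCEDURAL_TASK = auto()               # Article 6(3)(a)
    IMPROVES_COMPLETED_HUMAN_ACTIVITY = auto()    # Article 6(3)(b)
    PATTERN_DETECTION_WITH_HUMAN_REVIEW = auto()  # Article 6(3)(c)
    PREPARATORY_TASK = auto()                     # Article 6(3)(d)

def article_6_3_test(performs_profiling: bool,
                     conditions_met: set[Condition]) -> str:
    """Illustrative order of analysis for an Annex III-listed system."""
    if performs_profiling:
        # The profiling trap closes first: no condition can rescue a profiler.
        return "high-risk (profiling bars the Article 6(3) exception)"
    if conditions_met:
        # Document before market placement (Article 6(4)); register under Article 49(2).
        return "not high-risk under Article 6(3); document and register"
    return "high-risk (Annex III listing stands; no condition met)"
```

On this logic, the candidate-ranking example above exits at the profiling gate and never reaches the conditions.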

What this means practically for a NYC company

The classification exercise is the second step after scope (see our Article 2 scope analysis). For each AI system in scope:

First, check whether it falls within any of the eight Annex III categories. The text of each category should be read precisely — the categories have specific carve-outs (credit scoring excludes fraud detection; border control excludes travel-document verification).

Second, if Annex III-listed, evaluate Article 6(3): does the system qualify for one of the four exception conditions, and does the profiling trap shut the exception off?

Third, document the classification. A provider who classifies an Annex III-listed system as non-high-risk must document the 6(3) analysis and make it available to authorities on request; a sketch of what such a record might capture follows below.

Fourth, proceed to the Chapter III obligations if the conclusion is high-risk, or to transparency obligations under Article 50 if the system is limited-risk.
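For step three, the documented record is what survives an Article 80 dispute. A minimal sketch of a per-system record, with field names that are our own framing rather than any template from the Regulation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    """Illustrative per-system record, completed before EU market placement."""
    system_name: str
    annex_iii_points_matched: list[str]       # e.g. ["point 4(a) recruitment/selection"]
    performs_profiling: bool
    conditions_relied_on: list[str]           # Article 6(3) conditions claimed, if any
    conclusion: str                           # "high-risk" or "not high-risk per Article 6(3)"
    assessed_on: date
    assessor: str                             # who signed off, for the Article 80 record
    next_steps: list[str] = field(default_factory=list)

# Hypothetical example: the candidate-ranking tool from the Annex III point 4 discussion.
record = ClassificationRecord(
    system_name="CV-ranking module",
    annex_iii_points_matched=["point 4(a) recruitment/selection"],
    performs_profiling=True,
    conditions_relied_on=[],                  # profiling bars the exception
    conclusion="high-risk",
    assessed_on=date(2026, 4, 1),
    assessor="EU AI Act counsel",
    next_steps=["Article 9 risk management", "Article 10 data governance",
                "Annex IV technical documentation", "Article 49 registration"],
)
```

One record per in-scope system keeps the Article 6(4) documentation duty and the downstream Chapter III workplan in the same place.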


Primary sources. Regulation (EU) 2024/1689: Article 6 (classification), Article 6(3)-(4) (exception for narrow tasks and documentation duty), Article 49(2) (registration of non-high-risk Annex III systems), Article 80 (procedure for disputed non-high-risk classifications); Annex I (Union harmonisation legislation); Annex III (high-risk categories). NYC Admin Code §§ 20-870 to 20-874 for LL144 overlap analysis.