Status update — April 2026: Deadlines referenced below (2 August 2026 for Annex III high-risk application) are subject to the Digital Omnibus on AI (COM(2025) 836), currently in trilogue. Both Council (13 March 2026) and Parliament (26 March 2026) support fixed dates of 2 December 2027 for Annex III standalone high-risk (including employment AEDTs under Annex III point 4). LL144 obligations are unaffected — the 2024 NYC Local Law 144 continues to apply. See our Digital Omnibus note.

If you are running HR tech in NYC in 2026, there is a reasonable chance that the AI system you operate is simultaneously subject to two regulatory regimes. NYC Local Law 144 has been enforceable since July 2023 for automated employment decision tools used in hiring and promotion. The EU AI Act becomes fully applicable on 2 August 2026 for high-risk AI systems under Annex III — which expressly includes AI systems used for the recruitment or selection of natural persons. The overlap is not theoretical. The same recruitment model that ranks candidates for a NYC role can, through the company's EU hiring or the EU residency of a subset of applicants, land inside both regimes.

This piece walks through the practical steps, in order, to prepare for that dual compliance by August 2026. It is written for HR tech founders, HRIS/ATS leads, and in-house counsel at companies that have already confirmed that the EU AI Act applies to them (if you have not yet confirmed that, see our scope analysis pillar first).

Step 1 — Inventory and classification

The first deliverable is a written inventory of every AI component in the hiring and employment workflow. This is broader than many HR tech leads initially expect. It includes the obvious — the ATS ranking model, the resume parser's ML extraction, the candidate-match scoring — but it also includes less obvious components such as AI-powered sourcing tools, candidate chatbots, video-interview analysis, skills-gap predictors, and even some scheduling tools that use ML to optimize interviewer allocation.

For each component, the inventory records: the function (what does it do), the provider (who built it), the type of model (classification, ranking, generation, other), the training data type, the inputs it receives, the outputs it produces, and where those outputs flow in the downstream workflow.
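One way to keep the inventory machine-readable is a simple record type per component. The sketch below is illustrative only — the field names and the `AIComponentRecord` type are our invention, not something either regime prescribes; any structured format (spreadsheet, registry tool) capturing the same fields works.

```python
from dataclasses import dataclass, field
from enum import Enum

class ModelType(Enum):
    CLASSIFICATION = "classification"
    RANKING = "ranking"
    GENERATION = "generation"
    OTHER = "other"

@dataclass
class AIComponentRecord:
    """One row of the AI inventory (field names are illustrative)."""
    function: str                  # what the component does
    provider: str                  # who built it (vendor or in-house team)
    model_type: ModelType
    training_data_type: str
    inputs: list[str] = field(default_factory=list)    # data it receives
    outputs: list[str] = field(default_factory=list)   # data it produces
    downstream_flows: list[str] = field(default_factory=list)  # where outputs go

# Example entry for a typical candidate-match scorer
record = AIComponentRecord(
    function="Candidate-match scoring for open requisitions",
    provider="in-house ML team",
    model_type=ModelType.RANKING,
    training_data_type="historical hire/reject decisions",
    inputs=["parsed resume fields", "job requisition text"],
    outputs=["match score 0-100"],
    downstream_flows=["recruiter dashboard ranking"],
)
```

A flat record like this also makes the step 1 classification pass mechanical: each record gets tagged with its LL144 and Annex III status, and the tags drive which compliance track follows.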

Then each component is classified. Under LL144, the question is whether it is an AEDT within the meaning of the DCWP Final Rules — in particular, whether its output substantially assists or replaces discretionary decision-making for NYC hiring or promotion. Under the EU AI Act, the question is whether it falls within Annex III point 4, which covers AI systems intended for recruitment or selection, placement of targeted job advertisements, analysis and filtering of applications, evaluation of candidates, decisions affecting terms of work-related relationships, promotion or termination, task allocation based on individual behaviour, and monitoring/evaluating performance.

An AI component can be in scope of one, both, or neither. Components that are in scope of LL144 only (NYC hiring, no EU deployment) follow the LL144 track alone. Components in scope of EU AI Act only (high-risk under Annex III but not used for NYC hiring) follow the EU AI Act track alone. Components in scope of both — the typical ATS ranking model for a multinational — need integrated compliance.

Step 2 — Bias audit with dual-regime scope

The LL144 bias audit, under the DCWP Final Rules, requires computing the selection rate and impact ratio for each category protected under the NYC Human Rights Law (sex categories, race/ethnicity categories, and their intersections). The audit must be conducted by an independent auditor (see the auditor independence checklist for what that requires).
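The arithmetic behind the two metrics is simple: selection rate is the share of candidates in a category who are selected, and impact ratio divides each category's selection rate by the highest selection rate across categories. The sketch below follows those definitions but is a simplification, not audit-grade code — the DCWP rules add requirements (e.g., handling of small categories and scoring-type AEDTs) that it omits.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate = candidates selected / candidates in the category."""
    return selected / applicants

def impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio per category, relative to the highest selection rate.

    counts maps category -> (selected, total applicants).
    """
    rates = {cat: selection_rate(s, n) for cat, (s, n) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Illustrative numbers only, not real audit data
counts = {"category A": (48, 120), "category B": (30, 100)}
ratios = impact_ratios(counts)
# category A: rate 0.40 -> ratio 1.0; category B: rate 0.30 -> ratio 0.75
```

The dual-scope extension in the next paragraphs is then just a wider `counts` dictionary: the same computation run over the additional EU-relevant categories where the data supports it.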

For a dual-compliance context, the audit scope should be expanded to serve the EU AI Act Article 10 expectation. Article 10(2)(f) requires examination of biases likely to affect health and safety, have a negative impact on fundamental rights, or lead to discrimination prohibited under Union law. The Union discrimination acquis is broader than NYC Human Rights Law — it covers disability, age, religion, and sexual orientation as well.

A dual-scope audit runs the LL144-required analyses for the core LL144 categories, plus equivalent bias analyses for the additional EU-relevant categories where the training and output data support it. The audit's working papers document the methodology used and the categorization scheme, which supports both regimes.

Step 3 — Technical documentation under Annex IV

Where the deploying company is the provider of the AI system (built in-house or substantially modified), Annex IV of the EU AI Act specifies the technical documentation required under Article 11. This is a distinct workstream from the LL144 audit and has no LL144 analogue.

Annex IV documentation covers, in summary form: general description of the AI system (intended purpose, user interface, general logic, hardware requirements); detailed description of elements (methods and steps performed for development, training methodologies and techniques used, main design choices); information about data (training data provenance, scope, main characteristics, labelling procedures, data cleaning methodologies, pre-processing steps); description of monitoring, functioning and control measures; detailed description of the risk management system under Article 9; description of accuracy, robustness and cybersecurity under Article 15; and a list of harmonised standards applied.

For many NYC HR tech companies, this is the heaviest lift. The documentation did not exist in a structured form before EU AI Act preparation began. Drafting it from engineering memory, ticket systems, and codebase inspection takes weeks to months depending on system maturity.

Step 4 — Risk management system under Article 9

Article 9 requires a risk management system that is a continuous iterative process run throughout the entire lifecycle of the high-risk AI system. This is not a one-time document — it is a living system that identifies, evaluates, and mitigates risks, and that is updated as the system evolves.

For an HR tech company, the risk management system typically includes: identification of foreseeable misuse scenarios, bias risks, robustness failures, security threats; assessment of severity and likelihood; documented mitigation measures; residual risk analysis; and a testing regime that validates the mitigations under realistic conditions.
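The risk management file itself can be maintained as a structured register. The severity-times-likelihood scoring below is one common convention, not something Article 9 prescribes, and the example entry and its scores are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One entry in the Article 9 risk register (scoring scheme is illustrative)."""
    description: str
    severity: int           # 1 (negligible) .. 5 (critical)
    likelihood: int         # 1 (rare) .. 5 (frequent)
    mitigation: str
    residual_severity: int
    residual_likelihood: int

    @property
    def inherent_score(self) -> int:
        return self.severity * self.likelihood

    @property
    def residual_score(self) -> int:
        """Score after mitigation, feeding the residual risk analysis."""
        return self.residual_severity * self.residual_likelihood

entry = RiskEntry(
    description="Ranking model penalizes employment gaps, proxying for disability",
    severity=4, likelihood=3,
    mitigation="Remove gap-length feature; add fairness check to training pipeline",
    residual_severity=4, residual_likelihood=1,
)
# inherent_score 12, residual_score 4 after the documented mitigation
```

Because the register is a living artifact, re-scoring entries on each system change is what makes the process "continuous and iterative" rather than a one-time document.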

The output of this step is not only the risk management file itself but also a set of implemented mitigations — code changes, policy changes, training changes — that become evidence of compliance.

Step 5 — Data governance under Article 10

Article 10 sets requirements for the training, validation, and testing datasets. Datasets must be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose. Data governance and management practices must include examination in view of possible biases.

For HR tech, this is where decades of bias in historical hiring data come home to roost. The typical training corpus for a recruitment model includes historical hire/reject decisions that themselves reflect human bias. Article 10 does not require perfect data; it requires documented awareness of bias, documented mitigation where possible, and documented acknowledgment of residual limitations in the technical documentation.

This step overlaps significantly with the bias audit in step 2 — the audit's findings feed the data governance documentation. Efficient project structure runs steps 2 and 5 in parallel with shared working papers.

Step 6 — Candidate notice and instructions for use

LL144 requires a pre-use candidate notice at least ten business days before the AEDT is used, identifying the job qualifications and characteristics the AEDT will assess and allowing the candidate to request an alternative selection process or accommodation. The DCWP Final Rules specify the content and delivery requirements.

The EU AI Act has a related but not identical obligation. Article 26(11) requires deployers of high-risk AI systems to inform natural persons subjected to the use of the system. Article 50 adds transparency obligations for certain lower-risk AI systems (e.g., the obligation to disclose when a person is interacting with an AI chatbot).

A coherent candidate communication program delivers the LL144 notice content to NYC-located candidates, the Article 26(11) deployer disclosure to EU-located candidates, and the Article 50 AI disclosure to any candidate interacting with an AI chatbot. The underlying backend logic routes the correct disclosure based on candidate jurisdiction.
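The routing described above reduces to a small decision function. The sketch below is a simplification under stated assumptions — the `Disclosure` enum and `required_disclosures` function are our invention, real routing must also handle LL144's ten-business-day timing and delivery channels, and "location" in practice requires a more careful jurisdiction determination than a single string.

```python
from enum import Enum

class Disclosure(Enum):
    LL144_NOTICE = "LL144 pre-use candidate notice"
    EU_ART_26_11 = "EU AI Act Article 26(11) deployer disclosure"
    EU_ART_50 = "EU AI Act Article 50 AI-interaction disclosure"

def required_disclosures(candidate_location: str,
                         uses_ai_chatbot: bool) -> set[Disclosure]:
    """Pick the disclosures owed to one candidate (simplified illustration)."""
    out: set[Disclosure] = set()
    if candidate_location == "NYC":
        out.add(Disclosure.LL144_NOTICE)   # LL144 pre-use notice
    if candidate_location == "EU":
        out.add(Disclosure.EU_ART_26_11)   # deployer disclosure
    if uses_ai_chatbot:
        out.add(Disclosure.EU_ART_50)      # AI-interaction transparency
    return out
```

A candidate in the EU interacting with an AI screening chatbot, for example, triggers both the Article 26(11) and Article 50 disclosures; a NYC candidate on the same role triggers the LL144 notice instead.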

Step 7 — Human oversight under Article 14

Article 14 requires that high-risk AI systems are designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the AI system is in use. The oversight must be designed to allow the responsible person to: understand the capacities and limitations of the system; remain aware of automation bias; correctly interpret outputs; decide not to use the system or override outputs; and intervene or interrupt operation.

For HR tech, this typically manifests as: documented recruiter-training materials explaining the AEDT's output and its limitations; clear mechanisms for recruiters to override the AEDT's ranking; logging of override decisions; and periodic review of override patterns to detect systematic issues.
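The override logging in particular is cheap to build and valuable as evidence. A minimal sketch, assuming an append-only JSON-lines log (the schema and function name are illustrative, not mandated by Article 14):

```python
import datetime
import json
import os
import tempfile

def log_override(log_path: str, recruiter_id: str, candidate_id: str,
                 model_rank: int, final_rank: int, reason: str) -> dict:
    """Append one recruiter override decision as a JSON line."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "recruiter_id": recruiter_id,
        "candidate_id": candidate_id,
        "model_rank": model_rank,   # rank the AEDT assigned
        "final_rank": final_rank,   # rank after human override
        "reason": reason,           # free-text justification
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Demo write to a temporary file
path = os.path.join(tempfile.mkdtemp(), "overrides.jsonl")
event = log_override(path, "recruiter-17", "cand-042",
                     model_rank=12, final_rank=3,
                     reason="Directly relevant portfolio work")
```

The periodic review of override patterns mentioned above then becomes a query over this log — for instance, flagging recruiters or candidate segments where overrides cluster.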

This is distinct from the LL144 expectation. LL144 does not mandate human oversight at this level of specificity; it regulates the audit and the notice. But Article 14 requires operationalized oversight, documented in the risk management file.

Step 8 — Article 49 registration and conformity

For providers of high-risk AI systems, Article 49 requires registration in the EU database before placing the system on the market. Annex VIII specifies the registration fields — including the AI system's intended purpose, brief description, harmonised standards applied, and contact information.

For deployers, registration is required only in specific circumstances (mainly for certain public-sector deployers). Most private-sector HR tech deployers do not register separately; the provider registers.

Conformity assessment for high-risk AI systems under Annex III is, for most cases, internal control under Article 43(2) — the provider assesses its own conformity against the requirements in Chapter III Section 2. Third-party conformity assessment through a notified body is required only in specific circumstances (e.g., biometric identification AI systems under certain conditions).

Step 9 — Post-market monitoring and incident reporting

Article 72 requires post-market monitoring by providers of high-risk AI systems. Article 73 requires reporting of serious incidents to the market surveillance authority. These are ongoing obligations after deployment.

For HR tech, post-market monitoring means: continuous collection of performance data, bias metrics, override patterns, and user feedback; periodic review of the risk management file; triggered re-audits when material changes occur or when monitoring reveals degradation.
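A concrete trigger for the re-audits above is drift in the bias metrics against the audited baseline. The sketch below is an illustrative operational choice, not a regulatory threshold: the 0.8 floor echoes the familiar four-fifths rule of thumb, and the drift margin is an internal policy parameter we made up for the example.

```python
def needs_reaudit(baseline_ratio: float, current_ratio: float,
                  floor: float = 0.8, max_drop: float = 0.05) -> bool:
    """Flag a re-audit when an impact ratio falls below a floor or drifts.

    baseline_ratio: the impact ratio recorded in the last formal audit.
    current_ratio:  the same metric computed from live monitoring data.
    """
    below_floor = current_ratio < floor
    drifted = (baseline_ratio - current_ratio) > max_drop
    return below_floor or drifted
```

Running a check like this on each monitoring cycle, and logging its result in the risk management file, is one way to show that post-market monitoring actually feeds back into the compliance process.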

Sequencing and timeline

For a company starting the dual compliance workstream in April or May 2026, a realistic sequence to be ready for 2 August 2026 is: steps 1 and 2 in parallel in month one; steps 3, 4, and 5 in parallel in months two and three; steps 6 and 7 in month three; step 8 in month four; step 9 as the ongoing operational tail.

This is compressed. Companies that start the workstream in September or October 2026 are starting after the application date. This does not create immediate enforcement risk — EU market surveillance takes time to mobilize — but it creates exposure that compounds with each passing month of non-compliant operation.


For a dual compliance program engagement, see Lexara Advisory. For the LL144 bias audit component, see auditll144.

Primary sources referenced. Regulation (EU) 2024/1689 of 13 June 2024: Articles 9, 10, 11, 14, 26, 43, 49, 50, 72, 73, 113; Annexes III, IV, VIII. NYC Admin Code §§ 20-870 to 20-874. DCWP Final Rules, 6 RCNY Ch. 5 Subchapter T.