Status update — April 2026: The Digital Omnibus on AI (COM(2025) 836) proposes a new Article 4a replacing Article 10(5) — extending the legal basis for processing special categories of personal data (GDPR Article 9(1)) for bias detection and correction from high-risk AI providers only to providers and deployers of all AI systems and models. The Council position of 13 March 2026 retains a strict-necessity standard; the Commission and Parliament versions use a broader formulation. Trilogue pending. See our Digital Omnibus note.

One of the most common misconceptions in NYC compliance teams is the assumption that robust GDPR compliance satisfies EU AI Act obligations, or, conversely, that EU AI Act compliance covers the GDPR bases. Neither is correct. The AI Act and the GDPR are parallel regimes with separate scope triggers, separate obligations, and separate enforcement authorities — significant overlap in evidence, but distinct legal duties. Article 2(7) of the AI Act is explicit: the Act applies "without prejudice to the obligations" under the GDPR and Union data protection law.

For a NYC company operating under both regimes, the practical task is mapping where they overlap (so you can produce evidence once and use it twice), where they diverge (so you do not assume that compliance with one discharges the other), and where they conflict (so you know which authority to follow). This note walks through the interplay with working knowledge of both regimes — the GDPR after nearly eight years of enforcement, the AI Act in its first application phase.

The conceptual difference

The GDPR is fundamental-rights legislation. It protects the right to data protection recognised in Article 8 of the Charter of Fundamental Rights of the European Union. The regulated actor is a controller or processor of personal data. The regulated conduct is processing.

The AI Act is product-safety legislation. It implements the internal market legal basis under Article 114 TFEU. The regulated actor is a provider, deployer, importer, distributor, or authorised representative of an AI system. The regulated artefact is an AI system or GPAI model.

These are different legal species. The GDPR asks whether your processing of personal data is lawful. The AI Act asks whether your AI system meets certain characteristics regardless of whether personal data is involved. Most high-risk AI systems involve both — they process personal data (GDPR trigger) and they are AI systems performing a regulated function (AI Act trigger) — which is why the practical overlap is dense.

Where the two regimes overlap

In practice, six areas of overlap dominate:

Impact assessments. GDPR Article 35 requires a Data Protection Impact Assessment (DPIA) where processing is likely to result in high risk to rights and freedoms. AI Act Article 27 requires a Fundamental Rights Impact Assessment (FRIA) for certain deployers of high-risk AI systems. Both assessments require analysis of purposes, risk identification, mitigation measures, and safeguards. Content overlaps substantially but the documents are distinct legal deliverables. Running a single integrated working session that produces both is efficient; treating them as one document is a legal error.

Transparency to data subjects / natural persons. GDPR Articles 13 and 14 require transparency about processing, whether data is collected from the data subject or obtained from other sources. AI Act Article 26(11) requires deployers of high-risk AI systems to inform natural persons subjected to the use of the system. AI Act Article 50 requires disclosure when a natural person interacts directly with an AI system, unless this is obvious from the circumstances. The three obligations operate in parallel and can be discharged through a single well-designed disclosure flow, provided the content covers all three.

Automated decision-making. GDPR Article 22 governs solely automated decisions with legal or similarly significant effects on individuals, requiring a specific legal basis, meaningful information about the logic involved, and a right to human intervention. AI Act Article 14 requires human oversight for high-risk AI systems, designed with specific oversight-enabling characteristics. For an AI system making decisions about individuals, both regimes apply — the system must be capable of human oversight (AI Act) and the decision pipeline must allow for human intervention with legal effect (GDPR).

Data governance and quality. GDPR Article 5(1)(d) requires accurate personal data. AI Act Article 10 requires datasets that are relevant, sufficiently representative, and to the best extent possible free of errors. The GDPR principle is a broad obligation; the AI Act specification operationalizes it for training data of high-risk systems. Evidence from one can support the other.

Record of processing / technical documentation. GDPR Article 30 requires records of processing activities. AI Act Annex IV requires technical documentation of the AI system. Different content, but both are maintained files that must be available to authorities. Integrated governance structures keep both files current simultaneously.

Breach/incident notification. GDPR Articles 33 and 34 require personal data breach notification to the supervisory authority and, where applicable, communication to affected data subjects. AI Act Article 73 requires serious incident reporting for high-risk AI systems. For an AI system that processes personal data and causes a serious incident (which may also constitute a personal data breach), both reporting duties may be triggered separately.

Where the two regimes diverge

Despite the overlap, the regimes diverge in ways that matter for NYC compliance:

Scope triggers. The GDPR applies to any processing of personal data within its territorial scope (Article 3 GDPR). The AI Act applies to AI systems and GPAI models within the scope defined by Article 2 AI Act. A system that is AI but processes no personal data (say, an AI for physics simulation or pure mathematics) is covered by the AI Act but not the GDPR. A processing activity that involves personal data but no AI is covered by the GDPR but not the AI Act.
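The independence of the two scope triggers can be sketched as a toy decision helper. This is illustrative only, not legal analysis: the profile fields below are hypothetical yes/no simplifications of what are, in reality, fact-intensive tests under GDPR Article 3 and AI Act Article 2.

```python
from dataclasses import dataclass


@dataclass
class SystemProfile:
    # Hypothetical, highly simplified inputs; real scope analysis under
    # GDPR Article 3 and AI Act Article 2 turns on many more facts.
    is_ai_system: bool
    processes_personal_data: bool
    in_gdpr_territorial_scope: bool
    in_ai_act_scope: bool


def applicable_regimes(profile: SystemProfile) -> set[str]:
    """Each regime triggers independently; one, both, or neither may apply."""
    regimes = set()
    if profile.processes_personal_data and profile.in_gdpr_territorial_scope:
        regimes.add("GDPR")
    if profile.is_ai_system and profile.in_ai_act_scope:
        regimes.add("AI Act")
    return regimes


# A physics-simulation AI processing no personal data: AI Act only.
physics = SystemProfile(True, False, False, True)
print(applicable_regimes(physics))  # {'AI Act'}
```

The point of the sketch is the absence of any dependency between the two branches: neither regime's trigger reads the other's inputs.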

Legal basis. GDPR processing requires a lawful basis under Article 6 (consent, contract, legal obligation, vital interests, public task, or legitimate interests). AI Act does not require a lawful basis for operating an AI system — deployment is either permitted, restricted under Chapter III obligations, or prohibited. The "may I process personal data" question is GDPR; the "may I deploy this AI system" question is AI Act.

Enforcement authorities. The GDPR is enforced through national data protection authorities (DPAs), coordinated via the European Data Protection Board (EDPB), with the cross-border cooperation procedure of Article 60 GDPR. The AI Act is enforced primarily through national market surveillance authorities (typically outside the DPA structure — in many Member States they sit in economy ministries or dedicated technical bodies), with the AI Board and the European AI Office at Union level. In some Member States the AI Act market surveillance authority and the DPA are the same body; in most they are not.

Penalty regimes. GDPR fines run up to €20M or 4% of worldwide turnover (whichever is higher) under Article 83. AI Act fines run up to €35M or 7% of worldwide turnover (Article 99 Tier 1, for Article 5 prohibitions). The AI Act has the higher ceiling, though it applies to a narrower set of violations. Parallel fines for overlapping practices are possible — the same underlying conduct may breach both regimes, and each authority may fine independently, subject to double-jeopardy limits.
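The "whichever is higher" ceiling arithmetic can be made concrete. The turnover figure below is hypothetical, and the results are statutory maxima, not expected fines:

```python
def fine_ceiling(fixed_cap_eur: float, turnover_share: float,
                 worldwide_turnover_eur: float) -> float:
    """Statutory maximum: the fixed cap or the turnover percentage,
    whichever is higher."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)


turnover = 2_000_000_000  # hypothetical EUR 2bn worldwide annual turnover

gdpr_cap = fine_ceiling(20_000_000, 0.04, turnover)    # 4% of 2bn -> EUR 80m
ai_act_cap = fine_ceiling(35_000_000, 0.07, turnover)  # 7% of 2bn -> EUR 140m
```

For a small undertaking whose percentage share falls below the fixed cap, the fixed cap governs instead; that is what the `max` expresses.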

Special categories. GDPR Article 9 lists special categories of personal data with tighter processing conditions (race, ethnicity, political opinions, religious/philosophical beliefs, trade union membership, health, sex life, sexual orientation, biometric data for identification, genetic data). AI Act Article 5(1)(g) prohibits biometric categorization systems that deduce or infer a subset of these attributes: race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The two regimes converge on sensitive attributes but via different mechanisms: the GDPR conditions processing, while the AI Act prohibits a practice.

The EDPB-Commission joint guidance

In late 2025, the European Data Protection Supervisor indicated that the EDPB and the European Commission were developing joint guidance on GDPR-AI Act interplay. The guidance was anticipated to be released in 2026. As of mid-April 2026, the final joint guidance has not been published.

When the joint guidance is issued, it will be the most authoritative interpretive material on interplay questions. Until then, NYC compliance leads should monitor EDPB publications (usually on edpb.europa.eu), Commission AI Office publications, and national DPA guidance (especially from CNIL, the Spanish AEPD, and the Dutch DPA, all of which have engaged actively with AI-adjacent data protection questions).

The Digital Omnibus and pending amendments

In November 2025, the Commission released a Digital Omnibus simplification package targeting multiple pieces of EU digital legislation including the GDPR and the AI Act. Political groups in the European Parliament have proposed amendments including changes to AI Act obligations — though the Omnibus as released does not amend Article 5 prohibitions.

The Omnibus process has implications for the GDPR-AI Act interplay analysis: the two regimes are both in partial simplification-amendment flux, which makes any joint guidance and any compliance documentation moderately provisional. Compliance drafting should flag reliance on provisions susceptible to amendment so that revision can be targeted when the Omnibus text settles.

Practical NYC compliance architecture

For a NYC company operating an AI system that processes personal data of EU subjects, our advisory approach is to structure a single governance workstream producing paired deliverables:

Shared foundation — an inventory of AI systems + personal data flows, scope analysis addressing both GDPR Article 3 and AI Act Article 2, classification analysis addressing both GDPR special category data and AI Act high-risk / prohibited / transparency classifications.

GDPR-specific deliverables — records of processing (Article 30), DPIAs (Article 35), data subject rights infrastructure (Articles 12-22), lawful basis documentation (Article 6 and where applicable Article 9), international transfer mechanisms (Chapter V if applicable).

AI Act-specific deliverables — Annex IV technical documentation, Article 9 risk management system, Article 10 data governance records, Article 14 human oversight design, Article 13 instructions for use, Article 27 FRIA where applicable, Article 49 registration where applicable, Article 22 or 54 authorised representative mandate where applicable.

Shared evidence — bias analysis that serves both GDPR fair processing and AI Act Article 10 quality expectations; incident monitoring that feeds both GDPR Articles 33-34 and AI Act Articles 72-73.

This structure allows the compliance team to produce a unified file with appropriate cross-references, rather than parallel silos that duplicate work and risk inconsistency.
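One way to keep the paired structure auditable is a single tracker object that mirrors the four buckets above. The bucket names and entries below are an illustrative sketch, not a template; any real file would carry owners, dates, and cross-references per entry:

```python
# Hypothetical tracker mirroring the paired-deliverable structure;
# entry names are illustrative, drawn from the workstream described above.
compliance_file = {
    "shared_foundation": [
        "AI system and personal data flow inventory",
        "Scope analysis (GDPR Art. 3 / AI Act Art. 2)",
        "Classification analysis (special categories; high-risk/prohibited/transparency)",
    ],
    "gdpr_deliverables": [
        "Records of processing (Art. 30)",
        "DPIAs (Art. 35)",
        "Data subject rights infrastructure (Arts. 12-22)",
        "Lawful basis documentation (Art. 6 / Art. 9)",
        "International transfer mechanisms (Chapter V)",
    ],
    "ai_act_deliverables": [
        "Annex IV technical documentation",
        "Risk management system (Art. 9)",
        "Data governance records (Art. 10)",
        "Human oversight design (Art. 14)",
        "Instructions for use (Art. 13)",
        "FRIA (Art. 27)",
    ],
    "shared_evidence": [
        "Bias analysis (GDPR fairness / AI Act Art. 10)",
        "Incident monitoring (GDPR Arts. 33-34 / AI Act Arts. 72-73)",
    ],
}

# Quick completeness check before an audit: no empty buckets.
assert all(compliance_file.values())
```

The design choice is the shared buckets at top and bottom: evidence produced once lives in one place and is cross-referenced from both regime-specific files, rather than duplicated into each.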

One trap NYC companies fall into

The single most common trap is the assumption that a DPIA under GDPR Article 35 satisfies the FRIA under AI Act Article 27. It does not. A DPIA analyses risks to rights and freedoms from processing personal data; a FRIA analyses fundamental rights impacts from deployment of a high-risk AI system. The content overlaps heavily but the legal framing, the identified rights, the obligation holders, and the authorities differ. Expect to produce both where both apply. A single consolidated working session can drive both documents, but the documents themselves are distinct legal deliverables.


Primary sources. Regulation (EU) 2024/1689: Article 2(7) (without prejudice to GDPR), Article 9 (risk management), Article 10 (data governance), Article 13 (instructions for use), Article 14 (human oversight), Article 26 (deployer obligations), Article 27 (FRIA), Article 50 (transparency), Articles 72-73 (post-market monitoring and serious incidents), Article 99 (penalties). Regulation (EU) 2016/679 (GDPR): Article 3 (territorial scope), Article 5 (principles), Article 6 (lawful basis), Article 9 (special categories), Article 22 (automated decision-making), Article 30 (records of processing), Articles 33-34 (breach notification), Article 35 (DPIA), Article 83 (fines). EDPB-EDPS Joint Opinion 5/2021; Commission Digital Omnibus Package (November 2025).