Status update — April 2026: Both the Council (13 March 2026) and European Parliament (26 March 2026) have adopted negotiating positions under the Digital Omnibus on AI that would add a ninth prohibited practice to Article 5: AI systems generating or manipulating realistic images or videos depicting non-consensual sexually explicit content of identifiable natural persons. The Parliament's formulation prohibits AI that "alters, manipulates or artificially generates realistic images or videos so as to depict sexually explicit activities or the intimate parts of an identifiable natural person, without that person's consent." Both positions exclude systems with effective safeguards. Trilogue will settle the final text. See our Digital Omnibus note.

Article 5 of the EU AI Act is the "red line" article. It lists AI practices that are prohibited outright — not high-risk to be documented, not limited-risk to be disclosed, but forbidden. Article 5 entered into application on 2 February 2025 and has been enforceable since 2 August 2025. Penalty tier for violation: the highest under Article 99(3), up to €35M or 7% of worldwide turnover, whichever is higher.
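
For orientation, the Article 99(3) ceiling is a simple maximum. A minimal sketch, using only the €35M and 7% figures from the Regulation; the function name and the turnover value are hypothetical:

```python
def article_99_3_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an Article 5 violation:
    the higher of EUR 35M or 7% of total worldwide annual turnover
    (Article 99(3), Regulation (EU) 2024/1689)."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical undertaking with EUR 2bn worldwide turnover:
# 7% is EUR 140M, which exceeds EUR 35M, so EUR 140M is the ceiling.
print(article_99_3_ceiling(2_000_000_000))  # 140000000.0
```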

For a NYC company, the question is not when Article 5 applies — it applies now — but which practices are actually prohibited and whether your deployment crosses any line. This note walks through each of the eight prohibited categories in plain language, with realistic notes on NYC-industry exposure.

The eight categories of prohibited AI under Article 5(1)

Article 5(1) lists eight lettered prohibitions. Each has precise conditions; all are strictly construed. The Commission published Guidelines on prohibited AI practices on 4 February 2025 to clarify scope.

Article 5(1)(a) — subliminal or manipulative techniques

AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting a person's behaviour in a way that causes or is reasonably likely to cause significant harm.

The key test combines "material distortion" with "significant harm." Ordinary persuasive advertising does not meet the threshold. What crosses the line: AI systems that use covert sensory stimuli (e.g., subliminal audio), or that systematically exploit cognitive shortcuts to push users toward decisions they would not otherwise make, where the expected outcome is physical, financial, or psychological harm.

NYC exposure: low for most deployments. Financial services chatbots that use standard persuasion tactics (urgency messaging, social proof) are not prohibited unless they cross into deception causing significant financial harm. A NYC fintech using AI to push high-risk trading to retail users with vulnerability indicators would be closer to the line.

Article 5(1)(b) — exploitation of vulnerabilities

AI systems that exploit vulnerabilities of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting behaviour and causing or being reasonably likely to cause significant harm.

Unlike (a), this focuses on who is targeted rather than how. AI systems that specifically detect and target minors, older people, persons with disabilities, or economically vulnerable persons, with outputs that distort their behaviour to their detriment, are prohibited.

NYC exposure: moderate for edtech, fintech targeting sub-prime credit, gaming platforms with loot-box mechanics, certain health/wellness apps. Practical red flag: if your AI system identifies users by vulnerability characteristics and personalizes outputs to encourage decisions against their interest, audit closely.

Article 5(1)(c) — social scoring

AI systems for social scoring of natural persons: evaluation or classification over a period of time based on social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to certain detrimental effects: treatment in social contexts unconnected to the contexts in which the data was generated, or treatment disproportionate to the underlying behaviour.

This is narrower than often assumed. Credit scoring based on financial data (repayment history, income, debt) is not prohibited — it is an Annex III high-risk system, not an Article 5 prohibition. What is prohibited is scoring that spreads across unrelated social contexts or applies disproportionate treatment.

NYC exposure: low for standard credit and insurance. Higher for aggregated "trustworthiness" scores used across services, for services that use social media behaviour to gate access to unrelated goods, or for platforms that exclude users from one service based on unrelated conduct on another.

Article 5(1)(d) — predictive policing based solely on profiling

AI systems that make risk assessments of natural persons in order to assess or predict the risk of their committing a criminal offence, based solely on profiling or on assessing personality traits and characteristics.

The "solely" is crucial. AI systems used in risk assessment that also rely on objective and verifiable facts directly linked to criminal activity are not prohibited — they are high-risk under Annex III point 6. What is prohibited is risk scoring purely from personality profiles or demographic data.

NYC exposure: primarily public sector (NYPD, DA offices, courts). Private NYC deployments rare except for pretrial risk assessment software marketed to public clients.

Article 5(1)(e) — untargeted facial image scraping

AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

This targets the Clearview-style business model directly. The prohibition is on the database-building method, not on facial recognition as such. Facial recognition built on consented datasets or on targeted imagery is not prohibited by (e), though it may still be regulated as high-risk under Annex III point 1.

NYC exposure: limited to companies in the facial recognition provider space. A NYC company using facial recognition built by a third party should diligence the provider's dataset provenance.

Article 5(1)(f) — emotion inference in workplace and education

AI systems to infer emotions of natural persons in the areas of workplace and education institutions, except where the AI system is intended to be put in place or into the market for medical or safety reasons.

This prohibition covers emotion-recognition AI in employment and educational contexts specifically. Emotion AI in other contexts (entertainment, consumer apps) is not prohibited under (f), though it may fall under Article 50 transparency obligations.

NYC exposure: real for HR tech. A NYC company using video-interview analysis that claims to infer candidate emotions (enthusiasm, deception, stress) is likely within the prohibition. Several NYC HR tech vendors have emotion-inference features that need to be removed or restructured now that the prohibition is enforceable. The exception for medical or safety reasons is narrow: a wellness app claiming to detect employee stress is not obviously within the exception if used by employers for performance management.

Article 5(1)(g) — biometric categorization for sensitive attributes

AI systems that categorise natural persons based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Exception: labelling or filtering of lawfully acquired biometric datasets by law enforcement.

This prohibits AI that uses biometric data (face, voice, gait, and the like) to infer sensitive attributes. The list of protected attributes mirrors the GDPR Article 9 special categories of personal data.

NYC exposure: advertising platforms that use biometric data to infer the listed sensitive attributes for targeting are exposed. AI systems in media, advertising, and audience analytics that claim biometric-based inference of demographics or protected attributes should be audited.

Article 5(1)(h) — real-time remote biometric identification in public spaces for law enforcement

AI systems for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, except in narrowly defined cases (serious crimes, specific named victims, certain imminent threats) subject to prior authorisation.

This is the headline prohibition in public discourse — live facial recognition in public spaces by police. The exceptions are narrow and require judicial or independent authorisation.

NYC exposure: public-sector only, and then only if bodies such as the NYPD were to deploy such systems. Private NYC companies are essentially not affected.

The Commission's guidelines — what they add

The European Commission's Guidelines on Prohibited Artificial Intelligence Practices (4 February 2025) provide interpretive anchors for each category. They do not have the legal force of the Regulation text, but they represent the Commission's articulated position and will inform market surveillance authority enforcement. Key takeaways:

The material distortion threshold is high. Ordinary commercial persuasion is not manipulation under (a). The Guidelines make clear that the Regulation targets extreme cases, not everyday UX optimization.

Purpose and intended effect matter. For exploitation of vulnerabilities under (b), the Guidelines emphasize that the AI system must operate with the practical effect of materially distorting the behaviour of the targeted group. Incidental processing of vulnerable-user data is not per se prohibited.

The GDPR operates in parallel. Article 2(7) of the Regulation states the AI Act is without prejudice to the GDPR. This means practices prohibited under Article 5 that also involve personal data processing are subject to both regimes. GDPR enforcement can proceed independently of AI Act enforcement for the same underlying practice.

Which NYC industries face the highest Article 5 exposure

In our advisory practice, we see highest exposure in:

HR tech — emotion inference in recruitment (5(1)(f)) is the single most common violation pattern. Video interview analysis, voice-stress analysis, and "cultural fit" scoring based on behavioural inference all risk crossing the line.

Advertising and media — biometric categorization for targeting (5(1)(g)) captures certain audience-analytics products that infer demographics from webcam, voice, or image-recognition signals.

Fintech targeting vulnerable populations — exploitation of vulnerabilities (5(1)(b)) is in play where sub-prime lending uses AI to identify and target economically vulnerable users. The standard defence that "we're offering them credit they need" does not address the distortion-of-behaviour test.

Gaming and consumer apps — loot-box or pay-to-play mechanics targeting minors (5(1)(b)) are exposed if AI personalizes the monetization to child-specific vulnerability patterns.

Public-sector clients — NYC companies selling into NYPD, DA offices, schools, or similar public bodies can inherit Article 5 exposure if their product enables a prohibited use by the client.

What to do now

Article 5 is already applicable. For a NYC company, the practical sequence is:

First, inventory all AI systems your company develops or deploys, with a technical description of each system's features. Second, screen each against the eight Article 5 categories. Third, for any system that plausibly triggers (a)–(h), conduct a written analysis: is the factual test actually met, or is it a borderline case that can be documented as non-prohibited? Fourth, for any system that is or may be within Article 5, remove the feature, restructure the deployment, or withdraw from the EU market entirely before enforcement attention materializes.
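
As a concrete scaffold for that four-step sequence, a minimal sketch in Python. Only the Article 5(1)(a)–(h) labels come from the Regulation; the record structure, field names, and the example system are hypothetical:

```python
# Illustrative sketch of the four-step Article 5 screen described above.
# The category labels track Article 5(1)(a)-(h); everything else is
# hypothetical scaffolding, not a compliance tool.
from dataclasses import dataclass, field
from enum import Enum


class Art5Category(Enum):
    A_MANIPULATION = "5(1)(a) subliminal/manipulative techniques"
    B_VULNERABILITY = "5(1)(b) exploitation of vulnerabilities"
    C_SOCIAL_SCORING = "5(1)(c) social scoring"
    D_PREDICTIVE_POLICING = "5(1)(d) crime prediction solely from profiling"
    E_FACE_SCRAPING = "5(1)(e) untargeted facial image scraping"
    F_EMOTION_INFERENCE = "5(1)(f) emotion inference at work/in education"
    G_BIOMETRIC_CATEGORISATION = "5(1)(g) biometric categorisation (sensitive attributes)"
    H_RTBI_LAW_ENFORCEMENT = "5(1)(h) real-time remote biometric ID for law enforcement"


@dataclass
class AISystemRecord:
    name: str
    technical_description: str  # step 1: inventory with feature description
    flagged: set[Art5Category] = field(default_factory=set)  # step 2: screen
    # step 3: written analysis concluding a flagged category is not met
    written_analysis: dict[Art5Category, str] = field(default_factory=dict)


def remediation_needed(record: AISystemRecord) -> list[Art5Category]:
    """Step 4 triage: any flagged category without a documented
    non-prohibited conclusion requires removal, restructuring,
    or EU market withdrawal."""
    return [c for c in record.flagged if c not in record.written_analysis]


# Hypothetical example: a video-interview tool with an emotion-scoring feature.
tool = AISystemRecord(
    name="InterviewIQ (hypothetical)",
    technical_description="Video interview analysis; scores candidate 'enthusiasm'.",
    flagged={Art5Category.F_EMOTION_INFERENCE},
)
print(remediation_needed(tool))  # -> [Art5Category.F_EMOTION_INFERENCE]
```

The point of the written_analysis field is the documentation discipline from step three: a flagged category with no written non-prohibited conclusion stays unresolved and drives remediation.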

Unlike Annex III high-risk work, there is no "bring it into compliance" path for Article 5. Prohibited practices must be removed or the system must exit the Union market. Early recognition is therefore decisively cheaper than late recognition.


Primary sources. Regulation (EU) 2024/1689: Article 5 (prohibited practices), Article 6 (high-risk classification), Article 50 (transparency), Article 99(3) (Tier 1 fines), Article 113 (application dates). European Commission, Guidelines on Prohibited Artificial Intelligence Practices (4 February 2025). GDPR Article 9 (special category data) for overlap analysis.