Most coverage of NYC-based AI compliance focuses on NYC Local Law 144 (the city bias audit regime) or the EU AI Act (the extraterritorial regime). Between those two sits a New York State layer that became more substantive in late 2025 and early 2026 — the Responsible AI Safety and Education Act, known as the RAISE Act. This note places the RAISE Act in the stack and identifies when a NYC company should pay attention to it.
What the RAISE Act is
The RAISE Act was signed into law by New York Governor Kathy Hochul on December 19, 2025, with a chapter amendment signed on March 27, 2026 that represents the final enacted text. The law takes effect January 1, 2027.
The RAISE Act targets a specific slice of the AI industry: frontier-model developers. The statute defines a frontier model as a foundation model trained using computing power greater than 10²⁶ integer or floating-point operations. It applies to developers of such models with annual revenues exceeding $500 million. The law covers frontier models developed, deployed, or operating in New York.
The obligations focus on safety and transparency. Covered developers must establish and publish safety protocols addressing critical risk, conduct pre-deployment risk assessments, implement post-deployment monitoring, and report serious incidents. The law grants enforcement authority to the New York Attorney General, with civil penalties up to $1 million for a first violation and up to $3 million for subsequent violations. A new office within the New York Department of Financial Services is granted rulemaking authority to implement the law.
The RAISE Act is deliberately narrower than earlier drafts. The chapter amendment signed in March 2026 reflects negotiated revisions that aligned the law more closely with California's Transparency in Frontier Artificial Intelligence Act (TFAIA) and scaled back some provisions that safety-advocate groups had backed. The comparison point is California, not the EU AI Act.
Who in NYC is actually covered
The $500 million annual revenue threshold combined with the 10²⁶ FLOP training-compute threshold means the RAISE Act covers a small number of companies. In practice, the universe of frontier-model developers meeting both tests is bounded by the frontier labs — entities with the compute budget and commercial scale to train base models at that size.
A typical NYC startup building an AI product on top of GPT, Claude, or Gemini APIs is not covered. The startup does not train the base model, and even if it fine-tunes, the fine-tuning compute is orders of magnitude below the 10²⁶ FLOP threshold. In most cases the startup also falls short of the $500 million annual revenue threshold, and both tests must be met for coverage.
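To make the "orders of magnitude" point concrete, here is a back-of-the-envelope sketch. The thresholds are the statute's as described above; the FLOP estimate uses the standard dense-transformer heuristic of roughly 6 × parameters × training tokens. The function names and the example figures (a 70B-parameter model fine-tuned on 2B tokens) are hypothetical illustrations, not anything from the statute.

```python
# Hypothetical coverage check against the RAISE Act's two-pronged test,
# as described in this note. Not legal advice.

FLOP_THRESHOLD = 1e26       # training compute: 10^26 operations
REVENUE_THRESHOLD = 500e6   # annual revenue: $500 million

def training_flops(params: float, tokens: float) -> float:
    """Rough training compute for a dense transformer: ~6 * N * D."""
    return 6 * params * tokens

def raise_act_covered(compute_flops: float, annual_revenue_usd: float) -> bool:
    """Both prongs must be exceeded for coverage."""
    return compute_flops > FLOP_THRESHOLD and annual_revenue_usd > REVENUE_THRESHOLD

# Fine-tuning a 70B-parameter model on 2B tokens:
ft_flops = training_flops(70e9, 2e9)
print(f"{ft_flops:.1e}")                 # ~8.4e20, about five orders below 1e26
print(raise_act_covered(ft_flops, 1e9))  # False even at $1B annual revenue
```

Even a generous fine-tuning run fails the compute prong by several orders of magnitude, which is why downstream API builders and fine-tuners fall outside the statute.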
A typical NYC HR tech company deploying a recruitment AI is not covered for the same reasons. The RAISE Act is not a general AI regulation. It is a frontier-safety regulation.
Where the RAISE Act does matter for a typical NYC company
Even for companies outside direct coverage, the RAISE Act matters in three practical ways:
First, upstream effects on foundation-model providers. Because the frontier-model developers the NYC company depends on (OpenAI, Anthropic, Google, others) are covered, their compliance postures will change. This affects the terms of service, the technical documentation made available to downstream integrators, and the incident-reporting patterns the NYC deployer will see. A NYC deployer's GPAI compliance documentation (see our GPAI deployer note) pulls information from provider disclosures, which the RAISE Act will expand.
Second, federalism signaling. The RAISE Act was signed shortly after a December 2025 executive order seeking to limit state-level AI regulation. The federal-state conflict is unresolved and may generate litigation in 2026-2027 that affects the law's application. NYC companies should track this development because the outcome determines whether the RAISE Act actually operates in its current form by its January 2027 effective date, and whether similar state-level AI laws survive federal preemption attempts.
Third, a signal of regulatory direction. The RAISE Act reflects where New York State's AI regulatory approach is heading — a safety-and-transparency model rather than a prohibition model, with $500 million-revenue and 10²⁶-FLOP thresholds that could be revised downward in future legislative sessions. A NYC company not currently covered may become covered if growth brings it within the thresholds, or if future revisions capture a broader slice of the industry.
How this layers against LL144 and the EU AI Act
For a NYC company in dual compliance with LL144 and the EU AI Act, the RAISE Act is typically a fourth layer to watch rather than a primary workstream. The practical layering:
LL144 — applies to any AEDT used for NYC hiring/promotion. Already enforceable. Highest short-term compliance priority for HR tech.
EU AI Act — applies based on Article 2 scope triggers (EU market, EU deployment, output used in Union). Full application for high-risk systems on 2 August 2026. Highest medium-term priority for companies with EU exposure.
RAISE Act — applies narrowly to frontier-model developers meeting revenue and compute thresholds. Effective 1 January 2027. Primary priority only for the narrow subset that qualifies.
Federal layer — in flux. An executive order and ongoing congressional proposals aim at federal preemption and a minimally burdensome federal framework. The outcome affects whether state laws like the RAISE Act operate as enacted.
For a NYC HR tech company, the prioritization is clear: finish LL144 work, prepare for EU AI Act August 2026, monitor the RAISE Act and federal developments as context for the 2027 horizon.
A note on what this is not
The RAISE Act is not a general-purpose NYC AI regulation. It does not regulate hiring tools (that's LL144), it does not regulate AI broadly (that's the EU AI Act's approach, which NY has explicitly declined to replicate), and it does not regulate consumer-facing AI products for everyday business purposes.
Mainstream press coverage sometimes portrays the RAISE Act as a broader AI regulation than it is. The statute's narrow scope is deliberate and was negotiated explicitly to differentiate it from the EU AI Act's horizontal approach. Readers sorting their compliance landscape should preserve that distinction.
For a compliance scan mapping LL144, EU AI Act, RAISE Act, and federal AI law exposure to your operations, see Lexara Advisory.
Primary sources. New York Responsible AI Safety and Education Act (RAISE Act), S6953B / A6453B, signed December 19, 2025; chapter amendment signed March 27, 2026; effective January 1, 2027. California Transparency in Frontier Artificial Intelligence Act (TFAIA) for comparison context.