New York Governor Kathy Hochul recently signed into law the Responsible AI Safety and Education Act (RAISE Act). The law is aimed at regulating the deployment of highly capable artificial intelligence (AI) systems, so-called “frontier models,” and imposes new obligations on the developers of such models.
Legislative Background
State Senator Andrew Gounardes and Assemblymember Alex Bores introduced the RAISE Act to regulate advanced AI models, citing the serious risks posed by a lack of regulation. Although the legislature passed a more aggressive version of the bill, Governor Hochul signed the RAISE Act on Dec. 19, 2025, only after a significant rewrite.
Scope and Applicability
The RAISE Act applies only to developers of large-scale, high-risk AI systems referred to as “frontier models.” A frontier model is defined as one that is trained using more than 10²⁶ computational operations and costs more than $100 million in compute resources.
The law targets “large developers,” i.e., a company “that has trained at least one frontier model … and has spent over one hundred million dollars in compute costs in aggregate in training frontier models.” Importantly, accredited universities conducting academic research are exempt. The law applies to frontier models developed, deployed or operated wholly or partly in New York.
Key Legal Requirements
Safety and Security Protocols
Large developers must implement a written safety and security protocol before deploying any frontier model. This document must:
- Identify and mitigate risks of “critical harm” (defined as the death or serious injury of 100 or more people, or at least $1 billion in damage).
- Include cybersecurity controls and a detailed testing regimen.
- Designate a senior officer responsible for compliance.
Deployment Restrictions and Ongoing Reviews
Developers may not deploy frontier models that pose an “unreasonable risk of critical harm.” Risk mitigation efforts and safeguards must be established before deployment. Additionally, safety protocols must be reviewed and updated annually, with any material changes republished and resubmitted.
Safety Incident Reporting
“Safety incidents” under this law refer to critical harm or – interestingly – a “frontier model autonomously engaging in behavior other than at the request of a user” if there is demonstrable evidence of an increased risk of critical harm. Such incidents must be reported to the state within 72 hours of discovery. The report must include:
- The date of the incident,
- Reasons the event qualifies as a safety incident,
- A plain-language summary of what occurred.
A developer that knowingly submits false or misleading information in any of these documents is subject to penalties.
Whistleblower Protections
The act provides legal protections for employees who report activities reasonably believed to pose an unreasonable or substantial risk of critical harm. Employers may not retaliate against such disclosures and must notify employees of these rights.
Enforcement and Penalties
The New York Attorney General holds exclusive enforcement authority. The AG may seek civil penalties up to $1 million for a first violation and $3 million for subsequent violations. Injunctive relief is also available to prevent continued noncompliance.
While there is no private right of action, companies should consider reputational and operational risks alongside legal exposure. Contractual attempts to shift liability for compliance to others are void, and the law includes language allowing regulators to pierce the corporate veil in cases of bad-faith structuring.
Conclusion
New York’s RAISE Act reflects a growing trend of state-level AI regulation focused on high-risk models. For companies operating in New York, particularly those developing foundation models or advanced generative AI systems, the law imposes significant new transparency and risk management duties. In-house legal teams should begin aligning development, compliance, and incident response functions now to ensure readiness ahead of the law’s 2026 effective date.
Our Artificial Intelligence Industry Team tracks AI issues throughout the country. If you have questions or concerns about AI-related matters, please reach out to attorney Brendan M. Palfreyman at (315) 214-2161 and bpalfreyman@harrisbeachmurtha.com, or the Harris Beach Murtha attorney with whom you most frequently work.
This alert is not a substitute for advice of counsel on specific legal issues.
Harris Beach Murtha’s lawyers and consultants practice from offices throughout Connecticut in Bantam, Hartford, New Haven and Stamford; New York state in Albany, Binghamton, Buffalo, Ithaca, New York City, Niagara Falls, Rochester, Saratoga Springs, Syracuse, Long Island and White Plains, as well as in Boston, Massachusetts, and Newark, New Jersey.