New York lawmakers have passed the RAISE Act, a significant step toward preventing AI-driven disasters. The bill targets advanced AI models from firms like OpenAI, Google, and Anthropic, which would need to follow new safety and transparency rules.
Preventing High-Risk AI Outcomes
The law is designed to reduce the risk of major disasters—including scenarios causing the death or injury of over 100 people or resulting in $1 billion+ in damages. It’s a direct response to growing concerns that unchecked AI models could pose serious threats to public safety.
A Victory for the AI Safety Movement
The bill is seen as a win for AI safety advocates like Geoffrey Hinton and Yoshua Bengio, who have long warned of advanced AI risks. If signed into law, the RAISE Act will create the first legally binding transparency standards for frontier AI systems in the U.S.
What the RAISE Act Requires
If the bill becomes law, AI companies must:
- Publish detailed safety and security reports on their most advanced models
- Report incidents such as model misuse or security breaches
- Comply with strict transparency rules for any model trained with over $100 million in compute
- Face civil penalties of up to $30 million for non-compliance
Designed to Protect Without Slowing Innovation
New York State Senator Andrew Gounardes, a co-sponsor, said the bill avoids stifling startups and research. He emphasized the urgency:
“The window to put in place guardrails is rapidly shrinking.”
Unlike California’s SB 1047 (which was vetoed), the RAISE Act does not require a ‘kill switch’ or hold companies that post-train AI models accountable for all downstream harms, addressing two key criticisms of the earlier bill.
Who’s Affected?
The bill is narrowly aimed at large AI developers, including companies based in the U.S. and abroad. Its reach covers only those whose models are:
- Trained with $100M+ in compute
- Offered to New York residents
Smaller startups and academic institutions are excluded.
Industry Pushback
Despite its narrow focus, the tech industry isn’t thrilled. Andreessen Horowitz’s Anjney Midha called the bill
“Another stupid, stupid state-level AI bill.”
Critics fear overregulation could push companies to avoid doing business in New York. But Assemblymember Alex Bores disagrees, saying:
“There’s no economic reason not to operate in New York.”
The Road Ahead
The bill now sits on Governor Kathy Hochul’s desk. She can:
- Sign it into law
- Request changes
- Veto it
If signed, it will make New York the first U.S. state to legally require AI transparency and safety disclosures for frontier models.
Final Thoughts
Some AI labs, like Anthropic, haven’t taken a clear stance on the bill, though they have warned it might be too broad. New York lawmakers counter that it’s fair and necessary, and that the RAISE Act keeps safety in step with fast-moving AI.