FRONTIER AI
California enacts first U.S. frontier AI law
California is taking the lead in AI regulation, passing the country’s first law aimed at ensuring safety and transparency for frontier AI systems.
Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law on Monday.
The move marks the first U.S. legislation to specifically target the safety and transparency of cutting-edge AI models, and cements the state's position as a national leader in AI development.
Features of the TFAIA include:
- Requirements for AI developers to disclose safety incidents
- Transparency in model design
- Guardrails on the development of frontier AI
The bill is based on findings from a first-in-the-nation report on AI guardrails, which offered recommendations for evidence-based policymaking.
The news comes as the use of AI draws increasing scrutiny: the federal government has yet to roll out a comprehensive AI policy, and state governments are stepping in to fill the gap. California, in particular, hopes to offer other states a blueprint for establishing ethical AI.
“With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” Senator Scott Wiener said in a statement.
The latest bill comes one day after another AI-focused initiative, the California AI Child Protection Bill, passed the statehouse.
Aimed at safeguarding children, the bill seeks to prevent adolescent users from accessing chatbots unless the chatbots are "not foreseeably capable of doing certain things that could harm a child."
The bill is now awaiting Newsom’s signature. It has, however, faced pushback from industry members who argue that sweeping regulations could hamper innovation.
“Restrictions in California this severe will disadvantage California companies training and developing AI technology in the state,” the Computer and Communications Industry Association wrote in a floor alert on the bill. “Banning companies from using minors’ data to train or fine-tune their AI systems and models will have far-reaching implications on the availability and quality of general-purpose AI models, in addition to making AI less effective and safe for minors.”