California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047) passed the California Assembly Appropriations Committee after significant weakening. The main changes in the amended bill include:
- The Attorney General may no longer sue AI companies for neglecting safety before a catastrophic event occurs; the regulatory focus has shifted to actual harm.
- Only developers whose model training compute costs exceed $100 million face regulatory requirements. Existing models fall outside the bill's scope, while future models such as Llama 4 and GPT-5 may be covered.
- The bill requires cloud service providers to collect customer information for regulatory traceability.
- Companies within the regulatory scope must take measures to prevent model misuse, retain the ability to shut down models in an emergency, submit statements of their safety practices, and undergo annual independent audits.
- Violations may face fines ranging from $10 million to $30 million.
- The bill has drawn sharply divided reactions: supporters view it as the minimum needed for effective regulation, while opponents worry it will hinder AI development and the open-source ecosystem.
- The bill's author has rebutted some of the criticism, stating that the bill would not harm innovation or open-source AI.
- The revised bill adds protection clauses for fine-tuning open-source models.
The bill's specific provisions and eventual impact remain to be seen.