In Silicon Valley, the heartland of tech innovation, AI scientists such as Fei-Fei Li and Andrew Ng are locked in a tug-of-war with regulators over the balance between safety and innovation.
At the center of the controversy is a California bill, SB-1047, formally titled the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act." The bill seeks to establish clear safety standards for high-risk AI models in order to prevent misuse or catastrophic harm.
The bill's main provisions include:
- Applies to AI models trained with more than 10^26 operations of compute or at a training cost above $100 million (a rough way to estimate the compute figure is sketched after this list).
- Holds model developers legally responsible for downstream uses of their models.
- Establishes a "Frontier Model Division" as the regulatory body.
- Includes whistleblower protection clauses.
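For a sense of scale, the 10^26-operation threshold can be related to model size using the common back-of-the-envelope rule that training a dense transformer costs roughly 6 × N × D floating-point operations, where N is the parameter count and D the number of training tokens. The sketch below applies that heuristic; the heuristic itself, the model sizes, and the cost figures are illustrative assumptions, not details taken from the bill.

```python
# Rough, illustrative check against SB-1047's thresholds.
# Uses the common ~6 * N * D estimate of training FLOPs for a dense
# transformer (N = parameters, D = training tokens). The model sizes
# and cost figures below are hypothetical examples, not real data.

THRESHOLD_FLOPS = 1e26      # bill's compute threshold (10^26 operations)
THRESHOLD_COST_USD = 100e6  # bill's training-cost threshold ($100M)

def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training FLOPs with the 6*N*D approximation."""
    return 6.0 * n_params * n_tokens

def is_covered(n_params: float, n_tokens: float, cost_usd: float) -> bool:
    """True if the model would cross either threshold described above."""
    return (training_flops(n_params, n_tokens) > THRESHOLD_FLOPS
            or cost_usd > THRESHOLD_COST_USD)

# Hypothetical example: a 1-trillion-parameter model trained on
# 20 trillion tokens -> 6 * 1e12 * 20e12 = 1.2e26 FLOPs, over the line.
print(training_flops(1e12, 20e12))             # 1.2e+26
print(is_covered(1e12, 20e12, cost_usd=80e6))  # True (compute threshold)

# A 70-billion-parameter model on 15 trillion tokens stays well under.
print(training_flops(70e9, 15e12))             # 6.3e+24
print(is_covered(70e9, 15e12, cost_usd=30e6))  # False
```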
Many scientists consider the bill's provisions unreasonable and a severe hindrance to innovation. Fei-Fei Li points to three major problems with the bill:
- Excessively punishes developers, stifling innovation.
- Constrains open-source development.
- Weakens public-sector and academic AI research.
Dozens of scientists have also signed a joint letter opposing the bill, arguing that it would:
- Create a "chilling effect" on open-source model releases.
- Use unscientific methods for predicting AI risks.
- Provide insufficient protection for open-source models.
- Impact students' employment and career development.
This tug-of-war reflects the underlying tension between safety and innovation in AI development, and the need to strike a balance between regulation and progress.