Led by Fei-Fei Li, Scientists Unite Against California's AI Restriction Bill

California's SB-1047 bill has sparked debate across the field of artificial intelligence.

In Silicon Valley, AI scientists such as Fei-Fei Li and Andrew Ng are locked in a tug-of-war with lawmakers over how to balance safety and innovation.

At the core of the controversy is a bill named SB-1047, formally titled the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act." The bill seeks to establish clear safety standards for high-risk AI models in order to prevent misuse and catastrophic harm.

The bill's main provisions include:

  1. Applies to AI models trained with more than 10^26 operations or at a training cost exceeding $100 million (a rough scale illustration follows this list).

  2. Requires model developers to bear legal responsibility for downstream use.

  3. Establishes a "Frontier Model Division" as the regulatory body.

  4. Includes whistleblower protection clauses.
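
To give a sense of the scale implied by the 10^26 threshold, the sketch below checks whether a hypothetical model would cross either coverage criterion. It assumes the common 6 × parameters × tokens rule of thumb for estimating training compute; that heuristic and the example numbers are assumptions for illustration, not part of the bill.

```python
# Hypothetical back-of-the-envelope check of SB-1047's coverage thresholds.
# The 6 * parameters * tokens estimate of training FLOPs is a common rule of
# thumb from the scaling-law literature, not something defined in the bill,
# so all numbers here are purely illustrative.

COMPUTE_THRESHOLD_FLOPS = 1e26     # compute threshold cited in the bill
COST_THRESHOLD_USD = 100_000_000   # $100 million training-cost threshold


def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute with the common 6 * N * D heuristic."""
    return 6 * params * tokens


def would_be_covered(params: float, tokens: float, training_cost_usd: float) -> bool:
    """Return True if the model crosses either threshold described above."""
    return (estimated_training_flops(params, tokens) >= COMPUTE_THRESHOLD_FLOPS
            or training_cost_usd >= COST_THRESHOLD_USD)


# Example: a 70B-parameter model trained on 15T tokens for an assumed $50M.
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below 1e26, and the cost is below $100M.
print(would_be_covered(params=70e9, tokens=15e12, training_cost_usd=50e6))  # False
```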

Many scientists consider the bill's provisions unreasonable and believe they would severely hinder innovation. Fei-Fei Li has pointed out three major problems with the bill:

  1. Excessively punishes developers, stifling innovation.

  2. Constrains open-source development.

  3. Weakens public sector and academic AI research.

Dozens of scientists have also jointly voiced opposition to the bill, arguing that it would:

  1. Create a "chilling effect" on open-source model releases.

  2. Rely on unscientific methods for predicting AI risks.

  3. Provide insufficient protection for open-source models.

  4. Harm students' employment prospects and career development.

This tug-of-war reflects the tension between safety and innovation in AI development and the need to strike a balance between regulation and progress.
