The California lawmaker, who introduced an AI safety protection bill later vetoed by Gov. Gavin Newsom, has again introduced legislation to regulate the powerful technology.
SB 53 would protect AI lab whistleblowers from retaliation if they speak out about risks or irresponsible development. The bill would also create a public compute cluster, CalCompute, to ensure a broad array of developers can access the compute they need to succeed.
Specifically, the bill would prohibit developers of certain artificial intelligence models from enforcing policies that prevent employees from disclosing information, and from retaliating against employees who report concerns.
SB 53 defines “critical risk” as a foreseeable and material risk that the development, storage, or deployment of a foundation model could result in the death or serious injury of more than 100 people or cause more than $1 billion in damages.
Additionally, developers would be required to establish an internal process allowing employees to report concerns anonymously.
“We are still early in the legislative process, and this bill may evolve as the process continues. I’m closely monitoring the work of the Governor’s AI Working Group, as well as developments in the AI field for changes that warrant a legislative response,” Sen. Scott Wiener (D-San Francisco) said in a statement.
“California’s leadership on AI is more critical than ever as the new federal Administration proceeds with shredding the guardrails meant to keep Americans safe from the known and foreseeable risks that advanced AI systems present.”
Last year, when Wiener introduced a similar bill, other lawmakers and tech companies opposed it, arguing the proposed legislation was “ill-informed, will cause more harm than good, and will eventually drive developers away from the state.”