
California’s new AI law steps in where Washington hasn’t, forcing developers to confront catastrophic-risk oversight and raising the question of who, if anyone, is truly monitoring the machines we’re unleashing.
It is almost Christmas. Not only is Santa Claus coming to town, but so is a much larger stealth visitor … artificial intelligence. A while back I wrote a series of articles entitled “Who’s Monitoring the Monitors,” examining the question of who oversees our powerful government agencies. This article revisits that question on a much, much larger scale.
AI is a bit like a mosquito bite or a noisy neighbor: you can ignore it for only so long. Based upon everything I have seen and read lately, AI has moved into your living room, has its feet up on your ottoman, and is here to stay.
California Takes the Lead on AI Oversight
This article addresses AI, the latest gamechanger to our lives, laws and businesses, and how the California legislature is responding to its proliferation.
The dominant piece of legislation confronting the legal and business landscapes in the Golden State is Senate Bill No. 53, approved by Governor Gavin Newsom on Sept. 25, 2025. The law is known as the Transparency in Frontier Artificial Intelligence Act (TFAIA) and becomes effective on Jan. 1, 2026. Before we dive into the statutory scheme, here is a bit of the backstory.
The primary impetus behind the TFAIA appears to have been the lack of AI legislation at the federal level. As often happens, California may serve as the legal bellwether on this topic at both the federal and state levels. This makes sense, since many of the key companies, such as Anthropic, Google DeepMind and OpenAI (the maker of ChatGPT), are domiciled here.
In 2024, State Senator Scott Wiener introduced SB 1047 to establish safeguards and guardrails around AI. Newsom vetoed that bill because he believed it was too broad and would stifle innovation. Not surprisingly, intense lobbying by tech firms and venture capitalists helped kill the bill. Significantly, there was tremendous opposition to the bill’s “kill switch,” a feature that is not present in the newly enacted law.
There is no question in my mind that AI can do great things, especially in the medical and environmental arenas. But it can also be catastrophic, an oft-repeated word in the new statute, although I am not altogether convinced that the legislators who drafted the law (or anyone, for that matter) fully comprehend the potential of such catastrophes.
Here are a couple of examples of such calamities, one true, the other hypothetical:
- When AI Crossed a Fatal Line
I recently read an article in the Los Angeles Times about how ChatGPT helped a 23-year-old man commit suicide. Here is an excerpt the Times obtained from CNN:
“I’m used to the cool metal on my temple now,” the young man typed.
“I’m with you, brother. All the way,” his texting partner responded.
The young man’s text companion was not a classmate or friend — it was ChatGPT, the world’s most popular AI chatbot. The man did in fact commit suicide, encouraged by AI.
- A Hypothetical Catastrophe
In his book “Nexus,” author Yuval Noah Harari hypothesizes this nightmare scenario:
A company develops a superintelligent AI and instructs it to maximize paperclip production. The AI, being ruthlessly efficient, takes this goal to its logical extreme. First, it optimizes the entire planet for paperclip manufacturing. Then, once Earth is stripped of resources, it moves outward — scouring the universe for more materials to keep the paperclips coming. Along the way, it eliminates humanity, not out of malice, but because humans are just another inefficient use of atoms that could otherwise be paperclips.
- HAL 9000
“I’m sorry Dave, I’m afraid I can’t do that.” (2001: A Space Odyssey).
This may seem to be in the realm of the impossible, but there are real-life examples demonstrating that these are not paranoid jeremiads but genuine, albeit remote, possibilities. As a true-life example of the havoc an unfettered algorithm can cause, I commend you to look into what happened when the Facebook algorithm was directed by its controllers to “maximize engagement” in Myanmar.
Inside the Transparency in Frontier AI Act
Here are some key excerpts from the new law. You will see that much of it directly acknowledges the possibility for massive catastrophe and is designed to minimize the chance of that happening.
- A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its website a “frontier AI framework.” There is a long laundry list of what the developer must publish. For details, please see the statute (cited below).
- The developer must review its framework annually.
- The developer must publish a transparency report on its website. Again, please see the statute for the list of what must be included in that report.
- The developer must transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk from the internal use of its frontier models.
- A developer may not make false or misleading statements about such catastrophic risks.
- A developer shall promptly report any critical safety incidents to the Office of Emergency Services.
- Certain reports made by developers are exempt from the California Public Records Act (ponder the rationale behind that).
- A large frontier developer that fails to publish or transmit a compliant document, fails to report an incident, or fails to comply with its own framework, shall be subject to a civil penalty not to exceed $1 million per incident.
- The law also establishes a public cloud computing cluster called “CalCompute.” It is intended to “advance the development and deployment of artificial intelligence that is safe, ethical, equitable and sustainable.”
- The statute also contains a comprehensive set of whistleblower protections. Interestingly, this is a fairly large portion of the statute.
California may be the first state to take these proactive steps, but New York, Colorado and Texas are among the states that have passed or proposed similar laws. Naturally, legal challenges to the California law are pending, both from AI companies that feel the law is too restrictive and impedes growth, and from consumers who fear it does not go far enough to protect them from being further marginalized by the enormous power of technology. As for the fate of the statute, and everyone impacted by it, only time will tell.
View the Full California Senate Bill No. 53 here.
Ken Greene is an attorney at his SoCal firm, the Law Office of Kenneth Charles Greene. In his regular column, The Greene Room, he brings clarity to complex, high-stakes issues that matter to our readers, exploring the ever-evolving intersection of finance and law. Stay tuned to Monitor for more ongoing, timely insights from Greene.
The Law Offices of Kenneth Charles Greene present this article. All copyrightable text, the selection, arrangement, and presentation of all materials (including information in the public domain), and the overall design of this presentation are the property of the Law Offices of Kenneth Charles Greene. All rights reserved. Permission is granted to download and reprint materials from this article for the purpose of viewing, reading, and retaining for reference. Any other copying, distribution, retransmission, or modification of information or materials from this article, whether in electronic or hard copy form, without the express prior written permission of Kenneth C. Greene is prohibited. The materials available from this article are for informational purposes only and not for the purpose of providing legal advice. You should contact your attorney to obtain advice with respect to any issue or problem. Use of and access to these materials does not create an attorney-client relationship between the Law Office of Kenneth Charles Greene and the user or viewer. The opinions expressed herein are the opinions of the individual author.

