3 Comments
Aug 19 (edited)

Thanks for your take on SB 1047, Kevin.

To me, the discourse over the bill has deeper roots. One is the underlying disagreement over the risks AI poses: "black box" opacity, cybersecurity, privacy, IP, bias, and so on. There is also the sentiment that enterprises cannot be trusted to regulate themselves; consider the 2002 Sarbanes-Oxley Act and the 2008 financial meltdown. Finally, the velocity of AI development clearly outstrips regulators' understanding and authority; lawmakers are still trying to make heads or tails of social media and digitalization. Any AI regulation should be agile and flexible, but not necessarily "light touch" or over-prescriptive.

AI governance is a journey that typically starts with principles and guidelines. Frameworks such as the NIST AI RMF and the OECD AI Principles help operationalize those principles. Finally, standards such as ISO/IEC 42001 set best practices and controls to demonstrate compliance with existing laws (the EU AI Act, NYC Local Law 144).

Look forward to part II.

author

No question, there are many elements to AI regulation, including "soft law" elements like standards and industry codes. SB 1047 is just trying to address one set of issues. And I agree that part of the conflict is over the scope of AI risks.


According to Stanford HAI's AI Index report, 150 state-level AI bills were proposed in 2023, a significant increase from the 61 proposed in 2022. Hopefully SB 1047 is part of a domino effect that produces federal-level regulation to harmonize AI rules in the US rather than fragment them further, which could happen with the incoming new administration in DC. Harmonization at the international level is also necessary, through organizations such as ISO and IEEE, to ensure collaboration and standards development in AI.
