CALIFORNIA

California’s AI employment laws look tough, but they leave workers exposed

March 24, 2026 · 4 min read

Why It Matters

California has positioned itself as the national leader on artificial intelligence regulation, but a growing chorus of labor advocates and policy experts argues that the state’s AI employment laws contain critical gaps that leave workers without real-time protection from automated decision-making. As algorithmic systems increasingly determine who gets hired, disciplined, or terminated, California workers face a regulatory framework built more on documentation than prevention.

The stakes are high. California employs roughly 19 million people, and the use of AI-driven hiring and workforce management tools has expanded rapidly across industries including logistics, retail, healthcare, and technology. The question of who — or what — has the authority to make employment decisions is no longer theoretical.

What Happened

Governor Gavin Newsom vetoed Senate Bill 7, known as the “No Robo Bosses Act,” a measure that would have required human review before an algorithm could fire or discipline a California worker. The veto, issued in 2025, signaled the administration’s position that such requirements would place an undue burden on innovation and business operations in the state.

The veto came even as California lawmakers processed more than 30 AI-related bills over the past two years. While several measures survived the legislative process, critics argue that the laws that ultimately passed rely heavily on delayed compliance mechanisms — including training data summaries, incident reports, and post-hoc audits — rather than proactive safeguards that could prevent harm before it occurs.

Alberto Rocha, who leads policy development at the Algorithmic Consistency Initiative, argues in a commentary published by CalMatters that the existing framework has a fundamental design flaw. When an algorithm quietly denies someone a job, demotes them, or ends their employment, the current legal architecture requires disclosure and reporting after the fact — long after the affected worker has already suffered the consequences.

Rocha contends that many of the algorithmic systems making these decisions are built and controlled by out-of-state technology companies operating under minimal oversight within California’s borders. The veto of SB 7, in his view, was not a minor policy disagreement but a deliberate choice to prioritize the interests of the technology sector over workers’ rights to meaningful human accountability in employment decisions.

By the Numbers

  • 30+ AI-related bills processed by California lawmakers over the past two years
  • 1 major vetoed bill — Senate Bill 7, the “No Robo Bosses Act” — that would have mandated human review before algorithmic termination or discipline
  • 19 million workers in California’s labor force potentially subject to AI-assisted employment decisions
  • 0 real-time intervention requirements currently mandated for employers using algorithmic workforce management tools under surviving legislation
  • $1 billion+ — projected annual U.S. market for AI hiring and workforce management software, with California-based firms among the largest adopters and developers

Zoom Out

California’s legislative struggle reflects a broader national debate over how governments should regulate AI in the workplace. Several states, including New York and Illinois, have enacted targeted AI employment laws, though most focus on narrow areas such as automated video interview screening or bias audits rather than comprehensive worker protections during the full employment lifecycle.

At the federal level, the Equal Employment Opportunity Commission has issued guidance on how existing anti-discrimination law applies to AI-driven hiring tools, but no comprehensive federal statute governs algorithmic employment decisions. The absence of federal action places pressure on states to act, while simultaneously creating a patchwork of rules that large employers argue creates compliance complexity across state lines.

The European Union has taken a more aggressive posture. Under the EU AI Act, high-risk AI systems used in employment contexts face mandatory human oversight requirements — the type of provision that California’s vetoed SB 7 sought to introduce domestically. Advocates in the United States have pointed to the EU framework as a model for meaningful worker protections that do not necessarily prohibit the use of AI tools.

What’s Next

Labor advocates are expected to reintroduce legislation during California’s current legislative session that addresses the gaps left by the SB 7 veto and the limitations of existing AI disclosure laws. Proposals under discussion include mandatory human review triggers for adverse employment actions, real-time worker notification requirements when algorithmic systems are used in performance evaluations, and expanded private right of action provisions that would allow affected workers to seek legal remedies.

Governor Newsom’s office has indicated support for AI regulation that balances transparency with economic competitiveness, though the administration has not publicly endorsed specific worker-protection measures. Legislative deadlines in the California Assembly and Senate will determine whether new bills advance to the governor’s desk before the end of the 2026 session.

For California workers in industries where automated management tools are already in use, the outcome of those negotiations may determine whether the state’s much-publicized AI leadership translates into concrete on-the-job protections.

Last updated: Mar 24, 2026 at 8:42 PM GMT+0000