A US federal court has granted conditional class certification in Mobley v. Workday, the first major AI hiring discrimination case in the country to reach this stage. The March 2026 ruling allows key claims under the Age Discrimination in Employment Act (ADEA) against an AI screening vendor to proceed as a collective action, potentially covering thousands of job applicants who were filtered out by the company's AI-powered resume screening system.

The case has implications that extend well beyond Workday. It establishes the clearest legal signal yet that liability for AI-driven hiring discrimination does not rest solely with employers; it extends to the AI vendors whose systems produce the discriminatory outputs.

What the Case Alleges

The complaint centers on Workday's AI-powered applicant tracking system, which is used by thousands of employers to screen resumes and rank candidates before any human reviews them. The plaintiff, Derek Mobley, alleges that the system systematically disadvantaged older applicants, filtering them out at higher rates than younger candidates for reasons that correlate with age rather than job-relevant qualifications.

The class certification ruling means the case can now proceed as a collective action on behalf of all applicants who were screened by the system and may have been disadvantaged by the same alleged bias. Class actions in employment discrimination cases are significant because individual applicants rarely have the resources to pursue litigation alone; collective certification makes the legal risk real for companies that might otherwise ignore individual claims.

The 90% Problem: How AI Bias Propagates

A University of Washington study cited alongside the case found that recruiters using biased AI tools mirrored the tool's inequitable choices up to 90% of the time. This finding gets to the core of why AI hiring discrimination is particularly concerning: the bias doesn't stop at the algorithm. When human reviewers are presented with AI-ranked candidate lists, they tend to defer to the ranking rather than independently evaluating candidates, effectively laundering the algorithm's bias through a layer of ostensibly human judgment.

The result is that AI bias in hiring produces discriminatory outcomes that are harder to challenge legally than overt human discrimination: because a human decision sits somewhere in the process, employers can point to that human judgment as the real basis for the outcome. The Mobley ruling begins to pierce that defense by allowing plaintiffs to challenge the AI system itself, not just the employer's final hiring decisions.

The Dual Liability Ruling

The court’s ruling has a structural importance beyond the Workday case: it signals that both the AI vendor and the employer carry liability for AI-driven discriminatory outcomes. This is a significant departure from how most companies have thought about AI procurement risk. The dominant assumption has been that employers bear the legal responsibility for their hiring decisions, while vendors bear responsibility for the accuracy of their technical claims.

If Mobley v. Workday establishes precedent that vendors can be held directly liable for discriminatory algorithmic outputs, not just under contract law but under civil rights statutes, it fundamentally changes the AI procurement market. Vendors face direct legal exposure for how their systems perform in production, not just for how they're marketed.

What HR Teams Need to Do Now

The legal signal from Mobley and the parallel EU AI Act classification of recruitment AI as high-risk create an urgent compliance question for any organization using AI in hiring:

  • Request bias audit documentation from your ATS vendor, in writing. If they can't produce one, that's a material risk signal.
  • Conduct adverse impact analysis on your AI-assisted hiring outcomes across protected classes, particularly age, race, and gender.
  • Implement a human-in-the-loop override at minimum for any AI-filtered stage before final candidate selection.
  • Document everything. Both the EU AI Act and the emerging US legal framework require transparency and accountability records for AI systems used in consequential decisions.
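The adverse impact analysis above is usually operationalized with the EEOC's "four-fifths rule": a group whose selection rate is less than 80% of the highest group's rate is conventionally flagged for adverse impact. A minimal sketch follows, assuming you can export per-group applicant and pass-through counts from your ATS; the group labels and counts are illustrative, not real data.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the AI screen."""
    return selected / applicants


def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    `groups` maps a label to (selected, applicants). An impact ratio
    below 0.8 for any group is the conventional adverse-impact flag.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}


# Illustrative counts only: applicants under 40 vs. 40 and over.
impact = four_fifths_check({
    "under_40": (300, 1000),  # 30% pass the AI screen
    "40_plus":  (180, 1000),  # 18% pass the AI screen
})
for group, ratio in impact.items():
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

With these illustrative numbers, the 40-and-over group's ratio is 0.18 / 0.30 = 0.60, well under the 0.8 threshold. The four-fifths rule is a screening heuristic rather than a legal conclusion; a flagged ratio is the trigger for deeper statistical analysis and documentation, not the end of it.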

The Broader Regulatory Backdrop

Mobley v. Workday doesn't exist in isolation. The EU AI Act classifies recruitment AI as high-risk, with third-party audits required before deployment. California's SB 947 on automated employment decision systems has advanced in committee. Minnesota's SF 4689, which regulates automated decision systems in employment settings, has passed through multiple committees. The federal and state regulatory environment around AI in hiring is consolidating around the same core principle the Mobley ruling is establishing in litigation: AI systems that make or influence consequential decisions about people's economic lives must be auditable, transparent, and demonstrably non-discriminatory.

Conclusion

The class certification in Mobley v. Workday is a watershed moment in AI liability law โ€” the first time a court has allowed a large-scale AI discrimination claim to proceed to the merits stage. Its outcome will shape how AI hiring tools are built, deployed, audited, and legally defended for years. Browse our directory to explore the AI tools that are reshaping hiring, productivity, and knowledge work โ€” and follow the regulatory frameworks that are beginning to govern them.