Mobley v. Workday and the end of vendor-only risk
The conditional certification of a nationwide collective in Mobley v. Workday has turned AI hiring bias vendor liability into a board-level risk for every large employer using algorithmic screening. When a federal court held that a single job applicant could pursue claims on behalf of applicants aged 40 and over under the Age Discrimination in Employment Act, it signaled that artificial intelligence in the hiring process is now squarely within mainstream employment law, not a speculative frontier. For CHROs, the message is blunt and immediate.
For years, many employers treated AI hiring tools as a black box owned by vendors, assuming any discrimination or disparate impact would sit with the software provider rather than the company making the final employment decision. The Mobley v. Workday litigation challenges that assumption by arguing that Workday’s screening tools functioned as an “agent” of client companies, which means those companies may share responsibility for discrimination and disparate impact claims under Title VII, the ADEA, and other civil rights statutes. The court did not decide the merits, but it allowed the age discrimination claims to move forward on a collective basis, which is the moment liability becomes real rather than theoretical.
That shift matters because plaintiffs’ lawyers now see a viable path to proceedings that aggregate thousands of job applicants into a single action, dramatically raising potential insurance exposure and settlement pressure. Under U.S. anti-discrimination and civil rights law, employers cannot outsource compliance to artificial intelligence vendors and then argue that biased decision making was someone else’s problem, especially when the employer configures the hiring practices and approves the use of screening tools. In practical terms, AI hiring bias vendor liability now attaches to the enterprise that benefits from the algorithm, not just the company that coded the machine learning model.
Contract terms, operating models, and the new line between sort and reject
Mobley v. Workday arrives just as SHRM reports that most talent acquisition leaders now rank critical thinking above technical skills for recruiters, reframing the recruiter role around oversight of automated tools rather than execution of every hiring task. That shift gives CHROs a narrow window to renegotiate vendor contracts so that AI hiring bias vendor liability is shared in a way that reflects who controls the data, the model, and the hiring process. The alternative is to let plaintiffs’ lawyers and courts define that allocation for you.
Three clauses belong on your renegotiation agenda this quarter if you rely on artificial intelligence for employment screening. First, indemnification must explicitly cover discrimination claims, disparate impact claims, and class action litigation tied to the vendor’s models, with clear caps, carve-outs, and insurance obligations that match your risk appetite and the scale of your applicant pool. Second, audit rights and bias audits should be contractual, including access to model documentation, training data summaries, and disparate impact reporting by protected class, not just glossy fairness marketing.
Third, you need model change notification and versioning language that treats algorithm updates like material changes to any other critical HR system, because silent shifts in machine learning parameters can alter who gets through the hiring funnel overnight. That legal architecture only works if your operating model draws a hard line between automated sort with human review and automated reject with no human in the loop, since the latter concentrates AI hiring bias vendor liability on a single decision point. As SHRM’s State of AI in HR research notes, recruiter roles are evolving into AI oversight functions, which means your people team must be trained to interrogate screening tools, not just click through workflows that have come to feel routine.
Governance moves for CHROs: committee charters and 90-day audits
With AI hiring bias vendor liability now a live issue in court, CHROs need governance that is as concrete as any financial control. Start with your AI governance committee charter and add three clauses that connect directly to employment law and EEOC expectations rather than generic ethics language. One clause should assign clear ownership for bias audits of hiring tools, including responsibility for monitoring disparate impact across age, gender, race, and other protected class categories under Title VII and related civil rights statutes.
A second clause should require pre-deployment and periodic reviews of any automated decision making that can reject job applicants without human intervention, with explicit thresholds for acceptable disparate impact ratios and triggers for pausing a model. A third clause should mandate that all AI-related hiring practices be documented in a way that can be produced in court, including data lineage, screening tool configurations, and rationales for using particular machine learning models in specific employment contexts. Those records will matter if a court is ever asked to decide whether your company “knew or should have known” about potential discrimination risks but continued to use the system anyway.
From there, run a 90-day audit using only internal resources and existing HR analytics capabilities, focusing on one high-volume role where AI tools influence hiring decisions. Pull six to twelve months of data from your applicant tracking system, segment applicants by age bands and other protected characteristics, and calculate selection rates at each stage of the hiring process to identify any statistically significant disparate impact. If you find patterns that raise concern, engage your legal team, revisit your contracts with companies providing AI tools, and adjust your operating model so that no automated system can permanently reject candidates without a documented human review, because in the era of Mobley Workday, the line between vendor algorithm and employer liability is now a litigation roadmap.
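To make the selection-rate arithmetic concrete, here is a minimal sketch of the kind of check an internal analytics team could run, using the EEOC’s four-fifths rule of thumb. All counts below are hypothetical, and the 0.80 ratio is a customary screening heuristic, not a legal threshold.

```python
# Illustrative disparate impact check using the four-fifths rule of thumb.
# All counts are hypothetical examples, not real audit data.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants who advanced past a screening stage."""
    return selected / applicants

def impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Ratio of the focal group's selection rate to the reference group's.
    Values below 0.80 fail the four-fifths rule of thumb and warrant review."""
    return focal_rate / reference_rate

# Hypothetical resume-screen outcomes by age band
under_40 = selection_rate(selected=300, applicants=1000)  # 0.30
over_40 = selection_rate(selected=180, applicants=1000)   # 0.18

ratio = impact_ratio(over_40, under_40)  # 0.60
flagged = ratio < 0.80                   # True: this stage warrants legal review
print(f"impact ratio: {ratio:.2f}, flagged: {flagged}")
```

A ratio this far below 0.80 does not prove discrimination on its own, but it is exactly the kind of stage-level signal that should trigger the escalation path to legal counsel described above.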
Key quantitative signals on AI hiring and liability
- SHRM’s State of AI in HR research reports that 73 percent of talent acquisition leaders now rank critical thinking as the top skill for recruiters, while AI technical skills have dropped to fifth place, underscoring the shift from execution to oversight.
- In the Mobley v. Workday litigation, the federal court granted conditional certification of a nationwide collective of applicants aged 40 and over under the Age Discrimination in Employment Act, opening the door to large-scale age discrimination claims tied to AI screening tools.
- Internal audits at several large employers have shown that even small changes in machine learning model parameters can alter interview selection rates for protected classes by more than 10 percentage points within a single hiring cycle.
Questions leaders are asking about AI hiring bias vendor liability
How does Mobley v. Workday change my legal exposure when using AI in hiring?
The conditional certification of the collective in Mobley v. Workday signals that courts are willing to treat AI vendors as potential agents of employers, which means your organisation can face discrimination claims even when a third party built and operated the screening tools. If your hiring process relies on automated decision making to advance or reject applicants, you should assume that both your company and the vendor may be named in any action alleging disparate impact or intentional discrimination. That reality makes it essential to align your contracts, governance, and internal audits with employment law standards rather than assuming the vendor will absorb all liability.
What contract clauses should CHROs prioritise when renegotiating AI hiring tool agreements?
CHROs should focus on indemnification language that explicitly covers discrimination claims, disparate impact claims, and regulatory investigations related to the vendor’s artificial intelligence models. Audit rights are equally important, including the ability to review bias audits, disparate impact analyses, and documentation of machine learning changes that could affect protected class outcomes. Finally, model change notification clauses should require timely disclosure of any updates that may alter hiring outcomes, so your legal and HR teams can reassess risk before those changes affect job applicants.
What is the practical difference between automated sort and automated reject in AI hiring systems?
Automated sort systems use AI to prioritise or rank applicants but still require a human recruiter or hiring manager to make the final employment decision, which preserves a layer of human judgment and potential correction. Automated reject systems allow artificial intelligence to make final negative decisions without human review, concentrating legal and ethical risk at a single algorithmic step. From a liability perspective, courts and regulators are more likely to scrutinise automated reject models for disparate impact, especially when employers cannot explain or document how the decision making logic aligns with anti-discrimination and civil rights law.
How can we run a meaningful bias audit in 90 days without external consultants?
A 90-day internal bias audit starts with selecting one or two high-volume roles where AI tools influence screening or interview selection, then extracting six to twelve months of applicant data from your HR systems. Your analytics team can calculate selection rates by age, gender, race, and other protected characteristics at each stage of the hiring funnel, then compare those rates to identify any disparate impact that may raise legal concerns. If the audit reveals significant gaps, you can adjust your hiring practices, reconfigure or pause specific screening tools, and work with legal counsel to document remediation steps before plaintiffs or regulators raise formal claims.
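For the "statistically significant" part of that comparison, a standard-library sketch of a stage-by-stage two-proportion z-test is enough to get started; the funnel counts below are hypothetical, and the 0.05 cutoff is a conventional statistical choice, not a legal standard.

```python
# Hypothetical stage-level audit: compare selection rates for two age bands
# at each hiring-funnel stage and flag statistically significant gaps
# with a two-proportion z-test. Counts are illustrative, not real data.
from math import erf, sqrt

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Z statistic for the difference between two selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def p_value(z: float) -> float:
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# stage -> (selected_under_40, total_under_40, selected_over_40, total_over_40)
funnel = {
    "resume screen": (400, 1000, 250, 800),
    "phone screen": (200, 400, 120, 250),
    "onsite": (80, 200, 45, 120),
}

for stage, (sa, na, sb, nb) in funnel.items():
    p = p_value(two_proportion_z(sa, na, sb, nb))
    flag = "REVIEW" if p < 0.05 else "ok"
    print(f"{stage}: under-40 {sa/na:.0%} vs over-40 {sb/nb:.0%} (p={p:.3f}) {flag}")
```

A flagged stage is a prompt for legal and HR review, not a verdict; sample sizes, stage definitions, and multiple-comparison effects all need counsel-guided interpretation before any conclusion is drawn.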
What should an AI governance committee oversee specifically in the context of hiring ?
An effective AI governance committee should own policy and oversight for all artificial intelligence used in employment decisions, including pre-hire assessments, résumé screening, and interview scheduling tools. Its remit should cover approval of new AI systems, review of bias audits and disparate impact analyses, and coordination with legal teams on compliance with Title VII, the Age Discrimination in Employment Act, and EEOC guidance. By centralising accountability for AI hiring bias vendor liability, the committee helps ensure that technical decisions about machine learning models are aligned with corporate risk tolerance, civil rights obligations, and the organisation’s broader future of work strategy.
Trusted sources for further reading: SHRM State of AI in HR, SHRM Executive Network briefings for CHROs, and publicly available court filings in Mobley v. Workday.