The Evidence

Lawsuits, research, and your legal rights. The data is clear: AI hiring is broken, and the courts are starting to agree.

85%
AI prefers white-associated names
University of Washington, 2024
0%
Black male names preferred over white male names
University of Washington, 2024
1.1B
Applications processed by Workday
Mobley v. Workday court filings
75%
of resumes never reach a human
Industry research

Landmark Lawsuits

Legal challenges that are reshaping AI hiring accountability

Mobley v. Workday, Inc.
Active - Class Certified
Court: N.D. California | Filed: February 2023 | Case No.: 23-cv-00770

Derek Mobley, a Black man over 40 with anxiety and depression, applied to over 100 jobs through companies using Workday's AI screening tools. He was rejected every time - often within minutes, including one rejection at 1:50 AM, less than an hour after submitting his application. The lawsuit alleges Workday's algorithms discriminate based on race, age, and disability.

Key Developments

  • July 2024: Court ruled Workday can be held liable as an "agent" of employers - setting precedent that AI vendors can face direct discrimination liability
  • May 2025: Case certified as nationwide collective action under ADEA (Age Discrimination in Employment Act)
  • July 2025: Scope expanded to include HiredScore AI features
  • Potential class size: Workday disclosed 1.1 billion applications were processed - the collective could include "hundreds of millions" of members

Why this matters: This is the first major court ruling that AI vendors - not just employers - can be held directly liable for algorithmic discrimination. It opens the door for job seekers to sue the software companies, not just the hiring companies.

EEOC v. iTutorGroup, Inc.
Settled - $365,000
Court: E.D. New York | Filed: May 2022 | Settled: September 2023

The EEOC's first-ever AI discrimination case. iTutorGroup programmed its hiring software to automatically reject female applicants age 55+ and male applicants age 60+. The discrimination was discovered when an applicant submitted two identical applications with different birth dates - only the application with the younger date received an interview invitation.

Settlement Terms

  • $365,000 payment to over 200 rejected applicants
  • Mandatory anti-discrimination policies and training
  • All rejected applicants invited to reapply
  • Company must stop requesting birth dates from applicants

Why this matters: The EEOC proved that "the algorithm did it" is not a defense. Even relatively simple age-based filters violate federal law. This case signals that the EEOC treats algorithmic discrimination as a top enforcement priority.

D.K. v. Intuit / HireVue
Pending Investigation
Agency: EEOC & Colorado Civil Rights Division | Filed: March 2025

D.K., a deaf Indigenous woman, worked for Intuit since 2019 with positive reviews and bonuses. When she applied for a promotion in 2024, Intuit required her to complete an AI video interview through HireVue. She requested human-generated captioning as an accommodation. Intuit denied the request. The automated subtitles failed - portions had no captions at all. The AI gave her low scores and recommended she "practice active listening." She didn't get the promotion.

Allegations

  • HireVue's speech recognition performs worse for deaf applicants and for speakers of non-white English dialects
  • Failure to provide reasonable accommodation under ADA
  • Race discrimination under Title VII (Native American)
  • Violations of Colorado Anti-Discrimination Act

CVS HireVue "Lie Detector" Case
Privately Settled
State: Massachusetts | Settled: July 2024

A job applicant alleged CVS broke Massachusetts law by requiring applicants to take HireVue video interviews that analyzed facial expressions (smiles, smirks) and assigned "employability scores" measuring "conscientiousness and responsibility" and "innate sense of integrity and honor" - essentially functioning as an illegal lie detector test.

Key Issues

  • AI video analysis of facial expressions used to assess "integrity"
  • Violated Massachusetts polygraph/lie detector laws
  • Settlement terms not disclosed

The Research

Peer-reviewed studies documenting AI hiring bias

85.1%
White Names Preferred

AI resume screening tools preferred white-associated names 85.1% of the time, versus just 9% for Black-associated names.

0%
Black Male Names Never Favored

In head-to-head comparisons, Black male-associated names were never preferred over white male-associated names. Zero percent.

99%
Fortune 500 Using AI

An estimated 99% of Fortune 500 companies now use some form of AI-powered screening in their hiring process.

Industry research, 2024
42%
Know It's Biased, Use It Anyway

42% of employers using AI hiring tools admit they're aware of potential bias - but still prioritize efficiency over fairness.

70%
No Human Oversight

Roughly 7 in 10 companies allow AI tools to reject candidates without any human ever reviewing the decision.

Business leader survey, October 2024
11%
Female Names Preferred

Female-associated names received preference in just 11% of AI screening tests, compared to 52% for male names.

Your Legal Rights

Laws that protect you when AI is used in hiring decisions

NYC Local Law 144
In Effect (July 2023)

If you're applying for jobs in New York City, employers must disclose AI screening, publish bias audits, and offer alternatives.

10-Day Notice

Must be told at least 10 business days before AI is used to evaluate you

Bias Audit

Annual third-party audit required; results must be published on company website

Alternative Process

You can request an alternative selection process or accommodations

Penalties

$500 for a first violation and $500-$1,500 for each subsequent violation; each day of continued noncompliance counts as a separate violation

File a complaint: Report violations to NYC Dept. of Consumer and Worker Protection at nyc.gov/dca or call 311.

Illinois AI Video Interview Act
In Effect (2020)

If you take an AI-analyzed video interview for a job based in Illinois, employers must disclose the AI analysis and get your consent first.

Disclosure

Must be told AI is analyzing your interview before it happens

Explanation

Must explain how the AI works and what it's evaluating

Consent

Must get your written consent before using AI

Deletion

Must delete the video within 30 days upon your request

Colorado AI Act
Effective February 2026

Colorado is implementing comprehensive AI regulations requiring "reasonable care" to prevent algorithmic discrimination.

Reasonable Care

Companies must take steps to prevent algorithmic discrimination

Notification

Must be notified when AI makes adverse decisions about you

Explanation

Right to explanation of how AI made the decision

Appeal

Right to human review and appeal process

EU AI Act
Effective August 2026

The EU classifies hiring AI as "high-risk" with strict requirements. Affects EU companies and those hiring in Europe.

Human Oversight

Humans must have authority to intervene and override AI decisions

Transparency

Must inform candidates when AI is making decisions about them

Bias Testing

Mandatory testing for discrimination before deployment

Penalties

Up to €35 million or 7% of global annual revenue, whichever is higher

Take Action

File an EEOC Complaint

If you believe you were discriminated against by AI hiring tools based on race, age, disability, or other protected characteristics.

File Online at EEOC.gov →

Report NYC LL144 Violations

If a NYC employer didn't notify you about AI screening or doesn't have a published bias audit.

Report to DCWP →

Join the Workday Collective Action

If you're 40+ and were rejected through Workday's platform since September 2020, you may be eligible to join.

Check Eligibility →

Share Your Story

Document your experience. Every story adds to the evidence of systemic discrimination.

Share Your Story →

Email Templates

Request human review of your application