Over the past three years, businesses in the United States have rapidly adopted artificial intelligence (“AI”) technology – defined broadly as the ability of machines to perform tasks that typically require human intelligence. In particular, many companies now perform critical business functions using machine learning technology, a subset of AI in which computers use algorithms and statistical models to analyze and draw conclusions from data. Much like with humans, the conclusions drawn by these AI tools are susceptible to bias. Bias in an AI model can arise from the data used to train the model or from the design of the model itself. Data bias occurs when the AI model is trained on a skewed data set and then replicates that skew in its conclusions. Algorithmic bias occurs when the design of the model itself produces skewed results, for example when the model weights terms or features that are more likely to be used by certain groups. Either can lead the AI model to produce biased outcomes.
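To make the mechanics of data bias concrete, the hypothetical Python sketch below uses synthetic data and is not drawn from any system discussed in this article. It shows how a screening model trained on skewed historical decisions reproduces that skew through a facially neutral proxy feature, even though the protected characteristic itself never appears in the data the model sees.

```python
# Hypothetical illustration of data bias: a toy screening "model" trained on
# skewed historical hiring decisions reproduces the skew through a facially
# neutral proxy feature (e.g., a resume keyword correlated with group membership).
from collections import defaultdict

# Synthetic history: (proxy_feature, hired). The hire rate differs sharply by
# feature value because the underlying historical decisions were biased.
history = [
    ("keyword_present", 1), ("keyword_present", 1), ("keyword_present", 1), ("keyword_present", 0),
    ("keyword_absent", 1), ("keyword_absent", 0), ("keyword_absent", 0), ("keyword_absent", 0),
]

# "Training": learn the historical hire rate for each feature value.
outcomes = defaultdict(list)
for feature, hired in history:
    outcomes[feature].append(hired)
model = {feature: sum(results) / len(results) for feature, results in outcomes.items()}

# "Inference": recommendations simply carry the historical skew forward.
for feature, score in model.items():
    print(f"Applicants with {feature}: recommendation score {score:.2f}")
# Output: keyword_present -> 0.75, keyword_absent -> 0.25, mirroring the biased history.
```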
Recently, a growing wave of litigation in the United States has taken aim at AI bias, alleging that AI systems used by companies in fields such as employment, housing, and insurance exhibit biases against individuals based on age, race, and other protected classes. In addition, several states have started to enact legislation that addresses AI bias – see the AI practice area note below. The litigation has forced courts to grapple with the question whether machines can be responsible for discrimination. Thus far, courts have reached the conclusion that companies using AI and vendors supplying AI can be held liable for discriminatory conduct when decisions made by their AI systems disparately impact a protected class of people. As a result of these decisions, litigation over AI bias is likely to grow significantly in the future.
Disparate Impact Claims
Lawsuits premised on AI bias have been successful in stating claims for discrimination based on disparate impact. Disparate impact claims are a tool often used to fight discrimination under federal law. These claims help root out hidden discrimination by prohibiting facially neutral policies that disproportionately impact people belonging to a protected class, which allows “plaintiffs to counteract unconscious prejudices and disguised animus that escape easy classification as disparate treatment.” Texas Department of Housing & Community Affairs v. Inclusive Communities Project, Inc., 576 U.S. 519, 540 (2015).
Disparate impact claims may be brought under a number of federal anti-discrimination statutes, including the Civil Rights Act of 1964, the Americans with Disabilities Act (“ADA”), the Age Discrimination in Employment Act (“ADEA”), and the Fair Housing Act (“FHA”). To plead a prima facie case of disparate impact, a plaintiff must (1) show a significant disparate impact on a protected class or group; (2) identify the specific practices or selection criteria at issue; and (3) show a causal relationship between the challenged practices or criteria and the disparate impact. Bolden-Hardge v. Office of Cal. State Controller, 63 F.4th 1215, 1227 (9th Cir. 2023). Over the past three years, disparate impact claims have been brought against a number of companies and their AI vendors, alleging that the AI software built by the vendors and deployed by the companies is biased in a manner that disparately impacts a protected class of people.
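In practice, the first element is often supported by comparing selection rates across groups. The short Python sketch below is a simplified illustration using invented numbers; the 80% benchmark reflects the EEOC’s “four-fifths” guideline for adverse impact, which is evidentiary shorthand rather than a legal threshold for liability, and courts also consider other statistical measures.

```python
# Hypothetical illustration of how a "significant disparate impact" is often
# quantified: compare group selection rates and compute an impact ratio.
# All figures are invented; the 0.8 benchmark tracks the EEOC's four-fifths guideline.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

rate_group_a = selection_rate(selected=60, applicants=100)  # 0.60
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

impact_ratio = rate_group_b / rate_group_a  # 0.50
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths benchmark: potential evidence of disparate impact.")
```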
AI Bias in Hiring: Mobley v. Workday
Thus far, AI bias has been most prominently litigated in cases that involve the use of AI to make hiring decisions. Academic studies have convincingly demonstrated AI bias in certain hiring applications. For example, a University of Washington study provided three AI models with job applications that were identical in all respects other than the name of the applicant. It found that the AI models preferred resumes with white-associated names in 85% of cases and those with Black-associated names in only 9% of cases, and that the models also exhibited preferences for male names over female names. Kyra Wilson & Aylin Caliskan, “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval,” U. Wash. Information School (2024).
Claims alleging discrimination by AI hiring models have already been filed in several jurisdictions. Most significantly, in May of this year the United States District Court for the Northern District of California took the precedent-setting step of certifying a collective action in an AI bias case, Mobley v. Workday, Inc., 2025 WL 1424347 (N.D. Cal. May 16, 2025). Workday provides businesses with a platform to screen job applicants. Workday’s platform uses AI to compare the skills required in the employer’s job posting with applicants’ resumes and applications, and then determines the extent to which the applicant’s skills match the role for which he or she applied. Workday’s AI uses these conclusions to provide recommendations to the employer about which candidates would be the best fit for the available position.
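The filings describe this screening function only at a general level. The Python sketch below is purely hypothetical and is not Workday’s algorithm; it simply illustrates the basic concept of scoring an application by the overlap between the skills listed in a job posting and those extracted from a resume.

```python
# Hypothetical skills-matching score (NOT Workday's actual system): rank an
# application by the overlap between posting skills and resume skills,
# using Jaccard similarity over simple keyword sets.

def skill_match_score(posting_skills: set[str], resume_skills: set[str]) -> float:
    """Jaccard similarity between required and demonstrated skill sets."""
    if not posting_skills or not resume_skills:
        return 0.0
    overlap = posting_skills & resume_skills
    union = posting_skills | resume_skills
    return len(overlap) / len(union)

posting = {"python", "sql", "data analysis", "reporting"}
resume = {"python", "reporting", "project management"}

score = skill_match_score(posting, resume)
print(f"Match score: {score:.2f}")  # a screening tool might recommend candidates above a threshold
```

Any bias in how such a score is constructed (for example, in which keywords are extracted or how they are weighted) would be applied uniformly to every applicant processed by the tool, which is the feature the Mobley court emphasized in treating the algorithm as a unified policy.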
The plaintiffs in Mobley are five individuals over the age of forty who applied for hundreds of jobs using Workday’s system and were rejected in almost every instance without an interview, allegedly because of age discrimination in Workday’s AI recommendation system. After Workday moved to dismiss the claims in 2024, the Court allowed Mobley’s disparate impact claims under the ADEA and ADA to proceed, holding that Workday could be liable as an agent of the employers who used its AI product. Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024). The Court reasoned that because Workday’s AI software participated in the decision-making about which applicants to hire, its biases could be grounds for a cognizable discrimination claim. The Court explained, “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one.” Id. at 807. The Court also recognized the policy implications that would flow from a contrary holding, warning, “Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era.” Id.
In May 2025, the Court certified the litigation as a collective action. Mobley v. Workday, Inc., 2025 WL 1424347 (N.D. Cal. May 16, 2025). In certifying the collective, the Court reiterated that Workday’s AI was an active participant in the hiring process and held that Workday’s AI algorithm constitutes a unified policy applicable to all members of the collective, even though they applied to different positions with different employers and Workday’s AI thus assessed different parameters when deciding which candidates to recommend. Plaintiffs are now in the process of notifying potential members of the collective, and the case is proceeding to the discovery phase.
Other cases in addition to Mobley have also brought claims based on AI bias in hiring. For example, in 2022 the U.S. Equal Employment Opportunity Commission (“EEOC”) brought a lawsuit against iTutorGroup, alleging that iTutorGroup’s AI hiring tool discriminated against job applicants based on age. iTutorGroup answered the complaint, after which the parties agreed to a settlement under which iTutorGroup remedied its AI hiring process and paid out a settlement fund to rejected job applicants. EEOC v. iTutorGroup, Inc., et al., 2023 WL 6998296 (E.D.N.Y. Sept. 8, 2023). This case was the first lawsuit brought as part of a broader AI and Algorithmic Fairness Initiative by the EEOC, which aimed to “guide applicants, employees, employers, and technology vendors in ensuring that these technologies are used fairly, consistent with federal equal employment opportunity laws.” (EEOC, “EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness” (Oct. 28, 2021), available at https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.) However, the EEOC’s AI and Algorithmic Fairness Initiative was subsequently ended by the Trump Administration, after the President issued an executive order mandating that agencies “deprioritize enforcement of all statutes and regulations to the extent they include disparate-impact liability.” Executive Order 14281, 90 Fed. Reg. 17537 (Apr. 28, 2025).
In addition, in March of this year the ACLU of Colorado announced that it had filed a complaint with the EEOC and the Colorado Civil Rights Division against Intuit, Inc. and its AI vendor HireVue. HireVue’s AI tool conducted video interviews of job applicants and automatically provided Intuit with a score and report based on its assessment of the interviewee’s performance. The ACLU brought the complaint on behalf of an Indigenous and deaf job applicant, who was rejected following her AI interview and given feedback that she needed to “practice active listening.” The ACLU alleges that the AI interview tool was inaccessible to deaf applicants and was also likely to perform worse when evaluating non-white applicants, including those who spoke dialects like Native American English that have different speech patterns, word choices, and accents. (ACLU, Complaint of Discrimination (Redacted) Against Hirevue & Intuit (March 19, 2025), available at https://www.aclu.org/documents/complaint-of-discrimination-redacted-against-hirevue-intuit.) The complaint contends that this conduct violated Title VII of the Civil Rights Act, the ADA, and the Colorado Anti-Discrimination Act. In light of the rulings in Mobley, other complaints are likely to follow that similarly allege discrimination by AI in hiring.
Cases Alleging Discrimination Due to AI Bias in Other Areas
Litigation concerning AI bias has not been limited to cases involving the use of AI in hiring. Lawsuits claiming AI bias in industries like insurance and housing have also been found to state claims that survive motions to dismiss. For example, in 2022, plaintiffs filed a putative class action in the Northern District of Illinois alleging that AI software used by State Farm to detect fraudulent insurance claims was biased against Black homeowners in violation of the FHA. Huskey, et al. v. State Farm Fire & Casualty Co., 2023 WL 5848164 (N.D. Ill. Sept. 11, 2023). Specifically, plaintiffs alleged that State Farm utilized a machine-learning algorithm to screen for potentially fraudulent claims, and that the algorithm relied on biometric data, behavioral data, and housing data that each functioned as a proxy for race. This allegedly subjected Black policyholders to additional administrative hurdles and delays in processing their claims, which plaintiffs contend violated the FHA.
State Farm moved to dismiss, and in 2023 the Court allowed the disparate impact claim against State Farm to proceed. The Court found that plaintiffs had plausibly alleged a statistical disparity in how Black policyholders’ claims were handled and that the disparity resulted from bias in State Farm’s AI algorithm, writing, “From Plaintiffs’ allegations describing how machine-learning algorithms—especially antifraud algorithms—are prone to bias, the inference that State Farm’s use of algorithmic decision-making tools has resulted in longer wait times and greater scrutiny for Black policyholders is plausible.” Id. at *9. The case is currently in discovery.
Also in 2022, a putative class of plaintiffs brought a lawsuit against SafeRent Solutions, LLC alleging that its automated tenant-screening services were biased against Black and Hispanic rental applicants in violation of the FHA. Louis, et al. v. SafeRent Solutions, LLC and Metropolitan Management Group LLC, 685 F. Supp. 3d 19 (D. Mass. 2023). SafeRent sold a tenant-screening tool called SafeRent Scores that it promised would “automate human judgement by assigning a value to positive and negative customer application data, credit and public record information.” Plaintiffs alleged that the algorithm behind SafeRent Scores relied on credit scoring data that reflected historic racial disparities, and that it did not consider whether applicants had federally funded housing choice vouchers. The complaint alleged that both of these practices resulted in disproportionately more housing denials for Black and Hispanic applicants.
SafeRent moved to dismiss, but the court found that plaintiffs had plausibly alleged that SafeRent’s algorithm disparately impacted Black and Hispanic applicants. The court also rejected SafeRent’s argument that it could not be liable under the FHA because it did not make final housing decisions. On the contrary, the court held that SafeRent could be held liable because its SafeRent Scores product claimed to “automate human judgment” by making housing recommendations based on an undisclosed algorithm that housing providers could not alter. In 2024, SafeRent agreed to pay more than $2 million to settle the litigation.
A similar case was brought in 2023 against AI vendor PERQ and a number of Illinois-based apartment complex owners who utilized PERQ’s AI system to screen housing applicants. Open Communities v. Harbor Group Management Co., LLC, et al., Case No. 23-CV-14070 (N.D. Ill.). Plaintiffs claimed that PERQ’s “conversational AI leasing agent” issued blanket rejections to rental applicants who used housing choice vouchers, which they alleged had a disparate impact on African-American renters in violation of the FHA. The case was settled shortly after filing, with the defendants agreeing to an outside review of their application systems, anti-bias monitoring, and training on FHA compliance.
Conclusion: The Future of AI Bias Litigation
Given that courts have denied motions to dismiss discrimination claims premised on AI bias, such lawsuits are likely to continue to be filed in the many areas that employ the technology. These could include: applications for employment, housing, education, and loans; fraud detection in insurance and financial services; dynamic pricing in retail, financial services, and other industries; advertising in housing or employment; and other fields in which AI is used in a way that distinguishes between individuals who may be members of a protected class.
Although enforcement of disparate impact liability has been rolled back by the Trump Administration, federal anti-discrimination laws like the ADEA and FHA still provide a private right of action that protects workers and consumers by allowing them to bring their own disparate impact claims. In addition, several states have passed legislation to regulate how AI can be used in employment or insurance decisions. For example, Colorado, Illinois, and New York City have enacted laws that require employers to provide notice when AI systems are used in hiring decisions and, in some cases, require that such systems undergo independent bias audits. And California enacted legislation in 2024 that prohibits health care coverage denials from being made solely by AI without an ultimate human decision-maker. More states are likely to follow and enact legislation to regulate the use of AI in a number of industries. This new legislation could provide new avenues and claims for potential plaintiffs impacted by AI bias.
Regardless of the current state of legislation in their jurisdictions, all companies that rely on AI should carefully assess the potential for bias in their systems. If their use of AI results in a disparate impact to a protected class of people, the companies and their AI vendors could face liability under federal or state law. In light of this growing litigation, companies would be wise to conduct independent AI bias audits where appropriate and ensure appropriate human involvement in decisions that could impact a protected class.
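For companies implementing these safeguards, the hypothetical Python sketch below illustrates one form that human involvement and auditability might take: the AI output is treated as a recommendation only, a human reviewer makes the final determination, and both are logged so that outcomes can later be reviewed for disparities. The structure, names, and threshold are illustrative assumptions, not a compliance standard.

```python
# Hypothetical human-in-the-loop gate: the AI score is only a recommendation,
# a human reviewer makes the final call, and every decision is logged so that
# selection rates can later be audited for disparities across groups.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    applicant_id: str
    ai_score: float
    ai_recommendation: str
    human_decision: str
    reviewed_at: str

audit_log: list[Decision] = []

def decide(applicant_id: str, ai_score: float, human_review) -> Decision:
    """Pair the AI recommendation with a required human determination and log both."""
    ai_recommendation = "advance" if ai_score >= 0.5 else "reject"  # illustrative threshold
    human_decision = human_review(applicant_id, ai_score, ai_recommendation)
    record = Decision(applicant_id, ai_score, ai_recommendation, human_decision,
                      datetime.now(timezone.utc).isoformat())
    audit_log.append(record)  # retained so outcomes can be audited by group later
    return record

def sample_reviewer(applicant_id: str, score: float, recommendation: str) -> str:
    # Placeholder for a real human determination; here the reviewer simply
    # confirms the AI recommendation after reviewing the applicant's file.
    return recommendation

print(decide("A-123", 0.47, sample_reviewer))
```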