Workday and Amazon’s alleged AI employment biases are among myriad ‘oddball results’ that could exacerbate hiring discrimination
Key Takeaways
A recent collective action lawsuit against workplace management software company Workday claims its platform rejected candidates from jobs based on race, age, and disability.
July 5, 2025 · Fortune
By Sasha Rogelberg, Reporter
Recent research has presented stark evidence of AI hiring discrimination. (Getty Images)

Following allegations that workplace management software firm Workday’s AI-assisted platform discriminates against prospective employees, human resources and legal experts are sounding the alarm on AI hiring tools. “If the AI is built in a way that is not attentive to the risks of bias…then it can not only perpetuate those patterns of exclusion, it could actually worsen it,” law professor Pauline Kim told Fortune.
Though AI hiring tools promise to streamline hiring processes for a growing pool of applicants, the technology meant to open doors for a wider array of prospective employees may actually be perpetuating decades-long patterns of discrimination.
AI hiring tools have become ubiquitous: 492 of the Fortune 500 companies used applicant tracking systems to streamline recruitment and hiring in 2024, according to job application platform Jobscan.
While these tools can help employers screen more job candidates and identify relevant experience, human resources and legal experts warn that improper training and implementation of hiring technologies can proliferate biases.
Research offers stark evidence of AI hiring discrimination
The University of Washington Information School published a study last year finding that, in AI-assisted resume screenings across nine occupations using 500 applications, the technology favored white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases.
In some settings, Black male participants were disadvantaged compared with their white male counterparts in up to 100% of cases.

“You kind of just get this positive feedback loop of, we’re training biased models on more and more biased data,” Kyra Wilson, a doctoral student at the University of Washington Information School and the study’s lead author, told Fortune. “We don’t really know kind of where the upper limit of that is yet, of how bad it is going to get before these models just stop working altogether.”

Some workers are claiming to see evidence of this discrimination outside of experimental settings.
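The study’s headline numbers correspond to simple favor rates over paired comparisons: how often a screener ranks one name group’s resume above an otherwise-identical resume from another group. A minimal sketch of that tally, using invented example data rather than the study’s actual records:

```python
from collections import Counter

def favor_rates(comparisons):
    """Tally how often each demographic group's resume wins a
    head-to-head screening comparison.

    comparisons: list of (winning_group, losing_group) pairs, where
    each pair records which of two otherwise-identical resumes the
    screener ranked first.
    """
    wins = Counter(winner for winner, _ in comparisons)
    total = len(comparisons)
    return {group: wins[group] / total for group in wins}

# Hypothetical tallies, not the study's data: the screener preferred
# the white-associated name in 17 of 20 otherwise-identical pairs.
trials = [("white-associated", "Black-associated")] * 17 + \
         [("Black-associated", "white-associated")] * 3
print(favor_rates(trials))
# {'white-associated': 0.85, 'Black-associated': 0.15}
```

An audit like the one described in the study runs this comparison at scale, varying only the name on the resume so that any gap in favor rates is attributable to the name association alone.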
Last month, five plaintiffs, all over the age of 40, claimed in a collective action lawsuit that workplace management software firm Workday has discriminatory job applicant screening technology.
Plaintiff Derek Mobley alleged in an initial lawsuit last year that the company’s algorithms caused him to be rejected from more than 100 jobs over seven years on account of his race, age, and disabilities.
Workday denied the discrimination claims and said in a statement to Fortune that the lawsuit is “without merit.” Last month, the company announced it had received two third-party accreditations for its commitment to using AI “responsibly and transparently.”

“Workday’s AI recruiting tools do not make hiring decisions, and our customers maintain full control and human oversight of their hiring process,” the company said. “Our AI capabilities look only at the qualifications listed in a candidate’s job application and compare them with the qualifications the employer has identified as needed for the job. They are not trained to use—or even identify—protected characteristics like race, age, or disability.”

It’s not just hiring tools with which workers are taking issue.
A letter sent to Amazon executives, including CEO Andy Jassy, on behalf of 200 employees with disabilities claimed the company flouted the Americans with Disabilities Act
Amazon allegedly had employees make decisions on accommodations based on AI processes that don’t abide by ADA standards, The Guardian reported this week.
Amazon told Fortune its AI does not make any final decisions around employee accommodations. “We understand the importance of responsible AI use, and robust guidelines and review processes to ensure we build AI integrations thoughtfully and fairly,” a spokesperson told Fortune in a statement.
How could AI hiring tools be discriminatory?
Just as with any AI application, the technology is only as smart as the information it’s being fed.
Most AI hiring tools work by screening resumes or evaluating interview questions, according to Elaine Pulakos, CEO of talent assessment developer PDRI by Pearson.
They’re trained on a company’s existing model of assessing candidates, meaning that if the models are fed a company’s existing data—such as demographic breakdowns showing a preference for male candidates or Ivy League universities—they are likely to perpetuate hiring biases that can lead to “oddball results,” Pulakos said.

“If you don’t have information assurance around the data that you’re training the AI on, and you’re not checking to make sure that the AI doesn’t go off the rails and start hallucinating, doing weird things along the way, you’re going to get weird stuff going on,” she told Fortune. “It’s just the nature of the beast.”

Much of AI’s bias comes from human bias, and therefore, according to Washington University law professor Pauline Kim, AI hiring discrimination exists as a result of human hiring discrimination, which is still prevalent today.
A landmark 2023 Northwestern University meta-analysis of 90 studies across six countries found persistent and pervasive biases, including that employers called back white applicants on average 36% more than Black applicants and 24% more than Latino applicants with identical resumes
The rapid scaling of AI in the workplace can fan these flames of discrimination, according to Victor Schwartz, associate director of technical product management at remote-work job platform Bold.

“It’s a lot easier to build a fair AI system and then scale it to the equivalent work of 1,000 HR people than it is to train 1,000 HR people to be fair,” Schwartz told Fortune. “Then again, it’s a lot easier to make it very discriminatory than it is to train 1,000 people to be discriminatory.”

“You’re flattening the natural curve that you would get just across a large number of people,” he added. “So there’s an opportunity there.
There’s also a risk.”

How HR and legal experts are combatting AI hiring biases

While employees are protected from workplace discrimination through the Equal Employment Opportunity Commission and Title VII of the Civil Rights Act of 1964, “there aren’t really any formal regulations [on] employment discrimination in AI,” said law professor Kim.
Existing law prohibits both intentional discrimination and disparate impact discrimination, which refers to discrimination that occurs as a result of a neutral-appearing policy, even if it’s not intentional.

“If an employer builds an AI tool and has no intent to discriminate, but it turns out that overwhelmingly the applicants that are screened out of the pool are over the age of 40, that would be something that has a disparate impact on older workers,” Kim said.
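Kim’s hypothetical maps onto a standard arithmetic check: U.S. enforcement agencies have long used the “four-fifths rule” of thumb, under which a group’s selection rate below 80% of the most-favored group’s rate is treated as evidence of possible disparate impact. A minimal sketch with invented screening numbers, not data from any case in this article:

```python
def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's.

    Under the four-fifths guideline, a ratio below 0.8 is treated
    as evidence of possible disparate impact.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical: an AI screen advances 12 of 100 applicants over 40,
# but 30 of 100 applicants under 40.
ratio = impact_ratio(12, 100, 30, 100)
print(round(ratio, 2))  # 0.4
print(ratio < 0.8)      # True: flags possible disparate impact
```

This is the same comparison a bias audit performs, which is why a tool can fail one even when, as Workday says of its products, no protected characteristic is ever an explicit input: the disparity shows up in the outcomes, not the features.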
Though disparate impact theory is well established in the law, Kim said, President Donald Trump has made his hostility toward this form of discrimination claim clear, seeking to eliminate it through an executive order in April.

“What it means is agencies like the EEOC will not be pursuing or trying to pursue cases that would involve disparate impact, or trying to understand how these technologies might be having a disparate impact,” Kim said. “They are really pulling back from that effort to understand and to try to educate employers [on] these risks.”

The White House did not immediately respond to Fortune’s request for comment.
With little indication of federal-level efforts to address AI employment discrimination, politicians at the local level have attempted to address the technology’s potential for prejudice, including a New York City ordinance banning employers and agencies from using “automated employment decision tools” unless the tool has passed a bias audit within a year of its use.
Melanie Ronen, an employment lawyer and partner at Stradley Ronon Stevens & Young, LLP, told Fortune that other state and local laws have focused on increasing transparency about when AI is being used in the hiring process, “including the opportunity [for prospective employees] to opt out of the use of AI in certain circumstances.”

The firms behind AI hiring and workplace assessments, such as PDRI and Bold, have said they’ve taken it upon themselves to mitigate bias in the technology, with PDRI CEO Pulakos advocating for human raters to evaluate AI tools ahead of their implementation.
Bold technical product management director Schwartz argued that while guardrails, audits, and transparency should be key to ensuring AI conducts fair hiring practices, the technology also has the potential to diversify a company’s workforce if applied appropriately.
He cited research indicating that women tend to apply to fewer jobs than men, doing so only when they meet all qualifications.
If AI on the job candidate’s side can streamline the application process, it could remove hurdles for those less likely to apply to certain positions. “By removing that barrier to entry with these auto-apply tools, or expert-apply tools, we’re able to kind of level the playing field a little bit,” Schwartz said.