
TA Oriented: I keep seeing people say AI is rejecting candidates, and I keep seeing Talent Acquisition say it isn't.
Maybe you regularly audit your AI-assisted ATS, maybe you still put eyeballs on every single applicant, maybe you're crunching the numbers to verify you're operating in line with the law. But I suspect you'd be the outlier.
There are lawsuits in progress claiming AI is rejecting candidates. We know from the EEOC settlement that one vendor's AI was specifically rejecting women over 55 and men over 60. Another was screening out candidates who lived in poorer zip codes. You can't just turn on AI and declare "all done."
Most disturbingly, new data shows recruiters are taking AI-generated filters at face value, which functionally means the AI is making the decisions.
The question isn't "can AI improve work" or "can AI drive efficiencies in the system." The answer to both is assuredly "yes." The questions we need to be asking are "what are the safeguards," "what are audit best practices," and "how do we reduce liability risk."
That's not even getting into the morality of screening out women, people of color, and the working poor, nor the data showing that diversity is itself a success driver.