The Latest Warning From AI Hiring Experts
For the last few years, we’ve been promised that AI would revolutionize the sourcing and screening process.
Vendors told us it would eliminate administrative work, match candidates to our jobs, and screen everyone in record time.
On the flip side, job seekers have sought their own advantages.
They’ve started using AI to write the perfect resume, mass apply to jobs, and leverage interview co-pilots to advance through initial stages of an interview.
Now, I truly believe that job seekers and recruiting teams typically have the best intentions. We’re all just looking for an edge!
But here's the reality…no one is benefiting from this AI standoff.
In fact, the efforts of both parties aren't just canceling each other out; we're actually going backwards, and the sourcing and screening process is getting worse.
WHAT’S THE UNDERLYING PROBLEM?
The past few weeks I’ve been down the AI rabbit hole listening to AI experts, lawyers, analysts, and my peers in the talent acquisition space discuss the pros and cons of using AI in hiring.
One particular expert pointed out that a major problem is incubating inside our applicant tracking systems.
He noted that recent data has begun to show that resume content is rapidly converging with job description content.
Put more simply, candidates are using AI to make themselves appear more qualified.
Perfectly qualified, in fact.
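To make the expert's point concrete: if you measure how closely a resume's wording tracks a job posting, an AI-tailored resume scores dramatically higher than an authentic one. The sketch below is illustrative only; the resume and job-description strings are invented, and word-count cosine similarity is a simple stand-in for whatever measure ATS vendors actually use.

```python
import math
import re
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts, using simple word counts."""
    tokenize = lambda s: re.findall(r"[a-z]+", s.lower())
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

job_description = "Seeking a data analyst with SQL, Python, and dashboard experience."

authentic_resume = "Built reports in Excel and learned SQL on the job at a retail company."
ai_tailored_resume = "Data analyst with SQL, Python, and dashboard experience."

# The tailored resume mirrors the posting almost word for word,
# so it scores far higher than the authentic one.
print(round(cosine_similarity(job_description, ai_tailored_resume), 2))
print(round(cosine_similarity(job_description, authentic_resume), 2))
```

When every applicant runs their resume through the same optimization loop, every resume converges toward the posting, and the score stops carrying any information about who is actually qualified.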
But why is this a problem?
What’s wrong with a little AI to help you polish up your resume?
Well, as it turns out, this behavior creates a chain reaction of failures in the sourcing and screening stages of a hiring process. And it’s hurting three key stakeholders in a major way.
I’ll clarify the negative impact to each party before suggesting how to approach it.
Recruiters’ Time Is Being Wasted
Because the candidate pool is starting to look identical, recruiters are spending more time screening people who look perfect on paper. The minute we get them on a basic phone screen, however, they fall apart. They don't actually have the experience the AI claimed they did.
To make matters worse, some candidates are doubling down.
They’re using deepfake technology or interview co-pilots to help them pass the latter stages of an interview for jobs they weren’t qualified for anyway.
The end result is that talent acquisition teams are now rushing to purchase AI fraud detection tools…to identify candidates who are using AI…to beat the AI we bought to screen them in the first place. LOL
It is absolute madness.
Sourcing/Matching Tools Are Becoming Less Effective
In a sea of lookalike resumes, the algorithms built to surface on-target talent in our databases are beginning to lose their effectiveness.
They can no longer differentiate AI-optimized profiles from authentic candidates who actually have the requisite experience.
The tools that were originally designed to speed up the sourcing process are now struggling to find other signals that indicate a qualified match.
Those aren’t just my words…that’s coming directly from the product teams building the software.
Job Seekers Feel Continued Frustration
And at the end of the day, the very people who have always been at a disadvantage in the hiring process (job seekers) have encountered yet another barrier to employment.
Only this time, they’re contributing to their own demise. Trying to game the system is making the system worse for everyone.
And the candidates who are truly qualified? The ones who took the time to write a thoughtful, authentic resume? They are getting buried in a sea of AI lookalikes.
THE END RESULT - TRUST IS ERODING
For years, candidates have complained about black holes and bias in the hiring process. This created a trust gap between candidates and hiring teams. So, technology vendors stepped in to help recruiters surface the best candidates.
But now that candidates are using AI, recruiters don’t trust the matching tools and they’re fearful of hiring someone who isn’t who they say they are.
It’s a bad cycle driven by the erosion of trust.
If we want to fix this, I think TA leaders need to rethink the speed at which they adopt the latest AI tools. At least when it comes to using sourcing and matching tech inside CRMs and ATSs.
At a recent conference, I heard iCIMS’ Chief Legal Officer, Courtney Dutter, deliver some great guidance to TA leaders. She stressed that organizations should categorize their tools on a risk continuum, then start adoption in the lowest-risk categories first.
For example:
Low Risk: AI chatbots, interview scheduling, research.
Medium Risk: AI-assisted sourcing, interview question generation, job description writing.
High Risk: Candidate matching, ranking, selection, assessments, and agents.
Starting in the lowest risk category is an easy hurdle for most organizations. It allows us to experiment with new tools and build business cases while giving our legal teams a few reps on AI contracts and policy writing. It also allows the product developers to work out the kinks.
Incidentally, this is where I’m at today. I’d love to say I’m an early adopter of AI, but the resume-meets-job-description problem is a perfect example of why many AI tools still need time to bake.
Especially the medium and high-risk plays.