Photo by Vitaly Gariev on Unsplash

Ninety percent of employers now use automated systems to filter candidates, yet cost-per-hire and time-to-hire have both increased since 2022. That is the central paradox laid out in Harvard Business Review’s January 2026 analysis, “AI Has Made Hiring Worse, But It Can Still Help.” Author Tomas Chamorro-Premuzic, a professor of business psychology at Columbia and UCL, does not argue against using AI in recruiting. He argues that most companies are using it wrong: as a volume play rather than a quality play, automating bad processes instead of fixing them.

The result, in his words, is “an ecosystem where both sides are inundated, sometimes fooled, occasionally impressed, and mostly exhausted, with a rising crisis of trust.”

Related: The AI Hiring Arms Race: When Candidates and Recruiters Both Use AI

The Misdiagnosis: Treating AI as a Cure-All

Most companies adopted AI recruiting tools to solve a specific problem: too many applications, too few recruiters. The logic was straightforward. If AI can screen 500 resumes in seconds, you need fewer people doing that work. If AI can schedule interviews automatically, you eliminate calendar tag. If AI can score candidates against role requirements, you remove human inconsistency.

That logic is not wrong on paper. Where it breaks down is in practice, because companies bolted AI onto fundamentally broken processes and expected magic.

Automating Bias at Scale

Amazon learned this the hard way. The company built a proprietary resume-screening algorithm trained on a decade of historical hiring data. The algorithm learned what Amazon’s past hiring decisions looked like, and since those decisions were overwhelmingly male, the system penalized resumes that included women’s colleges or terms associated with female candidates. Amazon scrapped the tool in 2018, but the lesson still has not landed everywhere.

A 2025 Stanford study testing five major large language models (GPT-3.5, GPT-4o, Gemini 1.5 Flash, Claude 3.5 Sonnet, Llama 3 70B) found that AI hiring tools exhibit complex intersectional biases. Older male candidates received consistently higher ratings than female or younger candidates, even when resumes were generated from identical underlying data. The biases were not simple or uniform; they varied by role, industry, and the specific combination of gender, race, and age.
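The study's core method, comparing scores for resumes that are identical except for demographic signals, is something a recruiting team can approximate on its own tools. Below is a minimal sketch of that kind of paired audit, assuming a hypothetical `score_resume` function standing in for whatever screening model or API you use; the names, fields, and resume text are invented for illustration and are not drawn from the Stanford study itself.

```python
from itertools import product
from statistics import mean

# Hypothetical stand-in for your screening model or LLM API call.
# Replace with a real call; here it returns a fixed score so the sketch runs.
def score_resume(resume_text: str) -> float:
    return 50.0  # placeholder score

BASE_RESUME = """{name}, age {age}
Senior Accountant, 8 years of experience
Led month-end close for a 200-person firm; CPA licensed."""

# Demographic signals to vary while the underlying qualifications stay identical.
NAMES = ["Gregory Walsh", "Latoya Washington", "Mei-Ling Chen"]
AGES = [28, 52]

def audit() -> None:
    scores: dict[tuple[str, int], float] = {}
    for name, age in product(NAMES, AGES):
        resume = BASE_RESUME.format(name=name, age=age)
        scores[(name, age)] = score_resume(resume)

    overall = mean(scores.values())
    for (name, age), s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        # Large score gaps for identical qualifications are the red flag to investigate.
        print(f"{name:<20} age {age}: score {s:.1f} (delta vs. mean {s - overall:+.1f})")

if __name__ == "__main__":
    audit()
```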

HireVue, used by over 700 companies including Goldman Sachs and Unilever for video-based hiring, faced scrutiny when research showed its speech recognition algorithms disadvantaged non-white and deaf applicants. The company dropped its controversial facial analysis feature in 2021, and the EU AI Act now bans emotion recognition in hiring contexts entirely.

The Volume Trap

Here is a more insidious problem that HBR highlights: AI tools designed to handle volume actually create more volume. When auto-apply bots let candidates spray hundreds of applications per week, employers respond with more aggressive AI filtering, which prompts candidates to optimize their resumes even harder for those filters. Chamorro-Premuzic calls this a “noisy, crowded arms race of automation.”

SHRM’s 2025 benchmarking data confirms the result: average cost-per-hire has climbed steadily since 2022, exactly the period when generative AI tools flooded the recruitment market. Sixty percent of organizations saw time-to-hire increase in 2025, according to GoodTime’s hiring data. Only one in nine companies managed to reduce it.

The Trust Collapse in Numbers

The data on candidate trust is bleak. Sixty-six percent of U.S. adults say they would avoid applying to companies that use AI in hiring decisions. Only 8% of job seekers consider AI-driven hiring to be fair, while 70% of hiring managers say they trust AI for hiring decisions. That gap between employer confidence and candidate skepticism is a recruiting risk that few companies are measuring.

On the employer side, 19% of organizations admit their AI tools accidentally screen out qualified candidates. Only 29% maintain full human oversight on all AI-driven rejection decisions. Half use AI exclusively for initial screening rejections, and 21% allow AI to reject candidates at every stage without any human review.

This is not a technology problem. It is a governance problem. The tools work exactly as configured; the configuration just happens to be reckless.

Related: AI Recruiting and the EU AI Act: What's Legal in 2026

Where AI Actually Helps: HBR’s Prescription

Chamorro-Premuzic does not call for abandoning AI in hiring. His argument is more specific: AI works when it reduces noise and enforces consistency, not when it replaces human judgment on questions that require it.

Structured Screening, Not Keyword Matching

The Stanford-USC field experiment cited in HBR found that 20% more candidates advanced through structured, AI-led interviews compared to traditional resume screening. The key distinction is that these AI interviews evaluated candidates against consistent criteria rather than parsing resumes for keyword matches. The AI asked the same questions, in the same order, and scored responses against the same rubric. That consistency is something humans are notoriously bad at maintaining across 50 interviews.
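What "same questions, same order, same rubric" looks like in practice can be shown with a small data structure. This is a hedged sketch of one possible rubric format, not the tooling used in the Stanford-USC experiment; the questions, anchors, and weights are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RubricItem:
    question: str            # asked verbatim, in this order, for every candidate
    anchors: dict[int, str]  # what each score level looks like
    weight: float

# Illustrative rubric; in a real rollout this comes from the hiring team.
RUBRIC = [
    RubricItem(
        question="Describe a time you had to meet a deadline with incomplete information.",
        anchors={1: "No concrete example", 3: "Example with some structure", 5: "Specific example with outcome and reflection"},
        weight=0.6,
    ),
    RubricItem(
        question="Walk through how you would prioritize three conflicting requests.",
        anchors={1: "No prioritization logic", 3: "Reasonable heuristic", 5: "Clear framework with trade-offs"},
        weight=0.4,
    ),
]

def weighted_score(ratings: list[int]) -> float:
    # Every candidate is rated on the same items with the same weights,
    # which is the consistency human interviewers struggle to maintain at scale.
    assert len(ratings) == len(RUBRIC)
    return sum(r * item.weight for r, item in zip(ratings, RUBRIC))

print(weighted_score([4, 3]))  # 3.6
```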

Chipotle achieved a 75% speed increase in high-volume hiring using conversational AI for initial screening. The difference between Chipotle’s approach and the failing model is scope: Chipotle used AI for a specific, well-defined task (initial screening for hourly roles with clear requirements), not as a replacement for the entire evaluation process.

Skills Over Signals

The 2026 NACE Job Outlook survey reports that 70% of employers now use skills-based hiring, up from roughly 50% in 2023. Four in ten respondents are actively moving away from CV-first hiring. The reason is partly philosophical (focus on what people can do, not where they went to school) but also practical: when every resume is AI-polished to perfection, the resume itself stops being a useful signal.

Live behavioral interviews remain the most trusted indicator, cited by 68% of hiring managers. Skills demonstrations and real-time problem solving come second. The companies getting AI right are using it to standardize the assessment of these skills rather than to replace the assessment itself.

Related: Human-in-the-Loop AI: Why Full Automation Fails

The Hybrid Model

HBR’s recommended approach resembles what Chamorro-Premuzic calls the “real estate platform” model. Just as Zillow shortlists properties based on your criteria but a human agent guides the final decision, AI should shortlist candidates based on structured assessments but a human recruiter should make the final call.

This is not a compromise position. It is a recognition that AI and humans are good at fundamentally different things. AI is better at consistency, pattern detection, and processing volume without fatigue. Humans are better at evaluating motivation, reading context, and building the trust that convinces a candidate to accept the offer. Eighty percent of hiring professionals now insist that final decisions must remain human-led.

What Your Recruiting Team Should Do Now

The gap between companies using AI well and companies using AI badly in recruiting is widening. Based on HBR’s analysis and the supporting data, here are four concrete changes that separate the two groups.

Audit your rejection pipeline. If your ATS rejects candidates without human review at any stage, you have a problem. Twenty-one percent of companies let AI reject at all stages. Be explicit about where the human checkpoint sits and document it. The EU AI Act classifies hiring AI as high-risk, and enforcement begins in 2026.

Replace keyword screening with structured assessments. Resume keyword matching was always a proxy for skills evaluation. Now that every resume is AI-optimized, it is not even a useful proxy. Move to structured interviews, work sample tests, or AI-led behavioral assessments that evaluate skills directly. Tools like CodeSignal, Sapia.ai, and iMocha offer skills-first assessment platforms.

Measure what changed. Track cost-per-hire, time-to-hire, quality-of-hire (90-day retention), and candidate NPS before and after each AI tool you deploy. If a tool increases volume but not quality, it is making your pipeline noisier, not better. Ninety percent of companies missed their hiring goals in 2025. Measure whether your AI tools are helping you hit yours; a minimal before-and-after comparison is sketched below these four recommendations.

Be transparent with candidates. The 66% of job seekers who avoid AI-reliant employers are not anti-technology. They are anti-opacity. Tell candidates which parts of the process use AI, what the AI evaluates, and how to contact a human if they believe the assessment was wrong. Only 26% of applicants trust AI to evaluate them fairly. Transparency will not fix that number overnight, but secrecy guarantees it keeps falling.
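To make the "measure what changed" step concrete, here is a minimal sketch of a before/after comparison for a single tool rollout. The metric names follow the article; the numbers, data layout, and function names are invented for illustration, not benchmarks.

```python
from dataclasses import dataclass

@dataclass
class PipelineMetrics:
    cost_per_hire: float       # USD
    time_to_hire_days: float
    retention_90d: float       # percent of hires still employed after 90 days
    candidate_nps: float       # -100 to +100

def compare(before: PipelineMetrics, after: PipelineMetrics) -> None:
    # Lower is better for cost and time; higher is better for retention and NPS.
    rows = [
        ("cost_per_hire", before.cost_per_hire, after.cost_per_hire, "lower"),
        ("time_to_hire_days", before.time_to_hire_days, after.time_to_hire_days, "lower"),
        ("retention_90d", before.retention_90d, after.retention_90d, "higher"),
        ("candidate_nps", before.candidate_nps, after.candidate_nps, "higher"),
    ]
    for name, b, a, better in rows:
        improved = a < b if better == "lower" else a > b
        print(f"{name:<20} {b:>8.1f} -> {a:>8.1f}  {'improved' if improved else 'WORSE'}")

# Invented numbers, measured over comparable quarters before and after rollout.
compare(
    PipelineMetrics(4700, 44, 88.0, 12),
    PipelineMetrics(4950, 47, 86.0, 8),
)
```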

Related: AI Recruiting Tools: How Automation Changes Hiring

Frequently Asked Questions

What does HBR say about AI in hiring?

Harvard Business Review’s January 2026 analysis by Tomas Chamorro-Premuzic argues that AI has made hiring worse by creating a noisy arms race between candidates and employers. However, it also states AI can help when used to enforce consistency and reduce bias in structured assessments, rather than as a replacement for human judgment.

Why has AI made recruiting worse?

AI made recruiting worse because companies automated fundamentally broken processes. Auto-apply bots flood employers with applications, AI screening filters reject qualified candidates (19% of organizations report this), and both cost-per-hire and time-to-hire have increased since 2022 despite record AI adoption. The core problem is using AI as a volume play rather than a quality play.

Do job seekers trust AI hiring tools?

No. Sixty-six percent of U.S. adults say they would avoid applying to companies that use AI for hiring decisions, and only 8% of job seekers consider AI-driven hiring fair. Meanwhile, 70% of hiring managers trust AI for hiring decisions, creating a significant perception gap between employers and candidates.

What is skills-based hiring and why does it matter for AI recruiting?

Skills-based hiring evaluates candidates based on demonstrated abilities rather than resume credentials. Seventy percent of employers now use this approach. It matters because AI-polished resumes have made traditional resume screening unreliable. Structured skills assessments, work samples, and behavioral interviews provide signals that AI resume optimization cannot fake.

How should companies use AI in recruiting according to HBR?

HBR recommends a hybrid model: AI should handle structured screening, enforce consistency in assessments, and shortlist candidates based on skills criteria. Humans should make final decisions, evaluate cultural fit and motivation, and build trust with candidates. The key is using AI for specific, well-defined tasks rather than as a cure-all replacement for the entire hiring process.