Eightfold AI, the hiring platform used by Microsoft, Morgan Stanley, Starbucks, and PayPal, is being sued for secretly building AI-generated dossiers on job applicants. The class action, filed January 20, 2026, in California state court, claims Eightfold collects data from over 1.5 billion profiles, scores candidates on a 0-to-5 scale, and hands those scores to employers without ever telling applicants the assessment exists. No disclosure, no consent, no way to dispute errors.

The legal theory is straightforward: if an AI system pulls external data about a person and sells the resulting profile to employers for hiring decisions, it is a consumer reporting agency under the Fair Credit Reporting Act (FCRA). And consumer reporting agencies have obligations that Eightfold allegedly ignored completely.

This is not a discrimination case. It is a procedural case, which makes it harder for Eightfold to defend and potentially more damaging to the industry.

Related: AI in Recruiting: What Is Actually Legal Under the EU AI Act?

What Eightfold Allegedly Did

The complaint, filed by plaintiffs Erin Kistler and Sruti Bhaumik, paints a specific picture of how Eightfold’s system operates behind the scenes.

The Hidden Profile Machine

Eightfold’s platform scrapes data from LinkedIn, GitHub, Stack Overflow, and other publicly accessible sources. It feeds this data into a proprietary large language model along with whatever information applicants submit through an employer’s job portal. The system then generates a “likelihood of success” score on a 0-to-5 scale for each candidate against each role.
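Stripped to its mechanics, the complaint describes an enrich-then-score pipeline. Here is a minimal sketch of that pattern in Python, assuming only what the filing alleges; every name in it (CandidateProfile, enrich_profile, score_candidate) is hypothetical and illustrative, not Eightfold's actual code:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CandidateProfile:
    """Data the candidate submitted, plus whatever was gathered about them."""
    application_data: dict
    external_data: dict = field(default_factory=dict)

def enrich_profile(profile: CandidateProfile,
                   scrapers: list[Callable[[dict], dict]]) -> CandidateProfile:
    """Merge public-web data (LinkedIn, GitHub, Stack Overflow, ...) into the profile."""
    for scrape in scrapers:
        profile.external_data.update(scrape(profile.application_data))
    return profile

def score_candidate(profile: CandidateProfile, role: str,
                    llm: Callable[[str], str]) -> float:
    """Ask an LLM for a 0-to-5 'likelihood of success' score for one role.

    Under the alleged design, the employer sees this number; the candidate
    never does.
    """
    prompt = f"Rate this candidate from 0 to 5 for the role '{role}': {profile}"
    return min(5.0, max(0.0, float(llm(prompt))))
```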

Employers see these scores. Candidates never do. As plaintiff Kistler put it: “I’ve applied to hundreds of jobs, but it feels like an unseen force is stopping me.”

Why FCRA Matters Here

The FCRA, enacted in 1970, was designed for a simpler world of credit checks and background reports. But its definition of a consumer report is broad: “any communication by a consumer reporting agency” that includes information about a person’s “personal characteristics, or mode of living” used for employment purposes.

That definition does not require a credit score. It covers any third-party data assembly used for hiring decisions. The plaintiffs argue Eightfold’s AI profiles fit squarely within it.

If a court agrees, Eightfold violated multiple FCRA requirements (a code sketch of these obligations follows the list):

  • Disclosure: Employers must tell applicants in writing, in a standalone document, that a consumer report will be used.
  • Authorization: Applicants must consent in writing before the report is pulled.
  • Access: Applicants must be able to see the report and dispute inaccuracies.
  • Pre-adverse action notice: Before rejecting someone based on a report, employers must share the report and a summary of the applicant’s rights.
  • Post-adverse action notice: After a final rejection, employers must notify the applicant again.
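To see just how procedural the alleged failure is, it helps to model those five obligations as gates that must pass before a score reaches an employer. A minimal sketch, with state fields invented for illustration:

```python
from dataclasses import dataclass

class FCRAComplianceError(Exception):
    """A mandatory FCRA step is missing."""

@dataclass
class ScreeningState:
    disclosure_given: bool = False           # standalone written disclosure
    authorization_signed: bool = False       # applicant's written consent
    report_shared: bool = False              # applicant can see and dispute it
    pre_adverse_notice_sent: bool = False    # report + summary of rights
    post_adverse_notice_sent: bool = False   # notice after final rejection

def check_fcra_gates(state: ScreeningState, rejected: bool) -> None:
    """Raise at the first unmet obligation from the list above."""
    if not state.disclosure_given:
        raise FCRAComplianceError("no standalone disclosure")
    if not state.authorization_signed:
        raise FCRAComplianceError("no written authorization")
    if not state.report_shared:
        raise FCRAComplianceError("applicant cannot see or dispute the report")
    if rejected and not state.pre_adverse_notice_sent:
        raise FCRAComplianceError("no pre-adverse action notice")
    if rejected and not state.post_adverse_notice_sent:
        raise FCRAComplianceError("no post-adverse action notice")

# Per the complaint, the alleged workflow fails at the very first gate:
try:
    check_fcra_gates(ScreeningState(), rejected=True)
except FCRAComplianceError as err:
    print(err)  # -> no standalone disclosure
```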

Eightfold’s system allegedly provided none of these protections.

Why This Lawsuit Hits Differently Than Bias Cases

Most AI hiring lawsuits, like the ongoing Workday age discrimination case, try to prove the algorithm produces discriminatory outcomes. That is hard. You need statistical evidence, expert witnesses, and a theory of causation connecting the algorithm to protected-class harm.

The Eightfold FCRA case sidesteps all of that. The plaintiffs do not need to prove bias. They only need to prove three things:

  1. Eightfold assembled information from third-party sources about individuals.
  2. Eightfold provided those profiles to employers for hiring decisions.
  3. Eightfold did not follow FCRA procedures.

If those facts are true, the violation is automatic. No bias analysis required, no disparate impact test, no intent to discriminate. Just a procedural failure that carries statutory damages of $100 to $1,000 per class member, plus potential punitive damages.
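The arithmetic explains why vendors fear this theory. The complaint does not state a class size, so the figure below is purely an assumption for illustration:

```python
# Hypothetical: the complaint alleges no class size; 100,000 is assumed.
# FCRA statutory damages run $100 to $1,000 per class member.
class_size = 100_000
low, high = class_size * 100, class_size * 1_000
print(f"${low:,} to ${high:,}")  # -> $10,000,000 to $100,000,000
```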

Jenny R. Yang, former Chair of the U.S. Equal Employment Opportunity Commission and now a partner at Outten & Golden representing the plaintiffs, framed it bluntly: “Qualified workers across the country are denied job opportunities based on automated assessments they have never seen and cannot challenge.”

Who Is Exposed: It Is Not Just Eightfold

The lawsuit names Eightfold, but the legal theory applies to any AI hiring platform that enriches candidate profiles with external data. If your vendor scrapes LinkedIn profiles, cross-references GitHub activity, or aggregates social media signals to score candidates, the same FCRA argument could apply.

Employers, Not Just Vendors

Here is the part that should worry HR teams: under the FCRA, employers share liability. If an employer uses a consumer report without providing proper disclosures, the employer is on the hook too, regardless of what the vendor told them about compliance.

Fisher Phillips outlined five steps employers should take immediately (a sample audit questionnaire follows the list):

  1. Audit your AI vendor’s data sources. Ask exactly what external data the platform ingests beyond what candidates submit directly.
  2. Check vendor contracts for FCRA certifications. “Simply because the vendor says it does not believe the FCRA applies does not mean that is true.”
  3. Do not assume existing background check compliance covers AI tools. Your HR team may have FCRA procedures for criminal records checks but not for the AI screening platform running in a different department.
  4. Document everything. Federal and state agencies are watching this space, and plaintiffs’ attorneys are looking for test cases.
  5. Assess reputational risk. Even if FCRA technically does not apply, opaque AI tools damage employer branding. 77% of job seekers already worry their resumes are filtered before a human ever sees them.
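One way to operationalize the first few steps is a standing vendor questionnaire that compliance teams fill out and keep on file. A sketch below; the question keys and wording are illustrative, paraphrased from the Fisher Phillips steps, and should be tailored with counsel before use:

```python
# Illustrative questionnaire adapted from the five steps above.
VENDOR_AUDIT_QUESTIONS = {
    "data_sources": "What external data does the platform ingest beyond "
                    "what candidates submit directly (LinkedIn, GitHub, ...)?",
    "fcra_certification": "Does the contract certify FCRA compliance, or only "
                          "state the vendor believes the FCRA does not apply?",
    "coverage_gap": "Do existing background-check FCRA procedures cover this "
                    "AI screening tool, or does it run outside them?",
    "documentation": "Are disclosures, authorizations, and adverse-action "
                     "notices logged for every screened candidate?",
    "transparency": "Can candidates see and dispute the scores or profiles "
                    "generated about them?",
}

def unanswered(answers: dict) -> list[str]:
    """Return audit items that still lack a documented answer."""
    return [key for key in VENDOR_AUDIT_QUESTIONS if not answers.get(key)]

print(unanswered({"data_sources": "Application data plus LinkedIn scraping"}))
# -> ['fcra_certification', 'coverage_gap', 'documentation', 'transparency']
```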

Related: AI Recruiting Tools: How Automation Changes Hiring

The Regulatory Pile-On: FCRA Is Just the Start

The Eightfold lawsuit does not exist in a vacuum. It lands in a regulatory environment where AI hiring tools face pressure from multiple directions simultaneously.

U.S. State Laws

New York City’s Local Law 144 already requires annual bias audits and candidate notification for automated employment decision tools. Colorado mandates “reasonable care” against algorithmic discrimination. Illinois requires applicant notification before AI-driven video analysis.

The CFPB Question Mark

In 2024, the Consumer Financial Protection Bureau issued guidance stating that employment algorithmic scores fall under FCRA protections. That guidance was rescinded in 2025 under the Trump administration. But the underlying statute has not changed, and courts are not bound by agency guidance either way.

EU AI Act: The Global Precedent

For companies operating in Europe, the picture is even stricter. The EU AI Act classifies all AI hiring tools as high-risk under Annex III. By August 2, 2026, any AI system used to filter applications, rank candidates, or evaluate job seekers must comply with mandatory risk assessments, decision logging, human oversight requirements, and candidate notification rules. Penalties reach EUR 15 million or 3% of global revenue.

Unlike the FCRA’s focus on consumer reports, the EU AI Act covers all AI-assisted hiring decisions, whether the data comes from external scraping or from the candidate’s own application. The scope is broader, the requirements are stricter, and the compliance deadline is six months away.
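For engineering teams mapping those duties onto a system, the logging and human-oversight requirements suggest a per-decision audit record. Here is a sketch of what one entry could hold; the field set is one illustrative reading of the Act's obligations, not an official schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HiringDecisionLog:
    """One entry per AI-assisted screening decision (illustrative fields)."""
    candidate_id: str
    role: str
    ai_score: float
    model_version: str
    human_reviewer: str       # human oversight: who reviewed the output
    overridden: bool          # whether the reviewer changed the outcome
    candidate_notified: bool  # candidate was told an AI system was used
    timestamp: str = ""

    def to_json(self) -> str:
        """Serialize with a UTC timestamp for an append-only audit trail."""
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

entry = HiringDecisionLog("c-123", "Data Engineer", 3.7, "model-v42",
                          human_reviewer="recruiter-9", overridden=False,
                          candidate_notified=True)
print(entry.to_json())
```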

Related: EU AI Act 2026: What Companies Need to Do Before August

What Eightfold Says

Eightfold denies the lawsuit’s characterizations. The company stated: “Eightfold does not ‘lurk’ or scrape personal web history” to build dossiers. It contends the platform operates solely on data candidates submit or that employers provide directly.

The company also disputes bias allegations, claiming it masks identifying information and flags potential biases. Eightfold positions itself as a skills-based hiring platform that evaluates candidate potential rather than matching keywords.

But the plaintiffs’ complaint tells a different story, citing Eightfold’s own marketing materials about analyzing “more than 1.5 billion global data points” from publicly available sources. That tension between marketing claims and legal defense will likely be central to the case.

What Happens Next

The case is in early stages. The plaintiffs seek class certification, a declaratory judgment that Eightfold’s practices are unlawful, an injunction halting the AI tool until it complies with FCRA, and compensatory plus punitive damages.

If this reaches the appellate level and courts rule that AI-driven candidate enrichment platforms are consumer reporting agencies, the consequences ripple far beyond Eightfold:

  • Every AI hiring vendor that pulls external data will need standalone FCRA disclosures, written authorization, and dispute resolution mechanisms.
  • Employers will need to audit whether their AI recruiting stack triggers FCRA obligations they currently ignore.
  • Candidates will gain the right to see and challenge the AI-generated profiles that may be silently filtering them out of jobs.

Professor Pauline Kim, an employment law scholar, cautioned that even full FCRA compliance would provide “only limited transparency, likely not enough to ensure the fairness of these systems.” The lawsuit may force procedural compliance, but the deeper question of whether opaque AI hiring is good for anyone remains open.

For now, the practical advice is clear: if your company uses an AI hiring tool, find out exactly what data it collects, where that data comes from, and whether your current FCRA procedures account for it. The cost of an audit is trivial compared to the cost of being the next test case.

Frequently Asked Questions

What is the Eightfold AI lawsuit about?

A January 2026 class action alleges Eightfold AI creates hidden consumer reports on job applicants by scraping data from LinkedIn, GitHub, and other sources to score candidates on a 0-to-5 scale without disclosure, consent, or dispute rights, violating the Fair Credit Reporting Act (FCRA).

Does the FCRA apply to AI hiring tools?

The FCRA defines a consumer report as any communication containing information about personal characteristics used for employment purposes. If an AI hiring tool assembles third-party data about candidates and provides it to employers for hiring decisions, its operator likely qualifies as a consumer reporting agency under the FCRA, regardless of whether it produces a traditional credit report.

Are employers liable if their AI hiring vendor violates the FCRA?

Yes. Under the FCRA, employers who use consumer reports for hiring decisions must provide standalone written disclosures, obtain written authorization, and follow pre-adverse and post-adverse action procedures. Employers share liability if these steps are missing, even if the vendor claimed FCRA did not apply.

How does the EU AI Act affect AI hiring tools compared to the FCRA?

The EU AI Act classifies all AI hiring systems as high-risk under Annex III, requiring risk assessments, decision logging, human oversight, and candidate notification by August 2026. Unlike the FCRA, which focuses on third-party consumer reports, the EU AI Act covers all AI-assisted hiring decisions regardless of data source, with penalties up to EUR 15 million or 3% of global revenue.

What should employers do right now about AI hiring compliance?

Audit your AI vendor’s data sources to understand what external data it ingests. Review vendor contracts for FCRA certifications. Ensure standalone FCRA disclosures are provided if applicable. Do not assume existing background check compliance covers AI screening tools. Document all compliance efforts and consult legal counsel about potential exposure.
