A mid-sized staffing agency in Manhattan paid $18,000 in penalties last month for running a single AI resume screener without an independent bias audit. The tool was screening candidates for a remote role that happened to be connected to a New York City office. That connection was enough: NYC Local Law 144 applies to any employer hiring for positions based in the city, regardless of where candidates sit.
For nearly three years, Local Law 144 was widely treated as a suggestion. The Department of Consumer and Worker Protection (DCWP) lacked the staff and processes to enforce it. That changed in December 2025, when the New York State Comptroller published an audit calling DCWP’s enforcement “ineffective.” Since January 2026, the agency has been running proactive investigations, and the fines are finally real.
What Local Law 144 Actually Requires
Local Law 144 (also called the AEDT law) covers any Automated Employment Decision Tool used to screen candidates or make promotion decisions for positions in New York City. The law has three core requirements, and most violations stem from the second one.
The Annual Bias Audit
Every AEDT must undergo an independent bias audit conducted within the previous 12 months. The audit must test for disparate impact across race/ethnicity and sex categories. Results must be published on the employer’s website. “Independent” means the auditor cannot be an employee of the company or the AEDT vendor. Firms like Holistic AI, DCI Consulting, and BLDS specialize in these audits, which typically cost between $5,000 and $30,000 depending on complexity.
The catch that trips up most employers: the audit must cover the tool as deployed, not just the vendor’s generic model. If you customize scoring weights or add your own screening criteria, the audit must reflect your configuration.
Notice and Disclosure
Employers must notify candidates at least 10 business days before the AEDT is used. The notice must explain what data the tool collects and how it is used. Candidates must have the option to request an alternative selection process. Each day a tool runs without proper notice counts as a separate violation.
Who Is Covered
The law applies to employers, employment agencies, and staffing firms hiring for NYC-based positions. Remote roles count if the position is tied to a New York City office. This is the detail that caught the Manhattan staffing agency: their candidate screening happened entirely online, but the role reported to a Manhattan team.
The Comptroller’s Audit That Broke the Dam
Before December 2025, DCWP’s enforcement was complaint-driven. The Comptroller’s audit found that system was broken at every level.
75% of Complaints Were Lost
The Comptroller’s office made test calls to NYC 311, the city’s general hotline, reporting AEDT concerns. Three out of four calls were misrouted and never reached DCWP; the complaints simply vanished.
DCWP Found 1 Violation; Auditors Found 17
DCWP surveyed 32 companies about their AEDT usage. The agency found just one case of non-compliance. The Comptroller’s auditors reviewed the same 32 companies and identified at least 17 potential violations. The gap is staggering: DCWP’s compliance reviews were so superficial that they missed over 90% of likely violations.
What Changed
DCWP has committed to overhauling its processes: better complaint routing, cross-trained investigators, and proactive sweeps instead of waiting for complaints. DLA Piper’s analysis warns employers to “expect a new phase of stringent enforcement, potentially including more frequent investigations and higher civil penalties.” The penalties are $500 for the first violation and up to $1,500 for each subsequent violation, and each day an unaudited tool remains in use counts as a separate violation. A tool running unaudited for 30 days could accumulate nearly $45,000 in fines.
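At the maximum rate, the daily-violation math compounds quickly. A minimal sketch of the accumulation (the dollar figures come from the statute as described above; the function name is ours):

```python
# How Local Law 144 audit penalties accumulate at the maximum rate:
# $500 for the first violation, up to $1,500 for each subsequent one,
# with each day an unaudited tool runs counting as a separate violation.

def max_audit_penalty(days_in_use: int) -> int:
    """Maximum fine for running an unaudited AEDT for `days_in_use` days."""
    if days_in_use <= 0:
        return 0
    first_violation = 500
    subsequent = 1_500 * (days_in_use - 1)
    return first_violation + subsequent

print(max_audit_penalty(30))  # 500 + 29 * 1,500 = 44000
```

Thirty days of unaudited use tops out at $44,000 from the audit violation alone; missing candidate notices stack on top of that as separate daily violations.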
NYC Is Not Alone: The Patchwork Is Growing
Local Law 144 was the first AI hiring law in the U.S., but it is no longer the only one. Two major state laws take effect in 2026, and the EU AI Act classifies hiring AI as high-risk.
Illinois HB 3773 (Effective January 1, 2026)
Illinois amended its Human Rights Act to ban AI-driven discrimination in hiring, promotion, and termination decisions. Unlike NYC’s law, Illinois does not require a formal bias audit, but it does ban using zip codes as a proxy for protected classes and requires employers to notify employees when AI is used. The Illinois Department of Human Rights (IDHR) enforces violations as civil rights complaints, not just administrative penalties.
Colorado SB 25B-004 (Effective June 30, 2026)
Colorado’s law is the most comprehensive state-level AI regulation. It covers “high-risk” AI systems making consequential employment decisions and requires impact assessments, ongoing monitoring, and disclosure to affected individuals. The law was delayed from its original date but is now on track for mid-2026.
EU AI Act: Hiring AI Is Automatically High-Risk
Under the EU AI Act, AI systems used for recruitment, candidate screening, or promotion decisions are classified as high-risk under Annex III. Obligations include risk management systems, data governance requirements, human oversight, and full technical documentation. Fines reach up to EUR 35 million or 7% of global turnover. Any company with operations in the EU that uses AI hiring tools must comply, regardless of where the AI vendor is based.
For multinational HR teams, the practical challenge is that these four regulatory regimes overlap but do not align. NYC requires a specific bias audit format. Illinois prohibits proxy discrimination without mandating audits. Colorado demands impact assessments. The EU requires comprehensive risk management. A single AI recruiting tool used across all four jurisdictions must satisfy all four sets of requirements.
What HR Teams Should Do Right Now
The enforcement shift is real, and waiting is the most expensive strategy. Here is what to prioritize.
1. Inventory Every AI Tool That Touches Hiring
Most HR departments use more AEDTs than they realize. Resume parsers, video interview analyzers, chatbot screeners, skills assessment platforms, and even some applicant tracking systems qualify. If the tool substantially assists or replaces discretionary decision-making, it is likely an AEDT under Local Law 144. Map every tool, document its function, and identify which jurisdictions it operates in.
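One lightweight way to run this inventory is a structured record per tool. Everything here is illustrative (the class, field names, and example tools are ours, not from any statute or vendor), but it captures the two questions that matter: does the tool substantially assist decisions, and where does it operate?

```python
# Hypothetical inventory record for step 1; names and fields are
# illustrative, not drawn from Local Law 144 itself.
from dataclasses import dataclass, field

@dataclass
class HiringTool:
    name: str
    function: str                # what the tool does in the hiring pipeline
    substantially_assists: bool  # does it assist or replace discretionary decisions?
    jurisdictions: list[str] = field(default_factory=list)

    @property
    def likely_aedt(self) -> bool:
        # Under Local Law 144, a tool that substantially assists or replaces
        # discretionary decision-making is likely an AEDT.
        return self.substantially_assists

tools = [
    HiringTool("resume parser", "screens applications", True, ["NYC", "IL"]),
    HiringTool("payroll system", "processes pay", False, ["NYC"]),
]
print([t.name for t in tools if t.likely_aedt])  # ['resume parser']
```

Filtering on the AEDT flag gives the shortlist of tools that need audits, notices, and jurisdiction-by-jurisdiction review.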
2. Commission an Independent Bias Audit
Choose an auditor that is independent from both your company and the AEDT vendor. The audit must use your actual deployment configuration, not the vendor’s default settings. Budget $5,000-$30,000 depending on tool complexity. Schedule the audit annually; set calendar reminders because a lapsed audit exposes you to daily penalties.
3. Post Results and Notify Candidates
Publish bias audit results on your careers site. Add AEDT disclosure to your application process, giving candidates at least 10 business days’ notice. Offer an alternative evaluation path for candidates who request one. Document everything: if DCWP investigates, your paper trail is your defense.
4. Build a Cross-Jurisdictional Compliance Framework
If you hire in NYC, Illinois, Colorado, or the EU, build a single compliance framework that satisfies all applicable laws. Holistic AI’s comparison analysis shows that EU AI Act compliance typically covers most requirements of the other laws, so starting with the strictest standard and adapting downward is more efficient than building separate compliance programs.
Frequently Asked Questions
What is NYC Local Law 144 and who does it apply to?
NYC Local Law 144 regulates Automated Employment Decision Tools (AEDTs) used in hiring or promotion for positions based in New York City. It applies to employers, employment agencies, and staffing firms, including those screening for remote roles tied to a NYC office. The law requires an annual independent bias audit, public disclosure of audit results, and candidate notification at least 10 business days before the tool is used.
What are the penalties for violating NYC’s AI hiring law?
Penalties start at $500 for the first violation and range from $500 to $1,500 for each subsequent violation. Each day an unaudited AEDT remains in use counts as a separate violation, so fines accumulate rapidly. A tool running unaudited for 30 days could accumulate nearly $45,000 in penalties ($500 for the first day plus 29 subsequent daily violations at up to $1,500 each).
How does NYC’s AI hiring law compare to the EU AI Act?
Both classify AI hiring tools as requiring special oversight. NYC requires an annual bias audit, candidate notice, and public disclosure. The EU AI Act classifies hiring AI as high-risk, requiring risk management systems, technical documentation, human oversight, and data governance, with fines up to EUR 35 million or 7% of global turnover. The EU AI Act is broader in scope, but Local Law 144 is more prescriptive about the specific audit format.
What changed with NYC AI hiring law enforcement in 2026?
A December 2025 audit by the New York State Comptroller found that DCWP’s enforcement was ineffective, with 75% of complaints being misrouted and the agency missing over 90% of likely violations. Since January 2026, DCWP has shifted to proactive investigations, better complaint handling, and cross-trained investigators, resulting in the first real fines being issued.
Which other U.S. states have AI hiring laws taking effect in 2026?
Illinois HB 3773 took effect January 1, 2026, banning AI-driven employment discrimination and requiring employer disclosure when AI is used. Colorado SB 25B-004 takes effect June 30, 2026, requiring impact assessments and ongoing monitoring for high-risk AI employment systems. Together with NYC Local Law 144 and the EU AI Act, these laws create a complex compliance patchwork for multinational employers.
