For large employers, and for the tech companies that provide hiring platform technology to them, the risk profile around AI-assisted hiring has shifted from future concern to present-day reality. The core point is simple: using an algorithm does not reduce anti-discrimination duties; it often increases the need for validation, monitoring, documentation and vendor oversight.
States and even municipalities have moved from general statements to targeted requirements (e.g., audits, notices, recordkeeping and "high-risk" system duties), creating a patchwork that large, multi-state employers must track. At the same time, preexisting state and federal anti-discrimination laws remain fully in force and apply to AI-assisted employment decisions.
Finally, the most closely watched private litigation in this space, Mobley v. Workday, illustrates the direction of liability: plaintiffs are targeting not only employers, but also the vendors and platforms that significantly influence hiring outcomes.
1. The federal baseline: AI does not change the rules
Even without a single comprehensive federal “AI hiring law,” the existing federal framework already creates meaningful exposure:
- Title VII and disparate impact. The Equal Employment Opportunity Commission (EEOC) has emphasized that employers using software, algorithms or AI as "selection procedures" can face disparate impact liability if outcomes disproportionately exclude protected groups and the employer cannot demonstrate that the procedure is job-related and consistent with business necessity (or if a less discriminatory alternative was available but not adopted).
- Americans with Disabilities Act (ADA) and accommodations. Algorithmic tools can create federal discrimination law risk in at least three recurring ways: (1) “screening out” individuals because of disability-related traits; (2) using tools that effectively conduct disability-related inquiries or medical examinations pre-offer; and (3) failing to provide reasonable accommodations in an AI-driven process (for example, an alternative assessment method or human review).
In the AI context, your defense posture may depend less on whether you “intended” discrimination and more on whether you can demonstrate that your hiring system (including vendor systems) is measurable, monitored and defensible as job-related.
2. The Workday litigation: why vendors and “platform operators” are now in the frame
The Mobley v. Workday case (N.D. Cal.) is widely viewed as a bellwether because it tests whether an HR technology provider can be treated as an "agent" performing delegated hiring functions for employers, and can therefore face liability under federal anti-discrimination statutes. The court has allowed claims to proceed on that agency theory.
Two aspects matter for general counsel at large enterprises:
- Delegation creates potential liability. The more a company operationally relies on a system to reject, rank or route candidates with minimal human intervention, the easier it becomes for plaintiffs to argue the tool is effectively performing a hiring function — and that both the employer and the vendor/platform should be accountable.
- Collective/class posture increases settlement pressure and discovery burdens. The court granted conditional certification of an Age Discrimination in Employment Act (ADEA) collective action, a meaningful escalation because it triggers notice to potential opt-in plaintiffs, broadens the scope of discovery and increases downside exposure, even before the merits are decided.
3. AI-specific state and municipal hiring laws
New York City: Local Law 144 and bias audits
New York City enacted the first AI-specific hiring law in the country. At a high level, employers and employment agencies may not use an "automated employment decision tool" (AEDT) unless (i) the tool has undergone a bias audit within the prior year, (ii) information about the audit is made publicly available, and (iii) candidates (or employees being considered for promotion) receive prescribed notices.
For large employers, the operational challenge is less “can we do an audit” and more “can we do an audit that aligns with how the tool actually works and how we actually use it,” including clarifying the relevant stage (screen, rank, recommend), the data categories available (and missing), and what is being published. NYC’s Department of Consumer and Worker Protection (DCWP) materials and the final rule process reflect attention to impact ratio methodology and disclosure mechanics.
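To make the impact ratio concept concrete, the sketch below computes per-category selection rates and impact ratios from hypothetical counts. It is a simplified illustration only (the group names and numbers are invented, and it ignores details such as small-sample exclusions, intersectional categories and scoring-rate tools), not an implementation of the DCWP audit methodology.

```python
# Hypothetical applicant/selection counts by category -- illustrative only.
counts = {
    # category: (applicants, selected)
    "Group A": (1200, 300),
    "Group B": (800, 140),
    "Group C": (400, 90),
}

# Selection rate = selected / applicants, per category.
selection_rates = {g: sel / apps for g, (apps, sel) in counts.items()}

# Impact ratio = category selection rate / highest category selection rate.
top_rate = max(selection_rates.values())
impact_ratios = {g: rate / top_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {selection_rates[group]:.2%}, impact ratio {ratio:.2f}")
```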
California: FEHA “automated-decision systems” regulations
California’s Civil Rights Council secured final approval for regulations that address discrimination risk from AI/algorithms/automated-decision systems, with an effective date of October 1, 2025. The stated purpose is not to create a brand-new anti-discrimination regime, but to clarify how existing Fair Employment and Housing Act (FEHA) principles apply when hiring, promotion, and other employment decisions are made or influenced by automated systems. In practice, for multi-state employers, California often becomes the “highest common denominator” jurisdiction that drives recordkeeping, vendor diligence and testing cadence.
Illinois: two different regimes to keep distinct
Illinois is notable because it has (at least) two different, easily confused tracks:
- Artificial Intelligence Video Interview Act (AIVIA). This law focuses on notice/consent and related obligations when employers use AI to analyze video interviews.
- Illinois Human Rights Act amendments (HB 3773). These amendments expressly address discriminatory impact from the use of AI (including generative AI) in employment decisions.
For large employers, the important point is governance: ensure the HR tech stack is inventoried so you know which tools are "video interview analysis," which are "resume ranking," which are "assessment scoring," and which are "workflow automation," because different Illinois duties may be triggered depending on functionality.
Colorado: “high-risk AI systems” and algorithmic discrimination duties
Colorado's SB 24-205 establishes obligations for developers and deployers of "high-risk" AI systems used to make or substantially influence "consequential decisions," including employment-related decisions. The statute describes deployer duties, including reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, and it contemplates a compliance framework with documentation and notices. The law was originally slated to take effect in early 2026, but subsequent legislative action has pushed back the effective date.
4. Rather than creating new regimes, some states clarify that existing laws apply
New Jersey: New Jersey’s Division on Civil Rights issued formal guidance making clear that the NJ Law Against Discrimination can reach algorithmic discrimination, including where tools create disparate impact or impede accommodations.
Oregon: Oregon’s Attorney General similarly issued guidance emphasizing that companies using AI still must comply with existing Oregon laws (consumer protection, privacy and other requirements), framing AI as a risk multiplier rather than a separate legal silo.
Even though guidance is not the same as a statute with detailed audits and notices, it is still highly relevant to a company’s risk calculus.
5. Practical compliance programs for large employers in 2026
For in-house counsel, the goal is not to prove AI is fair in the abstract, but to create a defensible system that reduces the likelihood of discriminatory outcomes and improves your litigation/regulatory posture if challenged.
Start with an inventory. Document every tool that screens, ranks, recommends, schedules, scores or routes candidates — including “simple” rules engines and third-party plugins. Then, classify tools by how much they influence outcomes (informational vs. determinative). This classification matters directly in agency-style arguments (as in Mobley) and in state “high-risk” frameworks.
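One way to make that inventory auditable is to record, for each tool, the stage it touches, how determinative its output is and where it is used. The sketch below shows one hypothetical schema for such a record; the field names and categories are illustrative assumptions, not terms drawn from any statute or regulation.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum

class Influence(Enum):
    INFORMATIONAL = "informational"   # output is advisory; humans decide
    DETERMINATIVE = "determinative"   # output substantially drives the decision

@dataclass
class HiringToolRecord:
    """Hypothetical inventory entry for one tool in the HR tech stack."""
    name: str
    vendor: str
    stage: str                      # e.g., "screen", "rank", "recommend", "schedule"
    influence: Influence
    jurisdictions: list[str] = field(default_factory=list)  # where the tool is used
    last_bias_audit: str | None = None                      # date of most recent audit, if any

# Example entry -- illustrative values only.
resume_ranker = HiringToolRecord(
    name="Resume ranking module",
    vendor="Example Vendor",
    stage="rank",
    influence=Influence.DETERMINATIVE,
    jurisdictions=["NYC", "IL", "CA"],
    last_bias_audit="2025-11-01",
)
```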
Build a repeatable testing and monitoring cycle. Treat adverse impact analysis as a recurring control, not a one-time project. The EEOC has explicitly connected AI hiring tools to disparate impact concepts. In practice, employers should (i) test at each selection stage, (ii) document thresholds and escalation triggers and (iii) preserve prior versions of models and configurations to explain changes over time.
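As an illustration of what a recurring control might look like, the sketch below flags any selection stage where a category's rate falls below a chosen fraction of the highest category's rate. The data, group names and the four-fifths screening threshold are assumptions used for illustration; the threshold is a common rule-of-thumb trigger, not a legal standard.

```python
# Hypothetical stage-by-stage counts -- illustrative data only.
# Each entry: group: (candidates considered at this stage, candidates who passed).
stages = {
    "resume_screen": {"Group A": (1000, 500), "Group B": (900, 340)},
    "assessment":    {"Group A": (500, 250),  "Group B": (340, 190)},
}

THRESHOLD = 0.80  # example escalation trigger (four-fifths), set by internal policy

def flag_stages(stages, threshold=THRESHOLD):
    """Return (stage, group, impact_ratio) tuples that fall below the threshold."""
    flags = []
    for stage, groups in stages.items():
        rates = {g: passed / considered for g, (considered, passed) in groups.items()}
        top = max(rates.values())
        for g, rate in rates.items():
            ratio = rate / top
            if ratio < threshold:
                flags.append((stage, g, round(ratio, 2)))
    return flags

print(flag_stages(stages))  # e.g., [('resume_screen', 'Group B', 0.76)]
```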
Plan accommodations into the workflow, not as an afterthought. The ADA risk is often operational: a candidate cannot complete a timed assessment, a video interview tool screens out speech patterns or an automated system rejects nonstandard career paths that correlate with disability. Embed (and document) a reasonable accommodation pathway and an alternative evaluation method.
Allocate risk appropriately in vendor contracts. In addition to security/privacy clauses, vendor terms should address: transparency about the features used; audit cooperation; record retention; responsibility for jurisdiction-specific disclosures (e.g., NYC); and access to validation evidence.
Maintain a litigation-ready record. Plaintiffs will seek: (a) model inputs/features, (b) training/validation documentation, (c) adverse impact analyses, (d) human override practices and (e) vendor communications. The Mobley litigation shows these cases are being litigated with an expectation of deep discovery and potential aggregate exposure.
Summary
In 2026, the question is whether you can demonstrate governance over automated selection procedures that is commensurate with their influence on hiring outcomes. The most defensible posture for a large employer is to treat AI-assisted hiring as a regulated selection system, so that when the inevitable challenge arrives (from an applicant, regulator or class counsel), you can document how the system was designed, validated, monitored and overseen.
Our Artificial Intelligence Industry Team tracks AI issues throughout the country. If you have questions or concerns about AI-related matters, please reach out to attorney Brendan M. Palfreyman at (315) 214-2161 and bpalfreyman@harrisbeachmurtha.com, or the Harris Beach Murtha attorney with whom you most frequently work.
This alert is not a substitute for advice of counsel on specific legal issues.
Harris Beach Murtha’s lawyers and consultants practice from offices throughout Connecticut in Bantam, Hartford, New Haven and Stamford; New York state in Albany, Binghamton, Buffalo, Ithaca, New York City, Niagara Falls, Rochester, Saratoga Springs, Syracuse, Long Island and White Plains, as well as in Boston, Massachusetts, and Newark, New Jersey.