September 18, 2025
By T.J. Pyzyk, Capstone TMT Analyst
Capstone believes that AI-based human capital management (HCM) platforms face growing liability risks under US employment discrimination laws and a patchwork of regulations, including state AI employment laws and the EU AI Act. These changes will increase litigation risk for HCM platforms and their customers, add to compliance costs, and could necessitate architectural overhauls of AI-driven HCM platforms.
- Human capital management (HCM) platforms using AI for applicant screening and hiring decisions face a fragmented regulatory landscape as states and the European Union (EU) work to address AI in hiring, with approaches ranging from requiring disclosures to regulating model training and data sets. The role and liability of AI-based platforms in hiring decisions remains an open question in federal courts, with Mobley v. Workday, Inc. serving as a likely bellwether.
- We anticipate increased litigation against AI HR tools under existing federal hiring discrimination laws, including settlements in cases like Mobley v. Workday, Inc., but find comprehensive federal AI regulation unlikely in the short- to medium-term. As a result, we expect states to continue to pass AI-in-hiring legislation. We also believe the EU will move forward with the implementation of the EU AI Act, classifying AI used in hiring as “high risk.”
- Capstone believes that AI-driven HR solutions face heightened risk in the coming years as vendors navigate potential class-action lawsuits, mandatory algorithmic redesigns to meet varying state requirements, and significant compliance costs for EU market access. These pressures could eliminate AI hiring tools as a growth driver for HCM platforms and create a preference for vendors with traditional screening capabilities or hybrid models that minimize automated decision-making.
Federal Hiring Discrimination Protections
We anticipate that eventual settlements in high-profile AI HR discrimination cases, such as Mobley v. Workday, Inc. (Case No. 3:23-cv-00770-RFL), will spark a stream of litigation against AI-enabled HCM platforms. Algorithms have the potential to introduce bias to hiring processes and run afoul of existing federal employment protections under Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA, see Exhibit 1).
In Mobley v. Workday, Inc., the plaintiff originally alleged that Workday’s AI-driven applicant tracking system (ATS) violated federal antidiscrimination laws by screening him out on the basis of race, age, and disability. In July 2024, Judge Rita Lin of the US District Court for the Northern District of California accepted a novel theory that Workday acted as an “agent” of employers and could therefore be held liable for employment discrimination. In May 2025, the court conditionally certified a collective action on the ADEA claims, covering all individuals aged 40 and over who applied for jobs through Workday and were denied employment recommendations, a group potentially containing tens of millions.
Exhibit 1: Federal Hiring Discrimination Regulations
Name | Year | Employment Discrimination Protection |
---|---|---|
Title VII of the Civil Rights Act of 1964 | 1964 | Race, color, religion, sex, and national origin |
Americans with Disabilities Act | 1990 | Disability |
Age Discrimination in Employment Act | 1967 | 40 years of age or older |
Genetic Information Nondiscrimination Act | 2008 | Genetic information, including family medical history and genetic test results |
Immigration and Nationality Act | 1952 | Citizenship or national origin |
Source: Federal Register
Federal AI Regulation
While comprehensive federal AI regulation could clarify liability and restrictions around AI in HR applications, Capstone does not expect Congress to pass AI regulation in the near- or medium-term. Congress has repeatedly failed to coalesce around AI and adjacent issues, even within parties: Senate Republicans were unable to reach a consensus on a state AI regulation moratorium in July 2025, and Congress failed to advance the bipartisan American Data Privacy and Protection Act (ADPPA) out of committee in 2022.
Comprehensive AI regulation would need to be even more expansive than the ADPPA. That would lower the likelihood that Congress could agree and move a bill forward, effectively kicking regulation to the states.
State Regulatory Framework
We anticipate states will continue to advance AI HR legislation, including bills that require AI disclosures during the interview process and restrict the bases for model training. Illinois, Colorado, Maryland, and New York City have already passed laws, and several other states, including Massachusetts and New Jersey, have proposed similar legislation (see Exhibit 2). Disclosure laws require employers to notify job candidates when algorithmic decision-making tools are being used in their evaluation, creating additional compliance overhead for platforms and introducing friction into candidate intake. Algorithm-focused legislation mandates that models not be trained on biased material that could lead to discriminatory outcomes.
Legislation targeting algorithms poses a more fundamental challenge for companies, as it constrains the architecture and training processes of AI models. These restrictions, such as mandatory bias thresholds, required feature exclusions, or specific fairness metrics, often force trade-offs between model accuracy and compliance and can require reworking or retraining AI systems and models.
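To illustrate what a "bias threshold" can look like in practice, the sketch below computes the EEOC's four-fifths rule, a long-standing adverse impact test that flags when one group's selection rate falls below 80% of the highest group's rate. This is offered as a representative fairness metric only; no statute discussed here mandates this exact formula, and the group names and counts are hypothetical.

```python
# Illustrative sketch of the EEOC "four-fifths rule" adverse impact check,
# one of the simpler fairness metrics an AI screening tool might report.
# All group names and applicant counts below are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups):
    """Return each group's selection rate divided by the highest group's rate.

    groups: dict mapping group name -> (selected, applicants).
    A ratio below 0.8 suggests adverse impact under the four-fifths rule.
    """
    rates = {name: selection_rate(s, a) for name, (s, a) in groups.items()}
    top = max(rates.values())
    return {name: rate / top for name, rate in rates.items()}

applicants = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
ratios = adverse_impact_ratios(applicants)
flagged = {g for g, r in ratios.items() if r < 0.8}
# group_b's ratio is 0.30 / 0.48 = 0.625, below the 0.8 threshold
```

A platform forced to keep every group's ratio above a fixed threshold may have to exclude predictive features or reweight training data, which is the accuracy-versus-compliance trade-off described above.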
Exhibit 2: Select Existing and Proposed State and Local AI Employment Decision Laws
State / City | Bill | Year | Status | Disclosure / Consent Requirement or Algorithm Restrictions |
---|---|---|---|---|
California | AB 2930 | 2024 | Failed | Algorithm Restrictions |
California | SB 7 | 2023 | Pending | Both |
New York City | Local Law 144 | 2021 | Signed into Law | Both |
Illinois | HB 2557 | 2019 | Signed into Law | Disclosure / Consent Requirement |
Illinois | HB 3773 | 2024 | Signed into Law | Algorithm Restrictions |
Colorado | SB 24-205 | 2024 | Signed into Law | Both |
Maryland | HB 1202 | 2020 | Signed into Law | Disclosure / Consent Requirement |
New Jersey | S 1588 | 2024 | Pending | Both |
Massachusetts | H 1873 | 2023 | Pending | Both |
Source: Legiscan
European Union AI Act
Capstone believes that the EU AI Act will increase compliance costs for HCM firms operating in or expanding to the EU. The legislation takes a risk-based approach, categorizing AI systems into four tiers: minimal risk, limited risk, high risk, and unacceptable risk. Systems dealing with human capital are deemed high risk by the act and must comply with strict requirements (see Exhibit 3). Non-compliance can result in fines up to €35 million or 7% of global annual revenue, forcing vendors to fundamentally restructure their AI systems for European markets. The act also represents a potential regulatory model for other jurisdictions, creating pressure for companies to adapt to these stricter standards globally rather than maintain separate systems for different markets.
Exhibit 3: Key EU AI Act Requirements for High-Risk Systems
Requirement | Summary |
---|---|
Quality Management System | Documented strategy for regulatory compliance, encompassing techniques, procedures, and systemic actions to address requirements |
Risk Management System | Established iterative process of risk management, including identifying, analyzing, and addressing known and reasonably foreseeable risks |
Registration | Register in an EU database of high-risk AI systems |
Data and Data Governance | Using and maintaining high-quality training, validation, and testing data sets |
Technical Documentation | Documentation demonstrating that the system is compliant, such that authorities could assess compliance |
Record-Keeping | Automatic event logging for monitoring and traceability |
Transparency and Provision of Information to Deployers | Provide enough transparency to allow deployers to interpret a system’s output and use it appropriately, and create instructions for deployers to understand the system and its capabilities |
Human Oversight | Design the system to allow for appropriate human monitoring, including oversight mechanisms to limit risks to health, safety, or fundamental rights, and ensure adequate training of humans |
Accuracy, Robustness, and Cybersecurity | Identify and declare levels of accuracy, redundancy measures where appropriate, and sufficient cybersecurity measures |
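The record-keeping requirement, for instance, implies that every automated screening decision must be logged with enough context to reconstruct it for auditors. A minimal sketch of such event logging follows; the field names and schema are assumptions for illustration, not prescribed by the act or its technical standards.

```python
import json
import time

# Illustrative sketch of the kind of automatic event logging the EU AI Act's
# record-keeping requirement contemplates for high-risk hiring systems.
# The schema below is hypothetical, not taken from the act.

def log_screening_event(log, candidate_id, model_version, score, outcome):
    """Append one traceable screening decision to an in-memory log."""
    log.append({
        "timestamp": time.time(),       # when the decision was made
        "candidate_id": candidate_id,   # pseudonymous applicant reference
        "model_version": model_version, # which model produced the score
        "score": score,                 # raw model output
        "outcome": outcome,             # e.g., "advance" or "reject"
    })

log = []
log_screening_event(log, "cand-0001", "screen-v2.3", 0.71, "advance")
log_screening_event(log, "cand-0002", "screen-v2.3", 0.22, "reject")

# Each entry serializes cleanly for export to regulators or auditors.
records = [json.dumps(entry) for entry in log]
```

In a production system these records would be written to durable, tamper-evident storage rather than memory, but the point is the same: every decision carries the model version and inputs needed for later traceability.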
What’s Next
- Illinois HB 3773, mandating algorithmic restrictions, goes into effect in January 2026.
- Massachusetts’s H 1873, New Jersey’s S 1588, and California’s SB 7 are being debated in legislative committees. We will be monitoring whether they progress to the floors of their respective chambers.
- In Mobley v. Workday, Inc., a Case Management Conference has been scheduled for October 1, 2025, to determine the specific plan for a 60-day opt-in period, during which notice will be disseminated and individuals can join the lawsuit. A discovery period will follow for the individuals who choose to opt into the suit.
- Requirements for employment-related systems under the EU AI Act come into effect in August 2026.
