Growing Regulatory Risks for AI-Driven Human Capital Management Platforms

September 18, 2025

By T.J. Pyzyk, Capstone TMT Analyst

Capstone believes that AI-based human capital management (HCM) platforms face growing liability risks under US employment discrimination laws and a patchwork of regulations, including state AI employment laws and the EU AI Act. These changes will increase litigation risk for HCM platforms and their customers, add to compliance costs, and could necessitate architectural overhauls of AI-driven HCM platforms.

  • Human capital management (HCM) platforms using AI for applicant screening and hiring decisions face a fragmented regulatory landscape as states and the European Union (EU) work to address AI in hiring, with approaches ranging from requiring disclosures to regulating model training and data sets. The role and liability of AI-based platforms in hiring decisions is an outstanding question in federal courts, with Mobley v. Workday, Inc. serving as a likely bellwether.
  • We anticipate increased litigation against AI HR tools under existing federal hiring discrimination laws, including settlements in cases like Mobley v. Workday, Inc., but find comprehensive federal AI regulation unlikely in the short- to medium-term. As a result, we expect states to continue to pass AI-in-hiring legislation. We also believe the EU will move forward with the implementation of the EU AI Act, classifying AI used in hiring as “high risk.”
  • Capstone believes that AI-driven HR solutions face heightened risk in the coming years as vendors navigate potential class-action lawsuits, mandatory algorithmic redesigns to meet varying state requirements, and significant compliance costs for EU market access. These pressures could eliminate AI hiring tools as a growth driver for HCM platforms and create a preference for vendors with traditional screening capabilities or hybrid models that minimize automated decision-making.

Federal Hiring Discrimination Protections

We anticipate that eventual settlements in high-profile AI HR discrimination cases, such as Mobley v. Workday, Inc. (Case No. 3:23-cv-00770-RFL), will spark a stream of litigation against AI-enabled HCM platforms. Algorithms have the potential to introduce bias to hiring processes and run afoul of existing federal employment protections under Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA, see Exhibit 1).

In Mobley v. Workday, Inc., the plaintiff originally alleged that Workday’s AI-driven applicant tracking system (ATS) violated federal antidiscrimination laws by screening him out on the basis of race, age, and disability. In July 2024, Judge Rita Lin of the US District Court for the Northern District of California accepted a novel theory that Workday acted as an “agent” of employers and could therefore be held liable for employment discrimination. In May 2025, the court granted conditional certification of a collective action on the ADEA claims, covering all individuals aged 40 and over who applied for jobs through Workday and were denied employment recommendations, a group that could include tens of millions of applicants.

Exhibit 1: Federal Hiring Discrimination Regulations

Name | Year | Employment Discrimination Protection
Title VII of the Civil Rights Act of 1964 | 1964 | Race, color, religion, sex, and national origin
Americans with Disabilities Act | 1990 | Disability
Age Discrimination in Employment Act | 1967 | 40 years of age or older
Genetic Information Nondiscrimination Act | 2008 | Genetic information, including family medical history and genetic test results
Immigration and Nationality Act | 1952 | Citizenship or national origin

Source: Federal Register

Federal AI Regulation

While comprehensive federal AI regulation could clarify liability and restrictions around AI in HR applications, Capstone does not expect Congress to pass AI regulation in the near- or medium-term. Congress has repeatedly failed to coalesce around AI and adjacent issues, even on an intra-party basis: Senate Republicans were unable to reach consensus on a state AI regulation moratorium in July 2025, and Congress failed to advance the bipartisan American Data Privacy and Protection Act (ADPPA) out of committee in 2022.

Comprehensive AI regulation would need to be even more expansive than the ADPPA, lowering the likelihood that Congress could agree on and advance a bill and effectively kicking regulation to the states.

State Regulatory Framework

We anticipate states will continue to advance AI HR legislation, including bills that require AI disclosures during the interview process and restrict how models may be trained. Illinois, Colorado, Maryland, and New York City have already passed laws, and several other states, including Massachusetts and New Jersey, have proposed similar legislation (see Exhibit 2). Disclosure laws require employers to notify job candidates when algorithmic decision-making tools are being used in their evaluation, creating additional compliance overhead for platforms and introducing friction into the candidate intake process. Algorithm-focused legislation mandates that models not be trained on biased material that could lead to discriminatory outcomes.

Legislation targeting algorithms poses a more fundamental challenge for companies, as it constrains the architecture and training processes of AI models. These restrictions, such as mandatory bias thresholds, required feature exclusions, or specific fairness metrics, often force trade-offs between model accuracy and compliance and can require reworking or retraining AI systems and models.
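To make the fairness metrics concrete: bias audits of the kind required under laws like New York City's Local Law 144 center on selection-rate impact ratios. The sketch below is purely illustrative, not the method prescribed by any particular statute, and the group labels and counts are hypothetical:

```python
# Illustrative sketch: adverse-impact (selection-rate) ratios, the kind of
# fairness metric at the heart of AI hiring bias audits. Groups and numbers
# here are hypothetical, not drawn from any real audit.

def impact_ratios(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants).
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

audit = impact_ratios({
    "group_a": (120, 400),  # 30% selection rate
    "group_b": (45, 300),   # 15% selection rate
})
# The EEOC's four-fifths rule of thumb flags ratios below 0.8 as a signal
# of potential adverse impact.
flagged = [g for g, ratio in audit.items() if ratio < 0.8]
```

The four-fifths rule is a longstanding EEOC guideline rather than a bright-line legal test; statutory audit and reporting requirements vary by jurisdiction.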

Exhibit 2: Select Existing and Proposed State and Local AI Employment Decision Laws

State / City | Bill | Year | Status | Disclosure / Consent Requirement or Algorithm Restrictions
California | AB 2930 | 2024 | Failed | Algorithm Restrictions
California | SB 7 | 2023 | Pending | Both
New York City | Local Law 144 | 2021 | Signed into Law | Both
Illinois | HB 2557 | 2019 | Signed into Law | Disclosure / Consent Requirement
Illinois | HB 3773 | 2024 | Signed into Law | Algorithm Restrictions
Colorado | SB 24-205 | 2024 | Signed into Law | Both
Maryland | HB 1202 | 2020 | Signed into Law | Disclosure / Consent Requirement
New Jersey | S 1588 | 2024 | Pending | Both
Massachusetts | H 1873 | 2023 | Pending | Both

Source: Legiscan

European Union AI Act  

Capstone believes that the EU AI Act will increase compliance costs for HCM firms operating in or expanding to the EU. The legislation takes a risk-based approach, categorizing AI systems into four tiers: minimal risk, limited risk, high risk, and unacceptable risk. Systems dealing with human capital are deemed high risk by the act and must comply with strict requirements (see Exhibit 3). Non-compliance can result in fines up to €35 million or 7% of global annual revenue, forcing vendors to fundamentally restructure their AI systems for European markets. The act also represents a potential regulatory model for other jurisdictions, creating pressure for companies to adapt to these stricter standards globally rather than maintain separate systems for different markets.

Exhibit 3: Key EU AI Act Requirements for High-Risk Systems

Requirement | Summary
Quality Management System | Documented strategy for regulatory compliance, encompassing techniques, procedures, and systemic actions to address requirements
Risk Management System | Established iterative process of risk management, including identifying, analyzing, and addressing known and reasonably foreseeable risks
Registration | Register in an EU database of high-risk AI systems
Data and Data Governance | Using and maintaining high-quality training, validation, and testing data sets
Technical Documentation | Documentation demonstrating that the system is compliant, such that authorities could assess compliance
Record-Keeping | Automatic event logging for monitoring and traceability
Transparency and Provision of Information to Deployers | Provide enough transparency to allow deployers to interpret a system’s output and use it appropriately, and create instructions for deployers to understand the system and its capabilities
Human Oversight | Design the system to allow for appropriate human monitoring, including oversight mechanisms to limit risks to health, safety, or fundamental rights, and ensure adequate training of humans
Accuracy, Robustness, and Cybersecurity | Identify and declare levels of accuracy, redundancy measures where appropriate, and sufficient cybersecurity measures

What’s Next

  • Illinois HB 3773, mandating algorithmic restrictions, goes into effect in January 2026.
  • Massachusetts’s H 1873, New Jersey’s S 1588, and California’s SB 7 are being debated in legislative committees. We will be monitoring whether they progress to the floors of their respective chambers.
  • In Mobley v. Workday, Inc., a Case Management Conference has been scheduled for October 1, 2025, to determine the specific plan for a 60-day opt-in period, during which notice will be disseminated and individuals can join the lawsuit. A discovery period will follow for the individuals who choose to opt into the suit.
  • Requirements for employment-related systems under the EU AI Act come into effect in August 2026.
