AI in HR: When Efficiency Adds to Risks

November 25, 2025

Sarah is the Vice President of Risk Management at United Educators. She is a member of the University Risk Management and Insurance Association, the National Association of College and University Attorneys, and the Kentucky Bar Association. Sarah has written extensively on enterprise risk management matters and has managed complex insurance claims and high-exposure litigation.

Artificial intelligence is reshaping how organizations recruit, hire and manage people. For human resources and risk leaders, the appeal is clear: faster screening, leaner processes and data-driven decisions. But recent lawsuits suggest that AI efficiency can come with a hidden price, one that can raise liability, not lower it.

Two high-profile cases illustrate the risk. In August, SiriusXM was sued for alleged age discrimination tied to its use of AI screening tools, according to employment law firm Fisher Phillips. Workday faced a proposed class action claiming that its AI systems disproportionately screened out job seekers by race, age and disability, as noted by Holland & Hart. Together, these cases signal that plaintiffs’ lawyers are turning their attention to potential AI-driven bias, and that employers cannot rely on vendors to shoulder that risk.

When Algorithms Go Off Script

AI tools promise objectivity by removing human subjectivity. Yet, algorithms learn from historical data, and those data sets can reflect the inequities organizations hope to avoid. The result: models that unintentionally replicate bias, particularly in HR decisions.

Even well-intentioned automation can trigger claims under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act or comparable state laws. Plaintiffs’ attorneys have already seized on potential algorithmic bias as a new form of systemic discrimination.

The HR Lifecycle Under the Microscope

Each stage of the HR lifecycle carries potential exposure:

  • Recruiting and hiring: Automated résumé filters and video interview scoring can unintentionally disadvantage protected groups.
  • Performance management: Predictive analytics may rate employees using incomplete or skewed data, prompting claims of unfair treatment.
  • Termination decisions: Overreliance on AI outputs without human oversight can lead to wrongful termination or retaliation allegations.

Using AI tools in the HR lifecycle without human review can undermine fairness and compliance at multiple decision points. When employees or applicants believe an opaque system made the call, they often perceive discrimination, even if none was intended. That perception can drive litigation and erode trust.

A New Frontier for Liability

How insurers and brokers weigh the employment practices liability (EPL) risks associated with AI-related bias is an emerging question.

The reputational risk can be just as damaging. Organizations seen as using “black box” technology for personnel decisions may face public backlash and increased legal scrutiny. This is especially true as states like California advance new rules requiring transparency and fairness audits for automated employment tools, according to Holland & Hart’s review of the Workday case.

Guardrails That Work

Mitigating the risk of AI in HR requires deliberate governance. Organizations should begin by building a foundation: establish an AI use policy and work with legal counsel on compliance, including tracking state laws in this area and determining which apply to your company.

Savvy organizations can reduce exposure by embedding risk management early in the process:

  1. Map the tools. Identify every point in the HR lifecycle where AI is applied and list who owns oversight.
  2. Demand transparency. Require vendors to disclose how algorithms are trained, tested and monitored for bias.
  3. Audit regularly. Conduct and document independent bias testing, especially before and after deployment.
  4. Keep humans in the loop. Maintain human review for final employment decisions.
  5. Bridge silos. Ensure HR, legal, compliance and risk teams collaborate on evaluating and approving AI tools.
  6. Train for accountability. Educate HR staff on both the benefits and limits of AI systems.
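The "audit regularly" step above can start with a simple disparate-impact check. A minimal sketch, using the EEOC's four-fifths guideline as the screening threshold; the function names and the sample numbers are illustrative, not drawn from any vendor's tooling, and a real audit would involve counsel and a qualified analyst:

```python
# Sketch of a "four-fifths rule" adverse-impact check, a common first-pass
# bias audit for automated screening tools. Data and names are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest-rate group.
    Ratios below 0.8 warrant further review under the EEOC's four-fifths
    guideline (a screening heuristic, not a legal bright line)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical résumé-screening results: group -> (advanced, applied)
results = {
    "group_a": (45, 100),
    "group_b": (30, 100),
}

for group, ratio in adverse_impact_ratios(results).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```

Here group_b advances at 30% versus group_a's 45%, a ratio of roughly 0.67, which falls below the 0.8 threshold and would be flagged for review. Running and documenting a check like this before and after deployment is the kind of audit trail item 3 calls for.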

Balancing Innovation and Risk

AI can help organizations attract talent and improve workforce decisions when used with the same rigor applied to any other enterprise risk.

As the SiriusXM and Workday cases show, the question is no longer whether AI can create liability. It is whether organizations are managing that liability as thoughtfully as they are embracing the technology.

These insights are drawn in part from “Using Artificial Intelligence Tools in the HR Lifecycle: Risks to Consider,” published by United Educators and written by Heather Salko.