Hiring Someone Using AI? Watch Out for Unconscious Bias

Artificial intelligence used in employment practices promises faster, more efficient hiring, but the risk of discrimination bias must be addressed before this tech is fully adopted.
By: Autumn Demberger | April 23, 2023

If the recent popularity of ChatGPT has shown us anything, it’s that artificial intelligence is booming. And its applications seem nearly limitless, from analyzing and organizing data sets to processing claims to supporting decision-making.

Additionally, AI is saving time by pulling humans out of the paperwork weeds, freeing up their days to focus instead on innovation and the bigger tasks at hand.

It’s no wonder, then, that AI is being used to vet résumés for the best candidates, placing top qualifiers on the desks of the hiring committee. But like any technology, AI has its limits.

“AI is new and shiny, and it’s interesting,” said Laura Lapidus, management liability risk control director, CNA Insurance. “But there’s another side to it that a lot of employers may not fully be considering, and it’s a fact that the use of AI can lead to discrimination, regardless of whether it’s intentional or unintentional.”

Discrimination and disability lawsuits have already been linked to biases in AI used during the hiring process. And AI used during recruitment, or even on the job after employees are hired, can produce unintentional bias and expose employers to other employment suits. Employers looking to adopt this rapidly evolving technology will want to review best practices in order to avoid these risks.

Weighing the Pluses Against the Minuses

Even though AI can lead to issues, that does not make it an inherently “bad” thing. In fact, there’s a lot of benefit that comes from incorporating AI. Faster and more efficient than manual methods, this tech can sort through a stack of résumés in seconds as opposed to hours.

“Some experts will say there is even less bias because it removes the human element of it,” Lapidus shared. “An employer looking at a résumé might have unconscious biases that may or may not impact the résumé they choose.”

If AI is programmed and trained to eliminate certain discriminatory practices, unconscious or not, such as screening out a candidate based on gender, name or place of education, there’s a better chance of hiring the best candidate for the position.
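As a rough illustration of that “blind screening” idea, here is a minimal sketch in Python. The field names and candidate record are invented, and no real vendor’s method is being shown:

```python
# Minimal sketch of "blind" pre-screening: strip fields that could carry
# protected or proxy information before any scoring happens.
# All field names and data below are invented for illustration.

FIELDS_TO_REDACT = {"name", "gender", "school"}  # known protected/proxy fields

def redact(candidate: dict) -> dict:
    """Return a copy of the record with protected/proxy fields removed,
    so the downstream scorer never sees them."""
    return {k: v for k, v in candidate.items() if k not in FIELDS_TO_REDACT}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "school": "Howard",
    "years_experience": 6,
    "skills": ["python", "sql"],
}

print(redact(candidate))
# {'years_experience': 6, 'skills': ['python', 'sql']}
# Note: redaction alone is not a cure -- proxies such as zip codes, club
# memberships or graduation years can still leak the same information.
```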

But, of course, AI is only as intelligent as its humans can make it. If the program is told to find certain qualities in candidates, there’s still a potential for bias to trickle through.

“It may mask a bias or perpetuate one that already exists. A lot of AI is based on the data set you provide,” said Lapidus. “It’s also constantly learning. It’s not just working off one set of data; that data increases over time, and the AI is continually learning from it.”

She gave an example: If a company utilizing AI in its hiring practices tells the tool to find candidates who match the demographics of the “most successful” employees already within the company, there’s a chance that bias will leak through.

“If you have a company where most of the people in top positions are white men, the AI may start screening for white men. It may be overt like that, or it may be more subtle,” she said.

On the more subtle side, she gave the example of AI screening candidates by where they went to college: If the top execs are all Harvard and Princeton grads, the AI might overlook historically Black colleges and universities like Howard, Langston and Tuskegee. All are great schools in their own right, but if their graduates don’t match the “most successful” demographic, they could be discarded by the AI without a second thought.
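To make that mechanism concrete, here is a minimal, hypothetical sketch of how a “match our most successful employees” rule can reproduce historical bias without ever being told to. All names and counts are invented, and no real tool works exactly this way:

```python
# Hypothetical sketch: a scorer that ranks candidates by similarity to
# historical "top performers" quietly inherits the bias in that history.
from collections import Counter

# Historical "top performers" -- skewed toward one demographic and two schools.
top_performers = [
    {"school": "Harvard", "gender": "M"},
    {"school": "Princeton", "gender": "M"},
    {"school": "Harvard", "gender": "M"},
]

# The model "learns" whatever is most common in the training data.
school_counts = Counter(p["school"] for p in top_performers)
gender_counts = Counter(p["gender"] for p in top_performers)

def similarity_score(candidate: dict) -> float:
    """Score a candidate by how closely they resemble the historical profile.
    Nothing here mentions race or gender explicitly -- the bias rides in
    on the training data."""
    score = school_counts[candidate["school"]] / len(top_performers)
    score += gender_counts[candidate["gender"]] / len(top_performers)
    return score

candidates = [
    {"name": "A", "school": "Harvard", "gender": "M"},
    {"name": "B", "school": "Howard", "gender": "F"},  # equally qualified
]

for c in candidates:
    print(c["name"], round(similarity_score(c), 2))
# A scores 1.67; B scores 0.0 -- the Howard grad is filtered out purely
# because she doesn't resemble past hires.
```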

Applicants With Disabilities Raise Another AI Hurdle

There’s also the risk of overlooking applicants with disabilities.

“Everyone’s disability is different. That makes it harder when you’re trying to check these tools for bias,” Lapidus shared.

As an example, if part of the hiring process requires a computerized knowledge test, an AI scoring tool could disqualify applicants with visual impairments, or with physical disabilities that inhibit their ability to type, based solely on the length of time it takes them to complete the task.
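A minimal sketch of how that can happen, with an invented time limit and invented applicant data:

```python
# Hypothetical sketch of a timed-test screen that creates disability bias.
# The cutoff and applicant records below are invented for illustration.

TIME_LIMIT_SECONDS = 600  # hard cutoff applied uniformly to everyone

applicants = [
    {"name": "X", "correct": 18, "total": 20, "seconds": 540},
    # Same accuracy, but uses a screen reader, so each item takes longer:
    {"name": "Y", "correct": 18, "total": 20, "seconds": 780},
]

for a in applicants:
    passed = (a["seconds"] <= TIME_LIMIT_SECONDS
              and a["correct"] / a["total"] >= 0.8)
    print(a["name"], "advances" if passed else "rejected")
# X advances; Y is rejected despite identical knowledge. The tool has
# effectively tested reading/typing speed rather than job qualification,
# and no accommodation (e.g., extended time) was ever offered.
```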

The Americans with Disabilities Act holds employers responsible for providing reasonable accommodations to qualified applicants with disabilities. The issue employers tend to run into here, however, is that these computerized knowledge tests often come through a third-party vendor.

“The employer is relying on the vendor, who’s telling them, ‘Oh, it’s been bias-tested,’ but it may not have been tested with respect to disabilities,” Lapidus said. “Employers need to ask a few more questions about that.”

Conversations to Be Had Around AI in Employment Practices

Even with the potential for bias, there’s a lot to gain from AI. The most important piece is that employers understand, and actively work to eliminate, the disability biases and other biases that can lead to discrimination.

In order to do that, employers need to have continuing conversations with all their AI partners and stakeholders. Lapidus shared that at the start of the process, during purchasing, it’s often individuals in HR who are buying the AI product.

“You should want someone in the legal department looking at it, someone in IT, too,” she said. And then, once the tool is purchased and established, “whether it’s someone in IT or someone external, employers need to make sure they’re doing some type of audit to make sure the results they’re getting are not biased. That may be easier said than done, but it’s something they still need to do.”
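One concrete form such an audit can take is the “four-fifths” (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if any group’s selection rate falls below 80% of the highest group’s rate, the tool deserves a closer look. A minimal sketch, assuming the employer can export selection counts by group (the counts below are invented):

```python
# Minimal sketch of a disparate-impact audit using the EEOC's
# "four-fifths" (80%) rule. The counts are invented for illustration.

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

# Hypothetical outcomes exported from an AI screening tool, by group:
outcomes = {
    "Group A": {"applied": 200, "selected": 60},   # rate 0.30
    "Group B": {"applied": 150, "selected": 27},   # rate 0.18
}

rates = {g: selection_rate(o["selected"], o["applied"])
         for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
    print(f"{group}: rate={rate:.2f}, ratio to highest={ratio:.2f} -> {flag}")
# Group B's ratio is 0.18/0.30 = 0.60, below the 0.8 threshold -- a signal
# to investigate the tool, not automatic proof of discrimination.
```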

It’s also crucial to ask the vendor the right questions to make sure the company and its constituents understand what the AI is doing, as well as how it may develop with time.

“What data is going to be used? Are they working with a disability expert, consulting with them?” Lapidus added.

Where to Learn More

Lapidus sees the potential of AI. She knows this is an exciting time for society at large, as more and more technologies gain the capability to aid humans across a range of processes.

“I think it can really change the way we work. There’s a lot of benefit out there, but at the same time, employers have to be careful.”

That’s why Lapidus will be sharing her thoughts, as well as some statistics and recent data on AI use in the hiring process, at RISKWORLD 2023 this April 30-May 3. Her session, “Artificial Intelligence in Employment Decisions: Does Your Organization Run the Risk of Discrimination Claims?” will be held on Tuesday, May 2, at 1:30 p.m.

Those attending the Atlanta conference this year will be sure to learn more about AI’s potential and risks, including a look at how the EEOC Initiative on Artificial Intelligence and Algorithmic Fairness ties into this issue and how to best devise a plan to mitigate the risk of discrimination claims when using AI for employment decisions.

“There’s so much we don’t understand and so much we’re just starting to figure out,” she shared. “If you’re going to use AI, make sure you understand how it works, make sure you’ve checked that the results aren’t discriminatory, that bias is not going to result from the tool you’re using.”

Autumn Demberger is a freelance writer and can be reached at [email protected].