Companies using artificial intelligence to assess video interviews should be aware of a new law on the books.
In May the Illinois Legislature unanimously passed the Artificial Intelligence Video Interview Act, which requires employers to notify candidates that AI will be used to assess their interview, explain what characteristics the tool will look for, and secure the candidates' consent.
Those that don’t could face future litigation.
The legislation, which is expected to be signed by Gov. J.B. Pritzker this summer, addresses the risk of hidden biases, explained Mark Girouard, a labor and employment attorney for Nilan Johnson Lewis in Minneapolis. “As with any use of AI in recruiting, this law comes from concerns about how observations in the interview correlate to business value.”
AI assessments of a video interview use machine-learning algorithms that are taught what to look for by studying existing data sets and finding correlations. For example, it might determine that candidates who use certain phrases, or speak at a certain speed, have the right attributes to do well in a role, based on data captured about previous high performers.
This is a valuable and efficient way to prescreen candidates, and it can potentially eliminate human bias from the process. However, if the data sets the algorithm learns from are inherently biased, the algorithm can adopt those biases, perpetuating the problem, Girouard says. For example, it might identify certain word choices, facial expressions or even skin tone as a consistent theme among high performers, even though those features don’t align with performance.
“If algorithms are trained correctly they shouldn’t replicate bias,” Girouard says. “But if they aren’t they can amplify disadvantage.”
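The mechanism Girouard describes can be sketched in a few lines. This is a hypothetical illustration only: the feature (`speaks_fast`), the labels and the data are all invented to show how a correlation-driven screen inherits whatever slant its training labels carry.

```python
# Hypothetical sketch: a biased training set teaches a model a spurious
# signal. All field names and records below are invented for illustration.

# Toy training records: (speaks_fast, rated_high_performer). Suppose past
# reviewers rated fast speakers higher regardless of actual results.
training = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]

def rate_given(feature_value, data):
    """Empirical P(rated high performer | speaks_fast == feature_value)."""
    labels = [label for feat, label in data if feat == feature_value]
    return sum(labels) / len(labels)

# A correlation-driven screen learns to prefer fast speakers, even if
# speaking speed has nothing to do with real job performance.
fast_rate = rate_given(1, training)  # 2/3 in this toy data
slow_rate = rate_given(0, training)  # 1/3 in this toy data
```

Nothing in the toy data says fast speech causes success; the model simply replays the pattern baked into the historical ratings, which is exactly how an audited-looking process can still amplify disadvantage.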
Kevin Parker, CEO of Hirevue, a video interviewing software company that offers AI-driven assessment services, couldn’t agree more.
“We are in full support of this bill,” said Parker, who was invited by lawmakers to provide feedback on its content. He sees it as another way to address privacy and fairness in the recruiting process, and to set quality standards for the entire industry.
Hirevue addresses concerns about bias by including organizational psychologists on the teams that work with customers to first identify interview questions that will uncover the right criteria for success (empathy, problem solving, sociability), then to test those questions against a broad set of data to ensure they have no adverse impact.
Sometimes a problem will emerge, he noted. For example, when companies train algorithms using performance data from a predominantly middle-aged white male employee population, factors correlated with that demographic can introduce bias.
The testing process used to vet the interview questions can identify these biases; the team will then either eliminate the question or reduce the weight of factors associated with those measures. “In this way we can neutralize biases before a single candidate is interviewed.”
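One common screen for the kind of adverse impact described above is the EEOC's "four-fifths rule": if any group's selection rate falls below 80 percent of the highest group's rate, the question or factor gets flagged. The sketch below is a minimal, hypothetical version of such a check; the group names and pass counts are invented, and this is not a description of Hirevue's actual methodology.

```python
# Hypothetical four-fifths-rule check for adverse impact in a screening
# step. Group names and counts are invented for illustration.

def selection_rate(selected, total):
    """Fraction of a group's applicants who pass the screen."""
    return selected / total

# Toy pass counts for one interview question, by applicant group:
# (number passing, number screened).
groups = {"group_a": (48, 100), "group_b": (30, 100)}

rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
reference = max(rates.values())  # highest group's selection rate

# Flag any group whose rate is under 80% of the reference rate.
flags = {g: (r / reference) < 0.8 for g, r in rates.items()}
# Here group_b's ratio is 0.30 / 0.48 = 0.625, below the 0.8 threshold,
# so this question would be reworked or down-weighted.
```

A check like this is only a screen, not a verdict: a flagged question prompts the kind of review, elimination or re-weighting Parker describes.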
A Flood of Legislation
While Illinois is so far the only state to pass such a law, it is likely the first of many as concerns grow about AI’s impact on recruiting bias, Girouard warned. “It is the first drip of what is likely to be a flood of legislation.”
To protect themselves against later litigation, employers should educate themselves on what the law requires and on how they are addressing the risk of AI-driven bias in their current operations. He noted that most employers today can’t explain how the AI assessment works, what criteria it looks for or how those criteria align with performance success.
That’s a problem, he said. The law doesn’t just require employers to inform candidates about the technology; they must also be able to describe how the AI tool will interpret the interview and how the results will be used in the selection process. “If you can’t explain it, it will be very hard for you to defend it in court.”