Looking for a new job? New rules may save you from the AI hiring bots.

Employers who use chatbots and artificial intelligence (AI) tools to screen job seekers should prepare for new laws designed to prevent bias in the hiring process.

In April, a New York City law that regulates how companies use AI in hiring is expected to kick in. The Biden Administration and the U.S. Equal Employment Opportunity Commission (EEOC) also recently published initiatives to make employers responsible for monitoring the tools—and any discriminatory practices they might trigger before a candidate even connects with a hiring manager.

“Nearly 1 in 4 organizations use automation or AI to support HR-related activities, with roughly 8 in 10 of those organizations using AI in recruitment and hiring,” said Emily M. Dickens, chief of staff and head of public affairs at SHRM, the Society for Human Resource Management.

“Regulation is coming and it’s going to be messy,” said Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s Cybersecurity, Privacy and Artificial Intelligence Practice Group.

Starting in April, employers in New York City will have to tell job candidates and employees when they use AI tools in hiring, perform an independent bias audit, and notify them about “the job qualifications and characteristics that will be used by the automated employment decision tool,” according to a draft of the new Automated Employment Decision Tool law. The final set of rules has yet to be published.

“The New York law is the first time you have a broadly applicable set of restrictions on AI hiring in the way it’s normally used by lots of companies, which is to source and sort resumés in order to decide who you are going to interview for efficiency purposes,” Gesser said.

The aim of the New York law is transparency. “People who are using these tools need to disclose that they’re using the tools,” Gesser added. “They need to give people a chance to request an alternative arrangement if they don’t feel like the tools are going to be fair to them.”

And it means that employers will be the ones responsible for meeting the legal requirements around these AI tools, rather than the software vendors who create them, he said.

Employers on notice

New York City isn’t the only one putting employers on notice. Last fall, the White House released its Blueprint for an AI Bill of Rights, which recounts how a hiring tool at a firm with a predominantly male workforce rejected resumés containing the word “women’s,” as in “women’s chess club captain.”

In January, the EEOC published its Strategic Enforcement Plan in the Federal Register. The focus: artificial intelligence tools that employers use to hire workers and that can introduce discriminatory decision-making. And last spring, the EEOC issued guidance on the use of AI in hiring tools that focuses on the impact on people with disabilities.

Human reviewers aren’t perfect

The biggest concern about AI screening tools is that they can be biased against certain classes of applicants, from race and gender to age and disability, and even against those who have gaps in their resumés. This is what regulators want to fix.

Of course, AI isn’t alone in the business of bias. Human resumé sorters have been guilty as well.

In 2021, researchers from the University of California, Berkeley and the University of Chicago sent more than 83,000 fictitious applications with randomized characteristics to geographically dispersed jobs posted by 108 of the largest U.S. employers. The findings: Distinctively Black names reduce the probability of employer contact by 2.1 percentage points relative to distinctively White names.

“It’s a mistake to judge these AI tools against some aspirational standard that humans aren’t meeting anyway,” Gesser said. “If you give the exact same resumé to two different companies and just change the name, you may find that the human reviewers treat those differently.”

Still, the AI tools are getting more sophisticated, and that’s what worries regulators and prospective employees. “Today’s AI models evaluate candidates just like a recruiter, comparing the full resumé against the job description,” said Morgan Llewellyn, chief data scientist at recruiting technology company Jobvite.

The most widely used AI tool, the automated-hiring technology known as the Applicant Tracking System (ATS), has caused job seekers the most angst and is what the new regulations, for now, are chiefly aimed at. These systems are designed to pare down the droves of applications and resumés employers receive electronically for open positions.
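To make that paring-down concrete, here is a minimal sketch in Python of the kind of keyword screen a simple ATS-style filter might apply. Everything in it is hypothetical: the job keywords, the scoring rule, and the pass threshold are illustrative, and commercial systems are considerably more sophisticated.

```python
# Illustrative only: a toy ATS-style keyword screen.
# The keywords, scoring rule, and cutoff are hypothetical,
# not any vendor's actual logic.
import re

JOB_KEYWORDS = {"python", "sql", "etl", "airflow"}  # hypothetical job-description terms
CUTOFF = 0.5  # hypothetical: require half the keywords to pass the screen

def keyword_score(resume_text: str) -> float:
    """Return the fraction of job keywords that appear in the resume text."""
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    return len(JOB_KEYWORDS & words) / len(JOB_KEYWORDS)

resumes = {
    "candidate_a": "Built ETL pipelines in Python and SQL, scheduled with Airflow.",
    "candidate_b": "Led a data team; strong record of mentoring and delivery.",
}

for name, text in resumes.items():
    score = keyword_score(text)
    verdict = "advance" if score >= CUTOFF else "reject"
    print(f"{name}: score={score:.2f} -> {verdict}")
```

Even this toy version shows the failure mode regulators worry about: a candidate who describes the same experience in different words never surfaces for a human to see.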

Beyond resumé screening, the EEOC’s AI guidance highlights another concern: the increasing use of AI-led automated video interviews, and their impact on people with disabilities in particular.

Candidates may be appraised by a computer algorithm on their speech cadence, facial expressions and word selection.

For example, video interviewing software that analyzes applicants’ speech patterns to draw conclusions about their problem-solving ability is “not likely to score an applicant fairly if the applicant has a speech impediment that causes significant differences in speech patterns,” according to the EEOC publication.

More than 90% of employers use this type of screening to filter or rank potential candidates, according to a study by Harvard Business School’s Project on Managing the Future of Work and the consulting firm Accenture, which surveyed 8,000 workers and more than 2,250 executives.

Another problem: an ATS can dismiss applications because of resumé gaps. That’s troubling, especially for women who have stepped out of the workforce to care for children or aging relatives.

Onerous for employers?

“In order to use these tools going forward, you’re going to need to conduct a fairly onerous audit of how those tools’ results filter candidates by race, by gender, by ethnicity, and intersectionally by combinations of race, gender, and ethnicity,” Gesser predicted. “That’s going to be a big shift for a lot of companies who are using these tools and a pretty heavy burden.”

He added: “Every job, every candidate is different, and there’s a risk that there’ll be grounds for lawsuits and claims of discrimination.”
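For a sense of what such an audit computes, here is a minimal sketch of the core arithmetic: each group’s selection rate and its impact ratio (that group’s rate divided by the highest group’s rate), checked against the EEOC’s traditional four-fifths rule of thumb. The outcome data and the groups below are made up for illustration.

```python
# Illustrative only: the selection-rate and impact-ratio arithmetic
# at the heart of a disparate-impact style bias audit.
# The applicant outcomes below are fabricated for demonstration.
from collections import Counter

# (group, was_selected) for each applicant the tool screened -- hypothetical data
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, chosen in outcomes if chosen)
rates = {group: selected[group] / applicants[group] for group in applicants}

best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    # The four-fifths (80%) rule is the EEOC's traditional rough screen
    # for disparate impact; it is a heuristic, not a legal bright line.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

An audit under the New York law must also be independent and, as Gesser notes, cover intersectional combinations of categories, which multiplies the work.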

HR departments know the guard rails are coming. Little wonder that nearly half of employers surveyed by SHRM last year said they would like to see more information on how to identify any potential bias when using these tools.

And that may explain why New York City has pushed back enforcement of the law until April 15, 2023, despite its January 1, 2023, effective date.

Said Dickens of SHRM: “We need to be assured the tools we use do not lead to bias in the hiring process.”

Source: finance.yahoo.com
