
New York City Considers Regulating AI Hiring Tools

Employers would be required to inform job applicants if and how they are using artificial intelligence (AI) technology in hiring decisions under a bill being considered by the New York City Council.

In addition, AI technology vendors would have to provide bias audits of their products before selling them and offer to perform ongoing audits after purchase.

Lawmakers like the bill’s sponsor, Councilwoman Laurie Cumbo, D-Brooklyn, and many others across industry, academia and government have expressed concerns about bias being embedded in employment-screening and assessment technologies using AI.

“It is really important that we address how artificial intelligence is going to be utilized to help and support and assist all people, so that we can ensure that we have equality in all forms of our hiring practices,” Cumbo said.

Specifically, the bill would require employers that use such tools to notify each candidate, within 30 days of screening, of the specific product used to evaluate them, as well as the qualifications or characteristics considered by any algorithms used. If passed, the bill would take effect Jan. 1, 2022.

AI in HR Grows

As AI technology continues to improve, more and more employers have become interested in automating aspects of the recruiting and hiring process and taking advantage of the many benefits the technology brings. Employers use AI-driven algorithms to screen resumes; conduct hiring assessments; and evaluate the facial expressions, body language and voices of candidates in video interviews.

“Increasingly in the last decade, commercial tools are being used by companies large and small to hire more efficiently, to source and screen candidates faster and with less paperwork, and successfully select candidates who will perform well on the job,” said Julia Stoyanovich, an assistant professor of computer science at New York University’s Tandon School of Engineering and the founding director of the school’s Center for Responsible AI.

“These tools are also meant to improve efficiency for the job applicants, matching them with relevant positions, allowing them to apply with a click of a button and facilitating the interview process,” she said. “Despite their potential to improve efficiency for both employers and job applicants, automated decision systems in hiring are also raising concerns.”

She said that there have been numerous cases of discrimination based on gender, race and disability during candidate sourcing, screening, interviewing and selecting using automated tools. “If left unchecked, automated hiring tools will replicate, amplify and normalize results of historical discrimination in hiring and employment.”

Anna Rothschild, an attorney in the Boston office of Hunton Andrews Kurth, said that the increased reliance on AI in employment-related decisions has come under scrutiny for potential legal and ethical risks, such as implicit bias and disparate impact. In response, "various state and city governments are scrambling to regulate this new use of technology," she said.

Illinois is currently the only state with laws that do so. The Biometric Information Privacy Act requires employers to let candidates know if they intend to collect biometric identifiers and allows candidates to opt out. The Artificial Intelligence Video Interview Act, also in Illinois, regulates how employers can use AI to analyze video interviews and went into effect in January.

Additional cities and states and the federal government have introduced initiatives to study bias in algorithms and the impact of AI on employment decisions, and the Equal Employment Opportunity Commission has announced at least two investigations of cases involving alleged algorithmic bias in recruitment.

Supporting Transparency

Stoyanovich backs the legislation, saying it will benefit job seekers by helping reduce bias in the hiring process and will benefit employers by helping them evaluate the claims made by vendors during procurement.

“Vendors of these tools frequently and confidently make claims that, because humans are known to have biases, algorithmic tools are our only viable option,” she said. “It is dangerous to take such claims on faith. Algorithmic systems themselves and audits of these systems are only as good as the standards and objectives to which we hold them.”

Two prominent New York City-based HR technology companies also support setting standards and the city’s push toward transparency. “Opaque and biased hiring tools can have real negative consequences on the lives of workers and workforce diversity,” said Athena Karp, CEO of HiredScore, an AI hiring platform.

One of the main reasons Karp started her company was to help eliminate bias in hiring and promotions, as well as cut down inefficiencies in the hiring process, she said. “Before technology tools existed, employers only had humans to review an increasingly large volume of candidates. Unfortunately, humans can’t unsee the things that often lead to unconscious biases that so many of us are striving to root out with technologies that are properly and carefully designed and tested.”

Frida Polli, the CEO and co-founder of Pymetrics, an assessment and talent-matching company in New York City, also supports the transparency and accountability provisions in the legislation. "As someone who has been building and selling hiring tools for the past several years, in my opinion, there is no reason why clear information about the bias of a hiring tool should not be part of this equation," she said. "We have to start changing systems, including hiring systems—not human minds—in order to fix diversity. And algorithms can be intentionally designed to mitigate bias in a way that human minds cannot, and with the audits proposed in this bill, we can ensure that algorithms are held to these higher standards."

Stoyanovich and others, including the New York Civil Liberties Union, would like to see vendors’ bias audits broadened to go beyond disparate impact when considering fairness of outcomes and include other dimensions of discrimination, as well as information about a tool’s effectiveness. She also recommended employers provide information about the candidate’s qualifications or characteristics that the tool used for screening “in a manner that allows the job applicant to understand, and, if necessary, correct and contest the information.”
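The bill itself does not spell out how a disparate-impact audit would be calculated, but a common baseline in U.S. employment analysis is the EEOC's "four-fifths rule": a group's selection rate below 80 percent of the highest group's rate is conventionally flagged as evidence of adverse impact. The sketch below is purely illustrative; the group names and counts are hypothetical and do not come from any real hiring tool.

```python
# Illustrative four-fifths rule check, a common baseline in
# disparate-impact audits. All groups and counts are hypothetical.

def selection_rate(selected, applied):
    """Fraction of applicants from a group who were selected."""
    return selected / applied

def impact_ratios(rates):
    """Compare each group's selection rate to the highest rate.
    A ratio below 0.8 is the conventional adverse-impact flag."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(48, 120),  # 0.40
    "group_b": selection_rate(30, 100),  # 0.30
}
ratios = impact_ratios(rates)
for group, ratio in ratios.items():
    flag = "possible adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Critics like Stoyanovich note that a single ratio of this kind captures only one dimension of fairness, which is why they want audits broadened beyond disparate impact.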

New York City employers should review their pre-employment selection tools to determine if they rely on learning algorithms and could be subject to the new law, if passed.

Rothschild said that as the use of AI in employment decisions continues to grow and likely prompts additional backlash, companies should also carefully consider what they can do to validate the effectiveness of the decisions being made by AI. “And if challenged, whether they can unpack the decisions reached using AI to justify its legality,” she said.


Written by HR Today
