
Explainer: Landmark NYC AI Law

ILR Professor John Hausknecht discusses how a new law aimed at reducing hiring discrimination will work.

A New York City law requiring that companies reveal use of artificial intelligence hiring tools and publicize results of hiring tool bias audits went into effect Jan. 1 and will be enforced beginning July 5.

The first of its kind in the nation, the law mandates that independent auditors test AI hiring tools to screen for potential bias based on race/ethnicity and sex.

Employers will be required to post on their websites their use of AI tools for hiring and promotion decisions, along with the results of an independent bias audit of those tools conducted within the previous 12 months.

Candidates and employees who live in New York City must be notified in advance when AI tools will be used. Applicants have the right to request an alternative selection process or accommodations.

Professor John Hausknecht, who teaches human resources at ILR and researches workforce analytics and staffing topics such as employee selection, discussed in an interview AI-facilitated discrimination in hiring and the bias-prevention law that has many employers scrambling.

What is the significance of this law for hiring practices across the country?

The new law likely will be an impetus for legislation nationally and in other states and cities regarding the use of AI tools in hiring. In some ways, the law replicates existing legislation that already protects applicants against unlawful discrimination, but it is also unique in making bias audits public, notifying candidates about the use of AI, and more.

Are employment experts watching anything in particular as the law goes into effect?

The big question is scope: how broadly or narrowly will this be applied? The language of the revised legislation suggests that the effects on employment practices may be less severe than originally believed, given that the AI tool has to carry substantial weight in the employment decision. Often, this doesn't happen in practice. Overall, we likely will need to see several “test cases” to better understand the impact of the new law.

How prevalent is AI in hiring?

It’s hard to quantify, and it depends on how AI is defined, but it’s more common in large organizations and in high-volume hiring situations with thousands of applicants.

Are some industries more entrenched than others in using AI as a hiring assessment measure?

Given AI’s prevalence in high-volume hiring, we tend to see more use cases in certain sectors such as retail, hospitality and technology.

How has AI evolved as a hiring tool?

Early versions of automated screening relied on fairly primitive tools, such as scanning resumes for keywords or automatically rejecting applicants who lacked minimum qualifications. Now the tools involve more sophisticated algorithms that use machine learning to learn from past data and then “score” applicants. One increasingly common example is the use of computer-scored video interviews.

What does AI typically measure in a job candidate?

Basically, any data point or variable can be used as “input” into these models. That could include application data, interview data (such as language, eye contact and tone of voice), test or assessment data, resume information and more.

The law requires an adequate independent audit. Where would companies find these auditors?

The definition of who is qualified to perform the audits is fairly broad, but we're seeing many of the major law firms weighing in to offer this service, as well as smaller boutique firms staffed with psychologists and other experts in employment testing, litigation support and so on. It’s still an open question, though, who is qualified to do these audits.

Are penalties of $500 to $1,500 for single violations enough to enforce the law?

It appears that the penalties are applied per day of use and per candidate, depending on the violation, so the total could be substantial if violations run unchecked over time.
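To see why per-day, per-candidate counting matters, here is a rough back-of-the-envelope sketch. The penalty schedule modeled below is an assumption for illustration only ($500 for a first violation, $1,500 for each subsequent one, with each candidate-day counted as a separate violation); it is not legal guidance on how the law actually tallies violations.

```python
# Hypothetical illustration of how per-violation penalties could compound.
# The counting rule (one violation per candidate per day) is an assumption.

def estimated_penalty(candidates_per_day: int, days: int,
                      first: int = 500, subsequent: int = 1500) -> int:
    """Total penalty if every candidate-day counts as a separate violation."""
    violations = candidates_per_day * days
    if violations == 0:
        return 0
    # First violation at the lower rate, every later one at the higher rate.
    return first + subsequent * (violations - 1)

# Example: 20 candidates a day over 30 days of non-compliant use.
print(estimated_penalty(20, 30))  # 600 violations -> 899000 ($899,000)
```

Even under modest assumptions, the total quickly dwarfs the headline $500 to $1,500 figures, which is the point of the answer above.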

Is the law a big step toward improved hiring transparency?

It’s a major step toward codifying guidance, regulations and processes for companies to follow when using automated employment decision tools. The notice to applicants and public sharing of the bias audits also increases transparency. On the other hand, this law only applies to candidates for positions in New York City, so once again, the scope may be somewhat limited.
