New York City Regulates Workplace Artificial Intelligence Recruitment and Selection Tools

Joining Illinois and Maryland, on November 10, 2021, the New York City Council approved a measure, Int. 1894-2020A (the “Bill”), to regulate employers’ use of “automated employment decision tools” with the aim of curbing bias in hiring and promotions. The Bill, which is awaiting Mayor de Blasio’s signature, is to take effect on January 1, 2023. If the Mayor neither signs nor vetoes the Bill within thirty days of the Council’s approval (i.e., by December 10), it will become law.

Automated Employment Decision Tools

The Bill defines “automated employment decision tool” as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence,” which scores, classifies, or otherwise makes a recommendation, and which is used to substantially assist or replace an individual’s decision-making process. The Bill exempts automated tools that do not materially impact individuals, such as a junk email filter, firewall, calculator, spreadsheet, database, data set, or other compilation of data. It is unclear whether passive recruitment tools, such as LinkedIn’s suggested jobs, are covered under the Bill.

The Bill applies only to decisions to screen candidates for employment or employees for promotion within New York City, and does not apply to other employment-related decisions.

Employer Requirements under the AI Bill

The Bill prohibits employers or employment agencies from using automated employment decision tools to screen candidates or employees for employment decisions unless: (1) the tool has undergone an independent bias audit no more than one year prior to its use; and (2) a summary of the results of the audit, as well as the distribution date of the tool to which the audit applies, has been made publicly available on the employer’s or employment agency’s website. The Bill is unclear as to whether and when the bias audit must be updated, or whether a new bias audit must be obtained prior to each “use” by the employer.

The Bill defines an acceptable “bias audit” as an impartial evaluation by an independent auditor that includes testing the tool to assess its disparate impact on persons of any federal EEO-1 “component 1 category,” i.e., whether the tool would have a disparate impact based on race, ethnicity, or sex.
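The Bill does not prescribe a methodology for the bias audit. One conventional starting point an auditor might use is the selection-rate comparison underlying the EEOC's informal "four-fifths rule": compute each category's selection rate and flag any category whose rate falls below 80% of the highest-rated category. The sketch below is purely illustrative, using hypothetical screening data and category names, and is not a method required or endorsed by the Bill:

```python
# Illustrative only: the Bill does not specify how a bias audit must be
# conducted. This sketch computes selection rates per EEO-1-style category
# and compares them under the EEOC's informal "four-fifths rule."

def selection_rates(outcomes):
    """outcomes: dict mapping category -> (selected, assessed).
    Returns each category's selection rate."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each category's selection rate to the highest rate.
    A ratio below 0.8 may indicate disparate impact under the
    four-fifths guideline."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical tool outcomes: (candidates advanced, candidates assessed)
outcomes = {
    "Male": (48, 120),    # selection rate 0.40
    "Female": (30, 100),  # selection rate 0.30
}

ratios = impact_ratios(outcomes)
for cat, ratio in ratios.items():
    flag = ("possible disparate impact" if ratio < 0.8
            else "within four-fifths guideline")
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, the "Female" category's impact ratio is 0.75, below the 0.8 threshold, so an auditor applying this heuristic would investigate further. An actual audit would likely involve additional statistical testing and legal judgment.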

Moreover, New York City employers using automated employment decision tools must notify each employee or candidate who resides in New York City of the following:

  • at least ten business days before such use, that the tool will be used in assessing or evaluating the individual and allow a candidate to request an alternative process or accommodation;
  • at least ten business days before such use, the job qualifications and characteristics that the tool will use in assessing or evaluating the individual; and
  • if not posted on the employer’s website, and within thirty days of a written request by a candidate or employee, information about the type of data collected for the tool and the source of such data.

Although the Bill allows candidates to request an “alternative process or accommodation,” it is silent as to what obligations, if any, an employer has upon receiving such a request.

Employers or employment agencies that fail to comply with any of the Bill’s requirements may be subject to a fine of up to $500 for a first violation, enforceable by the New York City Corporation Counsel or the Department of Consumer Affairs, and to fines of $500 to $1,500 for each subsequent violation.

In anticipation of the likely January 1, 2023 effective date of the Bill, employers using automated employment decision tools in their hiring and promotion practices in New York City should take steps to ensure they will be in compliance. This includes:

  • ensuring that an automated employment decision tool has gone through an independent “bias audit” no more than one year prior to the tool’s use, specifically to determine whether the tool would have a disparate impact based on race, ethnicity, or sex;
  • posting the audit results on the employer’s public website;
  • developing a method of providing notice to employees and candidates of the tool’s usage and of the qualifications and characteristics that the tool will assess or evaluate; and
  • carefully crafting job assessments to ensure only key knowledge, skills, and abilities are taken into account, and considering potential reasons for disparities.

Employers seeking to responsibly and ethically implement AI tools in the recruitment and selection of talent should not limit their efforts to compliance with this Bill alone. Although not required under New York City’s measure, employers should also consider the tools’ accessibility for persons with disabilities, among other concerns. They should likewise consider whether and how to use any passive recruitment tools, which may not comply with the Bill’s notice requirement to candidates.

Employers may also wish to consult with counsel before implementing any type of digital hiring or promotion-based platform, to ensure compliance with this and other newly enacted employment laws surrounding AI.

*Kamil Gajda, Law Clerk – Admission Pending (not admitted to the practice of law) in the firm’s New York office, contributed to the preparation of this post.