New York City Finalizes Rules for AI Bias Law in Employment and Postpones Enforcement Until July 2023
Apr 26, 2023

Artificial Intelligence (AI) has enormous potential to make many operational tasks easier, including hiring and retention functions. Left unchecked, however, the technology could produce discriminatory outcomes. In response, New York City passed Local Law 144, which prohibits employers and employment agencies from using AI and algorithm-based technologies (referred to as AEDTs, or automated employment decision tools) for recruiting, hiring or promotion without those tools first being audited for bias. While enforcement of the law has been delayed multiple times pending finalization of the law’s implementing rules, on April 6, 2023, the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law will now go into effect on May 6, 2023, and enforcement will begin on July 5, 2023.
The Final Rule is very similar to the December 2022 Proposed Rule, which BCLP reported on in January. However, there are some key differences.
Under the law, an “AEDT” is any process “derived from machine learning, statistical modelling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.” The Final Rule broadens the definition of “machine learning, statistical modelling, data analytics, or artificial intelligence” by removing the requirement that such a technique refine its inputs and parameters through cross-validation or by using training or testing data. As a result, certain technologies that previously fell outside the definition of an AEDT are now covered by the law.
Another important difference in the Final Rule is an added requirement in connection with the mandated “bias audit” of the AEDT technology by an independent auditor. The summary of the bias audit must now indicate the number of individuals assessed by the AEDT who were not included in the audit because they fell into an unknown category, which will require careful record-keeping by businesses that commission such an audit. A further change allows the independent auditor to exclude a category from the impact ratio calculations if that category comprises less than 2% of the data being used. The required elements in the published summary of bias audit results have also been expanded to include the number of applicants in each category and the scoring rate for each category. Finally, the Final Rule restricts the circumstances in which an employer or employment agency may rely on a bias audit based on historical data about the AEDT in question: doing so is permissible only if the employer or employment agency provided historical data from its own use of the AEDT for the audit, or if it has never used the AEDT at all.
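For illustration only, the short sketch below shows one way the impact ratio calculations and the 2% exclusion described above might be carried out in practice. The formula used here (a category’s selection or scoring rate divided by the rate of the most favored category) and all names and figures are assumptions made for the example, not text drawn from the law or the Final Rule.

```python
# Illustrative sketch only: a hypothetical impact ratio calculation for a bias audit,
# assuming the ratio is a category's selection (or scoring) rate divided by the most
# favored category's rate, and that categories under a 2% share of the data may be
# excluded from the calculation.

from typing import Dict


def impact_ratios(
    selected: Dict[str, int],           # number selected (or scored favorably) per category
    assessed: Dict[str, int],           # number assessed per category
    exclusion_threshold: float = 0.02,  # hypothetical 2% exclusion threshold
) -> Dict[str, float]:
    total_assessed = sum(assessed.values())

    # Drop categories making up less than the threshold share of the data.
    included = {
        cat: n for cat, n in assessed.items()
        if n / total_assessed >= exclusion_threshold
    }

    # Selection rate per included category.
    rates = {cat: selected.get(cat, 0) / n for cat, n in included.items()}

    # Impact ratio: each category's rate relative to the highest rate.
    top_rate = max(rates.values())
    return {cat: rate / top_rate for cat, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical numbers only.
    assessed = {"Category A": 500, "Category B": 450, "Category C": 8}
    selected = {"Category A": 200, "Category B": 135, "Category C": 3}
    print(impact_ratios(selected, assessed))
    # Category C (under 2% of the data assessed) is excluded from the output.
```

In this hypothetical, the smallest category is dropped from the ratios because it falls under the 2% share used in the example; the remaining categories are reported relative to the category with the highest selection rate.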
The delay in implementation and enforcement gives in-scope organizations a little more time to work with outside counsel and AI vendors to assess their use of AI-related tools. Businesses should refine their plans for bias audits given the clarifications in the Final Rule and be mindful that certain technologies that were not originally covered may now be considered AEDTs in light of the broadened definition. The clock is ticking, though, so compliance should be a priority if it is not already.
Businesses should also stay alert for developments in other states that impose duties on employers for the use of AI, including California, Illinois and Maryland. BCLP actively tracks proposed and enacted AI regulatory bills from across the United States; see our interactive AI Map. The US Department of Justice and the US Equal Employment Opportunity Commission have also indicated their intention to focus on the use of AI-enabled tools in the employment context.
Related Practice Areas
- Data Privacy & Security