AI in the Workplace: Using Artificial Intelligence Intelligently

Oct 08, 2024

Summary

Ready or not, artificial intelligence (“AI”) is here, and even if your company hasn’t introduced or approved the use of AI, chances are your employees are already using it. Companies and their employees are under intense pressure to meet increasing demand, streamline production, evaluate applicants and hire new employees quickly, and improve efficiency and productivity. According to a recent survey, more than one-third of all US businesses use AI for numerous purposes: to draft requisitions for open positions; to create product content; to review and draft contracts; to interview, train, evaluate, discipline, and terminate employees; and to test customer satisfaction, among many other uses. Another 45% of businesses that are not currently using AI are considering implementing it.

With AI’s arrival in the workplace, concerns have surfaced regarding its potential risks. The use of AI can lead to multiple and varied issues, such as patent and copyright infringement claims, disparate impact discrimination claims, and violations of the Health Insurance Portability and Accountability Act (“HIPAA”) and other privacy and non-disclosure laws. AI may also result in the inadvertent exposure of company trade secrets. It is important that companies be aware of the risks associated with the use of AI in the workplace and stay up to date on the constantly changing laws, regulations, and regulatory guidance related to such use. But in the absence of comprehensive federal law, the current legal framework is a patchwork of inconsistent state and local laws. Further, neither case law nor statutory law is keeping pace with AI advancements. Consequently, there is currently no uniform, timely guidance on the use of AI at work.

Federal and State AI Legislation

There is currently no comprehensive federal statutory law regulating the use of AI. However, in October 2023, President Biden signed Executive Order 14110, the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO 14110”), which provides eight “principles and priorities” to guide the use of AI. A White House Fact Sheet explains that EO 14110 sets AI safety and security standards, establishes privacy protections, addresses discrimination and equity issues that may arise with the use of AI, provides support for consumers, patients, students, and employees, and promotes innovation and competition both in the US and abroad. The Biden administration states that it plans to continue working toward comprehensive bipartisan federal legislation on AI. BCLP's AI legislation tracker follows numerous proposed state laws related to the development and use of AI.

In the absence of federal statutory law, state and local legislatures have taken on the task themselves. Several states have introduced numerous bills involving AI, and some have already enacted AI laws. For example, the Colorado Consumer Protections for Artificial Intelligence Act was signed into law in May 2024, with an effective date of February 2026 – making Colorado the first state to pass comprehensive AI legislation. Many more states are likely to follow, and there is no shortage of AI bills in recent legislative sessions: in 2024, more than 45 states introduced AI bills, totaling more than 600. In California’s current legislative session alone, lawmakers have introduced more than 30 pieces of AI legislation.

Several federal regulatory agencies have also gotten involved. The Department of Labor (“DOL”), Department of Justice (“DOJ”), Equal Employment Opportunity Commission (“EEOC”), National Labor Relations Board, Federal Trade Commission, and Consumer Financial Protection Bureau have all issued guidance on the use of AI.

Just as federal and state legislatures are struggling to keep up with the constantly changing AI landscape, courts around the country are finding themselves in the position of creating new law or applying existing laws, never designed with AI in mind, to AI cases. Courts have developed case law on copyright infringement, privacy concerns, data protection, and disparate impact discrimination. This is expected to continue for years to come.

The Use of AI in Employment Decisions

As companies look to AI to improve their productivity and efficiency, that increasingly includes using AI as a human resources tool. Human resource professionals are using AI to assist with a number of functions: hiring, firing, performance evaluations, performance improvement plans, and drafting policies, job descriptions, and job duties, to name a few. Forbes reports that as of earlier this year, 80% of all US companies and every Fortune 500 company used some form of AI in their hiring decisions. But this use can have a detrimental effect on employees. According to a Harvard Business School report, millions of job applicants have been denied opportunities based on the use of AI applicant screening programs. With the prevalent use of AI come concerns that these tools may be disparately impacting certain groups.

The Improper Use of AI in Employment Decisions Can Cause a Disparate Impact

Title VII of the Civil Rights Act of 1964 (“Title VII”) prohibits employers from using neutral tests or selection procedures that have the effect of disproportionately excluding persons based on membership in a protected class, unless the procedure is job-related and consistent with business necessity. While using AI software creates the appearance of objective decision-making by removing the potential for conscious and unconscious human bias, some employers have found that it is not foolproof.
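The EEOC’s traditional screen for disparate impact is the “four-fifths rule”: if one group’s selection rate is less than 80% of the most-favored group’s rate, the selection procedure warrants closer scrutiny. As a rough illustration of that arithmetic, here is a minimal Python sketch using invented screening counts; the group labels and numbers are hypothetical, and a real audit would rely on actual applicant-flow data and appropriate statistical analysis.

    # Illustrative four-fifths rule check on hypothetical AI screening results.
    # All counts and group labels are invented; a real audit would use actual
    # applicant-flow data and appropriate statistical tests.

    applicants = {"group_a": 200, "group_b": 150}  # candidates the tool screened
    advanced = {"group_a": 80, "group_b": 30}      # candidates the tool advanced

    rates = {g: advanced[g] / applicants[g] for g in applicants}
    best_rate = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / best_rate
        flag = ("potential disparate impact" if impact_ratio < 0.8
                else "within the four-fifths threshold")
        print(f"{group}: selection rate {rate:.0%}, "
              f"impact ratio {impact_ratio:.2f} -> {flag}")

In this hypothetical, group_b’s selection rate (20%) is only half of group_a’s (40%), well below the four-fifths threshold – the kind of signal that would prompt a closer look at the tool.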

For example, Amazon famously learned that bias can be “taught” to these AI programs. An Amazon team spent several years building job application screening software intended to provide the initial review of job applicants. The software was fed information regarding Amazon applicants and hires over a period of ten years and, based on this information, scored each applicant from 1 to 5. Amazon discovered the program was not ranking applicants in a gender-neutral way: because the software’s knowledge was based on resumes from a time when the majority of Amazon’s applicants were male, it preferred male applicants and therefore ranked them higher. The software went so far as to penalize applications that included the word “women’s,” as in “women’s soccer team captain,” and graduates of all-female colleges were also penalized. According to anonymous Amazon employees, the software was never used, and the team developing it was disbanded.

Conscious bias can also be injected into AI software. In 2023, iTutorGroup paid $365,000 to settle an Age Discrimination in Employment Act (“ADEA”) lawsuit brought against it by the EEOC. EEOC v. iTutorGroup, Inc., et al., No. 1:22-cv-02565. The lawsuit alleged that iTutorGroup programmed its job application software to automatically reject all female applicants over the age of 55 and all male applicants over the age of 60. According to the EEOC, more than 200 qualified applicants were automatically rejected for positions based solely on their age.

In addition to a monetary payment to the automatically rejected applicants, the settlement requires iTutorGroup to provide training to those involved in hiring and to adopt a new, robust anti-discrimination policy, and it enjoins iTutorGroup from requesting birth dates from applicants.

In Mobley v. Workday, Inc., a case currently pending in California federal district court, the plaintiff seeks to impose liability on an AI vendor, Workday, which provides AI applicant screening tools to numerous companies. The plaintiff alleges that Workday’s AI systems caused him to be rejected from more than 100 jobs for discriminatory reasons – his age, race, and/or disability. The court recently denied Workday’s motion to dismiss, finding that the plaintiff sufficiently stated a disparate impact claim and that Workday could be held liable as an agent of the employers that rejected his applications.

The ruling on Workday’s motion to dismiss does not mean the employers who rejected the plaintiff are off the hook; they could be added to the suit or face separate litigation based on the same disparate impact theory of discrimination.

Federal Regulatory Guidance and State Legislation on the Use of AI in HR

Due to the increasing use of AI in employment decisions, and the risk of biased outcomes (whether intentional or inadvertent), US agencies have stepped in to provide guidance on the use of AI by human resource professionals. Of particular importance to HR departments is the EEOC’s guidance on combating disparate impact discrimination that could result from the use of AI in human resources functions, such as hiring, firing, disciplining, and setting rates of pay. The DOJ has likewise issued guidance on the use of AI and the Americans with Disabilities Act (“ADA”).

To further assist employers as they begin (or continue) to implement AI in their human resources functions, on September 23, 2024, the DOL announced a new AI & Inclusive Hiring Framework website, developed by the Partnership on Employment & Accessible Technology (“PEAT”), which the DOL funds. The website provides tips for employers who plan to implement AI-enabled hiring tools, organized into ten focus areas, such as identifying legal requirements, working with vendors, ensuring human oversight, and managing incidents. It explains that employers should use the tools provided as part of a “progressive effort” that “will evolve over time” as the employer implements more AI programs.

There is no shortage of vendors willing to provide AI support for human resources functions. However, an employer cannot simply rely on an outside AI vendor to insulate it from liability, and, as Mobley illustrates, an AI vendor that provides the tools but does not make the employment decisions might also be held liable. All of the recent agency guidance confirms that employment laws apply when using AI and cautions employers that the improper, unregulated use of AI could lead to results that violate US law.

Further, states and municipalities have begun enacting legislation specifically regulating the use of AI in human resources. New York City recently began enforcing its Automated Employment Decision Tools (“AEDT”) Law, which requires employers to complete a bias audit of any AEDT used to assess candidates for hiring or employees for promotion. It also requires that all job candidates who reside in New York City receive notice that the employer uses an AEDT.
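To give a sense of what such a bias audit involves, the NYC rules center on an “impact ratio”: each category’s selection rate divided by the selection rate of the most-selected category, with the resulting ratios published. The sketch below mirrors the four-fifths computation above but simply reports the ratios; the categories and counts are invented for illustration, and an actual audit must be performed by an independent auditor under the city’s rules.

    # Hypothetical AEDT-style impact-ratio summary. Categories and counts are
    # invented for illustration; an actual NYC bias audit must be conducted by
    # an independent auditor using historical selection data.

    results = {
        # category: (candidates assessed, candidates selected)
        "category_a": (500, 120),
        "category_b": (480, 90),
    }

    selection_rates = {c: selected / assessed
                       for c, (assessed, selected) in results.items()}
    top_rate = max(selection_rates.values())

    for category, rate in selection_rates.items():
        print(f"{category}: selection rate {rate:.1%}, "
              f"impact ratio {rate / top_rate:.2f}")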

Maryland law prohibits the use of facial recognition during job interviews unless the applicant consents to its use ahead of time. Recent amendments to the Illinois Human Rights Act impose a similar notice requirement that extends beyond job applicants to employees when AI is used in other employment functions, such as promotions or discharge. Many states are likely to follow, with varying rules governing the use of AI in employment functions.

Does Your Company Have an AI Policy? If Not, Now is the Time to Implement One.

Given the lack of current and comprehensive governmental guidance, now is the time to ensure that your company is protected from the improper, even illegal, use of AI. A thorough, state-specific policy that applies to all employees is necessary to protect your company from both single-plaintiff and class action lawsuits and administrative actions, such as charges of discrimination. 

The AI landscape will continue to change rapidly – with respect to both the uses of AI and the regulation of those uses. It is important to ensure that your company stays up to date with the federal, state, and local laws and guidance that apply to your business and your employees. Much already exists, and more is sure to follow.

Best Practices

How does your business protect itself in this fast-paced legal landscape?

  • Determine whether your company will allow the use of AI by your employees and, if so, for what purposes. Keep in mind that even if your company has not explicitly introduced or approved the use of AI, it is likely at least some of your employees are using AI at work.
  • Review federal and applicable state and local statutes, regulations, and case law to ensure compliance.
  • Implement an AI policy that complies with applicable federal, state, and local law.
  • Train your employees on the policy.
  • Monitor legislation and case law to ensure continued compliance.

BCLP is tracking the developments in both state and federal AI legislation to help companies stay informed and is poised to answer questions, develop policy, and provide training and support on the proper uses of AI in your company.


How Do You Use, or Anticipate Using, AI? Take a Brief Survey

Are you willing to take a short survey regarding your company’s use or contemplated use of AI? Your anonymized answers will be used to guide BCLP’s future insights and help us better protect the clients and industries we serve. Your participation would be greatly appreciated.

Take the survey >

US state-by-state AI legislation snapshot

Interactive map

BCLP actively tracks proposed, failed, and enacted AI regulatory bills from across the United States to help our clients stay informed in this rapidly changing regulatory landscape.

Related Practice Areas

  • Employment & Labor

Meet The Team


This material is not comprehensive, is for informational purposes only, and is not legal advice. Your use or receipt of this material does not create an attorney-client relationship between us. If you require legal advice, you should consult an attorney regarding your particular circumstances. The choice of a lawyer is an important decision and should not be based solely upon advertisements. This material may be “Attorney Advertising” under the ethics and professional rules of certain jurisdictions. For advertising purposes, St. Louis, Missouri, is designated BCLP’s principal office and Kathrine Dixon (kathrine.dixon@bclplaw.com) as the responsible attorney.