US state-by-state AI legislation snapshot

Summary

BCLP actively tracks proposed, failed and enacted AI regulatory bills from across the United States to help our clients stay informed in this rapidly changing regulatory landscape. The interactive map is current as of June 7, 2024, and will be updated quarterly to include legislation that, if passed, would directly impact a business’s development or deployment of AI solutions.[2]

Artificial Intelligence (AI), once limited to the pages of science fiction novels, has now been adopted by more than one-third of businesses in the United States, and even more organizations are working to embed AI into current applications and processes.[1] As companies increasingly integrate AI into their products, services, processes, and decision-making, they need to do so in ways that comply with the different state laws that have been passed and proposed to regulate the use of AI.


As is the case with most new technologies, the establishment of regulatory and compliance frameworks has lagged behind AI’s rise. This is set to change, however, as AI has caught the attention of federal and state regulators, and oversight of AI is ramping up.

In the absence of comprehensive federal legislation on AI, there is now a growing patchwork of current and proposed AI regulatory frameworks at the state and local level. Even with federal legislation uncertain, it is clear that momentum for AI regulation is at an all-time high. Consequently, companies stepping into the AI stream face an uncertain regulatory environment that must be closely monitored and evaluated to understand its impact on risk and the commercial potential of proposed use cases.

To help companies achieve their business goals while minimizing regulatory risk, BCLP actively tracks the proposed and enacted AI regulatory bills from across the United States to enable our clients to stay informed in this rapidly changing regulatory landscape. The interactive map below is current as of June 7, 2024, and will be updated quarterly to include legislation that, if passed, would directly impact a business’s development or deployment of AI solutions.[2] Click the states to learn more.

We have also created an AI regulation tracker for the UK and EU to keep you informed in this rapidly changing regulatory landscape.


[1] IBM Global AI Adoption Index 2023.

[2] We have also included laws addressing automated decision-making because AI and automation are increasingly integrated. Not all automated decision-making systems involve AI, however, so businesses will need to understand how their particular systems are designed. We have omitted biometric data, facial recognition, and sector-specific administrative laws.

Enacted

Introduced in 2018 as SB 1001, the Bolstering Online Transparency Act (BOT) went into effect in July 2019. BOT makes it unlawful for a person or entity to use a bot to communicate or interact online with a person in California in order to incentivize a sale or transaction of goods or services, or to influence a vote in an election, without disclosing that the communication is via a bot. The law defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The law applies only to communications with persons in California, and only to public-facing websites, applications, or social networks that have at least 10 million monthly U.S. visitors or users. BOT does not provide a private right of action.

Enacted

The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act, governs profiling and automated decision-making. The CCPA gives consumers opt-out rights with respect to businesses’ use of “automated decision-making technology,” which includes “profiling” consumers based on their “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The CCPA defines “profiling” as “any form of automated processing of personal information, as further defined by regulations pursuant to paragraph (16) of subdivision (a) of Section 1798.185 [of the CCPA], to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements,” leaving the scope relatively undefined. The CCPA also requires businesses to conduct a privacy risk assessment for processing activities that present “significant risk” to consumers’ privacy or security. “Significant risk” is not defined by the CCPA but may be fleshed out by the regulations.

As of the date of publication, regulations addressing automated decision-making have not been published.

Enacted

Introduced on January 31, 2024, AB2013 requires a developer of an artificial intelligence system or service made available to Californians for use, regardless of whether the terms of that use include compensation, to post on the developer’s internet website, on or before January 1, 2026, documentation regarding the data used to train the artificial intelligence system or service.

The law applies to AI developers, a term defined broadly to mean any person, government agency, or entity that either develops an AI system or service or “substantially modifies it,” which means creating “a new version, new release, or other update to a generative artificial intelligence system or service that materially changes its functionality or performance, including the results of retraining or fine tuning.” The law applies to generative AI released on or after January 1, 2022, and developers must comply with its provisions by January 1, 2026.

Failed

Introduced February 15, 2024, SB 1229 would have required property and casualty insurers to disclose, until January 1, 2030, whether they have used AI to make decisions that affect applications and claims review, as specified.

Failed (Vetoed by Governor Newsom)

The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, SB 1047, introduced February 7, 2024, would in general authorize an AI developer of a covered model that is nonderivative to determine whether the model qualifies for a limited duty exemption before training on that model can begin. The “limited duty exemption” would apply to a covered AI model, as defined by the bill, for which the developer can provide reasonable assurance that the model does not, and will not, possess a hazardous capability. “Hazardous capability” means the model creates or uses a “chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties”; causes at least $500,000,000 “of damages through cyberattacks on critical infrastructure via a single incident” or related incidents; causes at least $500,000,000 of damages by engaging in bodily harm to another human or theft of, or harm to, property with the requisite mental state; or poses other comparable “grave threats in severity to public safety and security.” Before starting training, the developer must meet specified requirements, such as the capability to promptly shut down the model, unless the model falls under the limited duty exemption. If an incident occurs, the developer must report each AI safety incident to the Frontier Model Division, a subdivision of the Department of Technology.

Failed

Introduced on February 15, 2024, AB2930 would, among other things, require an entity that uses an automated decision tool (ADT) to make a consequential decision (a deployer), and a developer of an ADT, to perform, before first using the tool and annually thereafter, an impact assessment for any ADT used that includes, among other things, a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts. The bill requires a deployer or developer to provide the impact assessment to the Civil Rights Department within 60 days of its completion. Before using an ADT to make a consequential decision, deployers must notify any natural person who is the subject of the consequential decision that the deployer is using an ADT to make, or be a controlling factor in making, the consequential decision. Deployers are also required to accommodate a natural person’s request not to be subject to the ADT and to be subject to an alternative selection process or accommodation if a consequential decision is made solely based on the output of an ADT, assuming that an alternate process is technically feasible. The bill would also prohibit a deployer from using an ADT in a manner that contributes to algorithmic discrimination. AB2930 is nearly identical to AB331, which advanced from the Assembly Committee on Privacy and Consumer Protection in 2023, but notably does not include a private right of action as AB331 did.

Enacted

Introduced on January 17, 2024, SB942, the California AI Transparency Act, applies to businesses providing a generative AI system that is publicly accessible within the state’s geographic boundaries and has over 1,000,000 monthly visitors during a 12-month period. The law requires in-scope businesses to create an AI detection tool that allows a user to query the business about whether content was created by a generative AI system. Additionally, the law requires these businesses to include in any AI-generated content a visible disclosure, clear and conspicuous and appropriate to the content’s medium, stating that AI has created the content. This disclosure must be understandable to a reasonable person, not avoidable, and consistent with the communication itself. The law goes into effect on January 1, 2026.

Failed

SB 892, introduced on January 1, 2024, would have prohibited businesses from contracting with state agencies to provide artificial intelligence services unless the business met safety, privacy, and nondiscrimination standards relating to artificial intelligence services set by California’s Department of Technology. To date, the Department of Technology has not promulgated these standards.

Failed

Introduced on January 25, 2024, SB970 would have required any person or entity that sells or provides access to any artificial intelligence technology designed to create content to provide a consumer warning that misuse of the technology may result in civil or criminal liability for the user. The bill would have required the Department of Consumer Affairs to specify the form and content of the consumer warning. Failure to comply with the consumer warning requirement would have been punishable by a civil penalty not to exceed twenty-five thousand dollars ($25,000) for each day that the technology is provided to or offered to the public without a consumer warning.

Failed

Introduced on January 30, 2023, AB 331 would, among other things, require an entity that uses an automated decision tool (ADT) to make a consequential decision (a deployer), and a developer of an ADT, to perform, on or before January 1, 2025, and annually thereafter, an impact assessment for any ADT used that includes, among other things, a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts. The bill requires a deployer or developer to provide the impact assessment to the Civil Rights Department within 60 days of its completion. Before using an ADT to make a consequential decision, deployers must notify any natural person who is the subject of the consequential decision that the deployer is using an ADT to make, or be a controlling factor in making, the consequential decision. Deployers are also required to accommodate a natural person’s request not to be subject to the ADT and to be subject to an alternative selection process or accommodation if a consequential decision is made solely based on the output of an ADT, assuming that an alternate process is technically feasible. The bill would also prohibit a deployer from using an ADT in a manner that contributes to algorithmic discrimination.

Finally, the bill includes a private right of action, which would open the door to significant litigation risk for users of ADTs.

Enacted

Introduced on February 15, 2024, AB 2885 defines artificial intelligence as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” The purpose of this definition is to standardize the definition of AI across various California statutes, including the California Business and Professions Code, Education Code, and Government Code. The law will take effect January 1, 2025.

Enacted

AB 1008 updates the definition of “personal information” in the California Consumer Privacy Act to clarify that personal information can exist in various formats, including in artificial intelligence (AI) systems that are capable of outputting personal information.

Enacted

Introduced on February 12, 2024, AB2355 requires that electoral advertisements using AI-generated or substantially altered content feature a disclosure that the material has been altered. The law will be enforced by the Fair Political Practices Commission.

Enacted

Introduced on February 14, 2024, AB2602, provides that “a provision in an agreement between an individual and any other person for the performance of personal or professional services is unenforceable only as it relates to a new performance, fixed on or after January 1, 2025, by a digital replica of the individual of the voice or likeness of an individual in lieu of the work of the individual.”

Enacted

Introduced on February 15, 2024, AB2839 expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content.

Enacted

Introduced on January 16, 2024, AB1836, prohibits commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings, etc., without first obtaining the consent of those performers’ estates.

Enacted

In 2021, Colorado enacted SB 21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices, a law intended to protect consumers from unfair discrimination in insurance rate-setting mechanisms. The law applies to insurers’ use of external consumer data and information sources (ECDIS), as well as algorithms and predictive models that use ECDIS in “insurance practices,” that “unfairly discriminate” based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.

On February 1, 2023, the Colorado Division of Insurance (CDI) released a draft of the first of several regulations to implement the bill.

On September 21, 2023, the CDI adopted Regulation 10-1-1 – Governance and Risk Management Framework Requirements for Life Insurers. The regulation governs the use of algorithms and predictive models that use external consumer data and information sources (ECDIS). Among other things, the regulation requires all Colorado-licensed life insurers to submit a compliance progress report on June 1, 2024, and an annual compliance attestation beginning on December 1, 2024.

Enacted

The Colorado Privacy Act (CPA), which went into force on July 1, 2023, provides consumers the right to opt out of the processing of their personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects.” The law defines those decisions as “a decision that results in the provision or denial of financial and lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health care services, or access to essential goods or services.” The CPA further requires that controllers conduct a data protection impact assessment (DPIA) if the processing of personal data creates a heightened risk of harm to a consumer. Processing that presents a heightened risk of harm to a consumer includes profiling if the profiling presents a reasonably foreseeable risk of:

  • Unfair or deceptive treatment of, or unlawful disparate impact on, consumers;
  • Financial or physical injury to consumers;
  • A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or
  • Other substantial injury to consumers.

All of which means that deployers of automated decision-making (which may or may not use AI) need to ensure that their systems’ design and implementation do not create the heightened risks outlined above, and that such processing is covered in their DPIAs. On March 15, 2023, the Colorado Attorney General’s Office finalized rules implementing the CPA.

Failed

Introduced January 10, 2024, HB 24-1057 would have prohibited a private landlord from employing or relying on AI or an algorithmic device to calculate rent to be charged to a tenant. Such use would have been an unfair or deceptive trade practice under the Colorado Consumer Protection Act.

Enacted

Enacted on May 24, 2024, HB1147 creates a statutory scheme to regulate the use of deepfakes produced using generative artificial intelligence in communications about candidates for elective office. HB1147 prohibits the distribution, with actual malice as to the deceptiveness or falsity of the communication, of a communication that includes an undisclosed deepfake related to a candidate for public office. Violators will be subject to civil penalties. Additionally, a candidate who is the subject of a communication that includes a deepfake and does not comply with the disclosure requirements may bring a civil action for an injunction or for general or special damages, or both.

Enacted

Enacted May 17, 2024, SB24-205 is an artificial intelligence consumer protection bill. The bill requires both a developer and a deployer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to avoid algorithmic discrimination in the high-risk system. A developer is a person doing business in Colorado who develops or substantially modifies certain AI models or systems, while a deployer is a person doing business in Colorado who deploys certain AI systems.

Algorithmic discrimination is when an AI system materially increases the risk of unlawful differential treatment or impact on an individual or group on the basis of certain protected classes like age, color, disability, ethnicity, race, religion, or sex.

There is a rebuttable presumption that a developer used reasonable care if the developer complied with certain provisions of the bill, including:

  • Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;
  • Making a publicly available statement summarizing the types of high-risk systems that the developer has developed and how reasonably foreseeable risks of discrimination are managed;
  • Disclosing certain reasonably foreseeable risks of discrimination to the AG and deployers within 90 days after discovery of the risk.

There is a rebuttable presumption that a deployer used reasonable care if the deployer complied with certain provisions of the bill, including:

  • Implementing a risk management policy and program for the high-risk system;
  • Completing an impact assessment of the high-risk system;
  • Making a publicly available statement summarizing the types of high-risk systems that the deployer has deployed and how reasonably foreseeable risks of discrimination are managed;
  • Disclosing certain reasonably foreseeable risks of discrimination to the AG within 90 days after discovery of the risk.

A developer or business that makes available an AI system that is intended to interact with consumers must disclose to the consumer that they are interacting with an AI system.

There is no private right of action – the AG is exclusively responsible for enforcement. However, a developer or deployer has an affirmative defense if the system involved in the violation complies with federal or international law and the developer or deployer has taken specified measures to discover any violations of this bill.

Enacted

The Connecticut Data Privacy Act (CTDPA), which went into force on July 1, 2023, provides consumers the right to opt out of profiling if such profiling is in furtherance of automated decision-making that produces legal or other similarly significant effects. Controllers must also perform data risk assessments prior to processing consumer data when such processing presents a “heightened risk of harm.” These situations include certain profiling activities that present a reasonably foreseeable risk of: unfair or deceptive treatment of, or unlawful disparate impact on, consumers; financial, physical or reputational injury to consumers; physical or other intrusion into the solitude, seclusion or private affairs or concerns of consumers that would be offensive to a reasonable person; or other substantial injury to consumers.

Failed

Introduced March 7, 2024, HB 5450 would have prohibited, within a 90-day period preceding an election or primary, the distribution of certain deceptive synthetic media created by AI. “Deceptive synthetic media” means “any image, audio or video of an individual, and any representation of such individual’s appearance, speech or conduct that is substantially derived from any image, audio or video” that (1) “a reasonable person” would attribute to a person and (2) was created by AI or by other means.

Failed

Introduced on February 21, 2024, SB 2 would regulate the development and use of automated decision tools (ADT) and high-risk artificial intelligence systems. The following requirements would have gone into force on July 1, 2025.

Development Requirements:

  • Documentation: Developers of certain AI systems must provide comprehensive documentation. This documentation should cover:
    • System Behavior: Detailed information about how the AI system operates.
    • Data Used: The datasets utilized by the AI system during development.
    • Risk Assessment: An assessment of potential risks associated with the AI system.
  • Transparency: Developers must ensure transparency in the development process, allowing stakeholders to understand the system’s inner workings.

Deployment Requirements:

  • High-Risk AI Systems: Deployers of high-risk AI systems (those impacting critical areas like criminal justice, education, employment, and healthcare) have additional responsibilities:
    • Risk Assessment: Conduct a thorough risk assessment before deploying the AI system.
    • Documentation: Provide detailed documentation to users and relevant authorities.
    • Transparency: Ensure transparency regarding the AI system’s functioning and potential biases.
    • Compliance: Comply with guidelines set forth by the bill to prevent unintended consequences.

Artificial Intelligence Advisory Council:

  • The bill establishes an Artificial Intelligence Advisory Council to oversee compliance and provide guidance to developers and deployers.

SB 2 does not establish a qualified individual right to opt out of covered decision-making systems. SB 2 addresses various other AI topics, including synthetic images, and provides for the establishment of a “Connecticut Citizens AI Academy.”

Enacted

The Delaware Personal Data Privacy Act, which goes into force on January 1, 2025, provides consumers the right to opt out of profiling if such profiling is in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer. Controllers must also perform data protection assessments when data processing presents a “heightened risk of harm,” including where the controller processes personal data for the purposes of profiling and such profiling presents a reasonably foreseeable risk of any of the following: (a) unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (b) financial, physical, or reputational injury to consumers; (c) a physical or other intrusion upon the solitude or seclusion, or private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person; or (d) other substantial injury to consumers.

Failed

Introduced on February 2, 2023, B114, the Stop Discrimination by Algorithms Act of 2023 (SDAA), would have prohibited both for-profit and nonprofit organizations from using algorithms that make decisions based on protected personal traits. The bill would make it unlawful for a DC business to make a decision stemming from an algorithm if it is based on a broad range of personal characteristics, including actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income or disability, in a manner that makes “important life opportunities” unavailable to that individual or class of individuals. Any covered entity or service provider who violates the act would be liable for a civil penalty of up to $10,000 per violation.

Failed

Introduced on January 19, 2024, SB 850, the Use of Artificial Intelligence in Political Advertising bill, would have taken effect July 1, 2024, if enacted. The bill aimed to require political campaigns to disclose through a disclaimer the use of AI in any “images, video, audio, text, and other digital content” used in ads. It sought to address the rising concern of deceptive campaign advertising (deepfakes) by mandating disclaimers on political ads that contain certain content generated through artificial intelligence. Generative artificial intelligence is defined as a “machine based system that can for a given set of human defined objectives emulate the structure and characteristics of input data in order to generate derived synthetic content.” Violators of this proposed legislation could have faced civil penalties. Anyone could file a complaint with the Florida Elections Commission on suspicion of a violation. The bill would have applied to any person or entity releasing a political advertisement, electioneering communication, or other miscellaneous advertisement.

Failed

HB 1459, introduced January 7, 2024, would have required business entities that produce AI and make it available to the public to publish safety and transparency standards for AI-generated content and videos. The bill would have required disclosure of certain AI-generated content to better inform consumers that they are using AI and, more specifically, would have subjected political ads to certain requirements.

Enacted

Enacted April 29, 2024, HB 919 requires certain political advertisements, electioneering communications, or other political content created using generative AI to include a disclaimer. Advertisements falling under this bill include depictions of “a real person performing an action that did not actually occur” and content that “was created with intent to injure a candidate or to deceive regarding a ballot issue,” among others. These advertisements must state the following disclaimer: “Created in whole or in part with the use of generative artificial intelligence (AI).” The disclaimer must be printed clearly, be readable, and occupy at least 4 percent of the communication, depending on the type of media. Failure to comply will result in civil and criminal penalties. The law takes effect July 1, 2024.

Failed

Introduced on January 1, 2024, HB 887 would have prohibited the use of artificial intelligence in making certain decisions regarding insurance coverage, health care and public assistance. In particular, the bill would have prohibited health care decisions from being based “solely on results derived from the use or application of artificial intelligence or utilizing decision tools.” The bill further would have required the Georgia Composite Medical Board to review, and override, any decision resulting from AI, and to promulgate regulations on review activities. The bill took a similar approach regarding AI and automated decision-making tools in insurance coverage and public assistance.

Failed

Introduced on January 9, 2024, HB 890 would have prohibited discrimination based on age, race, color, sex, sexual orientation, gender, gender expression, national or ethnic origin, religion, creed, familial status, marital status, disability or handicap, or genetic information, including discrimination resulting from the use of or reliance upon artificial intelligence or automated decision tools.

Enacted

Signed into law on May 2, 2023, and effective as of July 1, 2023, HB 203, permits an optometrist or ophthalmologist licensed in the state (a “prescriber”) to use an “assessment mechanism,” to conduct an eye assessment or generate a prescription for contact lenses or spectacles subject to the below conditions. An “assessment mechanism” means automated or virtual equipment, application, or technology designed to be used on a telephone, a computer, or an internet accessible device that may be used either in person or via telemedicine to conduct an eye assessment, and includes artificial intelligence devices and any equipment, electronic or nonelectronic, that are used to conduct an eye assessment. An assessment mechanism can be used; provided, however, that:

  • The data obtained from the assessment mechanism is not the sole basis for issuing the prescription.
  • The assessment mechanism alone is not used to generate an initial prescription or the first renewal of the initial prescription.
  • The assessment mechanism is only used where the patient has had a traditional eye examination in the past two years.

Failed

Introduced on January 20, 2023, SB974, the Hawaii Consumer Data Protection Act, would establish a framework to regulate controllers’ and processors’ access to personal consumer data and would introduce penalties, as well as a new consumer privacy special fund.

The bill also provides consumers the option to opt out of the processing of their personal data for the purposes of “profiling in furtherance of decisions made by the controller that results in the provision or denial by the controller of financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, health care services, or access to basic necessities, including food and water.” “Profiling” is defined as any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

The bill further requires covered entities to conduct a data protection assessment when they process personal data for purposes of profiling and the profiling presents “a reasonably foreseeable risk of: (A) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) Financial, physical, or reputational injury to consumers; (C) A physical intrusion or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) Other substantial injury to consumers[.]” The law goes into effect July 1, 2050, as currently drafted. The bill stalled in 2023 but was picked back up and carried over to the 2024 regular legislative session.

Failed

Introduced January 19, 2024, SB 2572 (Assembly version A2176) would have prohibited a person from deploying AI-generated products in Hawaii without submitting proof of the product’s safety to the office regulating AI. Violation of this bill would be subject to a monetary fine for each offense.

Failed

Introduced on January 20, 2023, SB1110, an alternate version of the Hawaii Consumer Data Protection Act, would create materially similar obligations with respect to “profiling” as SB974. The bill stalled in 2023 but was picked back up and carried over to the 2024 regular legislative session.

Failed

SB 2524, introduced January 19, 2024, would have prevented a covered entity, including an individual, firm, corporation, legal entity, or other commercial entity, from making an algorithmic eligibility determination or an algorithmic information availability determination on the basis of class, race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, wealth, or disability. “Algorithmic eligibility determination” is a determination about a person’s eligibility for important life opportunities based in whole or part on an algorithmic process using AI, machine learning, or similar technologies. “Algorithmic information availability determination” is an AI-generated determination of a person’s receipt of advertising, marketing, solicitations, or other information about important life opportunities. A violation shall be deemed an unlawful discriminatory practice.

Failed

HB 1734, introduced January 18, 2024, would have required any AI-generated political advertisement containing an “image, video, footage, or audio recording” to include a “clear and conspicuous statement” disclosing the use of AI in creating the content. The disclosure, depending on the media, must be readable, follow specified procedures, and be intelligible.

Failed

Introduced January 17, 2024, HB 1607 (Senate version SB 2524) would have prohibited a covered entity, such as an individual, firm, corporation, partnership, or other commercial entity, from making an “algorithmic eligibility determination” or an “algorithmic information availability determination” on the basis of class, race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, wealth, or disability in a discriminatory manner. An “algorithmic eligibility determination” is an AI-generated determination, in whole or in part, regarding a person’s eligibility for, or opportunity to access, important life opportunities. An “algorithmic information availability determination” is an AI-generated determination about a person’s ability to receive advertising, marketing, solicitations, or other offers for an important life opportunity. Failure to comply would constitute an unlawful discriminatory practice.

Enacted

In 2019, Illinois became the first state to enact restrictions with respect to the use of AI in hiring.  The Illinois AI Video Interview Act was amended in 2021 and went into effect in 2022, and now requires employers using AI-enabled assessments to:

  • Notify applicants of AI use;
  • Explain how the AI works and the “general types of characteristics” it uses to evaluate applicants;
  • Obtain their consent;
  • Share any applicant videos only with service providers engaged in evaluating the applicant;
  • Upon an applicant’s request, destroy all copies of the applicant’s videos and instruct service providers to do so as well; and
  • Report annually, after use of AI, a demographic breakdown of the applicants they offered an interview, those they did not, and the ones they hired.

Failed

Introduced December 19, 2022, HB 1002 would amend the University of Illinois Hospital Act and the Hospital Licensing Act to require that, before using any diagnostic algorithm to diagnose a patient, a hospital must first confirm that the diagnostic algorithm has been certified by the Department of Public Health and the Department of Innovation and Technology, has been shown to achieve diagnostic results as accurate as or more accurate than other diagnostic means, and is not the only method of diagnosis available to a patient.

Enacted

Introduced February 17, 2023 and signed into law August 12, 2024, HB 3773 amends the Human Rights Act and provides that an employer that uses predictive data analytics in its employment decisions may not consider the applicant’s protected class information or ZIP code when used as a proxy for race. Namely, it shall be a civil rights violation: (1) for an employer to use artificial intelligence to make decisions with respect to recruitment, hiring, promotion, renewal of employment, or conditions of employment, training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, to use artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes identified under the Act, or to use ZIP codes as a proxy for protected classes; or (2) for an employer to fail to provide notice to an employee that the employer is using artificial intelligence.

The law defines “artificial intelligence” to mean a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments,” and expressly includes generative artificial intelligence. “Generative artificial intelligence” is defined to mean an “automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content,” including text, images, multimedia, and other content that would otherwise be produced by human means.

The Department of Human Rights is tasked with adopting implementing rules, including those relating to notice. The new amendments to the Human Rights Act will be codified at 775 ILCS 5/2-101 and 775 ILCS 5/2-102. The law goes into force on January 1, 2026.

Failed

Introduced February 17, 2023, HB 3943 would create the Social Media Content Moderation Act and require that a social media company post terms of service for each social media platform owned or operated by the company, in a manner reasonably designed to inform all users of the social media platform of the existence and contents of the terms of service, and submit a terms of service report to the Attorney General on a semi-annual basis that includes a detailed description of content moderation systems and information on content that was flagged and how that content was flagged, including whether the content was flagged and actioned by AI software.

Failed

Introduced February 17, 2023, HB 3880 would create the Children’s Privacy Protection and Parental Empowerment Act and provide that a business that provides an online service to children shall not profile a child by default unless the profiling is necessary to provide the online service, and only with respect to the aspect of the online service with which the child is actively and knowingly engaged, and the business can demonstrate a compelling reason that profiling is in the best interest of children. Profiling is defined as any form of automated processing of personal information that uses personal information to evaluate certain aspects relating to a natural person, including analyzing or predicting aspects concerning a natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

Proposed

Introduced February 06, 2024, HB 4869 would amend the Consumer Fraud and Deceptive Business Practices Act. It provides that any person who, for any commercial purpose, makes, publishes, disseminates, airs, circulates, or places an advertisement for goods or services before the public, or causes, directly or indirectly, an advertisement for goods or services to be made, published, disseminated, aired, circulated, or placed before the public, that the person knows or should have known contains synthetic media, shall disclose in the advertisement that the advertisement contains synthetic media. It further provides that if synthetic media has been used in any advertisement for goods or services that is published, aired, circulated, disseminated, or otherwise placed before the public and that depicts a person engaged in any action or expression in which the person did not actually engage, the advertisement shall include a disclaimer that clearly and conspicuously states that the likeness featured in the advertisement is synthetic, does not depict an actual person, and is generated to create a human likeness. A violation of these provisions constitutes an unlawful practice within the meaning of the Act.

Proposed

Introduced February 08, 2024, HB 5116 would create the Automated Decision Tools Act; provides that, on or before a specified date, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses or designs, codes or produces that includes specified information; provides that a deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision.

Proposed

Introduced February 09, 2024, HB 5321 would amend the Consumer Fraud and Deceptive Business Practices Act; provides that each generative artificial intelligence system and artificial intelligence system that, using any means or facility of interstate or foreign commerce, produces image, video, audio or multimedia AI-generated content shall include on the AI-generated content a clear and conspicuous disclosure that satisfies specified criteria.

Proposed

Introduced February 09, 2024, HB 5322 would create the Commercial Algorithmic Impact Assessments Act; defines algorithmic discrimination, artificial intelligence, consequential decision, deployer, developer and other terms; requires that, by a specified date and annually thereafter, a deployer of an automated decision tool must complete and document an assessment that summarizes the nature and extent of that tool, how it is used and an assessment of its risks, among other things.

Proposed

Introduced February 09, 2024, HB 5591 would create the Bolstering Online Transparency Act; provides that a person shall not use an automated online account, or bot, to communicate or interact with another person in this state online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election, unless the person makes a specified disclosure.

Proposed

Introduced February 09, 2024, HB 5649 would amend the Consumer Fraud and Deceptive Business Practices Act; provides that it is an unlawful practice within the meaning of the act for a licensed mental health professional to provide mental health services to a patient through the use of artificial intelligence without first obtaining informed consent from the patient for the use of artificial intelligence tools and disclosing the use of artificial intelligence tools to the patient before providing services through the use of artificial intelligence.

Enacted

Introduced on January 9, 2023, SB5 would create an omnibus consumer privacy law along the lines of the Virginia Consumer Data Protection Act and the Colorado Privacy Act to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements[.]” Controllers must also perform a data protection impact assessment for high-risk profiling activities. SB5 was enacted as Public Law 94 on May 1, 2023.

Failed

Introduced on January 29, 2023, HB1554 is similar to SB5 with respect to its regulation of “profiling.”

Proposed

Introduced March 02, 2023, HP 569, An Act To Protect Workers From Employer Surveillance, would require an employer to disclose, upon an employee’s request, whether employee data interacts with an automated decision system. Amended by H-173 and H-575.

Failed

Introduced May 18, 2023, LD 1973 would enact the Maine Consumer Privacy Act, aimed at protecting consumer data. Section 9603 would prohibit a controller from processing consumer data for the purpose of profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer unless the consumer opts in to the processing. Section 9607 would require a controller to conduct a data protection assessment when processing personal data for the purpose of profiling if the profiling presents a reasonably foreseeable risk to the consumer. Profiling is not defined.

Failed

Introduced on May 23, 2023, the Data Privacy and Protection Act, HP 1270, is a comprehensive bill aimed at protecting consumer data. The Act includes retention limits, use restrictions, and reporting requirements. Section 9615 specifically governs the use of algorithms. The Act provides that covered entities using covered algorithms (broadly defined, including machine learning, AI, and natural language processing tools) to collect, process, or transfer data “in a manner that poses a consequential risk of harm” must complete an impact assessment of the algorithm. The impact assessment must be submitted to the Attorney General’s office within 30 days of its completion and must include a publicly available and easily accessible summary.

In addition to an impact assessment, the Act requires covered entities to create a design evaluation prior to deploying a covered algorithm. The design evaluation must include the design, structure, and inputs of the covered algorithm.

This bill includes a private right of action and allows for the recovery of punitive damages.

Enacted

Maryland law, HB 1202, prohibits an employer from using a facial recognition service for the purpose of creating a facial template during an applicant’s pre-employment interview, unless the applicant consents by signing a specified waiver.  This workplace AI law went into force on October 1, 2020.

Proposed

Introduced March 11, 2024, HB 1255 would restrict an employer from using an automated employment decision tool to make certain employment decisions and would require an employer, under certain circumstances, to notify an applicant for employment of the employer’s use of an automated employment decision tool within a certain time period; the bill generally relates to automated employment decision tools.

Proposed

Introduced on January 18 and 19, 2023, the Massachusetts Data Privacy Protection Act (MDPPA) was filed in both the Senate (SD 745) and the House (HD 2281). The bill is based on the federal American Data Privacy and Protection Act, with additional provisions relating to workplace surveillance. The MDPPA would require companies to conduct impact assessments if they use a “covered algorithm” in a way that poses a consequential risk of harm to individuals. “Covered algorithm” is defined as “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data, including determining the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.”

Failed

Introduced on February 16, 2023, HB1974, would regulate the use of artificial intelligence (AI) in providing mental health services. In particular, the bill provides that the use of AI by any licensed mental health professional in the provision of mental health services must satisfy the following conditions: (1) pre-approval from the relevant professional licensing board; (2) any AI system used must be designed to prioritize safety and must be continuously monitored by the mental health professional to ensure its safety and effectiveness; (3) patients must be informed of the use of AI in their treatment and be afforded the option to receive treatment from a licensed mental health professional; and (4) patients must provide their informed consent to receiving mental health services through the use of AI. AI is defined as “any technology that can simulate human intelligence, including but not limited to, natural language processing, training language models, reinforcement learning from human feedback and machine learning systems.”

Proposed

Introduced on January 20, 2023, in both the Senate (SD 1971, assigned SB227) and the House (HD 3263), the Massachusetts Information Privacy and Security Act (MIPSA) creates various rights for individuals regarding the processing of their personal information, including the right to a privacy notice at or before the point of collection of an individual’s personal information, the right to opt out of the processing of an individual’s personal information for the purposes of sale and targeted advertising, rights to access and transport, delete, and correct personal information, and the right to revoke consent. Additionally, large data holders are required to perform risk assessments where the processing is based in whole or in part on an algorithmic computational process. A “large data holder” is a controller that, in a calendar year: (1) has annual global gross revenues in excess of $1,000,000,000; and (2) determines the purposes and means of processing of the personal information of not less than 200,000 individuals, excluding personal information processed solely for the purpose of completing a payment-only credit, check or cash transaction where no personal information is retained about the individual entering into the transaction.

Proposed

Introduced on February 16, 2023, H1873, An Act Preventing A Dystopian Work Environment, would require that employers provide employees and independent contractors (collectively, “workers”) with a particularized notice prior to the use of an Automated Decision System (ADS), along with the right to request information including, among other things, whether their data is being used as an input for the ADS and what ADS output is generated based on that data. “Automated Decision System (ADS)” or “algorithm” is defined as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes or assists an employment-related decision.” The bill further requires that employers review and adjust as appropriate any employment-related decisions or ADS outputs that were partially or solely based on inaccurate data, and inform the worker of the adjustment. Employers and vendors acting on behalf of an employer must maintain an updated list of all ADSs currently in use and must submit this list to the department of labor on or before January 31 of each year. The bill also prohibits the use of ADSs in certain circumstances and requires the performance of algorithmic impact assessments. The reporting date has been extended to Wednesday, July 31, 2024.

Failed

Introduced on February 16, 2023, SB31, An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT, would require any company operating a large-scale generative artificial intelligence model to adhere to certain operating standards such as reasonable security measures to protect the data of individuals used to train the model, informed consent from individuals before collecting, using, or disclosing their data, and performance of regular risk assessments.  A “large-scale generative artificial intelligence model” is defined to mean “a machine learning model with a capacity of at least one billion parameters that generates text or other forms of output, such as ChatGPT.” The bill further requires any company operating a large-scale generative artificial intelligence model to register with the Attorney General and provide certain enumerated information regarding the model.

Proposed

Introduced on January 11, 2024, HD 4788, the Artificial Intelligence Disclosure Act, would require that any generative artificial intelligence system used to create audio, video, text or print AI-generated content within Massachusetts include on or within such content a clear and conspicuous disclosure that meets the following criteria: (i) a clear and conspicuous notice, as appropriate for the medium of the content, that identifies the content as AI-generated content and that is, to the extent technically feasible, permanent or not easily removed by subsequent users; and (ii) metadata information that includes an identification of the content as being AI-generated content, the identity of the system, tool or platform used to create the content, and the date and time the content was created.

Proposed

Introduced on February 16, 2023, H. 83 would create an omnibus consumer privacy law called the Massachusetts Data Privacy Protection Act to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for the use of automated decision-making technologies that would require a covered entity using such technologies (covered algorithms) to conduct an impact assessment and evaluate any training data used to develop the covered algorithm to reduce the risk of any potential harms from the use of such technologies.

Proposed

Introduced on December 28, 2023, S. 2539, would require the development of a comprehensive set of policies designed to bring cybersecurity and AI preparedness up to the latest standards and to keep the Massachusetts government up to date as technology continues to rapidly advance.

Enacted

Introduced on March 1, 2023, HF2309 would create an omnibus consumer privacy law based on the Colorado Privacy Act and Connecticut Data Privacy Act to regulate, among other data uses, the collection and processing of personal information. In particular, the bill sets out rules for profiling and automated decision-making. Specifically, the bill enables individuals to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data privacy and protection assessment for high-risk profiling activities.

Failed

Introduced on March 15, 2023, SF2915 would establish consumer rights regarding personal data. Consumers would have the right to access their personal data gathered by controllers and correct inaccurate information. They would also have the right to delete personal data and to opt out of the use of their data for targeted advertising or profiling. Profiling includes any form of automated processing of personal data to evaluate or predict personal aspects. If passed, the act would have become effective July 31, 2024.

Enacted

Introduced on February 16, 2023, SB384, An Act Establishing the Consumer Data Privacy Act, would create an omnibus consumer privacy law to regulate, among other data uses, the collection and processing of personal information and profiling and automated decision-making. Specifically, the bill creates certain transparency requirements around profiling and enables individuals to opt out of “profiling in furtherance of automated decisions that produce legal or similarly significant effects” concerning the consumer. Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities.

Enacted

Introduced on January 19, 2023, SB 255, creates an omnibus consumer privacy law based on a composite of the Colorado Privacy Act, Connecticut Data Privacy Act, and Virginia Consumer Data Protection Act. In particular, the bill sets out rules for profiling and automated decision-making.  Specifically, the bill enables individuals to opt-out of “profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer.” Profiling is defined as “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  Controllers must also perform a data protection assessment for high-risk profiling activities.  The bill was reintroduced and passed by the legislature on January 18, 2024.

Failed

Introduced on December 5, 2022,  A4909, would regulate the “use of automated tools in hiring decisions to minimize discrimination in employment.” The bill imposes limitations on the sale of automated employment decision tools (AEDTs), including mandated bias audits, and requires that candidates be notified that an AEDT was used in connection with an application for employment within 30 days of the use of the tool.

Enacted

Initially introduced on January 11, 2022, S332 (the “Act”), creates an omnibus consumer privacy law along the lines of the Washington Privacy Act.   Among other things, the Act requires companies to conduct data protection assessments of “processing that presents a heightened risk of harm to a consumer” before conducting such processing. Such “heightened risk” results from activities such as profiling.  “Profiling” means any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements. Consumers are also afforded the right to opt-out of profiling in furtherance of decisions that produce legal or similarly significant effects.

The bill was signed into law on January 16, 2024.  The law will go into effect January 15, 2025.

Failed

Introduced on January 1, 2022, A537, would require an automobile insurer using an automated or predictive underwriting system to annually provide documentation and analysis to the Department of Banking and Insurance to demonstrate that the insurer’s automated or predictive underwriting system does not produce a discriminatory pricing outcome on the basis of race, ethnicity, sexual orientation, or religion. Under this bill, “automated or predictive underwriting system” is defined to mean a computer-generated process that is used to evaluate the risk of a policyholder and to determine an insurance rate. An automated or predictive underwriting system may include, but is not limited to, the use of robotic process automation, artificial intelligence, or other specialized technology in its underwriting process.

Proposed

Introduced February 22, 2024, A3854, which is similar to A4030, would make it unlawful to sell, develop, deploy, use, or offer for sale an automated employment decision tool unless (1) a bias audit has assessed the tool within the year prior to the sale or offer for sale; (2) the tool includes, at no additional cost, an annual bias audit service; (3) the tool is developed, sold, deployed, used, or offered for sale with a notice stating the tool is subject to this bill; and (4) the tool’s developer has implemented the recommendations of the most recent bias audit and has issued a press release stating so. “Employer” includes an “individual, partnership, association, corporation,” and other business entities. “Automated employment decision tool” is a “machine-based system that can, for a set of human-defined objectives provided by an employer or an individual acting on behalf of an employer, make predictions, recommendations, or decisions influencing recruitment, workforce, or employment decisions.” A “bias audit” would be an “impartial evaluation conducted by an independent auditor.”

Proposed

Introduced February 27, 2024, A3912 would expand the definition of “identity theft” to include impersonation or false depictions of a person generated entirely by, or substantially manipulated by, computer technology or AI-generated speech, speech transcription, or text. To constitute criminal activity, a person must reasonably believe the AI-generated content accurately exhibits the activity of a person, the content must have been produced without the person’s consent, and the exhibition must be “substantially likely” to create perceptible individual or societal harm. This act would take effect immediately.

Proposed

A4030, introduced March 7, 2024, would prohibit the sale or offer for sale in New Jersey of an automated employment decision tool unless (1) a bias audit has been performed on the tool in the year prior to sale; (2) the sale includes, at no additional fee, the annual bias audit service; and (3) the tool is sold or offered with a notice stating it is subject to these provisions. “Automated employment decision tool” is “any system” governed by “statistical theory” or other methodologies that filter candidates for hire automatically in a way that establishes a preferred candidate or candidates. “Bias audit” is an “impartial evaluation” of the automated employment decision tool to assess its compliance with anti-discrimination laws. A violation of this bill would result in a civil penalty of not more than $500 for the first violation and not more than $1,500 for each subsequent violation.

Proposed

Introduced June 6, 2024, AR141 encourages platforms that generate and disseminate deepfake and cheapfake media “to voluntarily commit to prevent and remove harmful content.” “Deepfake” and “cheapfake” media include video recordings, motion picture films, sound recordings, electronic images, photographs or other technological representations of speech or conduct that depict a person engaging in speech or conduct in which that person would not normally engage. Such media is AI-produced content that can “manipulate public understandings of evidence and truth.”

Proposed

Introduced March 18, 2024, S2964 (Assembly version A3855) establishes standards for independent bias auditing of automated employment decision tools (“AEDT”). This bill would apply to employers, including employment agencies, individuals, partnerships, associations, corporations, and other entities employing any person. An “independent auditor” would be a person or group capable of exercising objective judgment on all issues within the scope of a bias audit of an AEDT. “AEDT” is a system governed by statistical theory or related methodologies, including learning algorithms, that automatically filter candidates for hire for any term, condition, or privilege of employment in a way that “establishes a preferred candidate or candidates.” A “bias audit” would be an “impartial evaluation, including but not limited to testing, of an automated employment decision tool to assess its predicted compliance” with anti-discrimination laws.

Proposed

Introduced April 8, 2024, S3046 would provide corporation business tax and gross income tax credits for employing persons who have experienced job loss because of automation. The corporation tax credit would be equal to 10 percent of the salary and wages paid to each person employed by the corporation who experienced termination because of automation. To qualify, the corporation must employ the person for at least seven months of the privilege period for which the taxpayer claims the credit. The credit, however, cannot exceed $2,500 per employee per privilege period. “Automation” is defined as a “device, process, or system that functions without continuous input from a human operator.” This bill would take effect immediately and would apply to privilege periods and taxable years beginning on or after January 1 of the year following enactment.

Proposed

S3015, introduced April 8, 2024 (Assembly version A3911), would require an employer located in New Jersey, including a person, firm, business, educational institution, nonprofit, corporation, LLC, or other entity, that asks applicants to record video interviews and uses AI to analyze those videos to: notify the applicant that AI may be used to analyze their video; provide the applicant with information before the interview as to how the AI works and evaluates applicants; and obtain written consent before the interview to the applicant being evaluated by AI. If an applicant has not consented, the employer cannot use AI for analysis. Additionally, the bill would require an employer using AI analysis to determine applicant fitness to collect and report the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview, as well as of applicants who are offered a position or hired. This data must be reported annually to the Department of Labor and Workforce Development. Violation of this bill would result in a civil penalty of $500 for the first offense and $1,000 for any subsequent offense.

Proposed

S3225, introduced May 13, 2024, would require a business entity, such as a business corporation, professional services corporation, LLC, partnership, limited partnership, business trust, association, or any other legal commercial entity organized under New Jersey law, that uses a text-based chat to offer a transcript of the chat to the consumer. “Chat” includes any tool used by the entity “to provide real-time, text-based communication with a consumer.” “Transcript” is a “typed or printed verbatim record of a chat.” Additionally, the entity must provide “clear and conspicuous notice to the consumer at the outset of any interaction, informing the consumer of the option to receive a transcript of the chat.” Failure to comply would be unlawful. The bill would take effect immediately.

Proposed

Introduced May 20, 2024, S3298 (Assembly version A3858) would require insurance carriers to disclose in a “clear and conspicuous” location on their websites whether the carrier uses an “automated utilization management system” and the number of claims reviewed using this system in the previous year. An “automated utilization management system” is a system used for reviewing the “appropriate and efficient allocation of health care services under a health benefits plan according to specified guidelines” to recommend or determine if and to what extent a health care service should be given or proposed to a covered person. The automated utilization management system may use AI or other software. This bill, if enacted, would take effect on the first day of the 13th month following the date of enactment.

Failed

Introduced on February 10, 2022, S1402, provides that it is unlawful discrimination and a violation of the law against discrimination for an automated decision system (ADS) to discriminate against any person or group of persons who is a member of a protected class in: (1) the granting, withholding, extending, modifying, renewing, or purchasing, or in the fixing of the rates, terms, conditions or provisions of any loan, extension of credit or financial assistance; (2) refusing to insure or continuing to insure, limiting the amount, extent or kind of insurance coverage, or charging a different rate for the same insurance coverage provided to persons who are not members of the protected class; or (3) the provision of health care services.  Under the bill, ADS means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making.

An ADS is discriminatory if the system selects individuals who are members of a protected class for participation or eligibility for services at a rate that is disproportionate to the rate at which the system selects individuals who are not members of the protected class.  If passed, the law would take effect on the first day of the third month next following enactment.

Proposed

Introduced on January 9, 2024, S1588, regulates the use of automated employment decision tools during the hiring process to minimize employment discrimination that may result from the use of the tools. The Bill would prohibit the sale of automated employment decision tools unless certain requirements are met, including a previous bias audit, a no-cost yearly bias audit, and a notice that the tool is subject to the specific Bill. Additionally, the Bill has specific employee notification requirements for companies that use these tools.

Proposed

Introduced on January 17, 2024, SB 68, the Age-Appropriate Design Code Act applies to “a sole proprietorship, partnership, limited liability company, corporation, association, affiliate or other legal entity that is organized or operated for the profit or financial benefit of the entity’s shareholders or other owners and that offers online products, services or features to individuals in New Mexico and processes children’s personal data.”

The Act would prohibit a covered entity from “profiling” a child under 18 unless:

(1) the covered entity can demonstrate that the covered entity has appropriate safeguards in place to ensure that profiling is consistent with the best interest of children reasonably likely to access the online product, service or feature; and

(2) profiling is necessary to provide the online product, service or feature requested, and only with respect to the aspects of the online product, service or feature with which the child is actively and knowingly engaged; or

(3) the covered entity can demonstrate a compelling reason that profiling is in the best interest of children.

“Profiling” means automated processing of personal data that uses personal data to evaluate certain aspects relating to a natural person, including analyzing or predicting aspects concerning a natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements; “profiling” does not include the processing of data that does not result in an assessment or judgment about a natural person.

For the most part, SB 68 is the same as SB 319, which was introduced on February 2, 2023, and failed to pass.

Enacted

N.M. Stat. Ann. § 1-19-26.4 (proposed as HB 182) outlines regulations regarding advertisements containing AI-generated media. If someone creates, produces, or purchases an advertisement with deceptive media, they must include a clear disclaimer stating, “This [image/video/audio] has been manipulated or generated by artificial intelligence,” depending on the type of media used. The disclaimer must be easily readable or audible, depending on the media type, and must be present throughout the duration of the media or at specific intervals. These regulations became effective on May 15, 2024.

Enacted

In December 2021, New York City passed the first law in the United States (Local Law 144) requiring employers to conduct bias audits of AI-enabled tools used for employment decisions. The law imposes notice and reporting obligations.

Specifically, employers who utilize automated employment decision tools (AEDTs) must:

  1. Subject AEDTs to a bias audit, conducted by an independent auditor, within one year of their use;
  2. Ensure that the date of the most recent bias audit and a “summary of the results”, along with the distribution date of the AEDT, are publicly available on the career or jobs section of the employer’s or employee agency’s website;
  3. Provide each resident of NYC who has applied for a position (internal or external) with a notice that discloses that their application will be subject to an automated tool, identifies the specific job qualifications and characteristics that the tool will use in making its assessment, and informs candidates of their right to request an alternative selection process or accommodation (the notice shall be issued on an individual basis at least 10 business days before the use of a tool); and
  4. Allow candidates or employees to request alternative evaluation processes as an accommodation.

While enforcement of the law has been delayed multiple times pending finalization of the law’s implementing rules, on April 6, 2023 the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law is now in effect, and enforcement began on July 5, 2023.

Failed

Introduced on November 3, 2023, S7735 (assembly version A7906), provides that it shall be unlawful for a landlord to implement or use an automated decision tool, unless the landlord: (1) no less than annually, conducts a disparate impact analysis to assess the actual impact of any automated decision tool and publicly files the assessment; and (2) notifies all applicants that an automated decision tool will be used and provides the applicant with certain disclosures related to the automated decision tool.  If passed, the law would go into immediate effect.

Failed

Introduced on July 7, 2023, S7592 (assembly version A7904), would amend election law to require that any political communication, that uses an image or video footage that was generated in whole or in part with the use of artificial intelligence, disclose that artificial intelligence was used in such communication.

Failed

Introduced on October 13, 2023, A8129 (senate version S8209), would create the New York Artificial Intelligence Bill of Rights. Where a New York resident is affected by any system making decisions without human intervention, under the AI Bill of Rights they would be afforded the following rights and protections: (i) the right to safe and effective systems; (ii) protections against algorithmic discrimination; (iii) protections against abusive data practices; (iv) the right to have agency over one’s data; (v) the right to know when an automated system is being used; (vi) the right to understand how and why an automated system contributed to outcomes that impact one; (vii) the right to opt out of an automated system; and (viii) the right to work with a human in the place of an automated system.

Failed

Introduced on September 29, 2023, A8098 (Senate version S7922) would require publishers of books created wholly or partially with the use of generative artificial intelligence to disclose such use of generative artificial intelligence before the completion of the sale. The bill would apply to all printed and digital books consisting of text, pictures, audio, puzzles, games or any combination thereof.

Failed

Introduced on October 16, 2023, A8158 (senate version S7847), requires that every newspaper, magazine or other publication printed or electronically published in this state, which contains the use of generative artificial intelligence or other information communication technology, identify that certain parts of such newspaper, magazine, or publication were composed through the use of artificial intelligence or other information communication technology.

Failed

Introduced on January 12, 2024, S8214, requires the registration with the Department of State of certain companies whose (i) primary business purpose is related to artificial intelligence as evidenced by their North American Industry Classification System (NAICS) Code of 541512, 334220, or 511210, and (ii) who reside in New York or sell their products or services in New York.  The fee for registration is $200. Failure to register can result in a fine of up to ten thousand dollars. Companies that knowingly fail to register may be barred from operating or selling their AI products or services in the state for a period of up to ten years.

Failed

Introduced on October 27, 2023, A8195, the Advanced Artificial Intelligence Licensing Act, requires the registration and licensing of high-risk advanced artificial intelligence systems, establishes the advanced artificial intelligence ethical code of conduct, and prohibits the development and operation of certain artificial intelligence systems.

Failed

Introduced on January 12, 2024, S8206 (assembly version A8105), requires that every operator of a generative or surveillance advanced artificial intelligence system that is accessible to residents of the state require a user to create an account prior to utilizing such service. Prior to each user creating an account, such operator must present the user with a conspicuous digital or physical document that the user must affirm under penalty of perjury prior to the creation or continued use of such account.  Such document shall state the following:

“I, ________ RESIDING AT ________, DO AFFIRM UNDER PENALTY OF PERJURY THAT I HAVE NOT USED, AM NOT USING, DO NOT INTEND TO USE, AND WILL NOT USE THE SERVICES PROVIDED BY THIS ADVANCED ARTIFICIAL INTELLIGENCE SYSTEM IN A MANNER THAT VIOLATED OR VIOLATES ANY OF THE FOLLOWING AFFIRMATIONS:

  1. I WILL NOT USE THE PLATFORM TO CREATE OR DISSEMINATE CONTENT THAT CAN FORESEEABLY CAUSE INJURY TO ANOTHER IN VIOLATION OF APPLICABLE LAWS;
  2. I WILL NOT USE THE PLATFORM TO AID, ENCOURAGE, OR IN ANY WAY PROMOTE ANY FORM OF ILLEGAL ACTIVITY IN VIOLATION OF APPLICABLE LAWS;
  3. I WILL NOT USE THE PLATFORM TO DISSEMINATE CONTENT THAT IS DEFAMATORY, OFFENSIVE, HARASSING, VIOLENT, DISCRIMINATORY, OR OTHERWISE HARMFUL IN VIOLATION OF APPLICABLE LAWS;
  4. I WILL NOT USE THE PLATFORM TO CREATE AND DISSEMINATE CONTENT RELATED TO AN INDIVIDUAL, GROUP OF INDIVIDUALS, ORGANIZATION, OR CURRENT, PAST, OR FUTURE EVENTS THAT ARE OF THE PUBLIC INTEREST WHICH I KNOW TO BE FALSE AND WHICH I INTEND TO USE FOR THE PURPOSE OF MISLEADING THE PUBLIC OR CAUSING PANIC.”

Proposed

Introduced on August 4, 2023, S7623 (reprinted as S7623C on May 31, 2024) (assembly version A9315), would impose statewide requirements regulating tools that incorporate artificial intelligence to assist in employee monitoring and the employment decision-making process.  In particular, the bill (1) defines a narrow set of allowable purposes for the use of electronic monitoring tools (EMTs), (2) requires that the EMT be “strictly necessary” and the “least invasive means” of accomplishing those goals, and (3) requires that the EMT collect as little data as possible on as few employees as possible to accomplish the goal. The bill also requires that employers exercise “meaningful human oversight” of the decisions of automated tools, conduct and publicly post the results of an independent bias audit, and notify candidates that a tool is in use.

Failed

Introduced on January 4, 2023, SB 365, the New York Privacy Act, would be the state’s first comprehensive privacy law. The law would require companies to disclose their use of automated decision-making that could have a “materially detrimental effect” on consumers, such as a denial of financial services, housing, public accommodation, health care services, insurance, or access to basic necessities; or could produce legal or similarly significant effects. Companies must provide a mechanism for a consumer to formally contest a negative automated decision and obtain a human review of the decision, and must conduct an annual impact assessment of their automated decision-making practices to avoid bias, discrimination, unfairness or inaccuracies.

The law would also permit consumers to opt-out of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” Profiling is defined as “any type of automated processing performed on personal data to evaluate, analyze, or predict personal aspects” such as “economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Finally, the law would mandate that companies conduct a data protection assessment on their profiling activities, since profiling would be considered a processing activity with a heightened risk of harm to the consumer.

Failed

Introduced on January 4, 2023, A216, would require advertisements to disclose the use of synthetic media.  Synthetic media is defined as “a computer-generated voice, photograph, image, or likeness created or modified through the use of artificial intelligence and intended to produce or reproduce a human voice, photograph, image, or likeness, or a video created or modified through an artificial intelligence algorithm that is created to produce or reproduce a human likeness.”  Violators would be subject to a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation.

Failed

Introduced on March 7, 2023, A5309, would amend state finance law to require that where state units purchase a product or service that is or contains an algorithmic decision system, that such product or service adheres to responsible artificial intelligence standards. The bill requires the commissioner of taxation and finance to adopt regulations in support of the law.

Failed

Introduced on March 10, 2023, SB 5641A (Assembly version A567), would amend labor law to establish criteria for the use of automated employment decision tools (AEDTs). The proposed bill mirrors NYC’s Local Law 144 in many ways. In particular, employers who utilize AEDTs must: (1) obtain from the seller of the AEDT a disparate impact analysis, not less than annually; (2) ensure that the date of the most recent disparate impact analysis and a summary of the results, along with the distribution date of the AEDT, are publicly available on the employer’s or employee agency’s website prior to the implementation or use of such tool; and (3) annually provide the labor department a summary of the most recent disparate impact analysis.

Failed

Introduced on May 3, 2023 and May 10, 2023, S6638 and A7106, the Political Artificial Intelligence Disclaimer (PAID) Act, would amend election and legislative law in relation to the use and disclosure of synthetic media. The act would add a subdivision to the election law that requires any political communication which was produced by synthetic media to be disclosed via printed or digital communications. The disclosure must read “This political communication was created with the assistance of artificial intelligence.” If passed, the act would take effect on January 1, 2024.

Proposed

S9609, introduced May 16, 2024, would make it unlawful for a rental property owner, or any agent or subcontractor thereof, to collect information on historical or contemporaneous prices, supply levels, or contract information, as well as renewal dates, using a system, software, or process made by an algorithm. “Rental property owner” includes individuals as well as business entities. The rental property owner also cannot exchange for value the services of a coordinator, which is any person that operates software or data analytics services.

Proposed

S9542, introduced May 16, 2024, would amend general business law by prohibiting the publication of a “digital or physical newspaper, magazine, or periodical which was wholly or partially produced or edited through the use of artificial intelligence without significant human oversight.” AI includes the “use of machine learning technology, software, automation, and algorithms to perform tasks, to make rules and/or predictions based on existing data sets and instructions.”

Proposed

S9450 (Assembly version A10103), introduced May 15, 2024, would amend general business law to require an owner, licensee, or operator of “generative artificial intelligence” to “conspicuously” disclose a warning on the user’s interface informing the user that the outputs may be inaccurate and/or inappropriate. An entity that fails to do so must pay a civil penalty of $25 per user of such system or $100,000.

Proposed

S9434 (Assembly version A9472), introduced May 15, 2024, would prohibit landlords from using an algorithmic device to set the amount of a residential tenant’s rent. “Algorithmic device” includes “a device that uses one or more algorithms to perform calculations of data, including data concerning local or statewide rent amounts being charged to tenants by landlords,” and would also include a product that incorporates an algorithmic device. A violation would result in a monetary penalty.

Proposed

S9401, introduced May 15, 2024, would amend the labor law to prohibit an employer from using or applying AI unless the employer has conducted an assessment of the AI’s impact and use. This assessment must be conducted at least once every two years and before any material change to the AI. The impact assessment must include: a description of the AI’s objectives; an evaluation of the ability of the AI to achieve its objectives; a summary of the underlying AI tools being used; the design and training data used to develop the AI process; the extent to which the AI requires input of sensitive and personal data, how that data is used and stored, and any control users may have over this data; an estimated number of employees who have already been displaced by AI; and an estimated number of employees expected to be displaced by AI. “Employer” includes a business that resides in New York, is not a small business, and employs more than 100 people.

Proposed

S9381 (Assembly version A10494), introduced May 14, 2024, would amend the general business law to add liability to proprietors for chatbot responses. “Proprietors” includes any person or business entity with more than 20 employees that owns, operates, or deploys a chatbot system that interacts with users. This would not include third-party developers that license their chatbot technology to the proprietor. “Chatbot” is an AI system, software program, or technological application that creates “human-like conversation and interaction through text messages, voice commands, or a combination thereof to provide information and services to users.” The proprietor is responsible for “ensuring such chatbot accurately provides information aligned with the formal policies, product details, disclosures and terms of service offered to users.” This liability cannot be waived through disclosure to users. Additionally, proprietors would have to provide “clear, conspicuous, and explicit notice to users that they are interacting” with AI, rather than a human representative.

Proposed

S8755, introduced March 7, 2024, establishes the New York artificial intelligence ethics commission, which would promulgate rules regulating AI use by business entities, among other regulations. This bill also specifies that no entity doing business in New York shall use AI systems that discriminate based on race, gender, sexuality, disability, or other protected characteristics; create or disseminate false or misleading information created by AI to deceive the public; participate in the unlawful collection, processing, or dissemination of personal information by an AI system without consent; participate in the unauthorized use or reproduction of IP through AI; fail to have safeguards to prevent harm or material loss through AI; conduct AI research that is harmful or conducted without the subjects’ consent; intentionally disrupt, damage, or subvert an AI system to undermine its integrity or performance; or participate in the unauthorized use of a person’s personal identity or data by AI to commit fraud or theft. The commission can impose penalties for any violation. This act would take effect immediately.

Proposed

S7592 (Assembly version A7904), introduced July 7, 2023, and amended February 26, 2024, would require political communications to contain disclosures regarding the use of AI to make that communication. “Political communication” includes “an image or video footage that was generated in whole or in part with the use of artificial intelligence.” Failure to comply would result in a fine equal to the amount expended on the communication.

Proposed

S6685 (Assembly version A843), introduced May 4, 2023, would prohibit motor vehicle insurers from using AI-generated algorithms to construct coverage terms, premiums and rates, and actuarial tables in a manner that discriminates based on age, marital status, sex, sexual orientation, educational background or education level attained, employment status or occupation, wealth, consumer credit information, ownership or interest in real property, and other characteristics.

Proposed

S2477 (Assembly version A5631), introduced January 20, 2023, and amended recently on April 15, 2024, would revise the New York State Fashion Workers Act to require model management companies to obtain “clear written consent for the creation or use of a model’s digital replica, detailing the scope, purpose, rate of pay, and duration of such use.” The bill would prohibit model management companies from creating, altering, or manipulating a model’s digital replica using AI without written consent from the model. “Digital replica” is a “significant, computer-generated or artificial intelligence-enhanced representation of a model’s likeness.”

Proposed

S2277 (Assembly version A3308), introduced January 19, 2023, and recently amended, would require business entities in New York that have personal information of at least 500 individuals to give notice about the entity’s use of the personal information. The bill also would create anti-discrimination practices for the entity to follow regarding its use of AI.

Proposed

A10374 (Senate version S9439), introduced May 21, 2024, would amend the general business law to prohibit robots and uncrewed aircraft equipped or mounted with weapons. “Robotic device” is a “mechanical device capable of locomotion, navigation, or movement on the ground and that operates at a distance from its operator or supervisor, based on commands or in response to sensor data, artificial intelligence, or a combination.” The bill would make it unlawful for any person to use a robotic device or uncrewed aircraft to commit the crime of menacing; criminally harass another person; or use the device to physically restrain or attempt to restrain a human being. A knowing violation of this law would result in a civil penalty. This bill would not apply to a defense industrial company acting within its contract with the U.S. Dept. of Defense; a manufacturer or developer who modifies or operates these devices for the purpose of developing technology intended to detect the unauthorized weaponization of a robotic device or uncrewed aircraft; or government officials acting within the scope of their duties.

Proposed

A9149, introduced February 8, 2024, and referred to the Assembly Insurance Committee, would amend insurance law to require insurers to notify insureds about the use, or lack of use, of AI-based algorithms in reviewing claims. This bill would broadly apply to insurers who are authorized to write accident and health insurance in New York, clinical peer reviewers who participate in a utilization review process for insurers, corporations organized under New York law, and health maintenance organizations. The department would be required to certify that the AI-based algorithms and training data being used have minimized the risk of bias regarding a “covered person’s race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability” and that they “adhere to evidence-based clinical guidelines.” In addition, the bill would require documentation of “the utilization review of the individual clinical records or data prior to issuing an adverse determination.” A violation can result in a license suspension or revocation; refusal, for a maximum of one year, to issue a new license; a maximum fine of $5,000 per violation; or a maximum fine of $10,000 for each willful violation.

Proposed

A9103, introduced February 7, 2024, and referred to the Assembly Election Law Committee, would amend election law to include a notification requirement. The bill would require “any political communication made by phone call, email, or other message-based communication” that uses AI to create a human-like conversation to reasonably inform the person that they are communicating with AI. If passed, this bill would take effect immediately.

Proposed

A9054, introduced February 5, 2024, and referred to the Assembly Election Law Committee, would amend election law to prohibit entities from using generative AI in whole or in part to create a political communication that contains “any realistic photo, video, or audio depiction of a candidate, or person interacting with a candidate.” AI includes “any technology that engages in its own learning and decision-making to generate new data.” If passed, this bill would take effect immediately.

Proposed

Introduced February 5, 2024, and referred to the Assembly Election Law Committee, A9028 would amend election law to, as is relevant, require disclosure of any political communication covered by the bill and made by AI or artificial media. The bill would apply to printed or digital political communications, including “brochures, flyers, posters, mailings, electronic mailings, or internet advertising.” The disclosure must state the communication was “created by or with the assistance of artificial intelligence.” The disclosure must be readable, clear, and conspicuous. If a person has an intent to damage a candidate or deceive with the political communication, then a violation can amount to a criminal charge.

Proposed

A8369, introduced December 13, 2023, would amend insurance law to prohibit insurers from using AI, an algorithm, or a predictive model that incorporates external consumer data and information sources in a manner that would “unfairly discriminate” on the basis of “race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.” The bill includes certain requirements that the insurer must follow, such as providing information to the superintendent, in order to avoid unfairly discriminating against people. “External consumer data and information source” includes data used by an insurer to establish lifestyle indicators in “marketing, underwriting, pricing, utilization management, reimbursement methodologies, and claims management” practices.

Proposed

A8195, introduced October 27, 2023, and referred to the Assembly Science and Technology Committee, would, among other things, establish an AI ethical code of conduct and require registration and licensing of “high-risk advanced artificial intelligence systems.” A “high-risk” advanced AI system is a system that “possesses capabilities that can cause significant harm to the liberty, emotional, psychological, financial, physical, or privacy interests of an individual or groups of individuals, or which have significant implications on governance, infrastructure, or the environment.” This bill would apply to operators who distribute and have control over the development of a high-risk AI system.

Proposed

A8179, introduced October 27, 2023, and referred to the Ways and Means Committee, would tax certain corporations that have displaced people from their employment because of AI technologies, including machinery, AI algorithms, or computer applications. This bill would apply to corporations doing business in New York that have met specified requirements, such as having less than one million dollars but at least ten thousand dollars of receipts in New York. This act would take effect immediately upon enactment and apply to the next taxable year.

Proposed

A7859, introduced July 7, 2023, and referred to the Labor Committee, would amend labor law to require an employer or employment agency using an “automated employment decision tool to screen candidates who have applied for a position” to notify each candidate that the tool has been used to assess or evaluate the candidate, the job qualifications and characteristics the tool uses, and the type of data the tool collects. “Automated employment decision tool” is any computational process that uses “machine learning, statistical modeling, data analytics, or artificial intelligence” to substantially assist or replace discretionary decision-making for employment decisions. This bill would take effect on January 1 following enactment.

Proposed

Introduced February 3, 2023, and referred to the Consumer Affairs and Protection Committee, A3593 would amend general business law to require companies to follow a host of guidelines centered around protecting consumer privacy. In regard to AI, the bill would apply to a “controller,” or “the person who, alone or jointly with others, determines the purposes and means of the processing of personal data.” The bill defines AI as an “automated decision-making” process derived from machine learning, AI, or an automated process involving personal data resulting in a decision affecting consumers. If a “controller makes an automated decision involving solely automated processing that materially contributes to a denial of financial or lending services, housing, public accommodation, insurance, health care services, or access to basic needs,” the controller would need to (1) disclose that an automated process made the decision; (2) provide an avenue for consumers to appeal the decision; and (3) explain the process to appeal the decision. In addition, a controller or processor engaged in this automated decision-making must annually conduct an “impact assessment” describing the automated decision-making process and assessing whether the process produces any discriminatory results. An independent auditor must assess the impact assessment results. This bill would take effect immediately.

Proposed

A9314, introduced February 24, 2024, and referred to the Labor Committee, would create criteria for the use of an “automated employment decision tool.” This is a system “used to filter employment candidates or prospective candidates for hire in a way that establishes a preferred candidate or candidates without relying on candidate-specific assessments by individual decision-makers.” This includes personality tests, cognitive ability tests, resume scoring systems, and other systems governed by statistical theory or specified methodologies. “Automated employment decision tool” does not include a tool that “does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons.” Under the bill, employers would be required to conduct a disparate impact analysis to assess the impact of their use of an automated employment decision tool, write a summary of the most recent disparate impact analysis, and provide this summary to the department. This act would take effect immediately.

Failed

S7422, introduced on May 24, 2023 and A7634, introduced on May 25, 2023, would prohibit film production companies who apply for Empire State film production credit from using synthetic media in any component of production that would displace a natural person from that role. This includes any form of media, such as text, image, video, or sound that is created or modified by use of artificial intelligence. Compliance with this act would be a condition for granting of the credit. If passed, the act would take effect immediately.

Proposed

Introduced on January 24, 2024, SB 217 would require AI-generated products to have a watermark, prohibit removing such a watermark, prohibit simulated child pornography, and prohibit identity fraud using a replica of a person. The bill provides for injunctive relief and, for unauthorized removal of an AI watermark, a civil penalty of up to $10,000.

Proposed

Introduced on February 5, 2024, HB 3453, the Oklahoma Artificial Intelligence Bill of Rights would give Oklahoma residents the following rights:

  1. The right to know when they are interacting with an artificial intelligence engine rather than a real person;
  2. The right to know when their data is being used in an artificial intelligence model and the right to opt-out;
  3. The right to know when contracts and other documents that they are relying on were generated by an artificial intelligence engine rather than a real person;
  4. The right to know when they are consuming images or text that were generated entirely by an artificial intelligence engine and not reviewed by a human;
  5. The right to be able to rely on a watermark or some other form of content credentials to verify the authenticity of creative product they generate or consume. Specifically, it shall not be permissible for any websites, social media platforms, search engines, and the like, to remove a watermark or content credential without inserting an updated credential that indicates that the original was removed or altered.
  6. The right to know that any company which includes any of their data in an artificial intelligence model has implemented industry best practice security measures for data privacy, and conducts at least annual risk assessments to assess design, operational and discrimination harm.
  7. The right to approve any derivative media that is generated by an artificial intelligence engine and uses audio recordings of their voice or images of them to recreate their likeness.
  8. The right to not be subject to algorithmic or model bias which discriminates based on age, race, national origin, sex, disability, pregnancy, religious beliefs, veteran status, or any other legally protected classification.

If passed, the act would take effect November 1, 2024.

Proposed

Introduced on February 5, 2024, HB3577, the Artificial Intelligence Utilization Review Act would:

  • Require health insurers to disclose the use of AI algorithms; and
  • Require health insurers to submit AI systems to the Oklahoma Department of Insurance for review.

A violation would be deemed an unfair method of competition and an unfair or deceptive act or practice, subject to civil penalties between $5,000 and $10,000.

If passed, the act would take effect November 1, 2024.

Proposed

Introduced on February 5, 2024, HB 3835, the Ethical Artificial Intelligence Act would:

  • direct deployers of automated decision tools to complete and document certain impact assessments;
  • direct developers of automated decision tools to complete and document certain impact assessments;
  • direct deployers and developers to make impact assessments of certain updates;
  • mandate that developers and deployers provide certain impact assessments to the office of the attorney general;
  • require developers to provide certain documentation to deployers;
  • require developers to make certain information publicly available; and
  • prohibit deployers from engaging in algorithmic discrimination.

The act would be enforced by the attorney general. A violation of the act would be an unfair or deceptive act in trade or commerce for the purpose of applying the Oklahoma Consumer Protection Act. Harmed parties may bring a civil action.

If passed, the act would take effect November 1, 2024.

Enacted

On August 1, 2023, Oregon passed SB619, the state’s first omnibus consumer privacy law.  The bill generally follows the Virginia Consumer Data Protection Act and sets out rules for profiling and automated decision-making.  Specifically, the bill enables individuals to opt-out of processing for the purpose of “profiling the consumer to support decisions that produce legal effects or effects of similar significance.”  Profiling is defined as “an automated processing of personal data for the purpose of evaluating, analyzing or predicting an identified or identifiable consumer’s economic circumstances, health, personal preferences, interests, reliability, behavior, location or movements.” Controllers must also perform a data protection assessment for high-risk profiling activities. The law goes into effect on July 1, 2024.

Proposed

Introduced on March 7, 2023, HB49, would direct the Department of State to establish a registry of businesses operating artificial intelligence systems in the State.  The registry would include (1) The name of the business operating artificial intelligence systems; (2) The IP address of the business; (3) The type of code the business is utilizing for artificial intelligence; (4) The intent of the software being utilized; (5) The personal information and first and last name of a contact person at the business; (6) The address, electronic email address and ten-digit telephone number of the contact person; and (7) A signed statement indicating that the business operating an artificial intelligence system has agreed for the Department of State to store the business’s information on the registry. There has been no further action on HB49 since March 7, 2023.

Proposed

Introduced on March 27, 2023, HB708, would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia.  Among its requirements, the bill provides consumers with the right to opt-out of the processing of their personal data for purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.”  Profiling is defined as a “form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The bill also mandates the performance of data protection assessments in connection with “profiling” where the profiling presents “a reasonably foreseeable risk of: (i) discriminatory, unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (ii) financial, physical or reputational injury to consumers; (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (iv) other substantial injury to consumers.”

If passed, the act would go into effect in 18 months. There has been no further action taken on HB708 since March 27, 2023.

Proposed

Introduced on December 13, 2023, HB 1201 appears similar to HB 708 (above) in that it would establish an omnibus consumer privacy law. It provides consumers with the right to “Opt out of the processing of the consumer’s personal data for the purpose of any of the following: (i) Targeted advertising; (ii) The sale of personal data, except as provided under section 5(b); and (iii) Profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the consumer.” “Profiling” is defined as “Any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” The bill would mandate data protection impact assessments where “the profiling presents a reasonably foreseeable risk of any of the following: (i) Unfair or deceptive treatment of, or an unlawful disparate impact on, a consumer. (ii) Financial, physical or reputational injury to a consumer. (iii) A physical or other intrusion upon the solitude or seclusion of a consumer or the private affairs or concerns of a consumer where the intrusion would be offensive to a reasonable person. (iv) Any other substantial injury to a consumer.”

If passed, the act will take effect in 6 months.

Proposed

Introduced on January 9, 2024, HB 1947 appears similar to HB 708 and HB 1201 (above) in that it would establish an omnibus consumer privacy law.  It provides consumers with the right to “Decline or opt out of the processing of the consumer’s personal information for the purpose of any of the following: (i) Targeted advertising. (ii) The sale of personal information. (iii) Profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” “Profiling” is defined as “A form of automated processing of personal information to evaluate, analyze or predict personal aspects concerning an identified individual or identifiable individual, including the individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” A Data Protection Impact Assessment is not specifically mentioned in this bill.

If passed, the act will take effect in 1 year.

Proposed

Introduced on September 7, 2023, HB 1663 would require disclosure by health insurers of the use of artificial intelligence-based algorithms in the utilization review process. Requirements would include:

  • Disclose to clinicians, subscribers, and the public that claims evaluations use AI algorithms
  • Define ‘Algorithms used in claims review’ as clinical review criteria, thereby ensuring they are subject to existing laws and regulations requiring that such criteria be grounded in clinical evidence
  • Require specialized health care professionals who review claims for health insurance companies and rely on initial AI algorithms for such reviews to individually open each clinical record or clinical data, examine this information, and document both their own review and reason for denial before any decision to deny a claim is conveyed to a subscriber or health care provider.
  • Require health insurance companies to submit their AI-based algorithms and training datasets to the Pennsylvania Department of Insurance for transparency and require the Department of Insurance to certify that said algorithms and training data sets have minimized the risk of bias based on categories outlined in the Human Relations Act and other anti-discrimination statutes as applicable to health insurance in Pennsylvania and adhere to evidence-based clinical guidelines.

If passed, the act will take effect in 60 days. No further action has been taken on HB 1663 since September 7, 2023.

Proposed

PA SB1044, introduced May 16, 2024, proposes amendments to the Unfair Trade Practices and Consumer Protection Law that address the creation, distribution, or publication of AI-generated content. A disclosure would be required that clearly states that the content was AI-generated. The amendments would exempt owners, agents, or employees of radio or television stations, ISPs, newspapers, and other publications that, in good faith, acted without knowledge that the content was AI-generated.

Proposed

Introduced on August 7, 2023, HB 1598 would amend the Unfair Trade Practices and Consumer Protection Law to expand the definition of an unfair trade practice to include “creating, distributing or publishing any content generated by artificial intelligence without clear and conspicuous disclosure, including written text, images, audio and video content and other forms of media. A disclosure under this subclause must state that the content was generated using artificial intelligence and must be presented in a manner reasonably understandable and readily noticeable to the consumer.”

If passed, the act will take effect in 60 days.

Failed

Introduced on February 1, 2023, SB146, would prohibit certain uses of automated decision systems and algorithmic operations in connection with video-lottery terminals and sports betting applications.  The law would take effect upon passage. The bill was not passed prior to the end of the legislative session in June 2023.

Proposed

RI H7521, introduced February 7, 2024, seeks to regulate automated decision tools and artificial intelligence by requiring regular impact assessments to measure the purpose, outputs, safeguards, and adverse impacts of such technologies. The bill would require that individuals subject to such automated decisions be notified that the consequential decisions were made using automated tools and/or AI. It also prohibits discrimination and allows civil actions against developers and deployers for such discrimination.

Proposed

S2888, entitled “Automated Decision Tools” and introduced on March 22, 2024, would require companies developing or deploying high-risk AI systems to conduct impact assessments and adopt risk management programs. Deployers would be required to implement and maintain risk management programs that identify, mitigate, and document risks associated with “consequential artificial intelligence decision systems” (CAIDS) before deployment. Developers would be obligated to provide deployers with information related to impact assessments, including the capabilities and limitations of CAIDS.

Failed

Introduced on March 30, 2023, HB6236, the Rhode Island Data Transparency And Privacy Protection Act, would establish an omnibus consumer privacy law along the lines of those enacted in states like Virginia.  Among its requirements, the bill provides consumers with the right to opt-out of the processing of their personal data for purposes of “profiling in furtherance of solely automated decisions that produce legal or similarly significant effects concerning the customer.”  Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.”  The bill also mandates the performance of data protection assessments in connection with “profiling” where the profiling presents “a reasonably foreseeable risk of unfair or deceptive treatment of, or unlawful disparate impact on, customers, financial, physical or reputational injury to customers, a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of customers, where such intrusion would be offensive to a reasonable person, or other substantial injury to customers[.]” The bill did not pass before the end of the legislative session in June 2023.

Failed

Introduced on April 19, 2023, H6286 would regulate companies’ uses of generative artificial intelligence models. Under the bill, a company using large-scale generative AI could not use AI for discriminatory practices, and its AI model would have to be programmed to generate text with a distinctive watermark to prevent plagiarism. The company would have to implement reasonable security measures to protect the data of individuals whose data is used to train the model, obtain informed consent from those individuals before using their data, and conduct regular assessments of potential risks and harms related to its services. Within 90 days of the effective date of the act, any company using large-scale generative AI would have to register the name of the company, a description of the AI model, and information on the company’s data gathering practices with the attorney general.

Failed

Introduced on January 18, 2023, SB404 would prohibit any operator of a website, an online service, or an online or mobile application, including any social media platform, from utilizing an automated decision system (ADS) for content placement, including feeds, posts, advertisements, or product offerings, for a user under the age of eighteen.  In addition, an operator that utilizes an ADS for content placement for residents of South Carolina who are eighteen years or older would be required to perform age verification through an independent, third-party age-verification service, unless the operator employs the bill’s prescribed protections to ensure age verification. The bill includes a private right of action.

Proposed

Introduced on January 9, 2024, H4696 would create a consumer data privacy law in South Carolina similar to those in states like Virginia. Among other requirements, controllers must honor verifiable consumer requests to opt-out of “profiling in furtherance of a decision that produces a legal or similarly significant effect concerning a consumer.” Controllers also must conduct a data protection impact assessment for “the processing of personal data for purposes of profiling if the profiling presents a reasonably foreseeable risk of: (a) unfair or deceptive treatment of or unlawful disparate impact on consumers; (b) financial, physical, or reputational injury to consumers; (c) a physical or other intrusion on the solitude or seclusion, or the private affairs or concerns, of consumers, if the intrusion would be offensive to a reasonable person; or (d) other substantial injury to consumers.”

“Profiling” means “any form of solely automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  If passed, the act would take effect immediately.

Proposed

Introduced on January 9, 2024, H4660 would require that “a person, corporation, committee, or other entity shall not, within ninety days of an election at which a candidate for elective office will appear on the ballot, distribute a synthetic media message that the person, corporation, committee, or other entity knows or should have known is a deceptive and fraudulent deepfake of a candidate on the ballot.”

If passed, the act would take effect immediately.

Proposed

Introduced on January 16, 2024, H4842, the South Carolina Age-Appropriate Design Code Act would apply to any business operating in South Carolina that either: “(i) has annual gross revenues more than twenty-five million dollars, as adjusted every odd-numbered year to reflect the Consumer Price Index;  (ii) alone or in combination, annually buys, receives for the covered entity’s commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal data of fifty thousand or more consumers, households, or devices; or (iii) derives fifty percent or more of its annual revenues from selling consumers’ personal data.”

Covered entities would be prohibited from “profiling” children under age 18 by default unless both of the following criteria are met: “(a) the covered entity can demonstrate it has appropriate safeguards in place to ensure that profiling is consistent with the best interests of children reasonably likely to access the online service, product, or feature; and (b) either of the following is true: (i) profiling is necessary to provide the online service, product, or feature requested and only with respect to the aspects of the online service, product, or feature with which a child is actively and knowingly engaged; or (ii) the covered entity can demonstrate a compelling reason that profiling is in the best interests of children.”

“Profiling” means “any form of automated processing of personal data to evaluate, analyze, or predict personal aspects concerning an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. ‘Profiling’ does not include the processing of information that does not result in an assessment or judgment about a natural person.”

Enacted

Effective July 1, 2025, HB1181, the Tennessee Information Protection Act, establishes an omnibus consumer privacy law along the lines of those enacted in states like Virginia.  Among its requirements, the law mandates the performance of data protection assessments in connection with “profiling” where the profiling presents a reasonably foreseeable risk of: (A) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (B) Financial, physical, or reputational injury to consumers; (C) A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or (D) Other substantial injury to consumers.  “Profiling” is defined as “a form of automated processing performed on personal information to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements[.]”  The law gives the Tennessee Attorney General’s Office authority to impose civil penalties on companies that violate the law.

Enacted

The Ensuring Likeness Voice and Image Security Act (“ELVIS Act”) was signed into law on March 21, 2024. The Act protects the voices of songwriters, performers, and celebrities from artificial intelligence and deepfakes by prohibiting the use of AI to mimic a person’s voice without permission, and it treats violations as Class A misdemeanors. The Act also authorizes civil actions against any person who violates the law. The Act became effective July 1, 2024.

Draft

Released as a draft on October 28, 2024, the Texas Responsible AI Governance Act (“TRAIGA”) is expected to be introduced by Rep. Capriglione in the 2025 legislative session (starting January 14, 2025). Rep. Capriglione has had prior success with privacy-related bills in Texas, such as the Texas Data Privacy and Security Act, and worked with industry stakeholders to draft TRAIGA. If passed, TRAIGA would amend the Texas Data Privacy and Security Act to establish risk-based obligations in connection with the use of AI systems.

A “high-risk artificial intelligence system” is defined as any AI system that, when deployed, makes, or contributes to making, a consequential decision. “Consequential decisions” are decisions that have a material legal or similarly significant effect on the consumer, such as those relating to criminal case assessments, education enrollment, financial services, electricity services, food, healthcare services, housing, and other similarly important considerations. “Algorithmic discrimination” is defined as any unlawful differential treatment or impact that disfavors an individual or group based on their actual or perceived age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, veteran status, or other protected classifications. TRAIGA would also establish an AI Council in Texas.

TRAIGA would require developers of “high-risk artificial intelligence systems” to exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the system. Developers would also be required to provide a risk assessment to any deployers of the system, describing how the system should be used, any known limitations, and any reasonably foreseeable risks associated with its use, among other factors. Deployers of these systems would also be required to independently prepare a separate risk assessment of the system.

TRAIGA would impose additional obligations and restrictions, including: (i) limiting the risk that a high-risk AI system could be used to circumvent informed decision-making; (ii) prohibiting the use of the system for social scoring; (iii) prohibiting the collection of biometric identifiers in certain instances; (iv) prohibiting emotion recognition without a consumer’s consent; and (v) prohibiting the development of sexually explicit media.

Consumers would have to be notified about the use of high-risk AI systems by both the developer and the deployer, depending on the circumstances and the consumer’s relationship with the deployer. In general, consumers must be notified of the system’s use prior to interacting with it. This notification must include a description of the system’s purpose, the fact that the system may or will make a consequential decision affecting the consumer, the nature of any consequential decisions in which the system may be a contributing factor, the factors used in making those decisions, contact information for the deployer, a statement regarding any human or automated components of the system, and a declaration of consumer rights under Section 551.107 (e.g., the right to seek declaratory or injunctive relief, as described below).

The deployer or developer would also be required to notify relevant state regulators (e.g., the state Attorney General, the AI Council created under TRAIGA, or the relevant state regulator for the industry) and affected consumers “as soon as practicable, but no later than the 10th day” after discovering that a high-risk AI system has caused or is likely to cause: (1) algorithmic discrimination of an individual or group, or (2) an inappropriate or discriminatory consequential decision. If a deployer discovers or is made aware that a deployed high-risk AI system is using inputs or producing outputs that violate TRAIGA, the deployer must cease operating the system as soon as technically feasible and notify the AI Council and the Texas Attorney General as soon as practicable, but no later than 10 days after discovering the violation.

Enforcement would be handled by the Texas Attorney General, who would be authorized to bring civil actions against the developer and/or deployer to recover reasonable attorney’s fees and other reasonable expenses. The Attorney General could also impose a fine of between $5,000 and $10,000 per uncured violation. If a violation cannot be cured, the Attorney General may impose an administrative fine of between $40,000 and $100,000 per violation. As currently drafted, there would be a 30-day cure period from the notification of any alleged violation of the Act. Any developer or deployer who continues to operate in violation of the Act would be subject to a fine of $1,000 to $20,000 per day.

TRAIGA would also authorize consumers to seek declaratory relief (with the ability to recover reasonable attorneys’ fees) or injunctive relief against any deployer or developer who violates the Act.

TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption from TRAIGA’s general restrictions. Additionally, there is an exemption for AI developers who release their systems under a free and open-source license in certain circumstances.

Enacted

Introduced on February 16, 2023, HB4, the Texas Data Privacy and Security Act, is based on the Virginia Consumer Data Protection Act.  The law, which goes into force on July 1, 2024, creates similar requirements enabling individuals to opt-out of “profiling” that produces a legal or similarly significant effect concerning the individual.  Controllers must also perform a data protection assessment for high-risk profiling activities.

Failed

Introduced on March 10, 2023, HB4695 would prohibit the use of artificial intelligence technology to provide counseling, therapy, or other mental health services unless (1) the artificial intelligence technology application through which the services are provided is an application approved by the commission; and (2) the person providing the services is a licensed mental health professional or a person that makes a licensed mental health professional available at all times to each person who receives services through the artificial intelligence technology.  The artificial intelligence technology would have to undergo testing and approval by the Texas Health and Human Services Commission, the results of which would be made publicly available.  If passed, the law would have taken effect September 1, 2023.

Failed

Introduced on January 25, 2023, H114 would restrict the use of electronic monitoring of employees and the use of automated decision systems (ADSs) for employment-related decisions. Electronic monitoring of employees could be conducted only when, for example, the monitoring is used to ensure compliance with applicable employment or labor laws or to protect employee safety, and certain notice is given to employees 15 days prior to commencement of the monitoring. ADSs would also have to meet a number of requirements, including corroboration of system outputs by human oversight of the employee and creation of a written impact assessment prior to using the ADS.  The bill did not pass before the end of the legislative session in May 2023.

Proposed

Introduced on January 9, 2024, H.710 would put in place certain obligations for both developers and deployers of “high-risk artificial intelligence systems.” For developers, these obligations would include, among others, using reasonable care to avoid any risk of algorithmic discrimination that is a reasonably foreseeable consequence of developing or modifying a high-risk system to make consequential decisions. Developers would also be required to provide disclosures relating to the system, such as disclosures about the known limitations of the system and foreseeable risks of algorithmic discrimination, a summary of the type of data to be processed, the purpose of processing, mitigation measures put in place to limit identified risks, and other similar information necessary to conduct a risk assessment. Similar obligations would apply to developers of generative artificial intelligence.

Deployers would be required to use reasonable care to avoid any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using a high-risk artificial intelligence system. High-risk systems could be used only to the extent that the deployer has already implemented a risk management policy that is at least as stringent as the Artificial Intelligence Risk Management Framework published by NIST and the deployer has conducted a risk assessment for the system.

Search engines and social media platforms that knowingly use, or reasonably believe they are using, synthetic digital content would also be required to provide consumers with a signal indicating that the content was produced, or is reasonably believed to have been produced, by generative artificial intelligence.

Failure to comply with the Act would be treated as an unfair and deceptive act in trade and commerce in violation of 9 VSA 2453. The Attorney General may provide a cure period at its discretion. The Act would take effect on July 1, 2024.

Proposed

Introduced on January 9, 2024, H.711 would create an oversight and enforcement agency to collect and review risk assessments taken in connection with the use of high-risk artificial intelligence systems. The Act would require each deployer of “inherently dangerous artificial intelligence systems” to submit a risk assessment prior to deploying such a system and every two years thereafter, as well as a new risk assessment whenever material and substantial changes are made to the system. Deployers would also be required to submit 1-, 6-, and 12-month testing results to the Division of Artificial Intelligence showing the reliability of the results generated by the systems, as well as variances and mitigation measures put in place to limit risks posed by the use of such systems.

The Act would also create a duty for deployers and developers to meet a certain standard of care for the use of any inherently dangerous artificial intelligence systems that “could be reasonably expected to impact consumers.” The Act would further prohibit the deployment of inherently dangerous artificial intelligence systems that pose disproportionate risks unless those risks are evaluated and validated against the Artificial Intelligence Risk Management Framework published by NIST.

Violations of the Act would be treated as an unfair practice in commerce. The Act would also create a private right of action for consumers harmed by a violation of the chapter. The Act would take effect July 1, 2024.

Enacted

The Virginia Consumer Data Protection Act (VCDPA), which went into force on January 1, 2023, sets out rules for profiling and automated decision-making.  Specifically, the VCDPA enables individuals to opt-out of “profiling in furtherance of decisions that produce legal or similarly significant effects” concerning the consumer, which is generally defined as “the denial and/or provision of financial and lending services, housing, insurance, education enrollment or opportunities, criminal justice, employment opportunities, healthcare services, or access to basic necessities.”  Controllers must also perform a data protection impact assessment for high-risk profiling activities.

Proposed

Introduced on January 10, 2024, HB 747, the Artificial Intelligence Developer Act, would prohibit developers of “high-risk artificial intelligence systems” from offering, selling, leasing, giving, or otherwise providing such a system to a third party for deployment unless the developer provides the deployer with sufficient information to perform a risk assessment on the use of the system, such as a document detailing the potential risks and benefits of using the system and a description of the intended uses of that system. Similar obligations would apply to developers of generative artificial intelligence.

The Act would also require deployers of artificial intelligence to take reasonable care to avoid any risk of reasonably foreseeable “algorithmic discrimination,” and it would permit a deployer to use a high-risk artificial intelligence system to make “consequential decisions” only if the deployer has designed and implemented a risk management policy for the use of that system. The Act also specifies the elements that must be included in a risk assessment, including, among other considerations, the purpose of processing, a description of transparency measures taken concerning the system, and a description of the data used to train the algorithm.

Failure to comply with the Act would result in civil penalties not to exceed $1,000, plus reasonable attorney fees, expenses, and court costs; willful violations could result in civil penalties of between $1,000 and $10,000. If passed, the law would take effect July 1, 2026.

Failed

Introduced on January 31, 2023, and reintroduced on January 8, 2024, SB5643 and its companion HB1616, the People’s Privacy Act, would prohibit a covered entity or Washington governmental entity from operating, installing, or commissioning the operation or installation of equipment incorporating “artificial intelligence-enabled profiling” in any place of public resort, accommodation, assemblage, or amusement, or from using artificial intelligence-enabled profiling to make decisions that produce legal effects (e.g., denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water) or similarly significant effects concerning individuals. “Artificial intelligence-enabled profiling” is defined as the “automated or semiautomated process by which the external or internal characteristics of an individual are analyzed to determine, infer, or characterize an individual’s state of mind, character, propensities, protected class status, political affiliation, religious beliefs or religious affiliation, immigration status, or employability.”  The bill also bans the use of “face recognition” in any place of public resort, accommodation, assemblage, or amusement.  “Face recognition” is defined as “(i) An automated or semiautomated process by which an individual is identified or attempted to be identified based on the characteristics of the individual’s face; or (ii) an automated or semiautomated process by which the characteristics of an individual’s face are analyzed to determine the individual’s sentiment, state of mind, or other propensities including, but not limited to, the person’s level of dangerousness[.]”

Failed

Introduced on January 24, 2024, SB6299 would make it unlawful for any employer to utilize artificial intelligence or generative artificial intelligence to evaluate or otherwise make employment decisions regarding current employees without written disclosure of the employer’s use of such technology at the time of the employee’s initial hire, or within 30 calendar days of the employer starting to use such technology for such purpose.

Failed

Introduced on December 14, 2023, HB1951 provides that, by January 1, 2025, and annually thereafter, developers and deployers of automated decision tools must complete and document an impact assessment for any automated decision tool the deployer uses, or the developer develops, as specified.  “Automated decision tool” means a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions. Upon request, a developer or deployer must provide any impact assessment that it performed pursuant to this section to the office of the attorney general.  The bill requires certain other public disclosures.  The bill also prohibits the use of an automated decision tool that results in algorithmic discrimination.

Failed

Introduced on February 14, 2023, HB3498, the Consumer Data Protection Act, would create an omnibus consumer privacy law.  The bill generally follows the Virginia Consumer Data Protection Act and sets out rules for profiling and automated decision-making.  Specifically, the bill enables individuals to opt-out of the processing of their personal data for the purpose of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.”  Profiling is defined as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  Controllers must also perform a data protection assessment for high-risk profiling activities.

Related Practice Areas

  • Data Privacy & Security

This material is not comprehensive, is for informational purposes only, and is not legal advice. Your use or receipt of this material does not create an attorney-client relationship between us. If you require legal advice, you should consult an attorney regarding your particular circumstances. The choice of a lawyer is an important decision and should not be based solely upon advertisements. This material may be “Attorney Advertising” under the ethics and professional rules of certain jurisdictions. For advertising purposes, St. Louis, Missouri, is designated BCLP’s principal office and Kathrine Dixon (kathrine.dixon@bclplaw.com) as the responsible attorney.