State Attorneys General Provide Guidance on Artificial Intelligence Under Existing Data Privacy Laws
Key Takeaways: Several state attorneys general have issued guidance on how existing state laws apply to artificial intelligence (“AI”), with data privacy being a key concern. This guidance shows that many state laws already regulate AI, even without specific legislation targeting it.
With the recent growth in the use of AI, several states have taken the opportunity to issue guidance on the interplay of their existing laws and the development, supply, and use of AI. Among these states are California, Massachusetts, New Jersey, and Oregon. The guidance issued by these states' attorneys general addresses several areas of state law that may apply in this context, including data privacy. Because data plays a crucial role in the development of AI, it is critical for businesses to comply with existing data privacy laws.
California
On January 13, 2025, California Attorney General Rob Bonta issued two legal advisories, the Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare and the Legal Advisory on the Application of Existing California Laws to Artificial Intelligence, outlining how California laws apply to AI.
The advisories emphasize that the use of AI must comply with the California Consumer Privacy Act (“CCPA”). This includes ensuring transparency, respecting individual data rights, and limiting data processing to what is “reasonably necessary and proportionate.”
Additionally, the California Invasion of Privacy Act (“CIPA”), which regulates the use of wiretapping and recording technologies, could be implicated if AI is trained by recording or listening to private electronic communications. It may also apply if an AI system examines or records voiceprints to determine the truth or falsity of statements without the individual’s consent.
The Student Online Personal Information Protection Act (“SOPIPA”) and the Confidentiality of Medical Information Act (“CMIA”) could also affect how AI tools may be used in specific contexts. SOPIPA applies to services and applications used primarily for K-12 school purposes and prohibits education technology service providers from selling student data, engaging in targeted advertising using student data, and creating profiles about students, except for specified school purposes. The CMIA requires covered entities to preserve the confidentiality of patients’ medical information and obtain patients’ consent before disclosing medical information to third parties. Businesses subject to SOPIPA or the CMIA should be particularly mindful of their obligations under these laws and should ensure that both their own use of AI tools and any further use of the data by the AI provider comply with applicable legal requirements.
Oregon
On December 24, 2024, Oregon Attorney General Ellen Rosenblum issued guidance for businesses deploying AI technologies. The advisory highlights privacy and accountability as major concerns, given AI’s dependence on large volumes of personal data.
The advisory notes that the Oregon Consumer Privacy Act (“OCPA”) guarantees a consumer’s right to control the distribution of their personal data, and such control is particularly relevant for AI systems trained with such data. Developers that use personal data to train AI systems must clearly disclose this use in an accessible and clear privacy notice and obtain explicit consent from consumers if the personal data includes sensitive data as defined under the OCPA. The guidance emphasizes that affirmative consent for the use of sensitive data is required and retroactive or passive attempts to modify privacy notices or terms of use will not bring a business into compliance.
Pursuant to the OCPA, consumers must also be able to opt out of AI profiling when AI is used to make significant decisions about them, such as those pertaining to housing, education, or lending. Additionally, before engaging in high-risk activities such as profiling using AI tools, businesses must conduct data protection assessments to mitigate the risk posed to consumers.
Finally, businesses must ensure that their data practices are transparent and truthful, as misleading consumers can lead to legal violations under Oregon’s consumer protection laws, even if not directly covered by the OCPA.
Massachusetts
On April 16, 2024, Massachusetts Attorney General Andrea Joy Campbell issued an Advisory on the Application of the Commonwealth’s Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence to inform developers, suppliers, and users of AI of their obligations under Massachusetts consumer protection, anti-discrimination, and data security laws.
The advisory stresses that AI systems must comply with Massachusetts’ Standards for the Protection of Personal Information of Residents of the Commonwealth (the “Standards”), promulgated under Chapter 93H of the Massachusetts General Laws. The Standards establish minimum security requirements for entities that own or license the personal information of Massachusetts residents, and AI developers, suppliers, and users must take the necessary steps to safeguard personal information used by AI systems, as well as comply with the breach notification requirements set forth in the statute. Chapter 93A, the Massachusetts Consumer Protection Act, prohibits unfair and deceptive practices, and its prohibitions apply to the marketing, sale, and use of AI tools. The advisory states that it is unfair or deceptive to falsely advertise the quality, value, or usability of AI systems; to supply an AI system that is defective, unusable, or impractical for the purpose advertised; or to misrepresent the reliability, manner of performance, safety, or condition of an AI system.
In addition, the advisory addresses Massachusetts’ Anti-Discrimination Law, which prohibits developers, suppliers, and users of AI systems from deploying technology that discriminates against residents on the basis of a legally protected characteristic. This includes algorithmic decision-making that relies on or uses discriminatory inputs and that produces discriminatory results.
New Jersey
On January 9, 2025, New Jersey Attorney General Matthew J. Platkin issued Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination to explain how the New Jersey Law Against Discrimination (“NJLAD”) may apply to algorithmic discrimination resulting from the use of new and emerging data-driven technologies, such as AI. The guidance provides a detailed explanation of algorithmic discrimination, the situations in which it may arise, and how the NJLAD protects consumers against discriminatory action.
The NJLAD prohibits algorithmic discrimination based on actual or perceived membership in a protected class, whether through disparate treatment or disparate impact, as well as the failure to provide reasonable accommodations. The guidance highlights how AI tools used in various business practices, including employment, housing, places of public accommodation, credit, and contracting, may unlawfully discriminate against individuals in violation of the NJLAD.
The guidance emphasizes that covered entities, as well as the developers and vendors of automated decision-making tools, must carefully evaluate the design and testing of such tools before they are deployed, and must continue to analyze and evaluate those tools on an ongoing basis after deployment, to ensure that the tools do not contribute to the likelihood of algorithmic discrimination.
Importantly, all four guidance documents emphasize that all existing laws, including those not expressly mentioned, apply equally to the development and use of AI tools and systems. Entities that develop or use AI tools have a duty to ensure their use complies with all applicable state, federal, and local laws. In short, a variety of existing state laws already govern AI tools, even in the absence of AI-specific regulations, and businesses utilizing AI tools in any state should ensure that their use of AI remains compliant.
Koley Jessen is committed to staying informed about developments related to AI guidelines and state privacy laws and will offer guidance as new information emerges. If you are unsure about your business’ AI compliance needs or the steps required to adhere to state privacy laws, please contact one of the specialists in Koley Jessen's Data Privacy and Security Practice Area for expert assistance.
* Special thanks to summer associate Ellie Johnson for her contributions to this article.
This content is made available for educational purposes only and to give you general information and a general understanding of the law, not to provide specific legal advice. By using this content, you understand there is no attorney-client relationship between you and the publisher. The content should not be used as a substitute for competent legal advice from a licensed professional attorney in your state.