Artificial intelligence, or "AI" for short, has had an outsized impact on nearly every aspect of human existence. If that sounds like an overstatement, it is not: machine learning systems and generative AI tools have now been integrated into sectors including healthcare, government services, industry, and education. In 2023, more than 50% of US companies reported using AI for cybersecurity and fraud management, and 97% of business owners expressed enthusiasm that AI platforms like ChatGPT will help their businesses. Several cities and municipalities have adopted protocols governing how local government may use and rely upon AI in day-to-day duties.

Unsurprisingly, the law has lagged well behind the impressive speed of AI's ballooning technological development. Nevertheless, various governmental agencies, legislative bodies, and courts have begun to assemble a regulatory regime that may help answer the million-dollar question in this brave new world: who, or what, is liable when AI goes wrong?

Federal law

To begin, entities designing and utilizing AI will likely not be able to hide behind Section 230. One of the most important authorities in the AI space, Section 230 of the Communications Decency Act of 1996 provides that "[n]o [internet platform] shall be treated as the publisher or speaker of any information provided by another information content provider." Many legal scholars, however, predict that Section 230's liability shield will not extend to content created by generative AI. Indeed, during oral argument before the Supreme Court in early 2023, Justice Neil Gorsuch suggested that because AI is capable of generating "poetry" and "polemics," its output will not be protected by Section 230.

Case law

Likewise, AI designers are unlikely to escape liability for what their chatbots write by claiming that generative AI is a mere "tool" through which individuals create content. OpenAI, the company behind ChatGPT, has been the subject of several lawsuits brought by parties attempting to hold it liable for statements made by its generative AI. In one such complaint, a Georgia radio host alleges that ChatGPT generated text that falsely accused him of embezzlement. Although OpenAI has responded that it should not be held liable because its product is merely a tool through which people create content, legal scholars have predicted that this will be a losing argument.

Given the pervasiveness of these technologies, it is reasonable to presume that individuals could be harmed by generative AI in any number of ways, apart from being injured directly by a chatbot's output or by reliance on the same. In Jones v. W + M Automation, Inc., the plaintiff was injured by a robotic loading system and brought suit against a manufacturer and programmer of the system on a products liability theory. The court dismissed the complaint, finding that the defendants "manufactured only nondefective component parts" and the product was "reasonably safe when designed and installed." Commentators have suggested that this same logic may apply to AI: designers of AI may not be liable for injuries caused by their products so long as the products were non-defective when made. However, defective AI, or AI that is subsequently modified by a licensee, may give rise to liability for the licensor and licensee.

Administrative guidance

In guidance released in April 2020, the Federal Trade Commission (FTC) addressed the regulation of AI. The FTC warns those who use algorithmic decision-making tools to make credit-granting decisions to do so in a transparent way and suggests that companies that fail to do so may be liable for damages.

Guidance from Massachusetts

Government officials have also used their platforms to offer some guidance on the subject. On April 16, 2024, Massachusetts Attorney General Andrea Joy Campbell issued an advisory to provide direction to "developers, suppliers, and users" of AI regarding their obligations under existing laws relating to consumer protection, anti-discrimination, and data security. In a nutshell, the advisory notes that existing laws apply to emerging AI systems just as they would to any other technology. On the specific question of who is liable for a breach, however, the advisory is less helpful. It suggests that "developers, suppliers, and users" are all liable where the state's data security and anti-discrimination laws are implicated. On the other hand, only "AI suppliers" are warned about the applicability of federal and state consumer protection statutes, including the Federal Trade Commission Act. This language could be read to indicate that the designers and end users of AI are off the hook for an AI's breach of consumer protection laws, which can be quite broad (according to the advisory, an AI breaches state consumer protection laws where it "[f]ails to comply with Massachusetts 'statutes, rules, regulations or laws, meant for the protection of the public's health, safety or welfare.'" 940 Code Mass. Regs. 3.16(3)). It should be noted, however, that the terms "developers, suppliers, and users" go undefined in the advisory. As a result, all AI stakeholders should presume that they are not immune from liability.

Looking ahead

AI is certain to become more prevalent in the coming years, and with this heightened prevalence will come litigation. The "who is liable" inquiry in any given case involving AI will likely be a fact-specific one, and as a result, the answer will change as the case law develops. Without any comprehensive regulatory framework or legal standard, we can expect rulings to differ based on jurisdiction, court, and the nature of the claims presented. At the moment, however, it appears that AI companies and the businesses that use their products will be on the hook for any damages stemming from AI's use.

Taft's Privacy & Data Security team has extensive experience counseling clients on the use of emerging technologies such as AI. We will continue to monitor updates relating to AI. For more data privacy and security-related updates, please visit Taft's Privacy & Data Security Insights blog and the Taft Privacy & Data Security Mobile Application.