Civil Liability for Artificial Intelligence

The Principles project will focus on the core problem of physical harms (bodily injury and property damage). Other types of harm, such as copyright infringement, defamation, and privacy, have their own distinctive doctrinal questions and are the subjects of separate, ongoing Restatement projects. By focusing on physical harms, the project can maintain a clear scope and avoid overlap with other ongoing work. As the project progresses, the Institute will consider the broader implications of AI-caused harms and whether a more comprehensive approach might be necessary in the future.

From the press release announcing the project in 2024:

“Artificial intelligence has become front-page news, and in a short time has seen rapid advancements and increasing integration in many aspects of our society,” said ALI Director Diane P. Wood. “As AI systems become more sophisticated and capable, legal questions surrounding their use, including exposure to liability and ethical implications, are becoming increasingly complex and pressing. Given the anticipated increase in AI adoption by many industries over the next decade, now is an opportune time for The American Law Institute to undertake a more sustained analysis of common-law AI liability topics through a Principles project.”

“Courts are already facing the first set of cases alleging harms, largely related to copyright and privacy, stemming from chatbots and other generative AI models,” added Reporter Geistfeld, “but there is not yet a sufficient body of case law that could be usefully restated. Meanwhile, influential state legislatures are actively considering bills addressing AI, and Congress and federal regulators, pursuant to President Biden’s Executive Order 14110, are also addressing these matters. These efforts could benefit from a set of principles, grounded in the common law, for assigning responsibility and resolving associated questions such as the reasonably safe performance of AI systems.”

“This project can help courts, the tech industry, and federal regulators understand the legal implications of AI,” explained Wood. “It focuses on common-law principles of responsibility, which can guide decision-making in the absence of applicable legislation. By identifying these principles, the project can help avoid conflicts between federal and state laws and provide clarity for all involved parties.”

“There are certain characteristics of AI systems that will likely raise hard questions when existing liability doctrines are applied to AI-caused harms,” explained Geistfeld. “Examples include the general-purpose nature of many AI systems, the often opaque, ‘black box,’ decision-making processes of AI technologies, the allocation of responsibility along the multi-layered supply chain for AI systems, the widespread use of open-source code for foundation models, the increasing autonomy of AI systems, and their anticipated deployment across a wide range of industries for a wide range of uses.”

Reporters

Mark Geistfeld

Reporter, Civil Liability for Artificial Intelligence

Mark Geistfeld is the Sheila Lubetsky Birnbaum Professor of Civil Litigation at New York University School of Law, where his research has extensively addressed the common-law rules governing the prevention of and compensation for physical harms. He has authored or co-authored five books along with over 50 articles and book chapters, often showing how difficult doctrinal issues can be resolved by systematic reliance on the underlying legal principles.

Ketan Ramakrishnan

Reporter, Civil Liability for Artificial Intelligence

Ketan Ramakrishnan is an associate professor of law at Yale Law School. His interests include torts, AI regulation, constitutional law, contracts, property, and moral and legal philosophy. His articles on these subjects are published or forthcoming in the Harvard Law Review, University of Chicago Law Review, Yale Journal on Regulation, Fordham Law Review, and Philosophy & Public Affairs.
