NTIA Artificial Intelligence Accountability Policy Report
National Telecommunications and Information Administration
• Are the various liability frameworks that already govern AI systems (e.g., in civil rights and consumer protection law, labor laws, intellectual property laws, contracts, etc.) sufficient to address harms, or are new laws needed to respond to any unique challenges?296

• What is the influence and impact, if any, that external legal regimes—including the European Union's AI Act and AI Liability Directive—might have on state and federal liability systems?297

• How should liability rules avoid stifling bona fide research, accountability efforts, or innovative uses of AI? What safeguards, safe harbors, or liability waivers for entities that undertake research and trustworthy AI practices, including adverse incident disclosure, should be considered?

AI accountability inputs can assist in the development of liability regimes governing AI by providing people and entities along the value chain with information and knowledge essential to assess legal risk and, as needed, exercise their rights.298 It can be difficult for those who have suffered AI-mediated employment discrimination, financial discrimination, or other AI system-related harms to bring a legal claim because proof, or even recognition, that an AI system led to harm can be hard to come by; thus, even if an affected party could, in theory, bring a case to remedy a harm, they may not do so because of information and knowledge barriers.299 Accountability inputs can assist people harmed by AI to understand causal connections and, therefore, help them determine whether to pursue legal or other remedies.300

As a comment from twenty-three state and territory attorneys general stated, "[b]y requiring appropriate disclosure of key elements of high-risk AI systems, individuals can be empowered to decide what systems are fair and adhere to critical due process norms."301 AI accountability inputs can make it easier to bring cases and vindicate interests now or in the future.302 At the same time, entities that may be on the other end of litigation (e.g., AI developers and deployers alleged to have caused or contributed to harm) can also benefit from more information flow about defensible processes.303

The creation of safe harbors from liability is relevant to AI accountability, whether the one sheltered in that harbor is an AI actor or an independent researcher. The Administration's National Cybersecurity Strategy, for example, recommends the creation of safe harbors in connection with new liability rules for software.304 A small minority of commenters addressed the safe harbor issues. Some expressed doubt that safe harbors for AI actors in connection with AI system-related harms would be appropriate.305 A number of commenters argued that researchers
The Future Society Comment at 12 ("Transferring absolute liability to third-party auditors would erroneously presuppose their capability to audit for novel risks. . . . Shared liability between developers, deployers, and auditors encourages all involved parties to maintain high standards of diligence, enhances effective risk management, and fosters a culture of accountability in AI development and deployment."); Global Partners Digital Comment at 3 (arguing that "[l]iability should be clearly and proportionately assigned to the level in which those different entities are best positioned to prevent or mitigate harm in the AI system performance."); Cordell Institute for Policy in Medicine & Law Comment at 11 ("[P]olicymakers should consider vicarious liability and personal consequences for malfeasance by corporate executives"); ACT | The App Association Comment at 2 ("Providers, technology developers and vendors, and other stakeholders all benefit from understanding the distribution of risk and liability in building, testing, and using AI tools. . . . [T]hose in the value chain with the ability to minimize risks based on their knowledge and ability to mitigate should have appropriate incentives to do so"); Georgetown University Center for Security and Emerging Technology Comment at 1 ("Due to the large variety of actors in the AI ecosystem, we recommend designing mechanisms that place clear accountability on the actors who are most responsible for, or best positioned to, influence a certain step in the value chain"). See also Salesforce Comment at 8 ("AI developers like Salesforce often create general customizable AI tools, whose intended purpose is low-risk, and it is the customer's responsibility (i.e., the AI deployer) to decide how these tools are employed. . . . It is the customer, and not Salesforce, that knows what has been disclosed to the affected individual, and what the risk of harm is to the affected individual.").
296 See, e.g., Senator Dick Durbin Comment at 2 ("[W]e must also review and, where necessary, update our laws to ensure the mere adoption of automated AI systems does not allow users to skirt otherwise applicable laws (e.g., where the law requires 'intent.')"); ICLE Comment at 15 ("[T]he right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system"); Open MIC Comment at 8 ("Legal experts are divided regarding how AI-related harms fit into existing liability regimes like product liability, defamation, intellectual property, and third-party-generated content."); CDT Comment at 33 ("The greatest challenge in successfully enforcing a claim against AI harms under existing civil rights and consumer protection laws is that the entities developing and deploying AI are not always readily recognized as entities that traditionally have been covered under these laws. This ambiguity helps entities responsible for AI harms claim that existing laws do not apply to them."); HRPA Comment at 5 ("The use of technology in the employment context is already subject to extensive regulation which should be taken into consideration when developing any additional protections. In the United States alone, Federal and state laws dealing with anti-discrimination, labor policy, data privacy, and AI-specific issues affect the use of AI in the employment context."); Georgetown University Center for Security and Emerging Technology Comment at 10 (noting that "[p]roduct liability law provides inspiration for how accountability should be distributed between upstream companies, downstream companies and end users."); Boston University and University of Chicago Researchers Comment at 1-2 (arguing that accountability mechanisms are important for "(a) new or modified legal and regulatory regimes designed to take into account assertions, evidence and similar information provided by AI developers relevant to intended or known users of their products, and (b) existing regimes such as product liability, consumer protection, and other laws designed to protect users and others against harm.").
297 See, e.g., SaferAI Comment at 2 ("We believe that the article 28 of the EU AI Act parliament draft lays out useful foundations on which the US could draw upon in particular regarding the distribution of the liability along the value chain to make sure to not hamper innovation from SMEs, which is one of EU's primary concerns."); Association for Intelligent Information Management (AIIM) Comment at 3 ("This approach – classifying AI into different categories and establishing policy accordingly – aligns with the European Union's AI Act, which is currently working its way through their legislative processes. While AIIM is not indicating its support for this legislation nor advocating for the U.S. government to adopt similar policy, the premise is commendable."); Georgetown University Center for Security and Emerging Technology Comment at 6 ("Accountability mechanisms should make sure to clearly define what different actors in the value chain are accountable for, and what information sharing is necessary for that party to fulfill their responsibilities. For example, the EU parliament's proposal for the AI Act requires upstream AI developers to share technical documentation and grant the necessary level of technical access to downstream AI providers such that the latter can assess the compliance of their product with standards required by the AI Act."); ICLE Comment at 9-11 (criticizing the proposed EU AI Act's "broad risk-based approach.").
298 While accountability inputs can play an important role in the assigning of liability, we note that these inputs do not in themselves supplant appropriate liability rules. See, e.g., The Future Society Comment at 8 ("Third-party assessment and audits must not be perceived as silver bullets. . . . Furthermore, external audits, in particular, may be subject to liability-washing (companies seeking to conduct external audits with the ulterior motivation of evading liability)."); Cordell Institute for Policy in Medicine & Law Comment at 3 ("Governance of AI systems to foster trust and accountability requires avoiding the seductive appeal of 'AI half-measures'—those regulatory tools and mechanisms like transparency requirements, checks for bias, and other procedural requirements that are necessary but not sufficient for true accountability."); Boston University and University of Chicago Researchers Comment at 2 ("[A]ccountability and transparency mechanisms are a necessary but not sufficient aspect of AI regulation. . . . To be effective, a regulatory approach for AI systems must go beyond procedural protections to include substantive, non-negotiable obligations that limit how AI systems can be built and deployed."). When AI transparency and system evaluations contribute additional information and knowledge that could be used to bring legal cases, the challenge of how to apply legal concepts to modern use situations involving AI may remain even when people agree that a law is applicable. See, e.g., Lorin Brennan, "AI Ethical Compliance is Undecidable," 14 Hastings Sci. & Tech. L.J. 311, 323-332 (2023) (arguing that it is "unsettled how applicable law should be applied" in the context of AI ethical compliance).

299 See, e.g., CDT Comment at 34 ("Due to the lack of transparency in AI uses, the plaintiff may not have the information needed to even establish a prima facie case. They may not even know whether or how an AI system was used in making a decision, let alone have the information about training data, how a system works, or what role it plays in order to offer direct evidence of the AI user's discriminatory intent or to discover what similarly situated people experienced due to the AI."); Public Knowledge Comment at 12 ("Unfortunately, identifying the party responsible for introducing problems into the AI system can be challenging, even though the resulting harms may be evident. While much has been written on different legal regimes and their effectiveness in addressing AI-related harms, less attention has been given to determining the specific entities in the chain of development and use who bear responsibility.").

300 See, e.g., OECD, Recommendation of the Council on Artificial Intelligence, Section 1.3 (2019), https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Cf. Danielle Citron, "Technological Due Process," 85 Wash. U. L. Rev. 1249, 1253-54 (2008) ("Automation generates unforeseen problems for the adjudication of important individual rights. Some systems adjudicate in secret, while others lack recordkeeping audit trails, making review of the law and facts supporting a system's decisions impossible. Inadequate notice will discourage some people from seeking hearings and severely reduce the value of hearings that are held.").

301 Twenty-three Attorneys General Comment at 3. See also AI & Equality Comment at 2 ("[E]nabling AI-based systems with adequate transparency and explanation to affected people about their uses, capabilities, and limitations amounts to applying the due process safeguards derived from constitutional law in the analogue world to the digital world.").

302 See, e.g., AI & Equality Comment at 2 ("[T]ransparency and explainability mechanisms play an important role in guaranteeing the information self-determination of individuals subjected to automated decision-making, enabling them to access and understand the output decision and its underlying elements, and thus providing pathways for those who wish to challenge and request a review of the decision.") (emphasis added); CDT Comment at 22 ("[A] publicly released audit provides a measure of transparency, while transparency provides information necessary to determine whether liability should be imposed.").

303 See, e.g., AIIM Comment at 3 ("[Organizations] are reluctant to implement new technology when they do not know their liabilities, don't know if or how they will be audited or who will be auditing them, and are unclear about who may have access to their data, among other things. . . . For instance, insurance companies have had AI for years that can analyze images of crashes or other incidents to help make determinations about fault or awards, but companies have been afraid to use it out of fear of the potential liability if an AI-made decision is contested."); Public Knowledge Comment at 11 (noting that understanding liability "is especially important to ensure that harms can be adequately addressed and also so that academic researchers, new market entrants, and users can engage with AI with clarity about their responsibilities and confidence surrounding their risk."); DLA Piper Comment at 3 ("Undertaking accountability mechanisms reduces potential liabilities in the event of accidents or AI failures by showing diligent governance and responsibility were exercised."); CDT Comment at 29 ("One of the key ways of ensuring accountability is the promulgation of laws and regulations that set standards for AI systems and impose potential liability for violations. Such liability both provides for redress for harms suffered by individuals and creates incentives for AI system developers and deployers to minimize the risk of those harms from occurring in the first place.").

304 See The White House, supra note 292, at 20-21 (Strategic Objective 3.3).

305 See Senator Dick Durbin Comment at 2 ("And, perhaps most importantly, we must defend against efforts to exempt those who develop, deploy, and use AI systems from liability for the harms they cause."); Global Partners Digital Comment at 10 ("Accountability needs to be embedded throughout the whole value chain, or more specifically, throughout the entire lifecycle of the AI system. . . . [L]iability waivers do not seem appropriate, and there is a clear need for a dynamic distribution of the legal liability in case of harm.").