
Artificial Intelligence Update - July 2022

NIST Proposes Comprehensive Risk Management Framework for AI 

As advances in artificial intelligence (“AI”) have led to widespread adoption of AI-based applications, the potential that AI systems could produce unwanted and potentially harmful results has attracted increased scrutiny from policymakers. Among other concerns, policymakers seek to ensure that AI systems avoid harmful bias and are accurate, explainable, and protective of privacy. In the past, these issues have been addressed in the U.S. through a patchwork of regulatory and state legislative actions, generally targeted at specific applications or issues.

In 2019, the Trump Administration issued the Executive Order on Maintaining American Leadership in AI, which, among other things, directed the U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) to create a comprehensive “plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” E.O. 13859 (Feb. 11, 2019). In 2020, Congress directed NIST to develop an AI Risk Management Framework (“AI RMF”). See Commerce, Justice, Science, and Related Agencies Appropriations Bill, 2021, H. Rept. 116-455, 116th Cong. (Jul. 16, 2020). On March 17, 2022, NIST released the first draft of the AI RMF. See https://www.nist.gov/system/files/documents/2022/03/17/AI-RMF-1stdraft.pdf.

A. Draft AI RMF
The draft AI RMF uses a three-class taxonomy of characteristics by which the trustworthiness and risk of AI systems can be assessed: (1) technical characteristics, (2) socio-technical characteristics, and (3) guiding principles. Technical characteristics refer to risks arising from the design of the AI system itself, and include accuracy, reliability, robustness, and resilience/security. Socio-technical characteristics refer to the human and systemic institutional and societal biases that affect how AI systems are used and perceived in society, and include explainability, interpretability, privacy, safety, and managing bias. Guiding principles refer to broader societal norms and values to which AI systems should adhere, including fairness, accountability, and transparency.
Referencing this taxonomy, the draft AI RMF sets out actions that organizations can take to identify and mitigate the risks relevant to particular AI systems. It organizes these actions into four functions, Map, Measure, Manage, and Govern, to be performed iteratively.
Following a public workshop on the draft AI RMF in March 2022, NIST released the first round of public comments, which includes input from a wide variety of stakeholders in industry, government, and higher education, including Google, Kaiser Permanente, the Bureau of Labor Statistics, the American Property Casualty Insurance Association, the Recording Industry Association of America, the U.S. Chamber of Commerce, and U.C. Berkeley. See https://www.nist.gov/itl/ai-risk-management-framework. NIST plans to conduct another workshop on October 19-21, 2022, and to release the final version in late 2022 or early 2023. Id.; https://www.nist.gov/itl/ai-risk-management-framework/ai-risk-management-framework-workshops-events.

B. Current Legislation and Regulation of AI
AI regulation to date has addressed only a subset of the risks identified by the draft AI RMF. No federal legislation currently governs AI systems in the U.S.; federal oversight has instead come from regulatory agencies, including the Federal Trade Commission (“FTC”), the Department of Housing and Urban Development, and the Equal Employment Opportunity Commission (“EEOC”). The scope of their regulation is necessarily limited. For example, the FTC’s rulemaking is limited to addressing discriminatory and fraudulent business practices, and the EEOC focuses on the use of AI in hiring and other workplace applications.

At least five states (Alabama, Colorado, Illinois, Mississippi, and Utah) have passed legislation related to AI, and legislation is pending in more than a dozen others. See https://www.ncsl.org/research/telecommunications-and-information-technology/2020-legislation-related-to-artificial-intelligence.aspx. So far, however, state legislation has also been limited in scope. For example, Illinois has focused on regulating bias in AI systems used for employment decisions, while Alabama’s bill simply created an advisory council on AI.

Even the proposed European Union AI Act, which is the most comprehensive AI legislation proposed to date, does not explicitly address certain technical and socio-technical risks such as accuracy, explainability, and interpretability, and has been criticized for its lack of a process to reclassify AI systems based on future developments. See, e.g., EDRi et al., An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement, https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf. The AI RMF’s efforts to address the full spectrum of AI risks may enable more comprehensive future legislation or regulation.

C. Potential Impact of the AI RMF
Despite its voluntary nature, the AI RMF may effectively impose legal obligations on developers and users of AI systems by informing the common law standard of care. As disputes arise over harms caused by AI systems, courts may look to the AI RMF to determine whether additional actions should have been taken to prevent those harms. There is precedent for treating voluntary NIST frameworks this way. When NIST released its voluntary cybersecurity framework in 2014, commentators recognized that it could be adopted as a standard of care. See, e.g., Scott J. Shackelford et al., Toward a Global Cybersecurity Standard of Care? Exploring the Implications of the 2014 NIST Cybersecurity Framework on Shaping Reasonable National and International Cybersecurity Practices, 50 Tex. Int’l L.J. 305 (2015). Courts have since recognized its value in informing the standard of care: expert testimony in a voting rights case invoked the NIST cybersecurity framework in opining on the appropriate standard of care, see Curling v. Raffensperger, 397 F. Supp. 3d 1334, 1376 n.59 (N.D. Ga. 2019), and compliance with the cybersecurity framework was a condition of the 2020 settlement of class action litigation over Yahoo’s data security breach, In re Yahoo! Inc. Customer Data Sec. Breach Litig., No. 16-MD-02752-LHK, 2020 WL 4212811, at *33 (N.D. Cal. July 22, 2020). The AI RMF is likely to be used similarly to establish a standard of care in future litigation over AI systems.

For now, the AI RMF can help organizations address the emerging patchwork of legal requirements applying to AI systems and prepare for future comprehensive regulation. The NIST privacy framework, released in 2020, has served a similar role. When the privacy framework was released, the E.U. General Data Protection Regulation (“GDPR”) and the California Consumer Privacy Act (“CCPA”) were the only significant data privacy laws in effect. Since then, over 60% of states have considered or passed new privacy regulations, and the privacy framework has helped organizations anticipate and keep up with that wave. As AI regulation moves forward on a similar trajectory, the AI RMF will likely prove a useful tool for organizations navigating a rapidly evolving legal landscape.