Sens. Warner, Moran Introduce Legislation to Establish AI Guidelines for Federal Government
Nov 02 2023
WASHINGTON – U.S. Sens. Mark R. Warner (D-VA) and Jerry Moran (R-KS) today introduced legislation to establish guidelines to be used within the federal government to mitigate risks associated with Artificial Intelligence (AI) while still benefiting from new technology. U.S. Rep. Ted W. Lieu (D-CA-36) plans to introduce companion legislation in the U.S. House of Representatives.
Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. This framework was released earlier this year and is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use this framework to manage their use of AI systems.
The Federal Artificial Intelligence Risk Management Act would require federal agencies to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.
“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” said Sen. Warner. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Sen. Moran. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”
“Okta is a strong proponent of interoperability across technical standards and governance models alike and as such we applaud Senators Warner and Moran for their bipartisan Federal AI Risk Management Framework Act,” said Michael Clauser, Director, Head of US Federal Affairs, Okta. “This bill complements the Administration’s recent Executive Order on Artificial Intelligence (AI) and takes the next steps by providing the legislative authority to require federal software vendors and government agencies alike to develop and deploy AI in accordance with the NIST AI Risk Management Framework (RMF). The RMF is a quality model for what public-private partnerships can produce and a useful tool as AI developers and deployers govern, map, measure, manage, and mitigate risk from low- and high-impact AI models alike.”
“IEEE-USA heartily supports the Federal Artificial Intelligence Risk Management Act of 2023,” said Russell Harrison, Managing Director, IEEE-USA. “Making the NIST Risk Management Framework (RMF) mandatory helps protect the public from unintended risks of AI systems yet permits AI technology to mature in ways that benefit the public. Requiring agencies to use standards, like those developed by IEEE, will protect both public welfare and innovation by providing a useful checklist for agencies implementing AI systems. Required compliance does not interfere with competitiveness; it promotes clarity by setting forth a ‘how-to.’”
“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government,” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”
"Risk management in AI requires making responsible choices with appropriate stakeholder involvement at every stage in the technology's development; by requiring federal agencies to follow the guidance of the NIST AI Risk Management Framework to that end, the Federal AI Risk Management Act will contribute to making the technology more inclusive and safer overall,” said Yacine Jernite, Machine Learning & Society Lead, Hugging Face. “Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively."
“The Enterprise Cloud Coalition supports the Federal AI Risk Management Act of 2023, which mandates agencies adopt the NIST AI Risk Management Framework to guide the procurement of AI solutions,” said Andrew Howell, Executive Director, Enterprise Cloud Coalition. “By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition's commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States' leadership in the responsible use and development of artificial intelligence on the global stage.”
A one-page explanation of the legislation can be found here.
# # #