U.S. Proposes New Reporting Requirements for AI Developers and Cloud Providers
TMTPOST--The U.S. Commerce Department on Monday announced a proposal to impose detailed reporting requirements on developers of advanced artificial intelligence (AI) models and providers of cloud computing services, with the aim of ensuring these technologies are secure and resilient against cyberattacks.
The proposed rule from the Department's Bureau of Industry and Security (BIS) mandates that developers of “frontier” AI models and computing clusters provide comprehensive reports to the federal government on their development activities.
This includes detailed accounts of cybersecurity measures and results from red-teaming exercises, which test for vulnerabilities and dangerous capabilities. These capabilities could include aiding cyberattacks or making it easier for non-experts to develop hazardous materials or weapons.
Red-teaming, a practice originating from Cold War-era simulations where adversaries were designated as the “red team,” has long been used in cybersecurity to identify potential risks.
Commerce Secretary Gina Raimondo emphasized the need for these measures, noting that AI technology is advancing rapidly with both significant potential and associated risks.
“The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors,” the Commerce Department security bureau said in a release.
The rise of generative AI, which can produce text, images, and videos from open-ended prompts, has generated both excitement and concern. While the technology holds promise, it also raises fears about job displacement, election manipulation, and other potential risks.
In October 2023, U.S. President Joe Biden signed an executive order requiring AI developers to share safety test results with the U.S. government before public release, particularly for systems posing risks to national security, the economy, public health, or safety.
Alan Estevez, Under Secretary of Commerce for Industry and Security, indicated that the rule will help the BIS understand the capabilities and security of the most advanced U.S. AI systems. This effort builds on the BIS's history of conducting defense industrial base assessments and aims to address risks emerging in critical industries.
The draft rule, which is approximately 20 pages long, details definitions for AI models and systems. An AI model is described as a component in an information system that employs AI technology and uses computational, statistical, or machine learning methods to generate outputs from given inputs. An AI system encompasses any data system, software, hardware, application, tool, or utility that operates fully or partially using AI.
If a U.S. person's activities meet the specified parameter thresholds, the BIS will follow up with more detailed inquiries, which must be answered within 30 days. The term "U.S. persons" covers U.S. citizens, lawful permanent residents (green card holders), companies and institutions organized under U.S. law, and any individuals residing in the U.S.
The proposed rule also addresses the military applications of large AI models. The BIS notes that industrial and governmental entities worldwide are integrating dual-use foundation models into defense capabilities, and that the U.S. defense industrial base will likewise need to integrate such models to remain internationally competitive. The government must be prepared to ensure that dual-use foundation models produced by U.S. companies are available for defense use.
The BIS emphasizes the need to understand how many U.S. companies are developing, planning to develop, or already own the computing hardware necessary for dual-use foundation models, and what the characteristics of that hardware are. This information will help the U.S. government determine whether action is needed to stimulate the development of dual-use foundation models or to support specific types.
Compared with Biden’s AI executive order, the proposed rule keeps the same floating-point operations threshold but raises the interconnection speed requirement from 100 Gbit/s to 300 Gbit/s, effectively lifting the bar for which computing clusters must report. The reporting frequency and technical parameters are the key areas where public feedback is sought, since they directly determine which companies are required to report. BIS currently estimates that no more than 15 major tech companies will need to comply, with smaller firms falling below the thresholds.
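As a rough illustration of the threshold logic described above, the sketch below checks whether a hypothetical training run or computing cluster would trigger the reporting requirement. The specific compute figures (10^26 total training operations for models and 10^20 operations per second of cluster capacity) come from Executive Order 14110 and are assumptions here, not values quoted in the proposed rule; the article only states that the rule keeps the executive order's compute threshold while raising the networking threshold to 300 Gbit/s.

```python
# Minimal sketch of the reporting-threshold logic. The 1e26 and 1e20 figures
# are taken from Executive Order 14110 and are assumptions for illustration;
# only the 300 Gbit/s networking figure is stated in the proposed rule.

MODEL_TRAINING_OPS_THRESHOLD = 1e26       # total training operations (EO 14110 figure, assumed)
CLUSTER_CAPACITY_THRESHOLD = 1e20         # theoretical peak operations per second (assumed)
CLUSTER_NETWORK_THRESHOLD_GBITS = 300     # raised from 100 Gbit/s under the proposed rule


def model_must_report(training_ops: float) -> bool:
    """Would a training run of this size fall under the reporting requirement?"""
    return training_ops >= MODEL_TRAINING_OPS_THRESHOLD


def cluster_must_report(peak_ops_per_sec: float, interconnect_gbits: float) -> bool:
    """Would a computing cluster with these characteristics fall under it?"""
    return (peak_ops_per_sec >= CLUSTER_CAPACITY_THRESHOLD
            and interconnect_gbits >= CLUSTER_NETWORK_THRESHOLD_GBITS)


if __name__ == "__main__":
    # A cluster that cleared the executive order's 100 Gbit/s bar
    # would no longer report under the proposed 300 Gbit/s bar.
    print(cluster_must_report(peak_ops_per_sec=2e20, interconnect_gbits=200))  # False
    print(cluster_must_report(peak_ops_per_sec=2e20, interconnect_gbits=400))  # True
```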
Major cloud providers affected by this proposal include Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
The BIS acknowledges that as AI technology evolves, the number of reporting entities will likely increase. However, Executive Order 14110 directs the Commerce Secretary to update the reporting requirements as needed, so only a limited number of companies are expected to be affected at any given time.