Commerce proposes new rules to monitor advanced AI capabilities
The US Department of Commerce has proposed new mandatory reporting requirements for developers of the most powerful artificial intelligence (‘AI’) models and computing clusters, aiming to enhance national security by assessing defence-relevant AI capabilities.
The proposed rule, announced by the Bureau of Industry and Security (‘BIS’) on 9 September, targets AI developers and cloud providers at the ‘frontier’ of technological advancement.
The new regulations would require AI developers to provide detailed reports on their developmental activities, cybersecurity measures, and outcomes from red-teaming efforts, which BIS explained ‘involve testing for dangerous capabilities like the ability to assist in cyberattacks or lower the barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons.’
‘As AI is progressing rapidly, it holds both tremendous promise and risk,’ said Commerce Secretary Gina Raimondo. ‘This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.’
Commerce Under Secretary Alan Estevez emphasised that the reporting requirement would expand on BIS’s experience in conducting defence industrial base surveys, helping the government to better understand the capabilities and security of advanced AI systems.
‘This action demonstrates the US government’s proactive thinking about the dual-use nature of advanced AI,’ explained Assistant Commerce Secretary Thea Rozman Kendler.
The proposed rule follows a pilot survey conducted by BIS earlier this year. It aims to ensure that emerging AI technologies meet rigorous safety and reliability standards, can withstand cyber threats, and are safeguarded against misuse by foreign adversaries or non-state actors, protections seen as critical to maintaining US national defence and technological leadership.