The US Creates a New Strategy Plan to Regulate AI
On 30th October 2023, President Joe Biden issued an executive order establishing a new strategy for regulating Artificial Intelligence (AI).
The new strategy requires AI companies to share the test results of AI systems with the US government before their release. The new measures aim to harness AI while protecting the general well-being of Americans in key areas, including national security, economic security, health, and safety.
The US announced its AI safety measures ahead of an AI safety summit hosted by the UK government from 1st November. A key focus of the summit was international cooperation in AI governance.
How Does the US Plan to Protect Against AI Through Its Executive Order?
The rapid development of AI, especially generative AI, has raised public concerns, legal battles, and fears. The US recognises the risks AI poses: it has contributed to wrongful arrests, it can distort the truth by replicating images and voices, and it can entrench social discrimination and inequality.
Some of the biggest fears regarding AI are that it could aid the creation of more deadly bio-weapons and crippling cyber-attacks. Worse still, it might develop capabilities that surpass human abilities.
A key element of the plan is the invocation of the Defense Production Act, which obligates companies training AI models to notify the federal government when a model poses a perceived risk to national security or public health and safety.
The proposed measures include the following:
Making new security and safety standards for AI
Asking AI companies to share AI system test results with the US government
Safeguarding consumer privacy by developing standards that organisations can utilise to assess privacy measures implemented in AI
Establishing a programme to assess potentially harmful AI-related healthcare practices
Developing materials on the responsible use of AI tools by educators
Promoting civil rights and equity by establishing best practices for the proper application of AI in the legal system, including forecasting crime, risk assessment, and sentencing
Collaborating with international partners to put AI standards into practice globally
Releasing official guidelines for watermarking content created by AI to combat the risks associated with fraud and deep fakes
Establishing new guidelines for biological synthesis screening to guard against the dangers of using AI to create harmful biological materials
How Will This Plan Affect US Companies?
Companies developing AI models and systems that could pose a serious risk to national security, economic security, or public health and safety must notify the US government when they are training the model.
AI companies are now subject to strict government testing standards set by the National Institute of Standards and Technology (NIST) to ensure that AI systems are safe before their release. They must also share the results of safety tests with the federal government. These steps aim to ensure that AI systems are safe and reliable before companies release them to the public.
As such, affected organisations should ensure they comply with the new requirements and update their internal policies and processes accordingly.
How Can Gerrish Legal Help?
Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property.
We give companies the support they need to run their businesses successfully and confidently whilst complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.
We are here to help you. Get in contact with us today for more information.