A step forward for AI regulation in the US
Legal Development | 14 November 2023 | North America | Cyber Risk
On 30th October 2023, President Biden issued an executive order on Safe, Secure and Trustworthy Artificial Intelligence (the Executive Order).
The Executive Order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”
Following on from our recent article in which we looked at the development of AI regulation in the UK, the US government has now also taken steps to update the way in which AI risk is managed in the US.
An executive order is a published directive from the President of the United States that manages operations of the federal government. Executive orders are not legislation: they do not require the approval of Congress, and they remain in force until they are revoked or superseded.
The Executive Order directs certain actions, under eight headings:
- New Standards for AI Safety and Security
The Executive Order requires that developers of the most powerful AI systems share their safety test results and other critical information with the US government. The aim is that such measures will ensure AI systems are safe, secure and trustworthy before being made public.
The National Institute of Standards and Technology (NIST) is to set rigorous standards for safety testing, which will be applied by government departments. Government departments will also develop guidance to protect against AI-enabled fraud and establish a program to develop AI tools to assist with cyber security.
- Protecting Americans’ Privacy
The Biden Administration uses the Executive Order to call on Congress to pass data protection legislation to protect all Americans, in particular children. It also directs that federal support for accelerating the development of privacy-preserving techniques (including those that use "cutting-edge AI") will be prioritised.
- Advancing Equity and Civil Rights
The Executive Order issues directions to prevent AI being used to exacerbate discrimination, bias and other abuses in justice, healthcare and housing – through developing guidance, training and best practices.
- Standing Up for Consumers, Patients, and Students
The responsible use of AI in healthcare and education will be advanced, through deploying appropriate AI tools and establishing a safety program to protect against unsafe practices.
- Supporting Workers
Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers.
- Promoting Innovation and Competition
The Executive Order aims to lead the way in innovation and competition by focussing on research, in particular in vital areas such as healthcare and climate change, and by providing small businesses with access to assistance and resources to enable them to commercialise AI.
- Advancing American Leadership Abroad
Noting that “AI’s challenges and opportunities are global”, the Executive Order directs collaboration and engagement with international partners.
- Ensuring Responsible and Effective Government Use of AI
To ensure responsible use of AI by the US government, guidance will be issued, setting out clear standards to protect rights and safety.
The Executive Order is an important step in regulating AI in the United States and will require implementation by a broad range of organisations and governmental departments. There is a focus on transparency, with organisations required to share information with the government.
Following the issuing of the Executive Order, the Office of Management and Budget (OMB) has issued a draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI. We understand the OMB's proposed guidance builds on the Blueprint for an AI Bill of Rights and the AI Risk Management Framework by mandating a set of minimum evaluation, monitoring, and risk mitigation practices derived from these frameworks and tailoring them to the context of the federal government.
Key takeaway
We will have to wait to see how the Executive Order and the draft policy are implemented, given the potential reluctance of some organisations to share business-critical information. What is clear, however, is that organisations should be vigilant: they should monitor their use of, and activity relating to, AI so that they can meet obligations to share information with the US government and comply with the guidance.
It is also important that organisations validate the AI services provided by third parties to ensure that there is compliance throughout the supply chain.
The implementation of the Executive Order and related policies and guidance remains a work in progress. The Biden Administration has confirmed that more action will be required and that it will work to pursue bipartisan legislation to ensure safe, secure and trustworthy AI. Organisations should therefore continue to monitor how the position develops in the US and whether there are equivalent developments in other countries, including the UK and the European Union.
Read our recent article on AI regulation in the UK and the impact on insurers here.