The AI Opportunities Action Plan

  • Market Insight 20 January 2025
  • UK & Europe

  • Technology & AI evolution

On 13 January 2025, the British Government unveiled its response to the ‘AI Opportunities Action Plan’, the strategy for the development of AI in the UK.

View the AI Opportunities Action Plan here

Commissioned shortly after the General Election on 4 July 2024, the Plan forms part of Labour’s Plan for Change, which aims to reinvigorate the UK economy. The detailed Plan sets out, step by step, the Government’s intended approach to what the Prime Minister described as “the defining opportunity of our generation”.

The existing position

The Sunak Government published a White Paper in August 2023, focusing on a pro-innovation approach to AI regulation. The Government considered the risks of AI technologies and prioritised a “clear, proportionate approach to regulation” to enable the responsible application of AI. The White Paper concluded that a principles-based approach, under which existing regulators devise bespoke regulatory approaches for their own sectors, was preferable to EU-style statutory regulation.

One such regulator is the Information Commissioner’s Office (ICO), which has been conducting research into the use of AI. For instance, in November 2024, the ICO published its AI in Recruitment Outcomes Report, which found that the deployment of AI in recruitment leaves many areas for improvement in data protection compliance and the management of privacy risks. In particular, the ICO raised concerns about the use of personal information to train and test AI, both in terms of the content and the volume of the data involved, as well as about supply chain transparency. The report found that some organisations’ approach to their UK GDPR obligations needed improvement, which raises questions about the effectiveness of the principles-based approach to AI regulation.

As noted above, the position in the UK contrasts with that in the EU, which has adopted a harmonised legal regime, the EU Artificial Intelligence Act (“EU AI Act”). The EU AI Act came into force on 1 August 2024 and seeks to regulate the use, input and output of various AI systems by providers, deployers, importers, distributors and product manufacturers. Failure to comply with the EU AI Act may result in a maximum financial penalty of EUR 35 million or 7 percent of worldwide turnover, whichever is higher.

Opportunities Action Plan: The UK's Response to AI

The Plan sets out fifty measures which the Government believes will push the UK to the forefront of AI leadership. It recognises the need for an ambitious approach to the research and development stages of AI, providing the financial and regulatory freedom to support AI from inception to application. The Government seems confident that the UK will soon reap the benefits of AI investment through its incorporation in key areas such as healthcare. Future-proofing the NHS is one of the Government’s key priorities, and the potential use of AI in healthcare means that ‘AI for science’ features heavily throughout the Plan.

The recommendations in the Plan are grouped under seven priorities:

     1.    Building sufficient, secure, and sustainable infrastructure
     2.    Unlocking data assets in the public and private sector
     3.    Training, attracting and retaining the next generation of AI scientists and founders
     4.    Enabling safe and trusted AI development and adoption through regulation, safety and assurance
     5.    Adopting a "Scan > Pilot > Scale" approach in Government
     6.    Enabling the public and private sectors to reinforce each other
     7.    Addressing private-sector user adoption barriers

To facilitate these goals, the Government has proposed the creation of a new unit, UK Sovereign AI, to engage proactively with the private sector. UK Sovereign AI will be able to take part in international collaboration, create joint ventures and act as an incubator for budding AI companies, with the long-term aim of influencing the governance of frontier AI for the UK.

Save for one recommendation, on exploring how the existing immigration system could attract top talent, which was agreed only in part, all recommendations have been accepted in full in the Government’s response. A two-year implementation timeline has been set, with some recommendations planned to take effect as early as Spring 2025.

The future regulatory landscape

Unsurprisingly, the Government has maintained a light-touch approach to regulation while actively promoting innovation. The Plan nonetheless acknowledges the importance of a proportionate and flexible regulatory approach. Consequently, the bill announced in November 2024, which would make binding the commitments agreed in the Voluntary Agreement on AI Testing (April 2024), remains a possibility.

If introduced, the bill might be perceived as a shift in direction for the Labour Government compared to its predecessor, but this is debatable. The bill would merely formalise principles already in place under the Voluntary Agreement, and it would primarily address a small segment of the AI development sector, namely frontier models, because of the significant risks they may pose.

Further, while the Plan acknowledges the importance of protecting citizens from significant risks posed by AI and fostering public trust in the technology, it warns against the risk of regulations blocking or deterring the development of AI in the UK.

Overall, the Plan demonstrates that the Government remains largely aligned with its predecessor in prioritising innovation as the driving force behind AI regulation. It emphasises the need for flexible regulation, the goal of strengthening the UK’s position as a global leader in AI, and the adoption of an agile approach in response to the rapid pace of technological advancements both domestically and globally. The five principles that underpinned the development and use of AI under the Conservative Government’s White Paper, namely (i) safety, (ii) transparency, (iii) fairness, (iv) accountability, and (v) contestability, remain largely current and are developed further in the Labour Government’s Plan.

Other pro-innovation AI initiatives progressed by the previous Government and followed up in the Plan include regulatory sandboxes to encourage development and live consumer testing in controlled environments. The Government is also developing the copyright regime, including proposals to establish a copyright-cleared British media asset training dataset and to support the copyright interests of the creative industries.

Potential impact of the Opportunities Action Plan

The Plan is being hailed as a bold and forward-thinking strategy aimed at positioning the UK as a global leader in AI, outlining a comprehensive framework that spans sectors from healthcare to transport. Although safety remains a consideration for the Government, the priority has clearly shifted from the previous ‘safety first’ rhetoric to a more aggressive pro-innovation approach. The Government is keen to attract technology companies to invest in the UK, emphasising the country’s commitment to fostering a conducive environment for AI development and deployment.

However, the reality behind this ambitious Plan is more nuanced. The Government is under significant scrutiny to demonstrate economic competence and a deep understanding of operational requirements, especially amid concerns over its insight into the market and its ability to retain investment, particularly from the technology sector. The Plan appears to be a rush to retain the UK’s competitive edge and the revenue generated by investors who may otherwise move their money elsewhere. By presenting itself as a large potential customer for AI solutions, expanding AI infrastructure, reforming intellectual property laws and providing a moderate regulatory framework, the Government is effectively offering a compelling package to entice companies to stay and grow in the UK.

While fostering such an innovative environment is crucial for economic growth and the development of next-generation AI models, the risks associated with AI must not be overlooked. The research conducted by the ICO demonstrates the risks of losing oversight of AI, including concerns over the privacy of personal data used to train and test AI. The concern remains that, in the push for innovation and technological development, regulatory obligations to protect individuals’ personal data fall by the wayside. For instance, the Plan identifies that developers need access to high-quality data, that unlocking both public and private data sets will be critical to enabling AI innovation, and that the Government is prepared to grant such access.

The Plan recognises the need to implement guidelines and best practices in this regard; however, it remains to be seen whether the proposed regulatory oversight, or the use of AI-driven data cleansing tools to curate data sets suitable for AI developers and researchers, will provide appropriate safeguards for personal data. For instance, there is little detail at this stage as to which companies will have access to the collected data, whether individuals will be able to opt out, and what cyber security frameworks will need to be in place to keep the data secure, particularly where it underpins AI systems in critical infrastructure, healthcare and education, to name a few. We are also yet to see the ICO comment on the Plan, particularly given the ICO’s November 2024 review of AI in recruitment referred to above.

If you are interested in reading more about how employers can manage the risks of AI when recruiting and managing employees, you can read our article here.

An undoubtedly strategic move to boost the UK’s AI capabilities, the Plan reflects a pragmatic response to the competitive global tech landscape. The true impact on businesses will depend on the execution of the Plan and the Government’s ability to balance innovation with economic stability while preserving fundamental rights such as privacy and data protection.

As the Plan unfolds, it remains to be seen whether the Government’s proposals will be able to balance fostering innovation with addressing ethical considerations and risk, without the benefit of regulation such as the EU AI Act.

