AI regulation in the UK and data protection: thoughts for insurers
Market Insight | 25 October 2023
UK & Europe
Technology, Outsourcing & Data
While artificial intelligence (“AI”) and machine learning are not new concepts, the recent publicity around AI products and their transformative potential has naturally generated discussion around the regulatory framework that governs them.
Here, Rosehana Amin and Harriet Davies from our cyber risk team take a deeper look at the development of the regulation of AI in the UK and what this may mean for insurers.
Regulation
In June 2023, the UK Government closed its consultation on its approach to regulating AI. Titled “AI regulation: a pro-innovation approach”, the white paper details the Government’s plans for encouraging an innovation-led approach to AI regulation, with a stated aim of putting the UK “on course to be the best place in the world to build, test and use AI technology”.
Industry engagement with the Government revealed conflicting and uncoordinated requirements from sector regulators, which are currently attempting to regulate AI through existing frameworks. The Government has recognised that this creates unnecessary burdens and that gaps in current regulation may leave risks unmitigated. The Government’s National AI Strategy sets out its aims both to regulate AI and to support innovation.
There are three objectives that the framework set out in the white paper is designed to achieve:
- Drive growth and prosperity
- Increase public trust in AI
- Strengthen the UK’s position as a global leader in AI
Sector-based approach
The UK Government has indicated that it does not intend to adopt a statutory framework (a move away from the position of the EU, which is working towards implementation of the Artificial Intelligence Act). Instead, it will take a pro-innovation approach to AI regulation, empowering relevant regulators to guide and inform the use and adoption of AI based on five values-focussed, cross-sectoral principles.
The UK Government does not intend to put these principles on a statutory footing initially, as there are concerns that onerous legislative requirements could hold back AI innovation and reduce the ability to respond quickly, and in a proportionate way, to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators.
Regulators will lead the implementation of the framework, for example, by issuing guidance on best practice for adherence to these principles. Regulators will be expected to apply the principles proportionately to address the risks posed by AI within their remits, in accordance with existing laws and regulations.
The intention, therefore, is that the onus is on sector-specific regulators to provide their own rules and guidance, consistent with the principles. However, the Government also notes throughout the white paper that regulators may wish to develop joint guidance in certain areas and, in others, to coordinate their guidance with other regulators.
Five principles
The Government sets out the five principles and gives some guidance to regulators as to how it envisages they may be implemented:
- Safety, security and robustness
  - AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed.
  - Regulators may need to introduce measures for regulated entities to ensure that AI systems are technically secure and function reliably, as intended, throughout their entire life cycle.
  - Regulators should assess the risk of AI to their sector and should consider providing guidance in a way that is coordinated with other regulators.
- Appropriate transparency and “explainability”
  - Transparency refers to the communication of appropriate information about an AI system to relevant people.
  - Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making process of an AI system.
  - Regulators should have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles; this information should be proportionate to the risks presented.
- Fairness
  - AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals, or create unfair market outcomes.
  - Regulators may need to develop and publish descriptions and illustrations of fairness that apply to AI systems within their regulatory domain, and develop guidance that takes into account relevant law, regulation, technical standards and assurance techniques (for example, the Human Rights Act 1998, the UK GDPR, consumer and competition law, and sector-specific requirements such as the Financial Conduct Authority Handbook).
- Accountability and governance
  - Clear lines of accountability should be established across the AI life cycle.
  - Regulators will need to look for ways to ensure that clear expectations for regulatory compliance and good practice are placed on appropriate actors in the AI supply chain.
  - Regulator guidance on this principle should reflect that “accountability” refers to the expectation that organisations or individuals will adopt appropriate measures to ensure the proper functioning of AI systems throughout their life cycle.
- Contestability and redress
  - Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm.
  - Regulators will be expected to clarify existing routes to contestability and redress, implementing proportionate measures to ensure that the outcomes of AI use are contestable where appropriate.
  - The UK’s initial non-statutory approach will not create any new rights or new routes to redress at this stage.
We note a somewhat similar approach to that of the Law Commission in its recent report on digital assets, where it made only minor recommendations for legislative reform. Here, too, we see a move away from legislating in fast-moving cyber and technology-related areas, as implementing new legislation (and updating existing legislation) is seen as unable to keep up with the pace of progress. You can read our insight on this report here.
Since publishing the white paper, the Government has received responses from over 400 individuals and organisations across regulators, industry, academia and civil society. The Government’s response is due to be published later this year.
Information Commissioner’s Office
On 19 September 2023, the Government provided an update on the UK’s AI policy, reporting that it is working closely with regulators across sectors to ensure the AI framework is a coordinated effort. It is worth noting that the UK Government has taken a pro-innovation approach to AI regulation, and so the need to regulate and ensure safeguards must be balanced against a desire to allow for entrepreneurship and to embrace the opportunities presented by AI.
As it relates to data protection, the ICO has already published guidance calling on organisations to understand, and be accountable for, the data processing implications of AI. In April 2023, the ICO released a list of eight questions that helpfully provide organisations with a roadmap to compliance with data protection law for any AI initiatives that process personal data. Such questions include what the lawful basis for processing personal data is, and how risks such as data leakage, model inversion, membership inference, data poisoning and other adversarial attacks will be mitigated.
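To make one of those risks concrete: membership inference asks whether an attacker can tell if a particular individual’s record was used to train a model, which matters under data protection law because it can reveal personal data the model was never meant to disclose. The sketch below is purely illustrative and is not drawn from the ICO guidance; it uses a synthetic dataset and scikit-learn, and relies on a simple heuristic that an unusually large gap between a model’s confidence on its training records and on unseen records signals that such an attack could succeed.

```python
# Illustrative sketch only: a toy membership inference check on a
# deliberately overfitted model, using synthetic (non-personal) data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for personal data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Overfit a model, as a poorly governed AI system might.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Model's predicted probability for each record's true label."""
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

train_conf = true_label_confidence(model, X_train, y_train).mean()
test_conf = true_label_confidence(model, X_test, y_test).mean()

# A large gap suggests an attacker could infer whether a given record
# was in the training set -- the "membership inference" risk.
print(f"mean confidence on training records: {train_conf:.3f}")
print(f"mean confidence on unseen records:  {test_conf:.3f}")
print(f"membership signal (gap):            {train_conf - test_conf:.3f}")
```

A gap near zero suggests the model generalises and leaks little about individual training records; a large gap is one simple warning sign that the kind of mitigation the ICO’s questions contemplate may be needed.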
The Data Protection and Digital Information Bill is currently progressing through Parliament and is expected to become law next year, but it is doubtful that the Bill will be amended to directly reference the AI principles. Instead, we expect the Digital Regulation Cooperation Forum, of which the ICO is a member, to coordinate a multi-regulatory approach to AI regulation across all sectors going forward.
Impact on insurers
Whilst the UK regulatory framework for AI is being developed, the increased use of AI is likely to bring positive change for insurers’ own business operations. AI and machine learning have the ability to automate the review of incoming claims, to speed up the time taken to settle a claim (a value-add for insureds), and to uncover fraudulent or exaggerated claims.
From the perspective of managing cyber risks, insurers may also want to encourage insureds to use AI for security and threat monitoring. The recent IBM Cost of a Data Breach Report 2023 found that organisations that use security AI saved, on average, USD 1.76 million compared to organisations that do not.
However, the move towards tighter and more consistent regulation may well increase the number of claims involving AI, which is likely to create novel issues for insurers when adjusting claims and forecasting their exposure. For example, there is the potential for GDPR claims arising from the unlawful collection of data, particularly if an AI product is processing biometric data. Such claims can, in turn, cause reputational damage for insureds, particularly where little or no due diligence on the AI product has been carried out.
There is also the risk of “silent AI”, similar to the silent cyber risk that insurers have been grappling with in non-cyber policies for some time. AI risks are wide-ranging and may well span a number of different business lines.
While a cyber policy is likely to be the natural place for insureds to look for AI-related coverage in the event of a software failure, questions should be asked about what cyber insurance will typically cover in terms of AI risks, especially third-party risks. For example, there may be exclusions for data scraping or intellectual property claims of the kind that could form the basis of a lawsuit against an insured. AI also presents an opportunity for insurers to reassess their cyber policy wordings and how they correlate with other business lines, including technology E&O, D&O, and intellectual property coverages. Careful scrutiny of all current forms of policy language is recommended to assess where coverage gaps may lie.
Insurers should also consider whether existing policies could respond to losses arising from the use of AI where this type of risk may not have been factored in at the underwriting stage. Insurers may also want to include AI questions in application forms to assess the extent of an insured’s reliance on AI, which could expose the insured to greater risk.
Conclusion
It remains to be seen how the Government will respond to the feedback on the white paper, and whether the next general election in the United Kingdom will impact the timing of any proposed AI legislation. If settled AI legislation is still some time away, will insurers get ahead and write standalone AI insurance policies, or could that present the risk that insureds have overlapping insurance from different business lines, leading to co-insurance headaches?