Technology risk
This is the first article in Clyde & Co’s latest international arbitration series covering the topic of AI in international arbitration. In this piece, trainees Sebastian Florax and Arina Naumova from our London office provide the legal perspective from England & Wales.
Artificial Intelligence (“AI”) broadly refers to computer systems capable of performing tasks that would ordinarily require human intelligence, typically by applying machine learning algorithms to large volumes of data1. At its core, AI has been developed to automate complex tasks previously done by humans. Clearly, this area of technology is becoming increasingly relevant to all industries, not least the legal sector. Considering the recent media frenzy around AI, this article aims to cut through the noise and explore how AI is currently being used and regulated in the context of international arbitration, and where such regulation is headed.
Despite the rapidly evolving landscape of AI, it is worth emphasising that certain AI tools have been around for some years and there is a clear framework, if not formal regulations, governing these technologies.
A well-known example of AI already employed in international arbitration proceedings is AI-backed translation software, used to sift through the large volume of foreign language documents often present in arbitrations. Under s.34 Arbitration Act 19962, the translation of documents is a procedural matter determined by the appointed tribunal, meaning that the tribunal can decide upon the manner in which a translation is carried out. Due to the cost and speed of traditional human translation, AI translation is frequently employed as a first step. However, machine translations can result in misleading and inaccurate statements being put before the tribunal. Indeed, warnings as to the accuracy of translations produced by AI software have been publicised for several years now3. The consequences of inaccurate translations are self-evident, as seen in Occidental Petroleum v Ecuador4, where a party lost part of its claim because the tribunal premised its award on a mistranslation.
The solution is to ensure that the appointed tribunal is made aware whenever AI-generated translations are used, and that the parties remain alert to the accuracy of the resulting wording.
Another important area where AI is being used in international arbitration is document review. For example, the learning capabilities of AI systems allow the technology to surface the most relevant documents first, which is key to speeding up the review of a large document set. As a result, the expensive and slow process of document review in large arbitrations can now be done for a fraction of the cost and time5.
The use of AI in document review in the context of legal proceedings was considered in 2016 in the case of Pyrrho Investments Ltd v MWB Property Ltd.6 The case concerned the use of “predictive coding” and “computer-assisted review” (i.e., extraction of the most relevant documents based on the technology learning from a lawyer performing the task), which was approved for the first time by the English courts. Although novel in 2016, the widespread use of predictive coding and computer-assisted review in disclosure today shows how quickly AI in the legal sector can evolve from a new tool requiring judicial scrutiny to a routine system used in almost all large-scale dispute resolution.
Generative AI – a model which generates new content based on the data it was trained on – is truly novel in legal proceedings and remains scarcely regulated. It is this tool that is giving rise to new case law and guidance on the subject. In a recent survey conducted by FTI Consulting7, very few international arbitration practitioners said that their firms were using AI for more complex tasks such as predicting case outcomes, legal research or drafting deposition questions. Two areas where the use of generative AI has already caused issues are legal research and the drafting of proceedings.
Despite the temptation to rely on AI when conducting legal research in international arbitration, it is a tool to be used with caution. Two New York lawyers found this out the hard way when they were sanctioned USD 5,000 in June 2023 for submitting a brief citing fictitious case law generated by ChatGPT.8 While many law firms have since introduced strict policies regulating the use of ChatGPT, centralised regulation or guidance from arbitral institutions would be useful to ensure a consistent approach.
Relying on technology to draft proceedings is another, more novel, way to employ AI in international arbitration – it can assist with structuring and setting out arguments. Despite its novelty, judges in Texas and Pennsylvania have already issued standing orders requiring disclosure where AI has been used in drafting pleadings, together with certification that the accuracy of such pleadings has been verified.9 Similar guidelines have been issued by Canadian courts in Manitoba and Yukon.10 In the absence of such guidance from the UK courts, the gap could be filled by procedural orders issued by arbitrators. Such orders should perhaps be more specific than the court orders discussed above, to ensure that disclosure remains reasonable and proportionate to the risks associated with the use of AI in the administration of justice.11
The way that AI in international arbitration will be regulated cannot be predicted with total accuracy. However, legislation and government policy on AI more broadly can be used to make an educated guess. For example, in the UK a White Paper was published in March 2023 which recognises the opportunities and risks associated with AI.12 The White Paper sets out principles to be followed by regulators when developing guidance on AI. Those which may be relevant to arbitration include the principles that AI systems should be transparent and explainable, and that the technology should operate on a fair basis.
The EU has also taken specific steps to regulate AI, recognising the disruption that may be caused without a clear framework. On 9 December 2023, the EU provisionally agreed to pass the European Union Artificial Intelligence Act (EU AI Act), the first legislation of its kind. The Act is yet to be finalised and will likely come into effect in 2026. It will be far-reaching, applying to both providers and developers of AI systems in the EU regardless of where they are established. As foreshadowed in the EU's earlier White Paper on AI, AI systems will be divided into categories of risk, such as limited risk, high risk and unacceptable risk. For those AI applications that do not pose a high risk, the Commission proposes to set up a voluntary labelling scheme under which economic operators can signal that their AI-enabled products and services are trustworthy.13 The real uncertainty lies in how AI systems employed in the international arbitration sector will be categorised, and how far AI will go in assisting with the preparation of proceedings. The more advanced the work an AI system performs in areas where human judgment is ordinarily required, the likelier it is to be classified as high risk.
In terms of guidance published by arbitral institutions, to date only the Silicon Valley Arbitration & Mediation Center has published draft guidelines on the use of AI.14 The draft guidelines focus on principles such as safeguarding the confidentiality of information where AI tools are used, and emphasise that the decision-making process in arbitration must not be delegated to AI. Such guidelines are likely to be relied upon by other arbitral institutions when drafting their own guidance on AI.
To conclude, AI guidance in international arbitration is still in its infancy and many questions remain unanswered. These questions ought to be addressed by arbitral institutions sooner rather than later, so as to avoid an inconsistent approach to the use of AI in the sector. They include not only the issues outlined in this article, such as where to draw the line on using AI to draft pleadings, but also how far AI's role will extend and what the penalties for misuse should be. With so many variables in play, the journey towards a comprehensive legal framework for AI in the UK arbitration sector and worldwide is an ongoing one, requiring careful review in navigating this complex and exciting frontier.
This series will continue next week with the perspective from Spain.
1 Artificial intelligence (AI) | Definition, Examples, Types, Applications, Companies, & Facts | Britannica
2 Arbitration Act 1996 (legislation.gov.uk)
3 Google translation AI botches legal terms 'enjoin,' 'garnish' -research | Reuters
4 3 reasons to ensure your arbitration doesn't get lost in translation | LexisNexis Blogs
5 Generative AI and the small law firm: Leveling the playing field - Thomson Reuters Institute
6 Pyrrho Investments Ltd v MWB Property Ltd & Ors [2016] EWHC 256 (Ch) (16 February 2016) (bailii.org)
7 Guest blog: How will AI impact dispute resolution? - ICC - International Chamber of Commerce (iccwbo.org)
8 New York lawyers sanctioned for using fake ChatGPT cases in legal brief | Reuters
9 Standing Order - In Re: Artificial Intelligence ("AI") in Cases Assigned to Judge Baylson | Eastern District of Pennsylvania | United States District Court (uscourts.gov)
10 CIArb - The use of AI in international arbitration – thoughts from the coalface
11 CIArb - The use of AI in international arbitration – thoughts from the coalface
12 A pro-innovation approach to AI regulation - amended (web-ready PDF) (publishing.service.gov.uk)
13 SCL: 10 Things You Need to Know About the EU White Paper on Artificial Intelligence