Embracing AI, Episode 4 | The realities and risks for professional services

  • Podcast | 01 October 2024
  • UK & Europe

  • Technology risk

Steven Bird, Senior Associate at Clyde & Co, spoke to Matthew Lavy KC, Barrister at 4 Pump Court, whom he described as “the godfather of AI law”, about the risks and realities facing professional services firms as they adopt this transformational technology.

As a former software developer turned barrister, Matthew Lavy is one of the UK’s foremost silks specialising in IT and IP disputes. His knowledge of the technology behind IT projects and how they are implemented, combined with his two decades at the bar, gives him unrivalled insight into the development of artificial intelligence (AI) and its impact on many sectors, including professional services.

The speed at which AI is changing the face of the modern world – and the laws which govern it – is highlighted by the second edition of Lavy’s book, “The Law of Artificial Intelligence” (which he co-edits with Matt Hervey). In the three years since the first edition was published, the book has almost doubled in size, reflecting how much the subject it covers has grown.

The impact on white-collar jobs

Bird asked what this rapid evolution might mean for white-collar jobs. In Lavy’s view, it could radically change many roles as we currently know them, empowering professionals in some tasks while making other tasks obsolete, making teams leaner and altering business models.

One area where AI looks set to be transformational is legal document review. Using Large Language Models (LLMs) not only to find relevant documents, but also to identify which issues they are relevant to, which parts of them matter, and even how they might be deployed, could be a game-changer.

“Imagine you've got a new case, it's got a million documents that have been loaded into the disclosure system and you haven't read any of them,” said Lavy. “You ask the LLM: ‘Find me documents that prove what the claimant says in paragraph five of his claim is untrue.’ The machine then produces…a list of ten documents, which together pithily show that... That's not just more efficiency on an existing process - that's a paradigm shift.”
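To make that pattern concrete, the sketch below shows (purely as our illustration, not something discussed on the podcast) how such a relevance query might look in code. It assumes the openai Python package; the issue statement, model name and document set are all invented, and a production disclosure platform would add a retrieval layer rather than sending every document through the model one by one.

```python
# A minimal sketch (ours, not from the podcast) of the review pattern
# Lavy describes: asking an LLM whether each disclosed document bears on
# a pleaded issue. The issue text, documents and model name are all
# hypothetical; a real system would first narrow the million-document
# set with retrieval (e.g. embeddings search) before this step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ISSUE = ("The claimant alleges in paragraph 5 of the claim that the "
         "goods were delivered on 1 March.")

def assess_relevance(doc_id: str, doc_text: str) -> str:
    """Ask the model whether a document tends to disprove the allegation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are assisting a litigation document review. "
                         "Reply RELEVANT or NOT RELEVANT, then one sentence "
                         "on how the document bears on the issue.")},
            {"role": "user",
             "content": f"Issue: {ISSUE}\n\nDocument {doc_id}:\n{doc_text}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical disclosure set for illustration:
documents = {
    "DOC-001": "Email, 3 March: 'the goods still have not been dispatched'.",
    "DOC-002": "Invoice for unrelated stationery supplies.",
}
for doc_id, text in documents.items():
    print(doc_id, "->", assess_relevance(doc_id, text))
```

The oversight role Lavy anticipates for senior professionals then operates on outputs like these, checking the machine’s reasoning rather than reading every document.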

This will make senior litigators more effective. However, the junior lawyers and paralegals who would typically be tasked with this work will no longer need to do it. As a result, Lavy believes, professionals will spend more time overseeing the AI.

Elevated and novel risks

For all its benefits, Lavy noted, the deployment of AI heightens some existing risks while simultaneously introducing new ones. For instance:

Client liability: Just as when legal research and review are undertaken by junior colleagues, or when an accountant delegates the analysis of the tax implications of a particular investment strategy, Lavy pointed out that senior professionals must satisfy themselves that analysis conducted by AI is correct before relying on it in their advice to clients. He warned that because AI output can seem so impressive, confirmation bias (the assumption that the AI is right) elevates this risk.

Regulatory compliance: One of the most notable new risks created by this powerful technology is the failure to comply with the many global regulatory initiatives around AI, such as the EU AI Act, the focus of which is on governance, validation and data quality.

Third party liability for copyright infringement: Other novel risks include liability to third parties for IP infringement when professional services firms train their own LLMs or use generative AI outputs. Lavy used the example of an architect using generative AI to create conceptual designs for a client.

“If the conceptual design the client chooses to develop looks uncannily like another conceptual design on another architect’s website, and it turns out that it was part of a mass of data hoovered up in the AI model training process, you're potentially staring down the barrel of a copyright infringement action,” he warned. Whether the output of generative AI models can infringe copyright in the data used to train them remains a live issue before the courts in a number of jurisdictions around the world.

Other IP risks: There is also the question of who (if anyone) owns the copyright in an AI-generated work, the answer to which may depend on jurisdiction. Under UK law, for instance, it is unclear who the ‘author’ (and copyright owner) of an AI-generated work is: the person typing the prompt into the AI system or the developer of the AI model?

Lavy added, “There's also the prior question of whether a work generated in that way even has copyright subsisting in it, given that the law as it currently stands requires intellectual creativity on the part of the author. So, there are huge uncertainties in that space.”

Contracting issues: Bird asked about the contractual implications for companies when using AI models, and what issues they should consider in any supply agreement for AI services.

A classic supply arrangement, in which the supplier and the customer each take on certain risks and KPIs and acceptance criteria must be defined, can be challenging in the context of AI, according to Lavy, due to the greater complexity of the inputs and outputs and the sophistication of the technology.

He used the example of loan application approvals, where rather than a binary approved/rejected outcome based on limited variables such as income thresholds, AI systems could use multiple variables and historic loan data to create outputs based on risk tolerance. This could make outcomes less predictable and harder to explain to customers, and could pose challenges around determining whether the system is performing correctly.
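As a rough illustration of that contrast (our assumptions, not an example from the podcast), the sketch below pits a classic income-threshold rule against a simple model fitted to a handful of invented historic loans, using scikit-learn’s logistic regression as a stand-in for far more complex production systems.

```python
# Contrast between a binary approval rule and a multi-variable model
# trained on historic loan data. All figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historic loans: [income (GBP k), debt-to-income ratio, years employed]
X = np.array([[30, 0.60, 1], [55, 0.20, 8], [42, 0.50, 3],
              [70, 0.10, 12], [25, 0.70, 0], [48, 0.30, 5]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = repaid, 0 = defaulted

def rule_based_approval(income_k: float) -> bool:
    """The classic binary rule: approve if income clears a fixed threshold."""
    return income_k >= 40

model = LogisticRegression().fit(X, y)

applicant = np.array([[42, 0.65, 2]])
print("Threshold rule approves:", rule_based_approval(42))      # True
print("Model's repayment probability:", model.predict_proba(applicant)[0, 1])
# The rule approves outright, but the model may score the same applicant
# as high risk because of the debt ratio; explaining that outcome to the
# customer, and deciding whether the system is 'performing correctly',
# is exactly the contracting problem Lavy identifies.
```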

Helpfully, the Society for Computers and Law has published a set of model clauses for AI contracts to help companies through this maze. These include provisions you would not normally expect to find in traditional software implementation contracts: for instance, clauses requiring suppliers to provide information to help customers understand the logic behind AI outputs, and clauses dealing with unlawful discrimination, in case AI models result in people being treated differently or unfairly under equality laws.

Ultimately, Lavy argued that although AI implementations require careful thought, and contracts may need to be approached in more creative and sophisticated ways than before, there is no need for a wholesale reinvention of the way companies go about contracting.

It's clear that AI is ushering in new realities for the professional services sector, changing the dynamics of how firms do business in new and unexpected ways and altering the risk profile of their operations. Hearing the insights of such a prominent authority in this space was both fascinating and incredibly valuable for the audience of global leaders listening in.

Access our Global Guide to AI Regulation
