This is the second episode of our Digital Resilience podcast series, in which host Dino Wilkinson explores the risks associated with digital transformation in the realm of data and artificial intelligence (AI). This episode looks at workplace-related risks arising from employers’ and employees’ use of AI and highlights the recent European Court of Justice (ECJ) decision on automated decision-making in the SCHUFA case. Guest speakers for this episode are employment law specialists James Major, a Partner in Clyde & Co’s London office, and Cynthia Aoki, a Partner in the firm’s Canada office.
The episode begins with a general look at how AI is being used in the HR function, before delving into the risks arising from bias in decision-making and the potential for discrimination. It moves on to the prospect of employment tribunal claims, including the reputational impact on organisations and how such claims could be defended. Major and Aoki then provide advice on avoiding the many legal pitfalls around AI, followed by a brief final look at the intellectual property (IP) risks of using the technology for content creation.
Major kicks off by outlining how the HR function is using AI: in the recruitment process to sift CVs, target job advertisements and even conduct interviews, as well as in performance management and work tracking. As such, he says: “The main risk is one of employers inadvertently committing acts of discrimination through their use of the AI tool,” whether because of reliance on historical data that could be biased towards a particular group, or because the systems have been built with inherent bias.
Aoki stresses that this is not necessarily a new challenge, giving an example from 1988 in which a computer program was found to discriminate against women and people of colour, stating: “this concept of AI appears to be new, but it's been used for decades.” While more recent tribunal cases are still rare, Aoki outlines a recent case in the US in which a tutoring company settled a claim that its AI program automatically rejected older candidates, saying: “it certainly opens the door to more scrutiny.”
Both guests agree that it is incumbent on employers to understand how the AI program they are using works and makes decisions. “I don't think an employer is going to be able to essentially wash its hands of any liability or responsibility simply because it decided to contract with an AI company,” says Aoki. There will no doubt be a “battle of warranties and indemnities with the third-party provider,” but, as Major points out, for employers there will always be “that smear on the reputation.”
To avoid the legal and reputational fallout, Major advises employers to ensure that they understand the AI tool they are using and to have clear policies for its use, “so that employees understand, job applicants understand, and… to avoid misconduct issues or data breaches, breaches of confidential information…” Aoki adds that having diverse executive teams and running unconscious bias training can also help ensure users are asking the right questions to avoid discrimination within an AI system.
The ECJ decision in the SCHUFA case is discussed before the guests explore the issue of IP rights in the context of using AI to create text, images, music, or code, where Aoki says authorship is usually “restricted to natural persons or legal entities.” Significant reputational and financial risks arise from employees covertly using AI to create content or inputting confidential data into an open-source AI system, underlining the need for policies to address the issue: “I think we will see a growing trend of misconduct issues around employees using generative AI in a covert way to effectively do their job for them,” she concludes.