Artificial Intelligence in the Employment and Recruitment Sector

  • Market Insight | 19 December 2024
  • UK & Europe
  • Technology risk

The implementation of, and investment in, artificial intelligence (AI) driven solutions in recruitment is now widespread. Many employers now view AI tools that automate and enhance the various stages of the recruitment process as a necessity, and AI recruitment software is widely available.

However, concerns about the effectiveness of these tools in selecting the best candidates, and the bias and unfairness identified in certain algorithms, are now well publicised (see, for example, the Department for Science, Innovation and Technology’s Responsible AI in Recruitment guidance, 25 March 2024).

Concerns that these tools could negatively impact job seekers, and about the security of their data, recently came to the fore again following an audit exercise carried out by the UK’s Information Commissioner’s Office (ICO). The ICO conducted consensual audits of a number of developers and providers of AI-powered sourcing, screening and selection tools used in recruitment, to monitor compliance with UK data protection law. Its findings and recommendations are published in the AI in Recruitment Outcomes Report (6 November 2024).

The findings identified many areas for improvement in data protection compliance and the management of privacy risks in AI, as well as areas of good practice. ICO auditors made 296 recommendations and 42 advisory notes across all engagements, and the vast majority of organisations responded positively, agreeing to take swift action to improve compliance on a voluntary basis.

Recruiters and AI providers alike are strongly encouraged to read the report in full. Some of its key takeaways include:

Using personal information to train and test AI

The ICO flagged concerns around the quality of the datasets being used to monitor for potential or actual fairness, accuracy and bias issues. Almost all AI providers had trained and tested their tools using candidate information already collected from recruiters, which they pseudonymised, de-identified or anonymised before use. This step is important to prevent the AI from relying on irrelevant personal data when screening candidates.
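By way of illustration only, a minimal sketch of that kind of pseudonymisation step is set out below. The field names, salting approach and Python implementation are assumptions made for the example, not a description of any audited provider’s pipeline; genuine de-identification must also address indirect identifiers.

```python
import hashlib
import secrets

# Secret salt held separately from the training data; without it the
# hashed identifier cannot easily be linked back to an individual.
SALT = secrets.token_hex(16)

def pseudonymise(candidate: dict) -> dict:
    """Replace the direct identifier with a salted hash and carry over
    only job-relevant fields before the record is used to train or
    test a screening model."""
    token = hashlib.sha256((SALT + candidate["email"]).encode()).hexdigest()
    return {
        "candidate_id": token,                      # stable pseudonym
        "skills": candidate["skills"],              # job-relevant fields kept
        "years_experience": candidate["years_experience"],
        # name, email, address and similar identifiers are deliberately dropped
    }

record = {"email": "jane@example.com", "name": "Jane Doe",
          "skills": ["python", "sql"], "years_experience": 6}
print(pseudonymise(record))
```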

However, particular issues arise where organisations use ‘inferred information’, that is, where AI providers have estimated or inferred characteristics from other personal information rather than collecting the data directly. Using such data to measure, monitor and address bias in AI tools has several limitations: it can support estimates of bias relating to gender, ethnicity and age, but other protected characteristics under the Equality Act 2010 cannot be estimated reliably. AI providers using inferred information were generally unable to demonstrate that it was reliable and accurate enough to mitigate bias effectively in their AI tools.

Inferred information is “special category” data in the same way as the personal data from which it is inferred. However, many AI providers failed to treat inferred or estimated information as such, and questions arose regarding the lawfulness of this processing as well as its transparency. The report recommends that demographic information be collected directly from candidates, with clear consent, rather than inferred.

To comply with UK GDPR articles 5(1)(a) and (b), and 5(2), AI providers should not only train AI on quality, representative datasets, but should also test it on separate datasets to ensure it produces consistent and reliable outputs. Where AI is trained on information and then tested with that same information, accuracy or bias issues may remain undetected.
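As a purely illustrative sketch, holding out a separate test set and comparing the tool’s outputs across demographic groups might look something like the following. The split ratio, field names and selection-rate comparison below are assumptions made for the example, not the ICO’s prescribed methodology.

```python
import random

def train_test_split(records, test_fraction=0.2, seed=42):
    """Shuffle and partition records so the model is never evaluated
    on the same data it was trained on."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def selection_rates(held_out, shortlists):
    """Compare how often the model shortlists candidates from each
    demographic group in the held-out set; a large disparity between
    groups flags a potential bias issue to investigate."""
    rates = {}
    for group in {r["group"] for r in held_out}:
        members = [r for r in held_out if r["group"] == group]
        rates[group] = sum(shortlists(r) for r in members) / len(members)
    return rates

# Toy example: a dummy model that shortlists anyone with more than
# three years' experience, evaluated only on the held-out records.
data = [{"group": g, "years_experience": y}
        for g in ("A", "B") for y in range(10)]
train, test = train_test_split(data)
print(selection_rates(test, lambda r: r["years_experience"] > 3))
```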

Processing unnecessary and re-purposed data

In assessing compliance with UK GDPR articles 5(1)(a)-(e), the ICO reported that some AI developers used more data than they needed to train their AI solutions. Whilst most AI providers had assessed the minimum personal information needed to operate their AI tool effectively, some were flagged as processing unnecessary data. Certain AI developers maintained candidate databases containing a wide swathe of personal information scraped from social media and job networking sites, which was processed without adequate consent from the candidates. Further, some data was being re-purposed for incompatible uses, with certain providers failing to consider whether the new purpose and lawful basis were compatible with those for which the data was originally collected.
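For illustration only, a data minimisation step might strip out everything a screening tool does not strictly need before any further processing takes place. The allow-list of fields below is an assumption made for the example.

```python
# Fields assessed as necessary for the tool's stated purpose; anything
# else is dropped at ingestion so unnecessary personal data is never
# processed or retained (UK GDPR article 5(1)(c), data minimisation).
ALLOWED_FIELDS = {"skills", "qualifications", "years_experience"}

def minimise(record: dict) -> dict:
    """Keep only the fields the screening tool actually needs."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

scraped = {"name": "Jane Doe", "skills": ["python"], "hobbies": ["golf"],
           "qualifications": ["BSc"], "years_experience": 6}
print(minimise(scraped))  # identifying and irrelevant fields are gone
```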

Supply chains

The picture becomes more complicated, and UK GDPR compliance more of a challenge, where AI systems involve complex supply chains, which is often the case. The ICO audited compliance with UK GDPR articles 5(1)(e) and (f), and 24 to 29, to check that AI providers had adequate contracts or data processing agreements in place with recruiters. Some of the contracts reviewed were too broad and lacked sufficient detail. The ICO emphasised the need for the controller and processor obligations of each party to be set out clearly, and for transparency regarding the nature of the proposed data processing. The agreement should detail the technical and organisational measures each party is to implement and how information in AI models will be handled when the contract ends.

The regulatory landscape

The above is only a brief summary of the ICO report’s key findings. The issues uncovered by the audits naturally provoke reconsideration of the suitability of the UK government’s current approach to AI regulation. This is especially so given the ICO’s findings of a lax approach by some organisations to their data protection impact assessments (DPIAs), including DPIAs and risk mitigation that failed to capture evolving risks, and the instances of certain AI providers failing to understand whether they were operating as a ‘controller’ or a ‘processor’ for the purposes of UK GDPR, with the inevitable confusion around their duties that this caused.

Unlike the EU, which has introduced the EU Artificial Intelligence Act creating statutory rules for AI across all sectors and applications within its jurisdiction, the UK currently operates under a set of principles whereby existing regulators are empowered to devise bespoke regulatory approaches for their own sectors. The UK framework is underpinned by five key principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The ICO’s AI in Recruitment Outcomes Report brings renewed focus to a potential policy gap, one which further legislation could well fill in the not-too-distant future.

The direction of travel in the UK is now towards some form of statutory regulation of AI, a departure from the previous government’s policy, set out in its White Paper on AI in August 2023, which concluded that a light-touch, principles-based approach was preferable to legislation. Peter Kyle, the Secretary of State for Science, Innovation and Technology, announced at the Financial Times’ Future of AI Summit on 6 November 2024 that the UK will get an AI Bill in 2025 (DSIT secretary confirms UK to legislate on AI risks in 2025). The scope of the AI Bill is expected to be narrower than the EU Artificial Intelligence Act.

The move towards a legislative AI framework in the UK follows efforts in the House of Lords, where Lord Holmes of Richmond introduced his Artificial Intelligence (Regulation) Private Member’s Bill, which had its third reading in the Lords this year and has now been sent to the Commons for its first reading. The Bill envisages the creation of an AI authority in the UK, which would address the lack of a regulator to enforce the five principles in domains such as recruitment and employment.

The perceived need to regulate the use of AI technologies in the workplace was addressed in May 2023 by Mick Whitley MP’s Artificial Intelligence (Regulation and Workers’ Rights) Private Member’s Bill. However, that Bill will not progress beyond its first reading in the Commons. The Trades Union Congress continues to advocate for such a Bill and has drafted its own Artificial Intelligence (Employment and Regulation) Bill as part of a wider conversation around AI regulation in the workplace.

The upshot is that there is not yet legislation regulating AI in the employment and recruitment sector, and the UK does not look destined to have a comprehensive set of statutory rules, such as those contained in the EU Artificial Intelligence Act, any time soon. Organisations should nevertheless take their own steps to properly understand the AI tools they use in their business, especially in recruitment, and make sure they do not fall foul of the issues identified in the report. The ICO’s detailed guidance on these matters should be consulted.


Additional authors:

Angus Gillies, Senior Associate, Glasgow
