Five key steps employers should take when using Artificial Intelligence (AI) in the workplace

  • Market Insight 24 January 2025
  • UK & Europe

  • Tech & AI evolution

Artificial intelligence (AI) offers significant benefits for employers. However, it brings with it risks that need careful management. We set out five key steps for employers to take to help manage those risks.

Artificial intelligence technology is widely used by employers in the recruitment and management of employees, helping them make decisions more quickly and efficiently.

However, it is important that businesses understand that AI tools can expose them to risks, for example the risk of discriminatory or biased decision making, and data privacy and security risks.

This means it is important that employers using AI take steps to carefully manage and mitigate the risks. This article sets out five key steps you can take to protect your business.

     1. Know the risk

The first step in managing the risks of using AI is to ensure you have a clear understanding of what those risks and challenges might be. Perhaps the biggest risk in the employment sphere is biased decision making. When AI tools are used to make, or assist in making, employment decisions, there is an inherent risk that the output will contain hidden biases. This is for three key reasons:

  1. AI tools are built and ‘trained’ by humans, which means they can be subject to our conscious and subconscious biases. For example, if a person programming job application screening software builds in an inadvertently discriminatory criterion, the AI software will apply that same criterion.
  2. They are designed to ‘learn’ from past outcomes and historical data. This is known as ‘machine learning’ and means the technology can amplify any existing prejudices. For example, if people from a certain university have historically worked at a company, the AI software may learn to favour applicants from that university in a candidate screening process.
  3. AI does not have the autonomy to look beyond the data, rules or information it has been given. This means that, unlike humans, it cannot guard against its own biases or learnt discriminatory behaviour. Also, whilst we can train machines, we cannot give them a human level of context to inform decision-making. This can lead to concerns about the way decisions have been taken and the lack of “compassion”, which can be particularly damaging in the employment context.
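The ‘machine learning’ risk described above can be illustrated with a deliberately simplified sketch. The data, field names and scoring rule here are all hypothetical: a screening tool that scores candidates purely on historical hire rates will faithfully reproduce any bias already present in those historical decisions.

```python
from collections import Counter

# Hypothetical historical hiring data: past decisions favoured University A.
historical_hires = [
    {"university": "A", "hired": True},
    {"university": "A", "hired": True},
    {"university": "A", "hired": True},
    {"university": "B", "hired": False},
    {"university": "B", "hired": False},
    {"university": "B", "hired": True},
]

def hire_rate_by_university(records):
    """'Train' a naive screening score: the historical hire rate per university."""
    totals, hires = Counter(), Counter()
    for record in records:
        totals[record["university"]] += 1
        if record["hired"]:
            hires[record["university"]] += 1
    return {u: hires[u] / totals[u] for u in totals}

scores = hire_rate_by_university(historical_hires)
# The 'model' now strongly favours University A candidates,
# regardless of any individual candidate's merit.
print(scores)
```

No real screening product is this crude, but the same dynamic applies whenever a model is trained on past hiring outcomes without checks for historical bias.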

These factors could lead to discriminatory or unfair outcomes, such as biased recruitment decisions or the unequal treatment of employees, or the perception of unfair outcomes (which can be equally damaging). In turn, that could lead to employment claims, most commonly under the Equality Act, but also claims for unfair dismissal, breach of contract and data protection breaches. 

The use of AI technology can also lead to risks to an employer’s confidential information. AI tools rely on data that each individual user inputs. If an employee uses AI in their work, they may be disclosing confidential company information. The AI tool may store that information and use it to respond to future user requests, creating a risk of inadvertently revealing the confidential information to other users.
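One practical safeguard against this leakage risk, sketched below with hypothetical patterns, is to redact obviously sensitive strings (such as email addresses or internal project codes) from a prompt before it leaves the business. A real deployment would need a much broader pattern set agreed with your data protection team.

```python
import re

# Hypothetical redaction patterns; the internal project code format
# "PRJ-1234" is an assumption for illustration only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PROJECT_CODE": re.compile(r"\bPRJ-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt
    is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarise PRJ-1234 for jane.doe@example.com"))
# Summarise [PROJECT_CODE REDACTED] for [EMAIL REDACTED]
```

Pattern-based redaction is only a first line of defence; it complements, rather than replaces, a clear policy on what employees may put into AI tools.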

There is also the potential for errors or technical limitations of a tool to lead to incorrect or inaccurate results or unreliable recommendations being made. Keep in mind that AI tools are not infallible. 

     2. Understand your AI

The next step to managing the risks of AI is to ensure you have a full understanding of where and how you are using AI technology in your HR processes and can explain how decisions using AI have been made.

If an AI tool is providing information or making decisions, make sure you have a clear understanding of what information the technology considers and can explain how the tool arrives at its outcomes. If the software developer is not able or willing to explain this process to you, then it is not a product you should be using in your business. There will come a time when you are asked to explain a particular decision and how it was reached. If you cannot explain how AI made or contributed to that decision to a job applicant or employee, or indeed to an Employment Tribunal if the employee brings a claim, then your business could be at risk.

Where issues of fairness and reasonableness are concerned, there is a risk that a judge will be sceptical of the technology and make an adverse finding if an employer cannot explain how it works.

For example, it would be difficult to defend a claim that a redundancy was due to a person’s age if you used AI to select an employee for redundancy and did not understand how the AI software decided that the employee in question was the appropriate employee to select for redundancy. 

Carry out a risk assessment or audit of your AI systems and liaise with your AI providers to ensure you have a clear understanding of how the AI tools you use work and arrive at their outcomes. You may wish to do this under legal privilege. You may also want to ensure your contracts with the AI software providers contain appropriate provisions around the sharing of information in the event of a challenge and, if you can negotiate them, indemnities in the event their software is found to have hidden biases.  

It is likely that risk assessments of this nature will be required by any future regulation around the use of AI, so as well as protecting the business now it should assist with future compliance obligations. 

     3. Take steps to manage the risks

The next step is to put in place safeguards around the use of AI to help protect your business and ensure AI is used responsibly. Here are some suggested steps to take:

  • Consider having an AI committee - Consider setting up an AI committee to take ownership and responsibility for how AI is used in the workplace, put in place policies and safeguarding measures and monitor compliance.
  • Develop an AI policy - Put in place an AI policy on use of AI in the workplace. The policy should explain what is and is not a permitted use of AI, the safeguards and processes to follow, who to report issues and concerns to and the implications of non-compliance.
  • Regularly audit and monitor your AI tools - Carry out regular audits of your AI systems so that there is a proper understanding of how they work and what information is considered when decisions are made. Also consider how your AI tools ensure that personal data and confidential business information are kept secure.
  • Maintain human oversight - Having a human involved in decisions is key. Decision making should not be entirely automated, so ensure any decision making is subject to human involvement and oversight. Build in opportunities for feedback and explanations, and give people a named contact with whom to raise any concerns.
  • Train your employees - Train your employees who use AI, such as those in HR, on the risks involved in the use of AI technology, and how those risks should be managed.
  • Regularly update your AI systems and use anti-bias software - There are tools that you can use to mitigate the risk of inherent biases associated with AI software. For example, there is software that can make AI ‘blind’ to data related to gender, religion or race. As technology evolves and develops, more updates may become available which reduce the risk of bias and improve the accuracy and effectiveness of AI tools.
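As a minimal sketch of the ‘blinding’ idea in the last bullet (the field names below are hypothetical and would need to match your own HR data schema), protected characteristics can be stripped from candidate records before they are passed to a scoring tool:

```python
# Hypothetical protected fields; align with your own HR data schema
# and applicable equality legislation.
PROTECTED_FIELDS = {"gender", "religion", "race", "date_of_birth"}

def blind(candidate: dict) -> dict:
    """Return a copy of the candidate record with protected fields removed,
    so a downstream scoring tool cannot condition on them directly."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "A. Candidate",
    "gender": "female",
    "years_experience": 7,
    "skills": ["python", "sql"],
}
print(blind(candidate))
# {'name': 'A. Candidate', 'years_experience': 7, 'skills': ['python', 'sql']}
```

Note that removing explicit fields does not remove proxy variables: names or postcodes, for example, can correlate with protected characteristics. That is one reason the regular bias audits described above remain necessary even where blinding is used.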

     4. Follow industry and regulatory guidelines

The UK Government has published guidelines on ‘Responsible AI in recruitment’, recommending steps that businesses should take in the recruitment and hiring process. Make sure you review this guidance if you are using AI tools for this purpose.

In January 2024, the UK Government published generative AI guidance for the civil service which may also be useful for other employers to read. 

The Information Commissioner’s Office has also produced guidance and toolkits to help employers ensure data is kept secure and used appropriately when using AI. There is helpful guidance and information in their Guidance on AI and data protection, the AI in Recruitment Outcomes Report and Key questions when procuring an AI tool for recruitment.

There are a number of industry guidelines publicly available that can assist businesses with ensuring they are adopting responsible AI practices. For example, both Google and Microsoft have published their own guidelines and principles on this topic. 

Look for opportunities in your industry to share best practices and insights about responsible AI use and follow any industry standards and guidelines.

     5. Watch this (regulatory) space

Currently, there are no UK laws specifically regulating how businesses can use AI, although some existing legislation, such as discrimination and data protection laws, already indirectly affects how AI is used. However, it is likely that there will be UK regulation of AI in the future.

Up until now, the UK government has adopted a “pro innovation” approach to AI regulation. In the King’s Speech in July 2024, the Labour government set out its plans to establish "appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models".

This ties in with Labour’s manifesto pledge to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”.

The government has now published its AI Opportunities Action Plan, which looks to further the ‘pro innovation’ approach, but says little about how that will be regulated and what safeguards will be put in place. You can read more about the AI Opportunities Action Plan and the government's approach to AI here.

The Digital Information and Smart Data Bill has also been announced, alongside reforms to data-related laws aimed at supporting the safe development and deployment of new technologies which may include AI. 

In the EU, the EU AI Act will apply to developers, deployers and users who operate in, or from, an EU market.

Watch this space for future regulatory developments.

 

If you are interested in this topic and want to know more, you can read our update on Artificial Intelligence in the Employment and Recruitment Sector, which looks at data protection issues, and our update on the AI Opportunities Action Plan.

 

Stay up to date with Clyde & Co