
AI in HR - what you need to know

Dec 10, 2024

Summary

BCLP recently hosted a seminar on AI in HR. In this thought-provoking session, we considered how AI is used in HR and how it is regulated in the EU and the UK, and then discussed two hypothetical scenarios. For those who were not able to attend, we have put together a summary of the key takeaways.

How is AI used in HR?

AI provides opportunities for use at each stage of the employment lifecycle. Recruiters can use AI tools to draft job descriptions, screen applications and support psychometric testing. Once an individual has joined the organisation, the employer can use AI to monitor performance and conduct. AI tools can also be used to increase accessibility, for example by offering virtual assistants or transcriptions of meetings.

EU Approach to AI

The EU AI Act 2024 came into force on 1 August 2024 and sets out obligations for organisations that make use of AI systems and have links to the EU market. The Act establishes a risk-based approach, categorising AI systems according to the level of risk each could generate and prescribing obligations proportionate to that risk. Under the Act, AI systems used in employment are considered “high-risk”. This covers systems used in recruitment, selection, promotion and termination. For “high-risk” systems, the Act imposes obligations including requirements around training data governance, record-keeping, transparency and more.

Whilst the UK is no longer in the EU, certain aspects of the Act may still apply to UK employers: the Act is designed to capture employers located in non-EU member states where the output of the AI system they use is used in the EU. In an employment context, this may be relevant where AI tools are used in recruitment (for example, where applicants apply for a position from a country within the EU) or in performance management (where a team in the UK manages teams based in EU member states).

UK Approach to AI

The previous Conservative Government outlined its approach in its AI Regulation White Paper, published in March 2023. The paper acknowledged the regulatory challenges posed by rapid development in this area and recognised that, while these were being addressed through voluntary measures, the challenges posed by AI would ultimately require legislative action. It suggested that such legislative measures would be introduced once there is a greater understanding of the risks posed. The (then) government set out five principles for regulators to interpret and apply where AI innovation is proposed. These principles were:

  1. Safety, security and robustness;
  2. Appropriate transparency and explainability;
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

In the King’s Speech this summer, reference was made to “appropriate legislation” to place requirements on those working to develop the “most powerful artificial intelligence models”, but an AI bill was notably absent. While we are yet to see exactly what this will look like, it seems that the Labour Government will take a more restrictive approach to regulating AI than was envisaged by its predecessor.

More recently, on 23 October 2024, the Data (Use and Access) Bill was introduced in the House of Lords. The intention is to permit automated decision-making as long as the organisation implements appropriate safeguards. Notably, automated decision-making using special category (sensitive) data remains prohibited unless specific conditions are met.

Key takeaways for those in HR

Statistics show that the use of AI in HR increased by 300% between 2023 and 2024, and that AI is now used by one in four employers. This is likely to mark the beginning of a radical shift for HR professionals, with the increasing use of AI across the board seeming inevitable. With rapid change, however, policy and practice often struggle to keep pace with the technology. With that in mind, and in light of the legislative framework outlined above, our key takeaways for companies adopting AI in their HR practice (or indeed in their wider business) are as follows:

  1. Ensure that any AI technology is only adopted in a manner which is consistent with data protection obligations.
  2. Put in place a policy which sets out the acceptable use of AI and the consequences of breach, and make sure that this policy is drawn to the attention of staff.
  3. Where AI is used for HR processes, make sure that all decisions are subject to final human oversight and that the reasons for any decision can be explained.
  4. For multi-national organisations, consider whether AI legislation in other jurisdictions is relevant to the use of AI in the UK.

The Information Commissioner’s Office’s Recommendations

Finally, on 6 November 2024, the ICO published an outcomes report on AI tools in recruitment (the Report), following consensual audit engagements carried out by the ICO with developers and providers of AI recruitment tools between August 2023 and May 2024. The Report forms part of the ICO’s ongoing upstream monitoring of the wider AI ecosystem to ensure compliance with UK data protection law.

The use of AI tools has the potential to greatly improve efficiency across many aspects of the employment lifecycle, but it can also create significant risks if the tools are not deployed lawfully and carefully. The latest developments in this area highlight that regulators and legislators are wary of this rapidly developing technology, and it is important that organisations respond to the obligations and recommendations that have been, are being and will be introduced.


Interested to hear more about AI? Have a read of the following article produced by our American colleagues: AI in the Workplace: Using Artificial Intelligence Intelligently

Related Practice Areas

  • Employment & Labor

  • Data Privacy & Security

