By Indeed Editorial Team

Artificial intelligence (AI) is already playing multiple roles within HR departments at many companies. A recent survey of more than 300 HR leaders found that 85% are using AI in their daily tasks, and 55% expect they will use it more as time goes on. 

Today, AI tools can be used to help with reviewing CVs and scoring job candidates, sourcing talent for open roles, writing job descriptions, identifying opportunities to promote employees, and even sending automated messages to applicants. 

'You name it, and there’s an AI tool being built today to work on it,' says Trey Causey, head of Responsible AI and senior director of data science at Indeed. 

However, the sophistication of these tools varies, and so does their developers’ attention to risk. Organisations should understand where each tool sits on that risk spectrum and develop strategies for using AI responsibly. 

AI has the potential to reduce human bias, particularly in recruitment, creating better opportunities for workers while streamlining rote tasks so HR professionals can focus on the more human aspects of their roles. But AI can also perpetuate and even amplify inherent biases – and waste both money and time. 

Here are four steps organisations can take to identify risks and make sure their use of AI is fair, ethical and effective.

1. Evaluate the risks and rewards for your organisation

First, ask whether AI tools are a good fit for your organisation’s HR work. AI systems can scale up processes, such as identifying and scoring far more job candidates than could be processed manually. 

However, 'You can also scale up mistakes and errors, as no system is perfect,' says Jey Kumarasamy, an associate at Luminos.Law, a law firm focused on AI. 'Even if you have 90% accuracy, which is being generous, if you are processing thousands of applications, there’s going to be a sizeable number of applications that were assessed incorrectly.' 
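
As a rough, back-of-the-envelope illustration of how errors scale with volume (the figures below are hypothetical, not drawn from the survey):

```python
# Back-of-the-envelope arithmetic: even a fairly accurate model produces many
# errors at scale. The volume and accuracy figures here are assumptions.
applications = 10_000          # assumed number of applications screened
accuracy = 0.90                # assumed share of applications assessed correctly

expected_errors = applications * (1 - accuracy)
print(f"Expected incorrectly assessed applications: {expected_errors:.0f}")  # -> 1000
```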

The starting point for evaluating AI-powered HR tools should be an understanding that the tools are imperfect. 'Biases are inevitable, so companies will need to figure out how they plan to address them, or accept that they are at risk,' Causey says. 

While some companies accept the risk because of the productivity boost, others may feel that the potential margin of error compromises their values or creates too much complexity in the face of increased regulatory pressures. 

If you move forward with AI, choose your tools wisely. AI that provides transcripts of interview conversations, for example, is typically a relatively low-risk application (although it can perform poorly when used with speech from non-native speakers). In contrast, AI that assesses and scores candidates based on their performance in video interviews 'is probably the most problematic area because there are a lot of risks and ways it can go wrong,' Kumarasamy says.

Ultimately, AI should augment and improve human processes, not replace them. Before adopting AI tools, make sure your HR team is sufficiently staffed so that humans can review every step of any process that AI automates. Leave critical HR matters, such as final recruitment decisions, promotions and employee support, to people. Luckily, if AI tackles the mundane tasks, HR professionals will have much more time and flexibility for those duties.

2. Screen third-party vendors that provide AI-powered tools

Once you’ve decided what kind of AI tools are best for your organisation’s needs, you might approach prospective vendors with specific questions, such as: 

  • How do they audit their system? When was the last time it was tested, and what metrics were used?
  • Was the testing done internally or by an external group?
  • How is bias mitigated? If they claim their system has minimal bias, what does that mean, and how is that bias measured? 
  • Are there testing metrics available for you to review as a prospective client?
  • If the model’s performance degrades, does the vendor provide post-deployment services to help train your employees in configuring and maintaining the system?
  • Are they compliant with current and emerging regulations? 'I spoke with a vendor last year and asked if they were compliant with a specific regulation, and they hadn’t heard of it before,' Causey says. Not only was that a red flag, but 'it clearly, directly impacted their product.' 
  • Will they comply with any AI audits you conduct? 'When you do an AI audit, chances are you need a vendor to help – and that’s usually not the best time to find out that your vendor doesn’t want to cooperate with you or provide you with documentation or results,' Kumarasamy says.

3. Identify and monitor bias

AI algorithms are only as unbiased as the data used to train them. While employers can’t modify how algorithms are developed, there are ways to test the tools before implementing them. For example, employers could commission a third-party bias audit and publish a summary of the results before launching AI in their recruitment process.

Organisations can also use a process known as 'counterfactual analysis' to see how an AI model reacts to different inputs. For example, if AI evaluates CVs for job candidates, try changing the candidate’s name or the school they attended – does the algorithm rank the candidate differently? 

'This has been done since the ’50s, with sociologists sending CVs to employers but changing just one thing on their CV to see how the callback rates differ,' Causey says. 'We can do that with AI, too, and pull in a lot of existing social scientific knowledge about how we can evaluate bias in AI models.'
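
A minimal sketch of what such a counterfactual check might look like in practice, assuming a hypothetical scoring function (`score_cv`) standing in for whatever API your screening tool exposes:

```python
# Sketch of a counterfactual check: change one attribute on an otherwise
# identical CV and compare scores. `score_cv` is a hypothetical function
# representing the screening tool's scoring API.
from copy import deepcopy

def counterfactual_gaps(score_cv, cv: dict, field: str, alternatives: list) -> dict:
    """Score change relative to the original CV when `field` is swapped."""
    baseline = score_cv(cv)
    return {value: score_cv({**deepcopy(cv), field: value}) - baseline
            for value in alternatives}

# Example (hypothetical): does changing only the name move the score?
# gaps = counterfactual_gaps(score_cv, candidate_cv, "name",
#                            ["James Smith", "Amina Khan"])
# Large, systematic gaps suggest the model is reacting to an attribute that
# should not affect the outcome.
```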

As you implement AI systems, continuously monitor them to identify and correct any discriminatory patterns when they emerge, and stay apprised of developing research on data science and AI. 'When you have humans making decisions, it’s difficult to know if they’re biased,' Causey says. 'You can’t get into someone’s brain to ask about why they said yes to this candidate but no to that candidate; whereas with a model, we can do that.'
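
One simple monitoring check is to compare selection rates across candidate groups over time, sketched below. The column names and the 0.8 cut-off (the widely used ‘four-fifths’ heuristic) are illustrative; adapt both to your own data and legal guidance.

```python
# Sketch of an ongoing monitoring check: compare selection rates across groups.
import pandas as pd

def impact_ratios(decisions: pd.DataFrame,
                  group_col: str = "group",
                  selected_col: str = "selected") -> pd.Series:
    rates = decisions.groupby(group_col)[selected_col].mean()
    return rates / rates.max()      # each group's rate vs the highest-rate group

# Made-up example data
df = pd.DataFrame({"group":    ["A", "A", "B", "B", "B"],
                   "selected": [1,   1,   1,   0,   0]})
ratios = impact_ratios(df)
print(ratios[ratios < 0.8])         # groups falling below the heuristic threshold
```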

There’s no standard suite of tests to evaluate HR tools for bias. At a minimum, employers should clearly understand how AI is being used within the organisation, which could include keeping an inventory of all AI models in use. Organisations should document which tools were provided by which vendor, along with the use cases for each tool. 
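
A lightweight way to start such an inventory might look like the sketch below; the record fields are illustrative assumptions, not a standard schema.

```python
# Illustrative structure for an inventory of AI tools in use.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str
    owner: str                          # team accountable for the tool
    last_bias_audit: Optional[date] = None

inventory = [
    AIToolRecord(name="CV screening model", vendor="ExampleVendor",
                 use_case="Rank inbound applications for recruiters",
                 owner="Talent acquisition", last_bias_audit=date(2023, 6, 1)),
]
```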

In the best-case scenario, audits will bring together different departments, including in-house legal teams as well as data scientists, alongside external counsel or third-party auditors. There are also publicly available tools to help organisations audit their own AI tools – for example, in the US, the National Institute of Standards and Technology (NIST) has set out a four-part risk management protocol: govern, map, measure and manage. 

4. Stay ahead of evolving legislation

The potential risks of automated HR tools are not just reputational and financial – they’re legal too. Legislation is quickly emerging around the world in response to the proliferation of AI in the workplace. 

In the European Union, the proposed AI Act aims to assign risk levels to AI applications based on their potential to be unsafe or discriminatory, and then regulate them based on their ranking. For example, the current proposal considers AI applications that scan resumes and CVs to be 'high-risk' applications that would be subject to strict compliance requirements.

In the UK, there are currently no laws explicitly governing the use of AI at work. In March 2023, the UK Government published a white paper outlining a framework for the regulation of AI, taking a non-statutory approach. However, many existing laws, including the Equality Act 2010, the Human Rights Act 1998, and Article 22 of UK GDPR, restrict the use of AI tools in practice and may therefore apply to employment decisions made by AI. 'There is a misconception that if a law doesn’t directly address AI systems, it doesn’t affect an AI system,' Kumarasamy says. 'That’s not true, especially when we’re talking about employment.'

While audits are a good starting point, the best way to prepare for emerging regulatory requirements and ensure that your AI is operating effectively and equitably is to build out a larger AI governance programme. 

Governance systems document the organisation’s principles with respect to AI and create processes for continually assessing tools, detecting issues and rectifying any problems. For example, Indeed has developed and publicly published its own principles for the ethical and beneficial use of AI at the company. Indeed has also created a cross-functional AI Ethics team that builds tools, systems and processes to help ensure that technology is used responsibly. 

Even with safeguards, the new generation of AI tools is complex and fallible. 

However, putting in the effort to use them responsibly opens the door to building better processes. AI can help humans be more efficient and less biased, but only if humans provide the necessary oversight. For example, there are opportunities to think critically about which parameters an AI algorithm should consider when assessing whether a candidate is qualified, radically improving the way candidates are evaluated. 

'How do we really get to the core of what it means to be successful in a job?' Causey asks. Skills-based recruitment can be less biased than relying on school or company names – something that AI can be attuned to in a way humans might not be. 'There’s a real potential for levelling the playing field with AI for jobseekers,' Causey says.