What is the current picture of AI and diversity and inclusion?
DE&I stands for diversity, equity and inclusion. It is sometimes also abbreviated to DEIB+, meaning diversity, equity, inclusion, belonging and more; the plus signals that there is often more work to be done than meets the eye. It can therefore be important to remain agile and to consider the various perspectives and experiences that shape each organisation's approach.
When it comes to the relationship between diversity and inclusion initiatives and AI, the impacts are varied. Some AI tools may perpetuate discrimination, for example during the CV screening process. However, other new AI tools may help to counter discrimination by screening communications and marketing materials. That is why it can be useful for employers to inform themselves about the AI-driven DE&I tools available, as well as the DE&I factors to consider when using AI.
AI-driven diversity and inclusion tools
As workplace AI tools become more varied and sophisticated, several new tools are available on the market to help create inclusive messaging both internally and externally. For example, some AI technology can be used to spot unconscious bias in text written by comms teams, as well as coach employees in using more inclusive language. The benefit of this coaching is that comms teams aren’t using the technology passively; it’s a learning tool as well. Other tools can also be used to automatically track inclusion and belonging in the workplace, highlighting any unconscious bias in marketing strategies, policies, company branding and job descriptions.
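To make the idea concrete, the core of such a language-screening tool can be sketched in a few lines. The wordlist and function names below are purely illustrative assumptions, not taken from any real product:

```python
# A minimal sketch of an inclusive-language checker: flag terms from a
# (hypothetical, illustrative) wordlist and suggest alternatives.
SUGGESTIONS = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "whitelist": "allowlist",
}

def flag_terms(text: str) -> list[tuple[str, str]]:
    """Return (flagged term, suggested alternative) pairs found in the text."""
    words = text.lower().split()
    return [(term, alt) for term, alt in SUGGESTIONS.items() if term in words]

print(flag_terms("We need more manpower for the project"))
# prints [('manpower', 'workforce')]
```

Real tools go much further (context, grammar, coaching feedback), but the pattern of flagging wording and offering an alternative is the same.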
As explained in Indeed’s guide to authentic diversity and inclusion initiatives, there are other steps to creating an inclusive workforce, such as removing what’s known as ‘the glass ceiling’ and barriers to career progression for people from marginalised backgrounds. As AI can be used to provide more objective feedback on which employees are performing well and may be eligible for a promotion, it can potentially help employers reduce unconscious bias when identifying candidates for promotions or bonuses.
AI’s challenges to diversity and inclusion
While there are many clear benefits to using AI to improve diversity and inclusion, it can be useful for employers to approach these tools carefully. It can be tempting to embrace innovative new technology to facilitate communications and HR, but it is also important to be deliberate about the ways it can affect employees and candidates.
It may not be able to replace comms
Replacing every aspect of internal and external communications with AI-generated messaging could mean that an employer’s branding does not present a human face that responds sensitively or appropriately to customer needs. Businesses may need to check AI-generated text before publishing it in case it contains any culturally insensitive wording, for example.
AI tools may not necessarily be able to replace a company’s employee resource groups (ERGs), whose members have deep, personally informed knowledge of and sensitivity to cultural and religious practices. It is therefore up to businesses to consider how often to use AI-generated content or suggestions in their corporate branding and messaging.
We believe that HR and recruiting are fundamentally human processes. They usually require soft skills such as emotional literacy, listening skills and the patience to learn about someone else’s perspective. There is still, therefore, an argument for keeping the ‘human’ element front and centre in comms and HR.
AI bias during hiring
Another potential issue with AI is bias during hiring. AI CV screening tools can sometimes screen out candidates based on gender or ethnicity, which can mean that businesses using these tools are not reaching the best possible candidates. It is not always clear why AI makes decisions that discriminate against candidates who would be a great fit for a role.
Auditing AI tools for bias
It can be tempting for employers to view AI as neutral, as it is a form of technology. However, despite this image, AI can carry over human bias from the datasets that software engineers train AI technology on. This is because historical datasets created by human recruiters could reflect discriminatory decisions made in the past regarding candidate applications. AI trained on data from companies that disproportionately hired men may continue to favour male candidate applications over those of women.
Therefore, it can be useful for employers to audit the AI tools they use for any weaknesses, including discriminatory bias. Auditing means inspecting the product to ensure compliance with regulations and a high standard of quality.
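One widely used heuristic in such audits is the ‘four-fifths rule’, which compares selection rates between candidate groups: if one group is selected at less than 80% of the rate of the highest-selected group, the tool may be having an adverse impact. The figures below are invented for illustration:

```python
# A simple sketch of one common audit check: comparing selection rates
# between candidate groups (the 'four-fifths rule'). Sample figures are
# invented for illustration only.
def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants from a group who passed the screening."""
    return selected / applicants

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """A ratio below 0.8 is often treated as a sign of possible adverse impact."""
    return rate_group / rate_reference

rate_a = selection_rate(30, 100)  # reference group: 30% selected
rate_b = selection_rate(18, 100)  # comparison group: 18% selected
print(round(adverse_impact_ratio(rate_b, rate_a), 2))
# prints 0.6 -- below the 0.8 threshold, so the tool would be flagged for review
```

A full audit would go well beyond this single metric, but checks of this kind make an abstract concern about bias measurable.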
What happens when companies audit their AI tools?
With the right AI tools, employers can reduce human bias rather than amplify it. For example, individual recruiters carry their own unique biases, which a well-designed CV screening tool can potentially remove. However, this is more achievable with AI tools that have been audited for quality and for compliance with the company’s own DE&I targets and standards.
How does bias emerge in AI?
Bias can be introduced by the software developers themselves, or simply remain overlooked, perhaps due to a lack of inclusivity knowledge. Additionally, as employees from minority backgrounds may be underrepresented in tech, unidentified human biases in AI tools may go unchecked. Whether an AI tool carries bias will depend on the tool itself, but it can be especially useful for employers to consider the possibility when using facial recognition or CV scanning AI technology, for instance.
Being transparent about AI and bias
AI has both strengths and weaknesses, and employees might feel more positive when employers are transparent about their adoption of AI. In particular, employees may respond well to transparency about how employers consider bias and audit their tools to find out their strengths and limitations.
The UK government’s current stance is ‘pro-innovation and pro-safety’, highlighting concerns about potential societal harms, misuse risks and autonomy risks. It has not created any AI-specific legislation so far. However, it has devised five core principles as a framework for regulation, which may be useful for employers looking to create their own AI policies:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
As principles such as fairness, accountability, contestability and transparency relate to ensuring diversity and inclusion policies are respected, it can be important for businesses to assess how their technology responds to these principles. For more up-to-date information on AI regulations, please visit the UK government website.
Reducing AI bias by closing diversity gaps
While AI can revolutionise diversity and inclusion, there is still a diversity gap in the UK’s technology industries. By closing this gap, employers may be able to benefit from a wider range of tools thanks to different perspectives on the technology.
Our DEIB+: What it means, why it matters, and how to do the work guide reported that many tech companies are beginning to de-prioritise diversity and inclusion. But as Indeed’s Vice President of DEIB+ Misty Gaither found:
‘To successfully attract, retain and develop talent in this environment – and to improve representation in workforces – it’s vital to show how you’re taking meaningful action on these issues’.
There are several options available to employers looking to close diversity gaps in AI, although these options are not currently specific to that goal. The UK government’s National AI Strategy provides information on how people can learn more about AI, and businesses looking to invest heavily in AI can co-fund sponsorships for eligible employees to undergo AI and Data Science conversion courses.
While companies may be quick to adopt AI, it can be worth considering how it might not meet their DE&I standards. That is because many AI tools can have residual human bias in them due to the datasets they were trained on. Because of this, businesses looking to use AI can benefit from being transparent about their usage, as well as auditing their tools for any possible bias. There are some emerging tools on the market which can even check communications and marketing messages for discriminatory language, which means the future of AI may become more in favour of DE&I.