There’s no doubt that AI has transformative potential. How we work and live could change considerably over the coming years. But could there be a darker side to innovation? In this article we explore the intersection of AI and mental health, unpicking the five biggest dangers of AI for your people's health and wellbeing.

AI and mental health: the darker side of innovation  

The AI conversation has gained major momentum since the launch of ChatGPT in November 2022, which brought generative AI to the public with a bang.

IBM define generative AI as ‘deep-learning models that can generate high-quality […] content based on the data they were trained on’. What ChatGPT brought to the table was an innovative conversational interface that made these generative capabilities accessible to anyone with a computer and internet.

Plenty of people took advantage. ChatGPT set the record for the fastest-growing user base, estimated to have reached 100 million users within two months of launch.

This ‘leap forward in natural language processing’, as IBM put it, heralds major changes in how we live and work. According to McKinsey, AI could boost the UK economy by 22% by 2030—more than the global average, because the UK is well positioned to take advantage of AI.

But is generative AI an all-round positive? There’s plenty of evidence suggesting the picture isn’t all rosy. Not least the open letter, signed by more than 33,000 people including a host of industry leaders, calling on ‘all AI labs to immediately pause for at least 6 months’.

The letter urges caution, warning that AI becoming ‘human-competitive’ brings major upheaval and big risks as our way of existing changes. AI could have serious consequences for employees' security, wellbeing, and mental health.

5 biggest AI risks

For all the positives AI might bring, it’s important to also consider the potential dangers of AI—and the possible mental health consequences for employees. 

Workplace mental health continues to hold an important position on the leadership agenda. To prioritise employee wellbeing, organisations must consider the impact of the biggest technological advance of our time. Let's explore the five biggest AI risks, and talk about the potential impact on your people.

1 – Job losses and changes due to automation

Automation is one of the biggest benefits of AI—but has always been one of the biggest concerns too. PwC analysed over 200,000 jobs in 29 countries, finding that 30% of jobs across the UK could be automated by 2030.

In the open letter, the authors ask: ‘Should we automate away all the jobs, including the fulfilling ones?’ Perhaps in an ideal vision, AI would handle repetitive, boring, manual tasks, freeing humans to work on more fulfilling projects. But that sweeping vision glosses over real issues.

More than half of UK businesses are already experiencing skills shortages. If jobs change and demand new skills, this adds to the reskilling burden for organisations.

Moreover, it has potentially severe consequences for employee health and wellbeing. If people can’t find paying work, how would they provide for their basic needs? Would there be enough ‘fulfilling’ jobs to go around? Would workers even agree that their changed job is more fulfilling?

At best, job losses or changes threaten job satisfaction—but they could also threaten people’s fundamental ability to survive. 

2 – Perpetuating discrimination and inequality

One of the top concerns about AI is the perpetuation of societal biases and discrimination. Any AI model is only as good as its data and algorithmic design—so if these foundations are unfair, then the AI model perpetuates unfair decision-making.

For instance, there are many excellent use cases for AI within recruitment—like creating job adverts and interview questions—but this can bring challenges for inclusion and diversity.

  • How do you ensure questions are fair and inclusive?
  • How do you ensure job adverts use neutral non-biased language?

If we can’t answer these questions, AI risks undoing the hard work many employers have put in to build a fairer, more inclusive recruitment process and culture. In turn, this risks demoralising employees, hurting wellbeing and inclusion, and damaging engagement, productivity, and retention.

3 – Privacy and security breaches 

The open letter’s central criticism is that the growth of AI has outpaced the growth of the regulatory controls that govern it. One area where this is particularly felt is privacy and security.

Cybersecurity was already an issue for many organisations before the rampant evolution of AI. For instance, Ipsos report that 51% of businesses have a basic cybersecurity skills gap.

As AI continues to evolve, it creates an evolving privacy and security threat that few businesses are equipped to meet—and these breaches can have serious implications both for organisations and employee mental health. 

For example, a breach of privacy or security could:

  • Damage employee morale and engagement
  • Increase systems downtime and damage productivity
  • Cause significant personal distress
  • Increase the risk of legal liability and fines

4 – Spread of misinformation

CIO.com describes another major risk of generative AI:

‘Another key challenge of generative AI today is its obliviousness to the truth. It is not a “liar,” because that would indicate an awareness of fact vs. fiction. It is simply unaware of truthfulness, as it is optimized to predict the most likely response based on the context of the current conversation, the prompt provided, and the data set it is trained on.’

Misinformation is a big potential problem—whether employees encounter it inside or outside work.

In your internal communications, misinformation can slow productivity, cause quality problems, and hurt trust. More broadly, the spread of misinformation can quickly inflame tensions, be discriminatory, and damage connectedness and collaboration—all factors that contribute to a healthy, safe workplace culture (or not).

5 – Loss of connection 

Another of the big AI risks is diminishing connection. AI-based tools often have efficiency and speed as cornerstones, but could this come at the cost of empathy, connection and collaboration—in our wider lives and within the workplace?

Some of the best things humans are capable of are messy and inefficient, relying less on optimisation than experimentation. Creativity and innovation are prime examples.

This quote from The British Council’s The Human Spark report demonstrates the potential limitations of AI—and of focussing only on learning from the past:

‘Most leaders are actually followers, following what others have done before. Pioneers – creative social entrepreneurs, people making things that haven’t been made before – they make their own ladders. No one has gone there before. There is no handbook for where they are going.’ Karen Newman, Birmingham Open Media (BOM), UK and DICE Collaborator

If AI is allowed to evolve unchecked, there’s an argument that organisations risk losing the fundamental human, creative spark that’s as important to organisational growth as efficiency and productivity, eroding job satisfaction and fulfilment along the way.

Leaders must thoughtfully combat these dangers of AI

Generative AI heralds enormous potential changes—changes that cut to the fabric of how we live and work. There’s little doubt that many of these changes will be beneficial, but the open letter seems right to urge caution.

Rampant development before we’ve had a chance to consider and combat the issues at play doesn’t make sense. Leaders mustn’t be swayed by exciting new technologies without also taking the time to consider the risks and impact on employees' mental health and wellbeing.

Has your organisation considered AI and mental health? How are you mitigating the potential dangers of AI? What AI risks are you most concerned about?