What is the EU AI Act?
The EU AI Act (AIA) is new legislation to regulate AI systems in the European Union. It is a cross-sector framework that aims to standardise the rules regarding the use of artificial intelligence, including generative and general-purpose AI. The Act categorises AI systems by their potential risks and imposes obligations on businesses using AI based on risks to health, safety, and fundamental rights.
What the EU AI Act means for employers
The EU AI Act affects organisations and human resources teams that use AI systems. Although the Act is created by the EU, it has extraterritorial reach. Businesses that fall within scope may wish to consider how the legislation could affect their use of AI. The objective of the EU AI Act is to:
- Ensure that the use of AI is trustworthy and human-centric
- Ensure that artificial intelligence is used in a way that respects fundamental rights and values
- Implement rules specific to the use of AI in the workplace
- Protect human lives, fundamental rights, and society from the impact of AI systems through a risk-based approach
Related: 6 benefits of international assignments
How is the EU AI Act relevant to UK companies?
The extraterritorial aspect of the Act means that the United Kingdom is affected by it. Businesses that deploy AI systems for the EU market may fall within the scope of the regulation. UK organisations can also be in scope if their AI models are not deployed in the EU but their outputs are used there.
The EU AI Act serves as a reference point in terms of AI regulations. Other regions are developing their own artificial intelligence regulations and may use the EU AI Act as a benchmark.
The UK government recognises that the challenges posed by AI will require laws and regulation. For now, it is relying on existing rules, believing more time is needed to fully understand the risks and opportunities. While many agree on the risks and key principles of AI, differences in regulation could still pose challenges for businesses in the near future.
Related: Compliance and risk management: how they differ
Key milestones and timeline of the EU AI Act
The EU AI Act becomes fully applicable in August 2027, but several obligations apply earlier. The key milestones are:
- August 2024: the EU AI Act came into force
- February 2025: the ban on AI practices posing unacceptable risk takes effect, along with the Act's AI literacy provisions
- May 2025: codes of practice for general-purpose AI are due to be finalised
- August 2025: obligations for general-purpose AI models and the Act's governance rules come into force
- August 2026: the Act becomes generally applicable, including the rules for most high-risk AI systems
- August 2027: obligations extend to high-risk AI embedded in regulated products, making the Act fully applicable
Related: AI toolkit for talent leaders
The EU AI Act high-risk classification in a nutshell
The EU AI Act takes a risk-based approach that classifies AI systems by the level of risk they pose. Here is a snapshot of this classification and the features it is based on.
A ban on unacceptable-risk AI practices
The Act prohibits certain AI practices outright, including using AI to infer emotions in the workplace. The ban also covers AI that uses manipulative tactics, exploits people's vulnerabilities, or categorises individuals by characteristics such as race or union membership using biometric data.
Implementation of strict obligations for AI systems classified as high-risk
Artificial intelligence used in hiring, promotion, termination, task allocation, or monitoring employees is considered high-risk under the EU AI Act. This includes not just traditional employment but also platform workers, self-employed consultants and agency staff.
Transparency requirements for AI systems and usage
Providers and deployers of AI systems must be clear with users about how the systems work and what they do, and must disclose when content is created or altered by AI. This applies to systems such as chatbots, emotion recognition, biometric systems and generative AI. These transparency provisions sit alongside the separate requirements for high-risk AI systems.
Rules to regulate General Purpose AI (GPAI) models
The EU AI Act includes a framework to regulate GPAI models: AI models designed to perform a wide range of tasks. The rules apply to all GPAI models, and those that could have a larger-scale impact or present systemic risks to society, companies, or individuals are subject to additional requirements.
Duty of AI literacy requirement
The EU AI Act includes expectations around AI literacy for organisations that develop, deploy or use certain AI systems. In practice, this means organisations should ensure that the people involved in selecting or working with these systems understand their basic capabilities, limitations and risks. The goal is to support informed use and appropriate human oversight, especially where AI tools may influence decisions affecting individuals. The specific steps an organisation chooses to take will vary depending on the systems they use and their internal processes.
Related: Equality Act 2010: explained for employers
7 steps to get your business ready for the EU AI Act
Organisations that want to prepare for the EU AI Act may consider a range of approaches. Here are seven steps to help your organisation as the Act's obligations take effect.
Step 1: create an AI inventory to prevent issues
One approach some organisations take is to create an inventory of the AI systems they use. You may focus only on EU operations at this stage. Creating an inventory helps you understand how the business may be affected by the Act. It forms the basis for developing strategies that will mitigate the risks and ensure compliance with the regulation when using or selling AI systems in the EU.
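As an illustration, an inventory entry can be as simple as one structured record per system, tagged with where it is used and a provisional risk category. This is a minimal sketch only: the field names, risk labels and filter logic below are assumptions for illustration, not terms or categories defined by the Act itself.

```python
from dataclasses import dataclass

# Illustrative record for an AI inventory entry. Field names and the
# risk_category labels are assumptions, not official EU AI Act terminology.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str         # e.g. "CV screening", "customer chatbot"
    used_in_eu: bool     # deployed in the EU, or outputs used there
    risk_category: str   # "unacceptable", "high", "limited", "minimal"

def in_scope_high_risk(inventory):
    """Return entries that touch the EU market and are provisionally high-risk."""
    return [r for r in inventory if r.used_in_eu and r.risk_category == "high"]

inventory = [
    AISystemRecord("HireBot", "Acme", "CV screening", True, "high"),
    AISystemRecord("FAQ chat", "Acme", "customer chatbot", True, "limited"),
    AISystemRecord("Forecaster", "Beta", "sales forecasting", False, "minimal"),
]

for record in in_scope_high_risk(inventory):
    print(record.name)
```

Even a simple filter like this helps surface which systems deserve a closer compliance review first; the real classification should of course follow the Act's own criteria rather than a single label field.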
Step 2: draft compliance and mitigation plans
Some organisations choose to develop strategies that reflect the Act's themes. Depending on the situation, this may involve risk assessments, bias audits, or technical measures and processes aimed at meeting the Act's transparency requirements.
Related: Compliance training for employees: objectives and strategies
Step 3: create AI policies and procedures
Some organisations update internal or customer-facing policies to reflect the Act's key themes. Policies should address transparency, fairness, explainability and non-discrimination in AI-assisted decisions, which will help the organisation respond to challenges to those decisions or questions from regulators.
Step 4: deliver employee training
Some organisations provide information to teams about the Act and how it may relate to their roles. This helps meet human oversight requirements and demonstrates that the organisation is taking steps to manage AI risks. Regular training and open conversations help ensure everyone stays informed and aligned.
Related: 5 steps to creating an effective training and development programme
Step 5: stay up to date with the Act and any potential UK AI regulations
Stay updated on the latest changes in rules and guidelines in this fast-moving field. The UK, for example, is focusing on innovation and leaving existing sector regulators to address AI risks, transparency, bias, and safety. The EU AI Act may evolve and additional regulations may emerge.
Step 6: build a strong AI governance model and risk management framework
Review or implement clear AI governance to meet the EU AI Act and any other AI regulations. Establish robust risk management procedures with testing and regular checks. Also, ensure that your data is well managed and protected.
Step 7: map any interdependencies
Understand how your AI systems depend on others. Identifying these dependencies reduces risks and builds trust with third-party providers. Make sure that roles and responsibilities are clearly defined and that safety measures are in place.
Related: Young workers and their rights at work
UK businesses with no EU footprint are not directly bound by the EU AI Act; however, organisations that work with EU partners or customers will need to navigate the differences in regulation. The EU is currently setting stricter rules than the UK, which aims to balance innovation with staying competitive globally. For now, UK businesses may benefit from aligning with EU standards to comply when dealing with the European market.