What is the EU AI Act?
The EU AI Act (AIA) is the European Union's legislation regulating AI systems. It is a cross-sector framework that standardises the rules on the use of artificial intelligence, including generative and general-purpose AI. The Act categorises AI systems by their potential risks and imposes obligations on businesses based on the risks their AI poses to health, safety, and fundamental rights.
What the EU AI Act means for employers
The EU AI Act affects organisations and human resources teams that use AI systems. Although the Act was created by the EU, it has extraterritorial reach. Businesses that use AI in their operations, recruitment, talent management, workforce monitoring and more need to prepare to comply with the legislation. The objectives of the EU AI Act are to:
- Ensure that the use of AI is trustworthy and human-centric
- Safeguard fundamental rights and values in the way artificial intelligence is used
- Implement rules specific to the use of AI in the workplace
- Protect human lives, fundamental rights, and society from the impact of AI systems through a risk-based approach
- Implement controls backed by fines of up to EUR 35 million or 7% of the undertaking’s global annual turnover, whichever is higher, for the most serious infringements
Related: 6 benefits of international assignments
How is the EU AI Act relevant to UK companies?
The extraterritorial aspect of the Act means that the United Kingdom is affected by it. UK businesses that deploy AI systems for the EU market need to follow the regulation. UK organisations are also within the scope of the Act if their AI models are not deployed in the EU but their outputs are used in the EU.
The EU AI Act serves as a reference point in terms of AI regulations. Other regions are developing their own artificial intelligence regulations and may use the EU AI Act as a benchmark.
The UK government recognises that AI challenges will need laws and regulations. Currently, it is using existing rules, as it believes more time is needed to fully understand the risks and opportunities. While many agree on the risks and key principles of AI, differences in regulations could still pose challenges for businesses in the near future.
Related: Compliance and risk management: how they differ
Key milestones and timeline of the EU AI Act
The EU AI Act will be fully applicable as of August 2027, but several milestones apply before that date. The timeline is as follows:
- August 2024: the EU AI Act came into force
- February 2025: AI practices deemed to pose unacceptable risks are banned
- May 2025: the deadline for the codes of practice for general-purpose AI to be ready
- August 2025: obligations for general-purpose AI models and governance rules come into force
- August 2026: most of the Act’s provisions, including the rules for high-risk AI systems, start to apply
- August 2027: the Act applies in full, across all risk categories
Related: AI toolkit for talent leaders
The EU AI Act high-risk classification in a nutshell
The EU AI Act takes a risk-based approach, classifying AI systems by the level of risk they pose. Here is a snapshot of that classification and the obligations attached to each level.
Ban on unacceptable-risk AI practices
The Act bans certain AI practices outright, including using AI to infer emotions in the workplace. The ban also covers AI that uses manipulative tactics, exploits people’s vulnerabilities, or categorises individuals on the basis of biometric data to infer characteristics such as race or trade union membership.
Implementation of strict obligations for AI systems classified as high-risk
Artificial intelligence used in hiring, promotion, termination, task allocation, or monitoring employees is considered high-risk under the EU AI Act. This includes not just traditional employment but also platform workers, self-employed consultants and agency staff. The Act sets strict rules for using high-risk AI in the workplace to ensure fairness and safety.
Transparency requirements for AI systems and usage
Providers and deployers of AI systems must be clear with users about how those systems work and what they do, and must disclose when content has been created or altered by AI. This covers systems such as chatbots, emotion recognition, biometric categorisation and generative AI. These transparency rules apply in addition to the requirements for high-risk AI systems.
Rules to regulate General Purpose AI (GPAI) models
The EU AI Act regulates GPAI models: AI models designed to perform a wide range of tasks. Baseline rules apply to all GPAI models, and models that could have a larger-scale impact or present systemic risks to society, companies, or individuals are subject to additional requirements.
Duty of AI literacy requirement
Companies that provide and deploy AI systems are required to ensure a sufficient level of AI literacy among their employees. This obligation also extends to individuals who use AI systems on behalf of the business, even if they are not employees.
Related: Equality Act 2010: explained for employers
7 steps to get your business ready for the EU AI Act
UK businesses can implement several measures to comply with the EU AI Act. Here are seven steps to help your organisation as the Act’s provisions take effect.
Step 1: create an AI inventory to prevent issues
Begin by creating an inventory of all the AI systems used in your operations. You may focus only on EU operations at this stage. Creating an inventory helps you understand how the business may be affected by the Act. It forms the basis for developing strategies that will mitigate the risks and ensure compliance with the regulation when using or selling AI systems in the EU.
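An AI inventory can be as simple as a shared spreadsheet, but if you track systems programmatically, a minimal sketch might look like the following. The field names and risk labels here are illustrative assumptions for structuring your own records, not terms prescribed by the Act.

```python
from dataclasses import dataclass

# Illustrative labels loosely following the Act's risk-based approach.
RISK_CATEGORIES = {"unacceptable", "high", "limited", "minimal"}

@dataclass
class AISystemRecord:
    """One entry in an AI inventory; fields are illustrative, not mandated."""
    name: str
    purpose: str        # e.g. "CV screening", "customer-support chatbot"
    risk_category: str  # one of RISK_CATEGORIES
    used_in_eu: bool    # deployed in the EU, or outputs used there

    def __post_init__(self):
        if self.risk_category not in RISK_CATEGORIES:
            raise ValueError(f"unknown risk category: {self.risk_category}")

def eu_high_risk(inventory):
    """Filter the entries needing the strictest compliance attention."""
    return [r for r in inventory if r.used_in_eu and r.risk_category == "high"]

# Example usage
inventory = [
    AISystemRecord("CV screener", "recruitment shortlisting", "high", True),
    AISystemRecord("FAQ chatbot", "customer support", "limited", True),
]
print([r.name for r in eu_high_risk(inventory)])  # ['CV screener']
```

Even a lightweight structure like this makes it easy to surface the systems that fall under the Act's strictest obligations and to keep the inventory current as new tools are adopted.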
Step 2: draft compliance and mitigation plans
Develop strategies to meet the new requirements. Depending on the AI systems used by the business and their purpose, you may need to conduct risk assessments. Bias audits or the implementation of technical measures and processes may also be necessary to ensure compliance with transparency standards.
Related: Compliance training for employees: objectives and strategies
Step 3: create AI policies and procedures
Update your internal and customer-facing policies to align with the Act’s key principles. Transparency, fairness, explainability and non-discrimination in AI decisions should be addressed in the policy. This will help the organisation handle any challenges to AI decisions or questions from regulators.
Step 4: deliver employee training
Make sure teams understand the Act and how it impacts their work. This helps meet human oversight requirements and demonstrates that the organisation is taking steps to manage AI risks. Regular training and open conversations will ensure everyone stays informed and aligned.
Related: 5 steps to creating an effective training and development programme
Step 5: stay up to date with the Act and any potential UK AI regulations
Stay updated on the latest changes in rules and guidelines in this fast-moving field. The UK, for example, is focusing on innovation and will allow industry regulators to handle AI risks, transparency, bias, and safety. The EU AI Act may evolve and additional regulations may emerge.
Step 6: build a strong AI governance model and risk management framework
Review or implement clear AI governance to meet the EU AI Act and any other AI regulations. Establish robust risk management procedures with testing and regular checks. Also, ensure that your data is well managed and protected.
Step 7: map any interdependencies
Understand how your AI systems depend on others. Identifying these dependencies reduces risks and builds trust with third-party providers. Make sure that roles and responsibilities are clearly defined and that safety measures are in place.
Related: Young workers and their rights at work
UK businesses that fall outside the Act’s scope are not legally required to follow it; however, organisations that work with EU partners or customers will need to navigate the differences in regulations. The EU is currently setting stricter rules than the UK, which aims to balance innovation with staying competitive globally. For now, UK businesses may benefit from aligning with EU standards when dealing with the European market.