
Navigating the EU's Artificial Intelligence Act: A Comprehensive Guide for Your Organization

Learn about the EU AI Act requirements, timeline, who must comply, and how CyberCoach helps organizations meet training and risk assessment requirements.


Recognizing the need for responsible development and use of AI, the European Union has officially adopted the Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024. This landmark legislation establishes a comprehensive regulatory framework to ensure AI systems are safe, transparent, and respect fundamental rights. Its provisions become applicable in stages between 2025 and 2030, starting with the ban on prohibited AI practices as early as February 2025.

In this blog post, we'll explore the key aspects of the AI Act, its requirements, who must comply, and how CyberCoach can help your organization navigate this new regulatory landscape.


Understanding the Artificial Intelligence Act

The AI Act is the EU's ambitious effort to regulate AI technologies, aiming to balance innovation with protecting fundamental rights and public interests. Here are the key elements of the Act:

1. Risk-Based Classification

The AI Act introduces a risk-based approach, categorizing AI systems into four levels based on their potential impact (a short illustrative sketch follows this list):

  • Prohibited AI Practices: AI systems that pose unacceptable risks are banned. This includes AI that manipulates human behavior to the detriment of users, systems used for social scoring by governments, and certain types of biometric surveillance.

  • High-Risk AI Systems: These systems have a significant impact on individuals' safety or fundamental rights. Examples include AI used in critical infrastructure, education, employment, credit scoring, law enforcement, and migration management. High-risk AI systems are subject to strict obligations before they can be placed on the market or put into service.

  • Limited Risk AI Systems: AI systems with specific transparency obligations, such as chatbots and deepfake generators. Users must be informed that they are interacting with an AI system.

  • Minimal Risk AI Systems: All other AI systems that pose minimal or no risk are allowed with no additional legal requirements.
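To make these categories concrete, here is a minimal sketch of how an organization might record the classification in an internal AI system inventory. The RiskLevel enum and AISystem record are illustrative assumptions for this post, not structures or terminology mandated by the Act:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    PROHIBITED = "prohibited"  # banned outright (e.g. social scoring)
    HIGH = "high"              # strict obligations before market entry
    LIMITED = "limited"        # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"        # no additional legal requirements

@dataclass
class AISystem:
    """One entry in a hypothetical internal AI system inventory."""
    name: str
    purpose: str
    risk_level: RiskLevel

# Hypothetical inventory entries
inventory = [
    AISystem("cv-screening", "Rank job applicants", RiskLevel.HIGH),
    AISystem("support-chatbot", "Answer customer questions", RiskLevel.LIMITED),
    AISystem("spam-filter", "Filter inbound email", RiskLevel.MINIMAL),
]

# High-risk systems drive most of the compliance workload
high_risk = [s for s in inventory if s.risk_level is RiskLevel.HIGH]
```

An inventory like this is a natural first step, since most of the Act's obligations hinge on which tier each system falls into.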

2. Mandatory Requirements for High-Risk AI Systems

High-risk AI systems must comply with stringent requirements, including:

  • Risk Management System: Identify, evaluate, and mitigate risks throughout the AI system's lifecycle.

  • Data Governance: Use training, validation, and testing data that is relevant, representative, and as free of errors and biases as possible.

  • Technical Documentation: Keep detailed records demonstrating compliance with the AI Act, including design details and the system's intended purpose.

  • Record-Keeping: Keep logs of the AI system's decisions and operations (see the logging sketch after this list).

  • Transparency and Provision of Information: Inform users about the AI system's capabilities, limitations, and intended uses.

  • Human Oversight: Design AI systems so that humans can effectively oversee them and intervene when needed.

  • Accuracy, Robustness, and Cybersecurity: Ensure AI systems are accurate, reliable, and secure.
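As an illustration of the record-keeping requirement, here is a minimal sketch of structured decision logging. The Act specifies what logs must enable (traceability of the system's operation), not a particular format, so the field names and the use of Python's standard logging module are our own assumptions:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")
logging.basicConfig(level=logging.INFO)

def log_decision(system_name, model_version, inputs, output, operator):
    """Append one traceable record of an AI system's decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # what the system was asked
        "output": output,                # what it decided
        "operator": operator,            # who was responsible for oversight
    }
    logger.info(json.dumps(record))

# Hypothetical example: one logged decision from a CV-screening system
log_decision(
    system_name="cv-screening",
    model_version="2024-07-01",
    inputs={"applicant_id": "A-1042"},
    output={"shortlisted": True, "score": 0.87},
    operator="hr-reviewer-7",
)
```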

3. Transparency Obligations for Certain AI Systems

Besides the transparency requirements for high-risk AI systems, providers of certain systems, including general-purpose AI systems, are subject to specific transparency obligations. This includes the responsibility to ensure that individuals know they are interacting with an AI system, unless this is obvious from the perspective of a reasonably well-informed person. Providers must also ensure that any synthetic audio, image, video, or text output is clearly identifiable as having been artificially generated. All deepfake images, videos, or audio must be labeled as artificially generated or manipulated.

The transparency obligation does not apply in certain law enforcement use cases, or when the content is clearly an artistic, creative, satirical, fictional, or analogous work or program. For artistic use, the disclosure can be made in a way that does not hamper the display or enjoyment of the work.
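One way to approach machine-readable labeling of synthetic content is to attach provenance metadata to each generated asset. The sketch below writes a JSON sidecar file next to a generated file; the field names are illustrative assumptions rather than a format prescribed by the Act (emerging standards such as C2PA content credentials are another option):

```python
import json
from pathlib import Path

def write_ai_disclosure(asset_path: str, generator: str, model: str) -> Path:
    """Write a sidecar JSON file marking an asset as AI-generated."""
    sidecar = Path(asset_path).with_suffix(".ai-disclosure.json")
    sidecar.write_text(json.dumps({
        "asset": Path(asset_path).name,
        "ai_generated": True,    # explicit machine-readable flag
        "generator": generator,  # tool that produced the content
        "model": model,          # model identifier, if known
    }, indent=2))
    return sidecar

# Hypothetical example: label a synthetic marketing image
write_ai_disclosure("banner.png", generator="in-house-pipeline", model="sdxl-1.0")
```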

4. Conformity Assessment and CE Marking

Before high-risk AI systems can be placed on the EU market, they must undergo a conformity assessment to verify compliance with the AI Act. Successful systems will receive a CE marking, indicating conformity with EU safety, health, and environmental protection requirements. For high-risk AI systems embedded in a product, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For high-risk AI systems only provided digitally, a digital CE marking should be used.

5. Market Surveillance and Enforcement

The Act establishes national supervisory authorities and a European Artificial Intelligence Board to oversee compliance, conduct market surveillance, and enforce regulations. 

The maximum fines are steep:

  • €35 million or 7% of global annual turnover for serious breaches involving prohibited AI practices.
  • €15 million or 3% of global annual turnover for non-compliance with other requirements.
  • €7.5 million or 1% of global annual turnover for supplying false or misleading information to authorities.

For startups and SMEs, whichever of the two amounts is lower applies; for larger organizations, it is whichever is higher.
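To see how this two-part cap works in practice, here is the arithmetic with hypothetical turnover figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float, is_sme: bool) -> float:
    """Applicable maximum fine under the AI Act's two-part cap."""
    pct_amount = turnover_eur * pct
    # SMEs/startups get the lower of the two amounts; others get the higher.
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice breach: €35M or 7% of global annual turnover
print(max_fine(1_000_000_000, 35_000_000, 0.07, is_sme=False))  # 70,000,000.0
print(max_fine(10_000_000, 35_000_000, 0.07, is_sme=True))      # 700,000.0
```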

6. Timeline

The EU AI Office has launched the AI Pact, which invites AI system providers and deployers to voluntarily implement some of the key provisions of the AI Act ahead of the legal deadlines.

2 February 2025

  • Prohibitions on AI practices that pose unacceptable risks enter into force.

2 August 2025

  • Obligations for providers of general-purpose AI models.

  • Designation of competent authorities in Member States.

  • Annual review by the Commission of the list of prohibited AI practices and possible legislative amendments.

2 February 2026

  • Commission to adopt implementing legislation on post-market monitoring.

2 August 2026

  • Obligations enter into force for high-risk AI systems specifically listed in Annex III, including systems in the fields of biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and the administration of justice.

  • Member States to have implemented rules on sanctions, including administrative fines.

  • Member State authorities to have established at least one operational AI regulatory sandbox.

  • Commission to review and possibly amend the list of high-risk AI systems.

2 August 2027

  • Obligations enter into force for high-risk AI systems that are not listed in Annex III but are used as a safety component of a product.

  • Obligations enter into force for high-risk AI systems where the AI itself is a product subject to third-party conformity assessment under existing EU legislation, e.g. toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, and agricultural vehicles.

By the end of 2030

  • Obligations enter into force for certain AI systems that are part of large-scale IT systems established by EU law in the area of freedom, security and justice, such as the Schengen Information System.

 


Who Must Comply?

The AI Act has a broad scope and applies to:

  • Providers: Organizations that develop or place AI systems on the EU market or put them into service, regardless of whether they are established within the EU or in a third country.

  • Users (termed "deployers" in the Act): Individuals or entities using AI systems within the EU, especially high-risk AI systems.

  • Importers and Distributors: Those who import or distribute AI systems within the EU must ensure the systems comply with the AI Act's requirements.

  • Authorized Representatives: Non-EU providers must appoint an EU-based representative responsible for compliance.


Implications for Your Organization

Compliance with the AI Act is both a legal obligation and an opportunity to build trust with customers and stakeholders. Key considerations include:

  • Assessing AI Systems: Identify which of your AI systems are classified as high-risk and understand the specific obligations that apply.

  • Implementing Compliance Measures: Establish processes for risk management, data governance, technical documentation, transparency, and human oversight.

  • Updating Operational Practices: Adapt your development, deployment, and monitoring practices to meet the AI Act's requirements.

  • Training and Awareness: Ensure your team understands the regulatory obligations and how to implement them effectively.


How CyberCoach Can Help

At CyberCoach, we specialize in keeping your employees aware of what matters.

1. Role-Based Training

Choose from a comprehensive library of role-based learning content to keep your employees trained in the safe and compliant use of AI technologies.

2. Psychologically Safe Learning

Many security training platforms put employees at risk by feeding their behavioral data into AI systems and using that data to risk-profile them. Training your employees to use AI safely and responsibly starts with processing their personal data responsibly. Choose a tool like CyberCoach that does not use AI to profile employees.

3. Always Up to Date

You can count on CyberCoach content being updated monthly to cover the latest threats and relevant regulatory developments.

4. Risk Assessments

No need to juggle multiple platforms—CyberCoach's in-chat Risk Self Assessments also cover AI/ML risks, and are available within Teams or Slack (or in the Browser). You can choose the assessments that are relevant for your operations, and target them to individuals based on their role.


Take the Next Step with CyberCoach

Ready to empower your team to use AI responsibly?


 


Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. For specific guidance on complying with the Artificial Intelligence Act, please consult legal professionals or official EU publications.
