
EU AI Act: A Comprehensive Guide to Navigating It with Your Organization

In this article, we explain the EU AI Act: its requirements, its timeline, who must comply, and how CyberCoach helps organizations meet those requirements.


Recognizing the need for responsible development and use of AI, the European Union officially adopted the EU AI Act (Artificial Intelligence Act) on 1 August 2024. This landmark legislation establishes a comprehensive regulatory framework to ensure AI systems are safe, transparent, and respect fundamental rights. The EU AI Act will be enforced in stages between 2025 and 2030, beginning with the ban on prohibited AI practices in February 2025.

In this blog post, we'll explore the key aspects of the EU AI Act, its requirements, who must comply, and how CyberCoach can help your organization navigate this new regulatory landscape.


Understanding the EU AI Act

The EU AI Act is the European Union's ambitious effort to regulate AI technologies, aiming to balance innovation with protecting fundamental rights and public interests. Here are the key elements of the EU AI Act:

1. EU AI Act Risk-Based Classification Explained

The EU AI Act introduces a risk-based approach, categorizing AI systems into different levels based on their potential impact:

  • Prohibited AI Practices: AI systems posing unacceptable risks are banned under the EU AI Act. This includes AI that manipulates human behavior detrimentally, social scoring by governments, and certain biometric surveillance types.

  • High-Risk AI Systems: These systems significantly impact individuals' safety or fundamental rights. Examples include AI used in critical infrastructure, education, employment, credit scoring, law enforcement, and migration management. High-risk AI systems are subject to strict obligations under the EU AI Act before market placement or use.

  • Limited Risk AI Systems: AI systems with specific transparency obligations, such as chatbots and deepfake generators, must comply with EU AI Act transparency rules.

  • Minimal Risk AI Systems: All other AI systems with minimal or no risk are allowed without additional EU AI Act legal requirements.

2. Mandatory Requirements Under the EU AI Act for High-Risk AI Systems

High-risk AI systems under the EU AI Act must comply with stringent requirements, including:

  • Risk Management System: Continuous monitoring and mitigation of risks throughout the AI system's lifecycle.

  • Data Governance: Use of high-quality, unbiased data per EU AI Act standards.

  • Technical Documentation: Maintain detailed records proving compliance with the EU AI Act, including design and purpose.

  • Record-Keeping: Log decisions and operations for auditability.

  • Transparency and Information Provision: Inform users about the AI system’s capabilities, limits, and intended uses according to EU AI Act mandates.

  • Human Oversight: Design AI for human intervention.

  • Accuracy, Robustness, and Cybersecurity: Ensure AI system reliability and security as required by the EU AI Act.


3. EU AI Act Transparency Obligations: What AI Providers Must Do

Besides the transparency requirements for high-risk AI systems, the EU AI Act imposes specific transparency obligations on providers of certain systems, including general-purpose AI systems. Providers must ensure that individuals know they are interacting with an AI system, unless this is obvious from the circumstances to a reasonably well-informed person. Providers must also ensure that any synthetic audio, image, video, or text output is clearly identifiable as artificially generated. All deepfake images, videos, and audio must be labeled as artificially generated or manipulated, in line with EU AI Act requirements.

The transparency obligation under the EU AI Act does not apply in certain law enforcement use cases, or if the content is clearly an artistic, creative, satirical, fictional, or analogous work or program. For artistic use, the disclosure can be done in a way that does not hamper the display or environment of the work.


4. Conformity Assessment and CE Marking

Before high-risk AI systems can be placed on the EU market, the EU AI Act requires that they undergo a conformity assessment to verify compliance. Successful systems will receive a CE marking, indicating conformity with EU safety, health, and environmental protection requirements. For high-risk AI systems embedded in a product, a physical CE marking should be affixed and may be complemented by a digital CE marking. For high-risk AI systems only provided digitally, a digital CE marking should be used, as specified by the EU AI Act.

5. Governance, Enforcement, and Penalties Under the EU AI Act

The EU AI Act establishes national supervisory authorities and a European Artificial Intelligence Board to oversee compliance, conduct market surveillance, and enforce regulations. 

Penalties for breaches under the EU AI Act can reach:

  • €35 million or 7% of global annual turnover for serious breaches involving prohibited AI practices.
  • €15 million or 3% of global annual turnover for non-compliance with other requirements.
  • €7.5 million or 1% of global annual turnover for supplying false or misleading information to authorities.

For startups and SMEs, whichever of the two amounts is lower applies; for other organizations, whichever is higher.
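The fine-ceiling rule above amounts to a simple calculation. Here is an illustrative sketch based on the figures listed in this article; the function name, the breach-category keys, and the `is_sme` flag are our own labels, not terms from the regulation:

```python
def max_fine_eur(annual_turnover_eur: float, breach: str, is_sme: bool = False) -> float:
    """Illustrative EU AI Act fine ceiling: a fixed amount or a percentage
    of global annual turnover. SMEs get the lower of the two; other
    organizations get the higher."""
    tiers = {
        "prohibited": (35_000_000, 0.07),       # prohibited AI practices
        "other": (15_000_000, 0.03),            # other non-compliance
        "misleading_info": (7_500_000, 0.01),   # false info to authorities
    }
    fixed, pct = tiers[breach]
    turnover_based = annual_turnover_eur * pct
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A large firm with €1bn turnover and a prohibited-practice breach:
# the higher of €35M and €70M (7% of €1bn) applies.
print(max_fine_eur(1_000_000_000, "prohibited"))  # 70000000.0
```

For an SME with €10 million in turnover facing an "other" breach, the same function returns the lower amount, €300,000 (3% of turnover) rather than €15 million.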

6. EU AI Act Compliance Timeline: Important Dates to Remember

Key dates for EU AI Act enforcement range from the prohibitions taking effect in February 2025 to broader obligations rolling out through 2030. The EU AI Office has launched the AI Pact, which calls on AI system providers and users to voluntarily implement some of the Act's key provisions ahead of their application dates.

2 February 2025
  • Prohibitions on AI that poses unacceptable risks.

2 August 2025
  • Obligations for providers of general-purpose AI models.
  • Designation of competent authorities in Member States.
  • Annual review by the Commission of the list of prohibited AI and possible legislative amendments.

2 February 2026
  • Commission to implement legislation on post-market monitoring.

2 August 2026
  • Obligations enter into force for high-risk AI systems specifically listed in Annex III, including systems in the fields of biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and the administration of justice.
  • Member States to have implemented rules on sanctions, including administrative fines.
  • Member State authorities to have established at least one operational AI regulatory sandbox.
  • The Commission to review and possibly amend the list of high-risk AI systems.

2 August 2027
  • Obligations enter into force for high-risk AI systems that are not listed in Annex III but are used as a safety component of a product.
  • Obligations come into force for high-risk AI systems where the AI itself is a product and the product is subject to third-party conformity assessment under existing specific EU legislation, e.g. toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, and agricultural vehicles.

By the end of 2030
  • Obligations come into force for certain AI systems that are part of large-scale IT systems established by EU law in the area of freedom, security and justice, such as the Schengen Information System.



Who Must Comply with the EU AI Act?

The EU AI Act applies broadly to:

  • Providers: Organizations that develop or place AI systems on the EU market or put them into service, regardless of whether they are established within the EU or in a third country.

  • Users: Individuals or entities using AI systems within the EU, especially high-risk AI systems.

  • Importers and Distributors: Those who import or distribute AI systems within the EU are responsible for ensuring EU AI Act compliance of imported or distributed AI.

  • Authorized Representatives: Non-EU providers must appoint an EU-based representative responsible for compliance.


Implications of the EU AI Act for Your Organization

Compliance with the EU AI Act is both a legal obligation and an opportunity to build trust with customers and stakeholders. Key considerations include:

  • Assessing AI Systems: Identify which of your AI systems are classified as high-risk in the EU AI Act and understand the specific obligations that apply.

  • Implementing Compliance Measures: Establish processes for risk management, data governance, technical documentation, transparency, and human oversight.

  • Updating Operational Practices: Adapt your development, deployment, and monitoring practices to meet the EU AI Act's requirements.

  • Training and Awareness: Training your workforce on the EU AI Act regulations and compliance.


How CyberCoach Can Help With the EU AI Act

At CyberCoach, we specialize in keeping your employees aware of what matters, including EU AI Act compliance:

1. Role-Based Training

Choose from a comprehensive library of role-based learning content to keep your employees trained in the safe and compliant use of AI under the EU AI Act.

2. Psychologically Safe Learning

Most security training platforms today put employees at risk by feeding their behavioral data into AI systems and risk-profiling them. CyberCoach does not use AI to profile employees or assign them risk scores, in line with ethical data handling under the EU AI Act. Training your employees to use AI safely and responsibly starts with processing their personal data responsibly.

3. Always up-to-date

You can count on CyberCoach content updating monthly to cover the latest threats and relevant regulatory developments, keeping your team informed on evolving EU AI Act requirements and AI risks.

4. Risk Assessments

No need to juggle multiple platforms: CyberCoach's in-chat Risk Self Assessments also cover AI/ML risks and are available within Teams or Slack (or in the browser). You can choose the assessments that are relevant to your operations and target them to individuals based on their role.


Take the Next Step with CyberCoach and the EU AI Act

Ready to empower your team to navigate the EU AI Act responsibly and effectively?




Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. For specific guidance on complying with the EU AI Act, please consult legal professionals or official EU publications.
