The EU AI Act explained: requirements, timeline, who must comply, and how CyberCoach helps organizations meet those requirements.
Recognizing the need for responsible development and use of AI, the European Union has adopted the EU AI Act (Artificial Intelligence Act), which entered into force on 1 August 2024. This landmark legislation establishes a comprehensive regulatory framework to ensure AI systems are safe, transparent, and respect fundamental rights. Its obligations take effect in stages between 2025 and 2030, starting with the ban on prohibited AI practices in February 2025.
In this blog post, we'll explore the key aspects of the EU AI Act, its requirements, who must comply, and how CyberCoach can help your organization navigate this new regulatory landscape.
The EU AI Act is the European Union's ambitious effort to regulate AI technologies, aiming to balance innovation with protecting fundamental rights and public interests. Here are the key elements of the EU AI Act:
The EU AI Act introduces a risk-based approach, categorizing AI systems into four levels based on their potential impact (a simplified sketch of this tiering follows the list below):
Prohibited AI Practices: AI systems posing unacceptable risks are banned under the EU AI Act. This includes AI that manipulates human behavior detrimentally, social scoring by governments, and certain biometric surveillance types.
High-Risk AI Systems: These systems significantly impact individuals' safety or fundamental rights. Examples include AI used in critical infrastructure, education, employment, credit scoring, law enforcement, and migration management. High-risk AI systems are subject to strict obligations under the EU AI Act before market placement or use.
Limited Risk AI Systems: AI systems with specific transparency obligations, such as chatbots and deepfake generators, must comply with EU AI Act transparency rules.
Minimal Risk AI Systems: All other AI systems with minimal or no risk are allowed without additional EU AI Act legal requirements.
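To make the tiering concrete, here is a minimal Python sketch of how an organization might model these categories in an internal AI inventory. It is purely illustrative: the use-case-to-tier mapping below is a simplified assumption, not a legal classification, which requires analysis of the Act's Article 5 (prohibitions) and Annex III (high-risk systems).

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of example use cases to tiers; a real classification
# requires legal analysis of Article 5 and Annex III, not a lookup table.
EXAMPLE_USE_CASES = {
    "government_social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known example use case."""
    if use_case not in EXAMPLE_USE_CASES:
        raise ValueError(f"unmapped use case: {use_case!r}")
    return EXAMPLE_USE_CASES[use_case]

print(classify("credit_scoring"))  # RiskTier.HIGH
```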
High-risk AI systems under the EU AI Act must comply with stringent requirements, including:
Risk Management System: Continuous monitoring and mitigation of risks throughout the AI system's lifecycle.
Data Governance: Use of high-quality, unbiased data per EU AI Act standards.
Technical Documentation: Maintain detailed records proving compliance with the EU AI Act, including design and purpose.
Record-Keeping: Log decisions and operations for auditability (see the logging sketch after this list).
Transparency and Information Provision: Inform users about the AI system’s capabilities, limits, and intended uses according to EU AI Act mandates.
Human Oversight: Design AI systems so that humans can monitor their operation and intervene when needed.
Accuracy, Robustness, and Cybersecurity: Ensure AI system reliability and security as required by the EU AI Act.
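As an illustration of the record-keeping and human-oversight obligations, here is a minimal sketch of structured decision logging. The record schema, names, and values are hypothetical; the Act requires automatic event logging for high-risk systems but does not prescribe a specific format.

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit logger; in production this would feed a durable,
# tamper-evident store rather than standard logging output.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict, operator: str) -> None:
    """Record one AI decision with enough context to reconstruct it later.

    Hypothetical schema for illustration only.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "operator": operator,  # supports the human-oversight requirement
    }
    logger.info(json.dumps(record))

# Example: logging a hypothetical credit-scoring decision
log_decision(
    model_id="credit-scorer",
    model_version="2.3.1",
    inputs={"applicant_id": "A-1042"},
    output={"score": 0.42, "decision": "manual_review"},
    operator="analyst@example.com",
)
```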
Besides the transparency requirements for high-risk AI systems, the EU AI Act imposes specific transparency obligations on providers of certain systems, including general-purpose AI systems. This includes the responsibility to ensure that individuals know they are interacting with an AI system, unless it is clear from the perspective of a reasonably well-informed person. It is also the responsibility of providers to ensure that any synthetic audio, image, video, or text output is clearly identifiable as having been generated artificially. All "deep fake" images, videos, or audio must be labeled as having been artificially generated or manipulated, in line with EU AI Act requirements.
The transparency obligation under the EU AI Act does not apply in certain law enforcement use cases, or if the content is clearly an artistic, creative, satirical, fictional, or analogous work or program. For artistic use, the disclosure can be done in a way that does not hamper the display or enjoyment of the work.
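As a sketch of what marking synthetic output could look like in practice, the hypothetical helper below attaches both a human-readable disclosure and machine-readable provenance metadata to generated text. The Act requires that synthetic content be identifiable as artificially generated but leaves the technical mechanism open; watermarking and C2PA-style provenance manifests are common industry approaches.

```python
import json
from datetime import datetime, timezone

def label_synthetic_text(text: str, generator: str) -> dict:
    """Wrap generated text with a disclosure and provenance metadata.

    Hypothetical format for illustration; not a standard prescribed
    by the EU AI Act.
    """
    return {
        "content": text,
        "disclosure": "This text was generated by an AI system.",
        "provenance": {
            "generator": generator,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "synthetic": True,
        },
    }

labeled = label_synthetic_text("Q3 sales grew 12%...", generator="example-llm-1")
print(json.dumps(labeled, indent=2))
```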
Before high-risk AI systems can be placed on the EU market, the EU AI Act requires that they undergo a conformity assessment to verify compliance. Successful systems will receive a CE marking, indicating conformity with EU safety, health, and environmental protection requirements. For high-risk AI systems embedded in a product, a physical CE marking should be affixed and may be complemented by a digital CE marking. For high-risk AI systems only provided digitally, a digital CE marking should be used, as specified by the EU AI Act.
The EU AI Act establishes national supervisory authorities and a European Artificial Intelligence Board to oversee compliance, conduct market surveillance, and enforce regulations.
Penalties for breaches under the EU AI Act can reach:
Up to €35 million or 7% of total worldwide annual turnover for engaging in prohibited AI practices.
Up to €15 million or 3% of total worldwide annual turnover for non-compliance with other obligations.
Up to €7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities.
For startups and SMEs, the lower of the two amounts applies; for larger organizations, the higher.
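As a worked example of how these caps interact, the small sketch below computes the applicable maximum fine for the top penalty tier. The function name and inputs are illustrative, not part of the Act.

```python
def applicable_fine(turnover_eur: float, fixed_cap_eur: float,
                    pct_cap: float, is_sme: bool) -> float:
    """Maximum fine under one penalty tier of the EU AI Act.

    For most organizations, the higher of the fixed cap and the
    turnover percentage applies; for SMEs and startups, the lower.
    """
    pct_amount = turnover_eur * pct_cap
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Large firm with €1 billion turnover: 7% = €70M, above the €35M cap
print(applicable_fine(1_000_000_000, 35_000_000, 0.07, is_sme=False))  # 70000000.0
# SME with €10 million turnover: 7% = €0.7M, below the €35M cap
print(applicable_fine(10_000_000, 35_000_000, 0.07, is_sme=True))      # 700000.0
```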
Key dates for EU AI Act enforcement range from the first prohibitions in February 2025 to obligations rolling out through 2030, as summarized in the table below. The EU AI Office has also launched the AI Pact, which calls on AI system providers and users to voluntarily implement some of the Act's key provisions ahead of the legal deadlines.
| Date | What takes effect |
| --- | --- |
| 2 February 2025 | Prohibitions on AI practices that pose unacceptable risks. |
| 2 August 2025 | Obligations for providers of general-purpose AI models, along with governance and penalty provisions. |
| 2 February 2026 | Commission to adopt implementing legislation on post-market monitoring. |
| 2 August 2026 | Most remaining obligations apply, including requirements for high-risk AI systems listed in Annex III. |
| 2 August 2027 | Obligations for high-risk AI systems that are safety components of regulated products; general-purpose AI models placed on the market before 2 August 2025 must be brought into compliance. |
| By the end of 2030 | Obligations come into force for certain AI systems that are part of large-scale information technology systems established by EU law in the area of freedom, security and justice, such as the Schengen Information System. |
The EU AI Act applies broadly to:
Providers: Organizations that develop or place AI systems on the EU market or put them into service, regardless of whether they are established within the EU or in a third country.
Users: Individuals or entities using AI systems within the EU, especially high-risk AI systems.
Importers and Distributors: Those who import or distribute AI systems within the EU must ensure those systems comply with the EU AI Act.
Authorized Representatives: Non-EU providers must appoint an EU-based representative responsible for compliance.
Compliance with the EU AI Act is both a legal obligation and an opportunity to build trust with customers and stakeholders. Key considerations include:
Assessing AI Systems: Identify which of your AI systems are classified as high-risk under the EU AI Act and understand the specific obligations that apply.
Implementing Compliance Measures: Establish processes for risk management, data governance, technical documentation, transparency, and human oversight.
Updating Operational Practices: Adapt your development, deployment, and monitoring practices to meet the EU AI Act's requirements.
Training and Awareness: Educate your workforce on the EU AI Act's requirements and their compliance responsibilities.
At CyberCoach, we specialize in keeping your employees aware of what matters, like EU AI Act compliance:
Choose from a comprehensive library of role-based learning content to keep your employees trained in the safe and compliant use of AI under the EU AI Act.
Most security training platforms today put employees at risk by feeding their behavioral data into AI systems and risk-profiling them. CyberCoach does not use AI to profile or risk-score employees, in line with ethical data handling under the EU AI Act. Training your employees to use AI safely and responsibly starts with processing their personal data responsibly.
You can count on CyberCoach content updating monthly to cover the latest threats and relevant regulatory developments, keeping your team informed on evolving EU AI Act requirements and AI risks.
No need to juggle multiple platforms: CyberCoach's in-chat Risk Self Assessments also cover AI/ML risks, and are available within Teams or Slack (or in the browser). You can choose the assessments that are relevant for your operations, and target them to individuals based on their role.
Ready to empower your team to navigate the EU AI Act responsibly and effectively?
Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. For specific guidance on complying with the EU AI Act, please consult legal professionals or official EU publications.