Learn about the EU AI Act requirements, timeline, who must comply, and how CyberCoach helps organizations meet training and risk assessment requirements.
Recognizing the need for responsible development and use of AI, the European Union has adopted the Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024. This landmark legislation establishes a comprehensive regulatory framework to ensure AI systems are safe, transparent, and respect fundamental rights. Its provisions become applicable in stages between 2025 and 2030, starting with the ban on prohibited AI practices in February 2025.
In this blog post, we'll explore the key aspects of the AI Act, its requirements, who must comply, and how CyberCoach can help your organization navigate this new regulatory landscape.
The AI Act is the EU's ambitious effort to regulate AI technologies, aiming to balance innovation with protecting fundamental rights and public interests. Here are the key elements of the Act:
The AI Act introduces a risk-based approach, categorizing AI systems into different levels based on their potential impact:
Prohibited AI Practices: AI systems that pose unacceptable risks are banned. This includes AI that manipulates human behavior to the detriment of users, systems used for social scoring by governments, and certain types of biometric surveillance.
High-Risk AI Systems: These systems have a significant impact on individuals' safety or fundamental rights. Examples include AI used in critical infrastructure, education, employment, credit scoring, law enforcement, and migration management. High-risk AI systems are subject to strict obligations before they can be placed on the market or put into service.
Limited Risk AI Systems: AI systems with specific transparency obligations, such as chatbots and deepfake generators. Users must be informed that they are interacting with an AI system.
Minimal Risk AI Systems: All other AI systems that pose minimal or no risk are allowed with no additional legal requirements.
High-risk AI systems must comply with stringent requirements, including a risk management system, data governance, technical documentation and record-keeping, transparency and information for users, human oversight, and appropriate levels of accuracy, robustness, and cybersecurity.
Besides the transparency requirements for high-risk AI systems, providers of certain systems, including general-purpose AI systems, are subject to specific transparency obligations. Providers must ensure that individuals know they are interacting with an AI system, unless this is obvious to a reasonably well-informed person, and that any synthetic audio, image, video, or text output is clearly identifiable as having been generated artificially. All deepfake images, videos, or audio must be labeled as having been artificially generated or manipulated.
The transparency obligation does not apply in certain law enforcement use cases, or if the content is clearly an artistic, creative, satirical, fictional, or analogous work or program. For artistic use, the disclosure can be done in a way that does not hamper the display or enjoyment of the work.
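To illustrate how a provider might meet this labeling obligation in practice, here is a minimal Python sketch that attaches a human-readable disclosure and machine-readable metadata to generated output. The field names and the disclosure wording are illustrative assumptions on our part; the AI Act does not prescribe a specific technical format.

```python
import json
from datetime import datetime, timezone


def label_synthetic_content(content: str, generator_name: str) -> dict:
    """Wrap AI-generated output with a disclosure notice and provenance metadata.

    Illustrative sketch only: the structure below is an assumption, not a format
    required by the AI Act.
    """
    return {
        "content": content,
        "disclosure": "This content was generated by an AI system.",
        "metadata": {
            "ai_generated": True,
            "generator": generator_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


if __name__ == "__main__":
    labeled = label_synthetic_content("Quarterly summary draft...", "example-model")
    print(json.dumps(labeled, indent=2))
```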
Before high-risk AI systems can be placed on the EU market, they must undergo a conformity assessment to verify compliance with the AI Act. Successful systems will receive a CE marking, indicating conformity with EU safety, health, and environmental protection requirements. For high-risk AI systems embedded in a product, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For high-risk AI systems only provided digitally, a digital CE marking should be used.
The Act establishes national supervisory authorities and a European Artificial Intelligence Board to oversee compliance, conduct market surveillance, and enforce regulations.
The maximum fines are substantial: up to EUR 35 million or 7% of total worldwide annual turnover for prohibited AI practices, up to EUR 15 million or 3% for non-compliance with other obligations, and up to EUR 7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities.
For startups and SMEs, the lower of the two amounts applies; for larger organizations, the higher.
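As a simple illustration of how the cap works, the Python sketch below computes the theoretical maximum for a single violation tier, using the amounts listed above; the turnover figures in the examples are made up.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             worldwide_annual_turnover_eur: float, is_sme: bool) -> float:
    """Theoretical cap for one violation tier under the AI Act's penalty rules.

    For most organizations the cap is the higher of the fixed amount and the
    percentage of worldwide annual turnover; for SMEs and startups it is the lower.
    """
    turnover_based = turnover_pct * worldwide_annual_turnover_eur
    if is_sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)


# Prohibited-practice tier: EUR 35 million or 7% of worldwide annual turnover
print(max_fine(35_000_000, 0.07, 2_000_000_000, is_sme=False))  # cap: EUR 140 million
print(max_fine(35_000_000, 0.07, 20_000_000, is_sme=True))      # cap: EUR 1.4 million
```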
The EU AI Office has launched the AI Pact, which calls on AI system providers and users to voluntarily implement some of the key provisions of the AI Act before they become legally applicable.
The AI Act's provisions become applicable in stages:
2 February 2025: Prohibitions on AI practices that pose unacceptable risks apply.
2 August 2025: Obligations for providers of general-purpose AI models apply, together with the governance and penalty provisions; Member States must designate their national competent authorities.
2 February 2026: The Commission is to adopt implementing legislation on post-market monitoring.
2 August 2026: The AI Act becomes generally applicable, including the obligations for high-risk AI systems listed in Annex III.
2 August 2027: Obligations apply to high-risk AI systems that are safety components of products covered by EU harmonization legislation (Annex I), and general-purpose AI models already on the market must be brought into compliance.
By the end of 2030: Obligations come into force for certain AI systems that are part of large-scale information technology systems established by EU law in the area of freedom, security and justice, such as the Schengen Information System.
The AI Act has a broad scope and applies to:
Providers: Organizations that develop or place AI systems on the EU market or put them into service, regardless of whether they are established within the EU or in a third country.
Users (deployers): Individuals or entities using AI systems within the EU, especially high-risk AI systems.
Importers and Distributors: Those who import or distribute AI systems within the EU must ensure the systems comply with the AI Act's requirements.
Authorized Representatives: Non-EU providers must appoint an EU-based representative responsible for compliance.
Compliance with the AI Act is both a legal obligation and an opportunity to build trust with customers and stakeholders. Key considerations include:
Assessing AI Systems: Identify which of your AI systems are classified as high-risk and understand the specific obligations that apply (a minimal inventory sketch follows this list).
Implementing Compliance Measures: Establish processes for risk management, data governance, technical documentation, transparency, and human oversight.
Updating Operational Practices: Adapt your development, deployment, and monitoring practices to meet the AI Act's requirements.
Training and Awareness: Ensure your team understands the regulatory obligations and how to implement them effectively.
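To make the assessment step more concrete, the sketch below models a simple internal AI-system inventory that records each system's risk tier and the obligations attached to it. The record structure, field names, and example entries are illustrative assumptions, not anything prescribed by the Act or specific to CyberCoach.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices
    HIGH = "high"               # strict obligations, conformity assessment
    LIMITED = "limited"         # transparency obligations only
    MINIMAL = "minimal"         # no additional legal requirements


@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    risk_tier: RiskTier
    obligations: list[str] = field(default_factory=list)
    owner: str = ""


# Example inventory entries (hypothetical systems)
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        intended_purpose="Rank job applicants",
        risk_tier=RiskTier.HIGH,
        obligations=["risk management", "data governance", "technical documentation",
                     "human oversight", "conformity assessment"],
        owner="HR",
    ),
    AISystemRecord(
        name="Website support chatbot",
        intended_purpose="Answer customer questions",
        risk_tier=RiskTier.LIMITED,
        obligations=["inform users they are interacting with an AI system"],
        owner="Customer Support",
    ),
]

# Flag the systems that need the heaviest compliance work first
for record in inventory:
    if record.risk_tier is RiskTier.HIGH:
        print(f"{record.name}: {', '.join(record.obligations)}")
```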
At CyberCoach, we specialize in keeping your employees aware of what matters.
Choose from a comprehensive library of role-based learning content to keep your employees trained in the safe and compliant use of AI technologies.
Many security training platforms today put employees at risk by feeding their behavioral data into AI systems and risk profiling them. Training your employees to use AI safely and responsibly starts with processing their personal data responsibly. Choose a tool like CyberCoach that does not use AI for profiling employees.
You can count on CyberCoach content being updated monthly to cover the latest threats and relevant regulatory developments.
No need to juggle multiple platforms: CyberCoach's in-chat Risk Self Assessments also cover AI/ML risks and are available within Teams or Slack (or in the browser). You can choose the assessments that are relevant for your operations and target them to individuals based on their role.
Ready to empower your team to use AI responsibly?
Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. For specific guidance on complying with the Artificial Intelligence Act, please consult legal professionals or official EU publications.