Length: 2 hours

Helping Great Companies Get Better at Compliance

Course Overview

Your team is building AI systems. Legal says you need ISO 42001 certification to win enterprise clients. Regulators are asking about your AI governance. But you're not sure where to start.

What policies do you need? What risk assessments? What documentation satisfies auditors? ISO 42001 isn't written for practitioners; it's dense text that's hard to translate into action.

This course breaks down ISO 42001 implementation into clear, practical steps you can follow.

 

You'll learn:

  • What ISO 42001 actually requires (in plain language)
  • How to build an AI management system that passes certification audits
  • How to conduct AI risk assessments that meet international standards
  • What governance structures and accountability you must establish
  • What documentation auditors expect (policies, procedures, records)
  • How ISO 42001 aligns with EU AI Act requirements
  • How to implement controls for high-risk AI systems
  • How to prepare for certification audits


You'll get ready-to-use tools:

  1. AI policy templates
  2. Risk assessment frameworks
  3. AI system inventory templates
  4. Documentation templates

You'll see real implementation examples like establishing an AI governance board, classifying AI systems by risk level, documenting training data and model decisions, implementing human oversight mechanisms, creating incident response procedures, and preparing technical files for compliance.

By the end, you'll know exactly what you need to implement for ISO 42001 certification, how to prioritize based on your AI systems and risk, what documentation to create and maintain, and how to demonstrate compliance to auditors and customers.

Stop feeling overwhelmed by AI standards. Start implementing them systematically.


Modules

  • A Systematic Approach to AI Risk Management and Continuous Improvement – Learn how to apply international AI standards in a structured way to manage risks and keep improving over time. This module explains how to set clear policies, define roles, and keep your AI systems safe, compliant, and trustworthy.
  • AI Model Compliance – Explore how to ensure AI models follow data protection rules and keep information private. This module covers data minimisation, technical and organisational safeguards, and resilience testing so you can spot risks early and demonstrate compliance.
  • Testing AI Model Anonymity – This module walks you through membership inference, model inversion, and data extraction attacks, showing how attackers work and how you can defend against them.
  • Data Governance – See how to manage the full lifecycle of AI data responsibly. This module explains how to document sources, keep data quality high, anonymise sensitive information, and prepare datasets so your AI projects stay transparent, fair, and legally sound. 
  • Quality Management System – Learn how to structure AI development and use. This module covers writing policies, assigning responsibilities, managing resources, and verifying systems, while also creating clear reporting channels for employees, suppliers, and users. 
  • Implementing AI Standards – Discover how to turn AI standards into action. This module shows you how to run a gap analysis, set objectives, build an implementation roadmap, train teams, and monitor systems – embedding event logging and continuous improvement from the start.

Lessons

  1. Chapter 1: A Systematic Approach to AI Risk Management and Continuous Improvement

  2. Chapter 2: AI Model Compliance

  3. Chapter 3: Testing AI Model Anonymity

  4. Chapter 4: Data Governance

  5. Chapter 5: Quality Management System

  6. Chapter 6: Implementing AI Standards

Why Register?

  • Understand AI standards: Learn the principles of ISO/IEC 42001 and the EU AI Act, and how they underpin trustworthy AI.

  • Ensure compliance: Gain practical strategies to align AI projects with international and regulatory requirements.

  • Embed trust into AI systems: Apply standardised methods for transparency, fairness, and accountability.

  • Support cross-functional governance: Become a trusted resource for integrating AI standards across teams and partnerships.

  • Advance your career: Earn a certification that demonstrates your expertise in standards-based AI governance and compliance.

Reach your full potential.