Integrated Specialist Program in Artificial Intelligence Management Systems
Description: After completing the course, participants will be able to demonstrate the following competences:
 
  • Define, implement, and improve AI management systems aligned with ISO/IEC 42001.
  • Assess, document, and apply AI-specific controls and objectives.
  • Conduct and report AI risk assessments based on ISO/IEC 23894.
  • Perform AI impact assessments considering ethical, technical, and legal impacts.
  • Apply lifecycle thinking in AI system planning, development, and governance.
  • Support inclusive and socially acceptable AI solutions.
  • Establish AI governance structures and ensure cross-functional coordination.
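The risk-assessment and treatment activities listed above can be made concrete with a small sketch. The following is a hypothetical illustration only: a minimal risk-register entry for an AI system, loosely following the identify/analyse/evaluate/treat steps that ISO/IEC 23894 adapts from ISO 31000. All class and field names here are assumptions for illustration, not terminology taken from the standard.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row in a hypothetical AI risk register (illustrative only)."""
    risk_id: str
    description: str
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    impact: int                # 1 (negligible) .. 5 (severe)
    treatment: str = "accept"  # accept / mitigate / transfer / avoid

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common convention
        # in risk matrices (not mandated by any standard).
        return self.likelihood * self.impact

    def needs_treatment(self, threshold: int = 12) -> bool:
        # Risks at or above the threshold require active treatment.
        return self.score >= threshold


# Example register with two entries.
register = [
    AIRiskEntry("R-001", "Training data unrepresentative of users", 4, 4, "mitigate"),
    AIRiskEntry("R-002", "Model documentation out of date", 2, 2),
]

high = [r.risk_id for r in register if r.needs_treatment()]
print(high)
```

In practice, an organization would extend such a record with owners, review dates, and links to the controls and impact assessments the program covers; the point here is only the identify-score-treat structure.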

Previous skills/knowledge: Participants are expected to have the following basic knowledge:
  • Familiarity with ISO/IEC 27001 and basic information security principles.
  • Understanding of risk management, organizational resilience, and digital infrastructure.
  • Basic awareness of management systems (PDCA) and their role in maintaining operations.
Authorized Partners:

Teaching requirements: Trainers should meet the following requirements:
  • Subject Matter Expertise: deep and broad knowledge of ISO/IEC 42001 and ISO/IEC 23894, and proven experience in implementing AI management systems (AIMS) and AI governance frameworks.
  • Certifications: recommended credentials include ISO/IEC 42001 Lead Implementer or Auditor, complementary credentials such as ISO/IEC 27001, and specialized qualifications in AI risk management and AI governance.
  • Training & Practical Experience: minimum of 3 years in the field, covering AI risk assessment, AI impact assessment, design and evaluation of AI-specific controls, and coordination of cross-functional governance teams.
Objectives to achieve: This program aims to provide participants with comprehensive skills in AI system management, risk, compliance, ethics, and lifecycle implementation:
 
  • Understand and apply ISO/IEC 42001 and ISO/IEC 23894 principles in AI management.
  • Gain skills in designing, implementing, and evaluating AI-specific controls and risk treatments.
  • Identify ethical and legal risks in AI, and incorporate mitigation strategies.
  • Use AI terminology and lifecycle concepts to support governance, risk, and compliance activities.
  • Perform structured AI impact assessments and communicate findings effectively.
  • Align AI objectives with organizational goals and regulatory expectations.
  • Embed ethics and trustworthiness into AI system design and operation.