Artificial Intelligence Impact Assessment (AIIA)
Description: After completing the course, participants will be able to demonstrate the following competences:
- Describe and justify the need for AIIA in different AI contexts.
- Identify potential harms across legal, ethical, and technical domains.
- Apply structured approaches for conducting and documenting AIIA.
- Communicate findings and support mitigation planning.
- Integrate AIIA results into the organization’s AI governance framework.
Previous skills/knowledge: Participants are expected to have the following basic knowledge:
- Basic understanding of AI functionalities and applications.
- Familiarity with assessment processes in compliance, data protection, or risk domains.
- Awareness of legal and ethical risks related to AI systems.
Authorized Partners:
Teaching requirements: Trainers should meet the following criteria:
- Subject Matter Expertise: comprehensive knowledge of AI impact assessment methodologies based on ISO/IEC 42001, ISO/IEC 42005, and regulatory frameworks such as the EU AI Act.
- Certifications: relevant qualifications in risk assessment, AI governance, and compliance (e.g. ISO/IEC 23894, ISO 31000, or data protection impact assessment frameworks).
- Training & Practical Experience: a minimum of 2–3 years of experience conducting assessments of AI systems, covering legal, ethical, and organizational dimensions.
Objectives to achieve: The course aims to achieve the following objectives:
- Understand the purpose and principles of AI impact assessment (AIIA).
- Learn to identify and evaluate potential negative impacts of AI systems.
- Become familiar with risk categories such as discrimination, security, and loss of control.
- Gain skills for applying structured AIIA methodologies and documentation practices.
- Support regulatory readiness and organizational responsibility in AI deployment.