Ethics and Social Acceptability of Artificial Intelligence
Description: After completing the course, participants will be able to demonstrate the following competences:
- Recognize and explain ethical challenges in AI systems.
- Interpret societal expectations for responsible AI use.
- Apply ethical frameworks and standards such as ISO/IEC 42001.
- Support inclusive, fair, and human-centered AI system design.
- Contribute to the organizational culture of ethical and socially acceptable AI.
Previous skills/knowledge: Participants are expected to have the following basic knowledge:
- Basic understanding of AI systems and their societal applications.
- Awareness of ethical principles such as fairness, autonomy, and transparency.
- Familiarity with regulatory and public concerns related to emerging technologies.
Authorized Partners:
Teaching requirements: Trainers should meet the following requirements:
- Subject Matter Expertise: in-depth knowledge of AI ethics, societal impact, human rights, and relevant frameworks such as ISO/IEC 42001 and UNESCO recommendations.
- Certifications: preferred qualifications in AI ethics, data protection (e.g., ISO/IEC 27701), or related governance standards.
- Training & Practical Experience: at least two years of experience addressing the ethical implications of AI or facilitating human-centered design processes.
Objectives to achieve: The course pursues the following objectives:
- Understand ethical risks and responsibilities in AI development and deployment.
- Identify key societal expectations and values related to trustworthy AI.
- Learn about standards, principles, and frameworks guiding ethical AI.
- Gain insight into human rights, equity, and environmental dimensions of AI systems.
- Develop the ability to embed ethics into AI management processes and organizational governance.