Your Quick Guide To Managing Ethics & Compliance

AI Ethics - it's coming, are you ready!?

AI, ethics and another ISO


ISO standards play a significant role in risk management. Some of them (like ISO 31000 for risk management and the infosec-heavy ISO 27001) are widely adopted, providing a common framework for understanding and addressing risks. However, as risks have become more complex (political, regulatory, reputational, sustainability, etc.), these standards are showing their limitations. So it’s wonderful news that we have another standard, ISO 42001, shaping regulatory policy! 😬

What is that? Well, a very brief summary:

Risk Management: specifically around AI system security, data quality, privacy, fairness, and transparency.

Risk Assessment: assessing and documenting the potential impacts of AI systems on people and groups throughout the system’s lifecycle.

Ethics: focusing on societal impacts and how to promote accountability.

Compliance: cites the EU AI Act, supplier relationships, and integration with existing systems (enter ISO 27001).

So far, so uninspiring and vague. We may need to look at where 42001 deviates from 27001.


Risk Management: 27001 is broad (securing information assets). 42001 picks out areas like biased decision-making, lack of transparency in algorithms, and the misuse of technology. More simply, 27001 is worried about harm (compromise) to the data, and 42001 is concerned about the impact caused by the system.

Risk Assessment: Errr, unclear. See below.

Ethics: Under 42001, we are supposed to implement AI that aligns with ethical standards, regulatory requirements, and societal expectations.

Compliance: 42001 requires organisations to create specialised training programmes for AI project staff and implement “robust asset management strategies” to safeguard AI-related intellectual property and data. Furthermore, we must “establish clear criteria for AI procurement, emphasising transparency, fairness, and security” (a toy sketch of that follows below).
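
To make that procurement requirement less abstract, here is a minimal sketch of what a procurement gate along those lines could look like. This is my illustration, not anything from the standard: the criterion names and the evidence format are assumptions.

```python
# Hypothetical minimal "procurement gate" reflecting 42001's emphasis on
# transparency, fairness, and security. Criterion names and the evidence
# format are illustrative, not taken from the standard.
REQUIRED_CRITERIA = {"transparency", "fairness", "security"}

def vendor_passes(evidence: dict[str, bool]) -> bool:
    """A supplier passes only with documented evidence for every criterion."""
    return all(evidence.get(criterion, False) for criterion in REQUIRED_CRITERIA)

# Missing security evidence fails the gate:
print(vendor_passes({"transparency": True, "fairness": True}))  # False
```

The point of a gate like this is that “clear criteria” means pass/fail checks with documented evidence, not a vibes-based vendor review.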

The vague risk assessment requirements under 42001 aren’t entirely a bad thing. They represent an opportunity to move from the classical risk management approach – probability times consequence – to something better suited to a dynamic and emerging technology. For instance, requiring analysis across the lifecycle could (if done well) let us integrate risk-based decision-making at each stage of AI system development. How we do that is a longer discussion.
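
As a sketch of what that could mean in practice, here is a hypothetical lifecycle-stage risk register in Python. The stage names and the 1-5 scoring scale are my assumptions, not anything 42001 prescribes; the idea it illustrates is that each risk attaches to a lifecycle stage gate, rather than the whole system getting one aggregate number.

```python
from dataclasses import dataclass

# Hypothetical lifecycle stages; 42001 does not prescribe this exact list.
LIFECYCLE_STAGES = ["design", "data collection", "training", "deployment", "monitoring"]

@dataclass
class Risk:
    description: str
    stage: str        # lifecycle stage the risk attaches to
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe), including societal impact

    @property
    def score(self) -> int:
        # The classical probability-times-consequence number, now per stage
        return self.likelihood * self.impact

def risks_by_stage(risks: list[Risk]) -> dict[str, list[Risk]]:
    """Group risks by stage so each stage gate gets its own review."""
    grouped: dict[str, list[Risk]] = {stage: [] for stage in LIFECYCLE_STAGES}
    for risk in risks:
        grouped[risk.stage].append(risk)
    return grouped

register = [
    Risk("Training data under-represents a protected group", "data collection", 4, 5),
    Risk("Model drift degrades fairness after release", "monitoring", 3, 4),
]
for stage, stage_risks in risks_by_stage(register).items():
    for r in stage_risks:
        print(f"{stage}: {r.description} (score {r.score})")
```

Notice that the bias risk surfaces at data collection and the drift risk at monitoring: each gets reviewed at the stage where you can actually do something about it.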

Why should you care? People will start getting audited on AI this year; Infosys already has been. Getting certified also helps demonstrate compliance with regulatory requirements (think the EU AI Act). That matters because I can guarantee we will soon (1-3 years) start seeing significant AI regulatory issues seeping into traditional integrity risk, sustainability, and compliance work (from fraud to human rights). If AI ethics aren’t already on your radar (or in your remit), it’s time to ensure we’re part of a discussion that should not be led by information security (and HR) types alone!

Need more?

Book a (free) strategy session, get new articles, and other content designed to be useful and fun.

