Your Quick Guide To Managing Ethics & Compliance

How to Spot Biases that Undermine Assessments

We’ve been testing a co-authored tool (and accompanying guidance) to help organisations conduct risk assessments and due diligence. One challenge in smaller or mid-cap organisations is “where to start?”

The graphic below reimagines the integrity risk lifecycle. We often try to start with policies, procedures, training, and “tone from the top” before we’re clear about which risks are relevant (material). This is understandable; it’s how most regulations are written. It’s also a recipe for disaster: training without first calibrating risk is like teaching people to prepare a meal without ingredients.

If we change course and start with a risk assessment, what might undermine it?

A lot. But let’s pick a non-exhaustive list of biases and thinking models and examine how they’ve undermined (recent) assessments.

💡 Availability bias: we overestimate the likelihood of events that are more “available” in our memory. For example, if we read about a catastrophic corruption scandal in our sector, we might start there without examining whether there is any overlap (business model, locations, choice of partners, etc.).

💡 Overconfidence bias is a massive challenge in founder-dominated businesses, where we tend to overestimate our abilities and the accuracy of our predictions. This week, a CFO told me, “No fraudulent payments would get past me; I review everything.”

💡 Anchoring bias is the overreliance on the first information we encounter when assessing risk. For example, a cyber posture predicated on Hollywood hacker movies (I’m not joking).

💡 Confirmation bias: we seek information that supports our existing beliefs while ignoring contradictory evidence. Last week, we heard “I know my country, and it’s improving” from a highly skilled professional working in a thriving metropolis where governance issues had been a middle-class cause célèbre. Her account directly contradicted that of site-level colleagues in remote rural areas hundreds of miles away, where “bandits” (corrupt officials and organised criminals) still operate with impunity.

💡 Sunk cost fallacy: because we’ve already invested in them, we keep using risk assessment systems and frameworks built for data-driven areas (see above) rather than asking whether they’re fit for purpose for integrity (behavioural) risk.

💡 Worst-case analysis: focusing excessively on the most negative possible outcomes. This might explain why an engineering and construction company spent inordinate time screening suppliers for sanctions violations (a risk bias instilled by a prior investor) despite very low exposure, while ignoring a far more pressing supply chain risk: labour violations and conflict minerals.

💡 Social biases: In particular, group attribution error (generalising the behaviour of a group based on interactions with one or a few members). “We know the Customs guy well; he’d never request a payment like that.” Or in-group bias: favouring our group over others. This often happens when competitors’ woes highlight possible sector risks, which are discounted because those competitors are “cowboys [or similar].”

Understanding these different biases is crucial for improving risk assessment. When we hear phrases like these and spot the bias behind them, we should dig deeper, gather more input or data, and challenge the assumption. What would you add?

Need more?

Book a (free) strategy session, and get new articles and other content designed to be useful and fun.
