If there’s one thing we’ve learned over the last few years, it’s that uncertainty isn’t a glitch in the system; it is the system. Geopolitics, AI, climate shocks, skill shortages: every business plan assumes some measure of stability that reality doesn’t provide. Organisations talk a lot about resilience, but scratch the surface and you will often find a technical plan built on a very human fault line.
The problem isn’t uncertainty itself; it’s our relationship with it. Most risk frameworks treat uncertainty as a statistical phenomenon or a risk type to be modelled, mitigated or transferred. But uncertainty is also emotional, social and profoundly human in its impact; the way people feel about risk will often determine how an organisation actually performs when things don’t go according to plan.
This is where the idea of people risk changes the conversation. The behavioural view is critical: people risk can be seen as the gap between how humans are expected to behave and how they actually do when pressure, bias or fear takes over. It’s the human volatility that sits within, and defines, organisational systems.

Take decision-making. Under stress, cognitive biases are known to surge: confirmation bias makes leaders seek reassurance instead of truth; optimism bias blinds them to downside scenarios; and groupthink rewards agreement over insight. In this way entire institutions can end up misjudging risk because they don’t create the psychological conditions necessary for productive dissent.

Then there’s information risk. Many risk registers list “poor communication”, yet few examine why. Often, it’s not a lack of data but a lack of candour. People don’t escalate issues early because they fear blame. They under-report errors, manage upwards, and rely on informal messaging. The result can be a system that looks calm and informed right up until the moment it isn’t.
The most sophisticated organisations realise you can’t manage your way out of this with spreadsheets and registers. Instead, they treat people risk as a system of dynamic mitigations. This means investing not only in controls and technology, but in leadership behaviours, ethical awareness and decision quality. It also means building feedback loops that surface weak signals and designing ways of working that make it easier to speak up than to stay silent.
Human behaviour is context dependent. The same employee who cuts corners in one environment may act with integrity in another. That’s not moral inconsistency; it’s social logic. Systems shape our behaviour through incentives, workload, norms and role modelling. In other words, if your people are taking reckless risks, the problem may not be them – it may be the system teaching them that that’s what success looks like.
The irony is that humans are both the source of organisational fragility and its greatest stabiliser. When conditions collapse, it’s not processes that save a system; it’s judgement, trust and the willingness to improvise together. Managing uncertainty through a people risk lens means recognising that resilience is built through habits, mindsets and relationships formed long before a crisis hits. We know we can’t forecast everything, but we can design cultures where people are less likely to hide bad news, more likely to challenge assumptions, and better equipped to make sense of ambiguity. That’s not HR stuff; it’s core risk management infrastructure.