Battling the crisis cycle
Sarah Armstrong-Smith shares her perspectives with Deborah Ritchie on the perpetual principles of resilience in a fast-paced world, and on the need to start genuinely learning lessons from major crises
Sarah Armstrong-Smith has over 20 years’ experience delivering large-scale business and ICT continuity, cyber security, data protection and resilience programmes, and advising C-suite leaders on them. She is recognised as one of the most influential women in cyber security and UK tech.
How has your journey through the profession shaped your outlook on resilient practice?
Initially I did not set out specifically on a ‘business continuity’ path. I was working for Thames Water, and volunteered to support a project management team that was preparing for the Millennium Bug. Water companies are amongst the most heavily regulated utilities, with very strict requirements around water quality and pressure, but also an obligation to maintain a 24/7 emergency call centre – a duty which became mine. After midnight came and went without drama, everyone fell into two camps as to whether this was good planning or a waste of money. I was in the former camp.
My subsequent role, with Axa, was still rooted in IT but pivoted from business continuity into disaster recovery – of both datacentres and IT – as well as the company’s wider business continuity programmes. Not long after I joined, 9/11 happened, and attitudes to back-up changed completely, as the 72-hour recovery window for tape simply became too long. My next role, for EY, which began on the day of the 7/7 London bombings in 2005, was my pivot to the next level: crisis management.
Subsequently, I spent 12 years with Fujitsu, where I pivoted into cyber security, but always looking at it from the business perspective. A lot of people do look at cyber security as an IT issue, and that is the case to a certain extent – we are here to protect the cloud, servers, endpoints and networks – but those things don’t have a value until you start adding data and people into the mix. That is the point at which you have to ask what you’re really trying to protect, and why.
My current role with Microsoft began in 2020. True to form, starting a new role just as a major incident is taking place, I joined just as the world was going into lockdown due to the pandemic, and so, like many others, the first 18 months of my role were 100% remote.
One of the things I often challenge is the idea of Black Swan events. I don’t believe in them. Even the Covid pandemic could have been predicted – we had H1N1 10 years prior. What companies plan for is quite often not the event that actually happens – and very rarely is it the worst-case scenario. With Covid-19, a lot of companies had a pandemic plan, but did not count on needing it for three years.
Ever since reading my first public inquiry – for Piper Alpha, aged 12 – I have been interested in duty of care, and in learning lessons from large-scale events. The very fact that a public inquiry takes place means the event was of significant magnitude – often with fatalities. Time and again you see this scenario recurring.
At some point we are going to have a cyber attack of such magnitude that it causes an industrial accident or another event on that scale. There may be fatalities, and, again, people will then start asking questions about how we got there.
This is a topic that I cover in my first book, Effective Crisis Management (BPB Publications, October 2022): people experience near misses, wipe their brows, and move on without learning any lessons.
I also think that businesses just don’t test enough, instead waving away the problem because they “have a plan”. The problem emerges when you reach that ‘perfect storm’ moment, where you’ve done everything you’re supposed to do, but haven’t counted on an incident of such magnitude that it really tests the plan. It might be a freak incident – and I’ve seen a few of those over the course of my career so far – that means your plan doesn’t work, or at least not to the degree you might hope.
Your latest book, Understanding the Cyber Attacker Mindset, explores the humans behind some of the most prolific attacks. What can you tell readers about their motivations?
With this book, I set out to try to remove the stigma of cyber crime, as well as the media portrayal of what a cyber criminal looks like: this dark, hooded figure cramped in a corner over a laptop. I wanted to show the humans behind the attacks. Who are they? What are they doing, and why?
Humans are drawn in by our deepest emotions, and so I talk about the ‘seven sins of attackers’, who are motivated by pride, greed, gluttony, sloth, wrath and so on. These deep-seated emotions are what’s driving some of these activities – think of social engineering, phishing emails, romance scams and ransomware; all of these pull on those emotions. I sought to bring this to life through a series of case studies based on interviews with some of the most prolific protagonists.
For organisations, managing the risk is about getting people to remove the human emotion from the situation, so that they can make rational decisions using the facts right in front of them.
How might artificial intelligence impact the cyber security horizon?
In essence, AI becomes a force multiplier. In my view, it all comes back down to accountability and due diligence, and to really gaining a true understanding of the risks. Attackers will evolve to use AI and emerging technology. To deal with it, we must truly understand their motivations, as that is an element of human nature that doesn’t change.
This tension, with humans fighting each other for one reason or another, goes back to the dawn of time and that’s never going away. With AI, it’s just a different tool for attackers to use. We’ve got to make their job as difficult as possible in this constant battleground.
When I speak to CISOs about this current realm, I talk about designing in business continuity from the get-go. Historically, cyber security has almost been thought of as a cost centre and a necessary evil, but a lot of the cyber security professionals I speak to are working hard to change that mindset. In essence, this is a ‘shift left’: rather than security being the thing you do at the end, it’s the thing you do right at the beginning. That means security, privacy and resilience by design and by default – having the conversations with the marketing and business people before you even think about the product and the technology. This would make resilience an enabler for innovation.
With the pace of change in AI, there’s a huge competitiveness to be the innovator, the one that beats all others to the punch. The danger here is skipping due diligence and testing.
With AI, when something does go wrong, or the unexpected happens, accountability, responsibility and crisis management become even more important, because arguably the impact could be so much bigger than we could ever have anticipated.
A lot of this is about super-sizing, just as the cloud super-sized compute power and data. I think there is a real danger here that we’re going so fast that we’re missing some of the potential issues. And when something big happens, again we’ll start asking: ‘How did we get here? Where are the lessons learned?’
I feel that we continue to go around in circles when it comes to this, and that businesses and governments still pay lip service to learning the lessons from major incidents.
How do you address these issues in your role at Microsoft?
I work multi-sector and multi-country, speaking to CISOs and CIOs at retailers, financial services companies, energy firms and so on – about exactly what we’ve just been reflecting on.
Amongst the key focus areas for some of these organisations are smart technologies, IoT and building the factory of the future – and with that come the increased risks inherent in hyper-interconnectivity.
With all these once separate environments becoming one huge global supply chain, plus regulators’ increased interest in systemic risks, as well as geopolitical risks, the landscape is really changing. And when we start overlaying these risks onto critical infrastructure, you can begin to see the potential for a large-scale industrial accident – or something of that magnitude in healthcare systems or energy grids.
All this comes full circle when you consider the social implications of these technologies. Technology is smack bang in the middle of all of this – amplifying the role of cyber security and business continuity.
I think we really have to consider the social impacts of not getting this right. A lot of work I’m doing at the moment is focused on enablement, embracing the art of the possible with security front of mind in those conversations.
Sarah Armstrong-Smith's latest book, Understanding the Cyber Attacker Mindset, is available from Kogan Page.
This article was published in the Q1 2024 issue of CIR Magazine.