Will risk management drive AI adoption?
📍Online
📆 June 2024
Andrew Tollinton, CEO of SIRV, was invited by the University of Warsaw and Uniwersytet SWPS to answer the question: will risk management drive AI adoption?
Full conference recording, starting with Andrew Tollinton addressing the university and attendees online.
Intro
In June 2024, Andrew Tollinton, CEO of SIRV, joined the University of Warsaw and Uniwersytet SWPS to explore a simple question with complex consequences: will risk management drive AI adoption, or will something more human move faster? This article summarises Andrew’s talk and sets out what we are seeing in the field.
What we mean by AI and risk
To keep this discussion practical, we focus on narrow, task specific AI rather than general intelligence.
By risk management, we mean the process of identifying, assessing and controlling threats across five themes: strategic, financial, legal, reputational and operational. SIRV works principally in operational risk – power outages, supply interruptions and security incidents.
Fear is powerful but it is not risk
Behavioural science reminds us that people move away from fear, isolation and pain, and towards hope, social acceptance and pleasure. Of these motivators, fear is often the loudest. Yet fear is emotional while risk is logical. In public life, fear can overwhelm probability driven decision making, which explains why investment can cluster around low probability threats while higher frequency harms receive less attention.
A near future shaped by mediated choices
One plausible path is assistant led safety. Wearable or ambient AI guides individuals through the built environment, consuming local feeds such as CCTV, incident logs and open data to recommend safer routes and actions. The enabling steps are already here or arriving fast:
- Growing data availability, sometimes at the expense of privacy
- AI that can read and write APIs so systems interoperate in real time
Devices like pendant microphones or screenless assistants hint at this mediated future, where AI sits between people and place. Whether people want that level of control is an open question. Uncertainty, risk and skill are part of what makes experience feel alive.
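To make the routing idea concrete, here is a minimal sketch of how such an assistant might trade journey time against risk. The feed scores, route names and appetite threshold are all invented for illustration; nothing here describes a real product or API.

```python
def route_risk(segments: list[str], feed: dict[str, float]) -> float:
    """Sum per-segment risk scores from a local feed (0.0 = no known risk)."""
    return sum(feed.get(segment, 0.0) for segment in segments)


def recommend(routes: dict[str, tuple[list[str], int]],
              feed: dict[str, float], appetite: float) -> str:
    """Prefer the quickest route whose total risk fits the user's appetite;
    fall back to the lowest-risk route if none qualifies."""
    acceptable = {name: option for name, option in routes.items()
                  if route_risk(option[0], feed) <= appetite}
    if acceptable:
        return min(acceptable, key=lambda name: acceptable[name][1])
    return min(routes, key=lambda name: route_risk(routes[name][0], feed))


# Hypothetical data: each route is (segments, journey time in minutes)
feed = {"alley": 0.9, "main_st": 0.1, "park": 0.3}
routes = {"direct": (["alley"], 10), "detour": (["main_st", "park"], 12)}
print(recommend(routes, feed, appetite=0.5))  # "detour": safer, only two minutes longer
```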
Safety cases and autonomous vehicles
Autonomous vehicles are often sold on safety rather than convenience. Philosophical arguments about agency compete with statistical claims about reduced collisions. The pattern is familiar. In the face of strong safety cases, regulation tends to follow.
What we are seeing at SIRV
Our London based team works with global brands to bring AI into risk management. Since the arrival of general purpose AI tools, project conversations have changed. What was once a red flag is now a budget line. To keep projects grounded we use a simple three level model:
- Describe – what is happening, where and when
- Predict – what might happen next
- Prescribe – what action should we take
Value and complexity increase as you move up the stack. The challenge is not only technical. It is organisational.
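As a rough illustration, the sketch below layers the three levels over a shared incident record. The data shapes and the naive frequency forecast are illustrative assumptions, not a description of our production systems.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Incident:
    kind: str           # e.g. "power_outage", "security_incident"
    location: str
    occurred_at: datetime


def describe(incidents: list[Incident]) -> dict[str, int]:
    """Level 1 - what is happening, where and when."""
    counts: dict[str, int] = {}
    for inc in incidents:
        key = f"{inc.location}:{inc.kind}"
        counts[key] = counts.get(key, 0) + 1
    return counts


def predict(counts: dict[str, int], horizon_days: int = 7) -> dict[str, float]:
    """Level 2 - a naive frequency-based view of what might happen next."""
    total = sum(counts.values()) or 1
    return {key: horizon_days * n / total for key, n in counts.items()}


def prescribe(forecast: dict[str, float], threshold: float = 1.0) -> list[str]:
    """Level 3 - what action should we take."""
    return [f"Review coverage at {key}" for key, risk in forecast.items()
            if risk >= threshold]
```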
Case study – workforce deployment and survivor bias
A client asked whether AI could improve the deployment of staff across a large urban area. The starting point was incident data, from antisocial behaviour to violent crime. Incident data is useful but it carries survivor bias. If teams are deployed in the same hot spots, reports reinforce the same pattern. To counter this we requested complementary datasets already held in the organisation. Access friction was the main blocker rather than modelling. The lesson is simple. Better data beats more model.
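One simple way to surface survivor bias is to normalise incident counts by how much each area is actually observed. In the sketch below the figures and field names are invented; the point is only that per-hour rates can invert the ranking that raw counts suggest.

```python
def debiased_rates(incident_counts: dict[str, int],
                   patrol_hours: dict[str, float]) -> dict[str, float]:
    """Incidents per patrol hour: a crude correction for the fact that
    heavily patrolled areas generate more reports simply because they
    are watched more."""
    rates: dict[str, float] = {}
    for area, count in incident_counts.items():
        hours = patrol_hours.get(area, 0.0)
        # An unpatrolled area is an unknown, not a safe zone.
        rates[area] = count / hours if hours > 0 else float("nan")
    return rates


incident_counts = {"north": 40, "south": 5}    # raw counts favour "north"
patrol_hours = {"north": 200.0, "south": 10.0}
print(debiased_rates(incident_counts, patrol_hours))
# {'north': 0.2, 'south': 0.5} - the "quiet" area is riskier per hour observed
```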
Highlights from the conference
Emotion and rationality
Excitement around AI is understandable, but risk management remains a rational discipline.
Security practice
Assumptions from the past do not always carry forward. The last two years have challenged ideas of truth and verification.
Future of work
For students and early career professionals, opportunities are growing. The need is for motivated people who understand real organisational needs.
Other perspectives shared
- Understanding emotions can improve risk decisions and resilience
- People often rely on emotion when estimating risk
- Staff will use AI tools regardless of policy. Better to control use and extend existing policies rather than write new ones from scratch. Many existing security controls still apply
- Criminal focus will shift to AI models as soon as those models become decision critical
Where this leaves AI adoption
Many are anxious about the path ahead. Our view is pragmatic. Diffusion beats invention. The next breakthrough matters less than how far current tools spread through society and operations. Crises accelerate adoption. The UK COVID app under-delivered, but a future public health crisis could normalise personal assistants that persist after the crisis passes. The question is not whether risk management can use AI. It is how much we trade for speed and certainty.
Key takeaways
- Fear moves faster than formal risk, so design controls that keep decisions disciplined
- Use describe, predict, prescribe to scope value and complexity
- Challenge survivor bias with broader internal datasets
- Plan for adoption during crises, not only in steady state
Transcript
Will risk management drive AI adoption? — Clean transcript
Event: Online lecture, University of Warsaw and Uniwersytet SWPS
Date: June 2024
Speaker: Andrew Tollinton, CEO, SIRV
Introduction
Thank you, Piotr, for inviting me, and for giving me the good grace to define my own topic. I chose “Will risk management drive AI adoption?” and almost immediately had an intellectual panic. It is a deceptively simple question that becomes hard when you try to answer it well. Like many of us, I turned to my notes, books and the wisdom of others for grounding, and I found comfort in history.
People have often been wrong about the future:
“Radio has no future.” — Lord Kelvin, President of the Royal Society
“Rail travel at high speed is not possible because passengers would be unable to breathe and would die of asphyxia.” — Dr Dionysius Lardner, UCL professor
If great minds have missed the future, I can be forgiven for fumbling it too.
Terms of reference
When I say artificial intelligence, I mean narrow, task driven systems, not artificial general intelligence.
By risk management, I mean identifying, assessing and controlling threats across five themes: strategic, financial, legal, reputational and operational. My background is in operational risk: interruptions to business such as power or water outages, supply chain issues and, at the extreme, terrorist attacks.
What really drives adoption
To judge whether risk management will drive AI, we should first ask what drives humans.
BJ Fogg, a behavioural psychologist, sets out six core motivators. We move away from fear, isolation and pain, and towards hope, social acceptance and pleasure. If you follow Machiavelli, Thomas Hobbes and Daniel Kahneman, fear often emerges as the most powerful driver.
But fear is not risk. Fear is emotional. Risk is logical. We use probabilities to estimate the likelihood of events. Despite this, fear can hijack public decision making. For example, investment in counter-terrorism in cities like London can be disproportionate to the probability of attack. Fear often wins.
A near future shaped by assistant led safety
Daniel Miessler’s essay “The predictable path of AI” offers one plausible future. Imagine individuals wearing or carrying digital assistants that communicate with the environment: cameras, databases and risk feeds. These assistants know your risk appetite and guide you in real time: “I do not like the look of this street. Turn right for a safer route. It only adds two minutes.”
Two enablers make this likely:
- Data availability at scale, often traded against privacy.
- AI that can read and write APIs, allowing systems to interoperate in real time.
Some of this is already here. Devices such as screenless assistants and pendants record audio and imagery and act on our behalf. They hint at a world where AI mediates our interaction with place. I call this the mediation of humankind.
Will people want this degree of control? I am not sure. Uncertainty and risk are part of what makes us feel alive. We invite jeopardy when we drive quickly or play contact sports. A fully controlled existence may not appeal at a deep level.
Safety, cars and the pull of regulation
Consider autonomous vehicles. Their strongest argument is safety. Philosophical claims about agency compete with statistical promises of fewer collisions. I suspect the safety lobby will win the policy debate.
What we see at SIRV
At SIRV in London, our team of data and research scientists works with global brands to apply AI to risk. Since the arrival of general purpose AI tools, conversations have changed. “AI” used to be a red flag. Today, projects that include AI often attract budget.
We use a three level model to frame capability and value:
- Describe: What is happening, where and when.
- Predict: What might happen next or how long it may last.
- Prescribe: So what? What action should we take?
Complexity and value increase as you move up the stack.
Case study: Workforce deployment and survivor bias
A client asked whether AI could improve staff deployment across a large urban area. We began with incident data, from antisocial behaviour to violent crime. Incident data is useful, but it contains survivor bias. If patrols focus on known hot spots, reports reinforce those same areas, which does not mean other areas are safe.
To counter this, we sought complementary datasets already held inside the organisation: access control, maintenance tickets, footfall, sensor data and so on. The main challenge was access, not modelling. A simple lesson follows: better data often beats more model.
Reflections on the path ahead
Many people are anxious about AI’s trajectory, with good reason. My view is pragmatic:
- If you are motivated and understand real organisational needs, you will remain essential in an AI enabled future.
- We should shift attention from innovation to diffusion. The impact of AI depends less on the next breakthrough and more on how deeply current tools embed in everyday operations.
Crises accelerate adoption. The UK’s COVID app under-delivered, but the next public health emergency could normalise personal assistants that persist after the crisis. The real question is not whether we can develop AI to manage risk. It is whether, in fear driven urgency, we will surrender too much in the process.
Conference highlights
Emotion versus rationality
Excitement about AI is understandable, but risk management is a rational discipline. (Attribution: Piotr Borkowski)
Security
Practices from the past may not hold. The last two years have shaken ideas of truth and verification. (Attribution: Urszula Jessen)
Future
Students today face significant opportunity. (Attribution: Tomasz Ludwicki)
Other points raised
- Understand your emotions and you improve risk decisions and resilience. (Attribution: John K.)
- People are not naturally good at estimating risk and often lean on emotion. (Attribution: Dr David Rubens)
- Staff will use AI regardless of policy. Control its use in the workplace by extending existing policies rather than creating entirely new ones. Around 90% of security controls still apply. (Attribution: Piotr Borkowski)
- Criminals are not yet focused on hacking AI models because models are not widely used for final decisions. This will change when they are. (Attribution: Piotr Borkowski)
Closing
In summary: fear moves faster than formal risk, data access matters more than we admit, and diffusion will determine impact. My name is Andrew Tollinton. I work for SIRV. Thank you for your time.
FAQs
Q1. What does SIRV mean by operational risk in AI projects?
Operational risk covers the events that disrupt day to day operations, such as outages, supply interruptions and security incidents. AI can help describe events faster, predict patterns and prescribe actions.
Q2. Why does survivor bias matter for incident led deployment?
If teams only patrol where incidents were previously recorded, reporting skews to the same areas. The result is over allocation to known hot spots and blind spots elsewhere.
Q3. What kinds of data improve predictions beyond incident logs?
Access control data, maintenance tickets, footfall, environmental sensors and customer reports. Often these exist inside the organisation but sit in separate systems.
Q4. How do you phase AI capability without overreach?
Use describe, predict, prescribe. Ship describe first, validate, then add prediction. Prescriptions come last and only where operational ownership is clear.
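A minimal sketch of one way to enforce that phasing, assuming classification quality is already being tracked; the thresholds are illustrative.

```python
PHASES = ["describe", "predict", "prescribe"]


def next_phase(current: str, precision: float, recall: float,
               min_precision: float = 0.8, min_recall: float = 0.7) -> str:
    """Advance one level only when the current level meets its quality bar;
    otherwise stay put and improve the data first."""
    ok = precision >= min_precision and recall >= min_recall
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)] if ok else current
```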
Q5. Will privacy concerns block assistant led safety?
Adoption depends on trust, clear consent and value. In crises, people accept more data sharing. Outside crises, controls and transparency matter.
Q6. How do you measure success in risk management AI?
Time to detection, precision and recall on incident classification, lead time on predictions, and real world outcomes such as response time or harm reduction.
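A minimal sketch of the classification and lead-time measures, with illustrative field names:

```python
from datetime import datetime, timedelta


def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision and recall for incident classification."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


def mean_lead_time(events: list[tuple[datetime, datetime]]) -> timedelta:
    """Average warning time: occurred_at minus predicted_at per event."""
    gaps = [occurred - predicted for predicted, occurred in events]
    return sum(gaps, timedelta()) / len(gaps)
```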
Q7. Where should a security team start?
Start with a narrow use case tied to an operational KPI. Audit available datasets, remove survivor bias where possible, and prototype a describe level service.
Q8. Does AI replace human judgement in risk?
No. AI augments situational awareness and consistency. Human judgement remains essential for trade offs, duty of care and accountability.