Buyer vs seller: Risk management and artificial intelligence
📆 29 February 2024
📍 Online
Introduction
At an ISRM London Chapter session on 29 February 2024, security and risk leaders shared practical lessons on applying AI in the real world. The conclusion was simple: success starts with a clear problem, the right data and a supplier who shares risk and speaks the language of the business.
Presenting on the Institute of Strategic Risk Management's Buyer vs Seller discussion:
- Douglas Gray, ISRM (Chair)
- Andrew Tollinton, SIRV (Seller)
- Mads Pærregaard, Human Risks (Seller)
- Alisa Sultmane, Docklands Light Railway (Buyer)
- Gašper Hladnik, Dominus Tech (Buyer)
Define the problem before choosing the tool
AI, automation and predictive analytics are attractive, but they are not a strategy. Buyers who start with a specific outcome make faster progress. In rail, that means tackling trespass and anti-social behaviour, reducing investigation time and improving situational awareness with live dashboards.
Use the data you already have
Organisations sit on three valuable streams:
- Internal data: incident logs, reports and reviews
- Open-source intelligence: news, social media and trusted threat feeds
- Human insights: front-line knowledge from staff and partners
The work is to connect these sources and make them usable. Better data often outperforms bigger models.
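The idea of connecting these streams can be sketched as a simple merge into one chronological feed. This is a minimal illustration only, not any panellist's implementation: the `Signal` record, the field names and the example events are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Signal:
    """One piece of risk information, whatever its origin."""
    when: datetime
    source: str    # "internal", "osint" or "human"
    location: str
    summary: str

def merge_streams(*streams):
    """Combine signals from all sources into a single chronological feed."""
    combined = [signal for stream in streams for signal in stream]
    return sorted(combined, key=lambda s: s.when)

# Hypothetical events illustrating the three streams
internal = [Signal(datetime(2024, 2, 10, 8, 5), "internal",
                   "Canning Town", "Trespass report logged")]
osint = [Signal(datetime(2024, 2, 10, 7, 50), "osint",
                "Canning Town", "Social post mentions track access")]
human = [Signal(datetime(2024, 2, 10, 8, 20), "human",
                "Canning Town", "Staff note unusual platform activity")]

timeline = merge_streams(internal, osint, human)
for s in timeline:
    print(s.when.isoformat(), s.source, s.summary)
```

Even this trivial ordering step shows the point: once the streams share a common shape, an analyst sees that the open-source signal preceded the internal report, which no single system would reveal on its own.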
Trial small, then scale
Innovation is less about being first and more about sharing risk. Buyer and seller should pick a small, high-value problem, run a contained trial, measure outcomes and scale if it works. Sellers should explain the approach in business terms, define success metrics and be transparent about limitations.
Data sharing needs governance
Think in three levels of sharing:
- Internal only for sensitive incidents and personal data
- Selected partners such as police or neighbouring venues
- Public information for timetables, disruptions and preparedness
Clear governance under GDPR and local policy is essential.
Readiness varies across organisations
Some teams are still digitising paper workflows. Others are ready for AI dashboards and predictive services. Using Technology Readiness Levels helps set scope, cost and time expectations.
What good looks like
A practical definition of success is straightforward: people spend less time assembling information, decisions improve and safety outcomes are better. When users say the tool changed how they work, adoption follows.
Key takeaways
- Start with a defined operational problem
- Trial a single use case before scaling
- Combine internal, open-source and human data
- Build a business case with clear metrics
- Choose partners who share risk and communicate plainly
- Design for people and process, not just technology
Transcript
Event: ISRM London Chapter — Buyer vs seller: risk management and artificial intelligence
Date: 29 February 2024
Format: Online panel
Chair: Douglas Gray (ISRM)
Panellists: Andrew Tollinton (SIRV), Mads Pærregaard (Human Risks), Alisa Sultmane (Docklands Light Railway), Gašper Hladnik (Dominus Tech)
Douglas Gray (Chair):
Welcome everyone. We have a global audience today to discuss AI and risk management. Let’s start with brief introductions.
Andrew Tollinton (SIRV):
I am co-founder of SIRV. Our focus is applying data and AI to operational risk problems.
Mads Pærregaard (Human Risks):
I am the founder of Human Risks in Denmark. We help organisations manage risk with structured data and AI.
Alisa Sultmane (Docklands Light Railway):
I lead security and emergency planning at DLR. We are exploring AI to improve safety and efficiency across rail operations.
Gašper Hladnik (Dominus Tech):
Dominus Tech works with buyers and vendors to put practical risk technology into the field.
Start with the problem, not the technology
Alisa:
Before talking to sellers, we define the problem. In rail we face trespass, train surfing and anti-social behaviour. Putting a police officer at every station is not realistic, so we look at where AI can help: predicting issues, saving time on investigations and improving situational awareness. One investigation recently took me 80 hours. With the right tools, a first pass could take minutes and produce a draft report to review. We also use dashboards to anticipate protests and disruptions so we can prepare.
Andrew:
We see three broad data sources: internal reports and logs, open-source intelligence, and human insights. The challenge is bringing them together so they are usable. We do not start with AI as the answer. We start with the job to be done, then add AI where it is genuinely needed, making the risk and limitations clear.
Mads:
Across clients we work on three patterns: recognition of trends in large datasets, faster content generation for assessments and reports, and risk analysis. These overlap in practice. The UK is a good example of structured open data that makes AI more effective.
Innovation needs shared risk and clear language
Alisa:
The buyer and seller should share risk. We pick a small, specific problem, run a limited trial, and decide whether to scale. Sellers must speak the language of the business: what the solution does, how it will be measured and what success looks like.
Andrew:
Organisational maturity varies. Some great companies still use pen and paper for risk. In regulated transport, maturity is higher and experimentation is possible. We use Technology Readiness Levels to frame expectations: scope, time, cost and likely return.
Data sharing: internal, partner and public
Alisa:
We think in three levels:
- Internal only: sensitive incidents and personal data that cannot be shared.
- Selected partners: for example, British Transport Police or neighbouring venues like ExCeL and London Stadium.
- Public: timetables, disruptions and certain preparedness information such as weather or flood alerts.
Under GDPR and internal policies we decide what can be shared and how.
Andrew:
Where external sharing is useful, anonymisation and controlled hand-offs help. The right approach depends on the organisation’s security posture and governance.
Business case and adoption
Alisa:
A clear business case unlocks trials. Define the problem, options, expected outcomes and costs. Start small to keep risk and spend manageable. Keep the business updated with concise progress reports. If it works, scale. If it does not, close it out with a clear rationale.
Andrew:
For us, success is impact: when a client says the product changed the way they work and improved safety. That matters more than vanity metrics.
Frequently asked questions
Q1. Where should we start with AI in risk management?
Begin with one operational problem and a measurable outcome, for example cutting investigation cycle time by 50%.
Q2. What data do we need for a first trial?
Use existing incident logs, relevant open-source feeds and input from front-line teams. Focus on quality and access, not volume.
Q3. How do we measure success?
Pick 2–3 metrics tied to the problem, such as time to first insight, accuracy of alerts or reduced disruption minutes.
Q4. How do we manage privacy and GDPR?
Keep sensitive data internal, share with named partners under agreements and limit public data to non-sensitive information. Use anonymisation where appropriate.
Q5. Do we need advanced models from day one?
No. Start with data integration and simple analytics. Add AI where it clearly improves accuracy or speed.
Q6. How long should a pilot run?
Long enough to gather a representative sample of events, typically 8–12 weeks, with fortnightly check-ins.
Q7. What skills are required in the buyer’s team?
An operational owner, a data or IT liaison and a business sponsor who can evaluate outcomes against the case.
Q8. How does this relate to SIRV’s products?
SIRV supports describe, predict and prescribe workflows. Cal can help teams surface relevant signals and automate reporting while keeping humans in control.