AI in security: From awareness to practical use

Thoughts from Andrew Tollinton, SIRV CEO, after speaking on and moderating an AI panel at the Counter Terrorism and Risk Management Conference at AO Arena

For all the talk about AI, one impression stood out clearly after I recently spoke on and moderated an AI panel at the Counter Terrorism and Risk Management Conference at AO Arena.

Andrew Tollinton speaking on AI adoption at the Counter Terrorism and Risk Management Conference at AO Arena

Many people in the room did not seem opposed to AI. But many still seemed distant from it, and that matters. It suggests the next challenge is not persuading security professionals that AI is significant. It is helping them understand how it can be used safely, practically, and usefully in their world.

Interest is ahead of confidence

The reaction in the room suggested something I suspect is true more broadly across protective security, resilience, and counter-terrorism: Interest in AI is ahead of operational confidence.

People can see that the operating environment is becoming more complex. Threats are moving faster, adapting more quickly, and generating more digital signals. At the same time, the systems used to manage those threats, from CCTV and access control to alarms, building systems, and communications platforms, are producing more information than teams can process unaided.

So the need is there. But the starting point still feels unclear. Many practitioners are, quite reasonably, asking:

  • What would I actually use AI for in my role?
  • What is safe to start with?
  • Where should human approval remain mandatory?
  • How do we avoid introducing new risk while trying to reduce existing burden?

These are good questions. In fact, they are the right questions.

The gap is now translation

For a while, the conversation around AI in security has pulled in two directions at once.

On one side, there is a strong sense that AI will matter. On the other, there is understandable concern about hallucinations, inconsistency, weak controls, poor memory, model drift, and over-trusting a fluent system in a high-consequence environment.

Both instincts are valid. What struck me after the conference was that the more important gap now may be one of translation. Security professionals do not need more abstract claims that AI is transformative. They need to see where it fits, where it does not, and what a safe first step looks like.

The first value is support, not autonomy

My view is that organisations should not begin with open-ended AI use in security. They should begin with bounded support for familiar, real workflows. That includes:

  • Retrieving the right approved procedure quickly during an incident
  • Triaging fast-moving reports and updates
  • Drafting initial summaries or briefings for human review
  • Reviewing plans and procedures for gaps or inconsistencies
  • Retaining lessons learned in a form that can be reused later

These are practical problems security teams already have. AI can help with them, but only if it is used inside clear boundaries. That is one reason why the phrase “AI in security” can be unhelpful. It can sound abstract, futuristic, or over-ambitious. In practice, what most teams need first is something more grounded: Help with retrieval, triage, synthesis, review, and memory.

Why caution is still right

None of this removes the underlying risk. In high-consequence settings, AI can still:

  • Invent details that are not in the source material
  • Accept plausible but false inputs
  • Behave inconsistently
  • Lose important context
  • Change in performance when models update
  • Fail at the wrong moment if fallback is poor

So the lesson is not “use AI everywhere”. It is “use AI where it can be bounded, evidenced, and reviewed”. That means having:

  • Approved sources
  • Clear workflow limits
  • Human approval at the right points
  • Durable memory
  • Traceability and audit trail
  • Safe fallback when the system is weak or unavailable

Without those, the discussion stays theoretical.
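
To make that less theoretical, here is a minimal sketch, in Python, of what those boundaries can look like as control flow: approved sources only, visible uncertainty, mandatory human approval, and a safe fallback. Everything in it (the source set, the confidence floor, the reviewer) is an invented stand-in for illustration, not a description of any particular product.

    # Illustrative control flow for a bounded AI workflow. All names and
    # values here are invented stand-ins, not a real product API.
    from dataclasses import dataclass

    APPROVED_SOURCES = {
        "evac-plan-2025": "on alarm activation muster at point b then await instruction",
    }
    CONFIDENCE_FLOOR = 0.8  # below this, escalate rather than answer

    @dataclass
    class DraftAnswer:
        text: str
        sources: list        # evidence linkage back to approved documents
        confidence: float    # uncertainty made visible, never hidden
        approved_by: str | None = None

    def retrieve(query: str) -> list:
        # Only ever searches the approved source set, nothing else.
        return [doc for doc, text in APPROVED_SOURCES.items()
                if any(word in text for word in query.lower().split())]

    def answer(query: str, model_confidence: float):
        sources = retrieve(query)
        if not sources:
            return "FALLBACK: no approved source found, use manual procedure"
        if model_confidence < CONFIDENCE_FLOOR:
            return "ESCALATE: low confidence, refer to duty manager"
        draft = DraftAnswer(text=f"Per {sources[0]}: ...", sources=sources,
                            confidence=model_confidence)
        draft.approved_by = "duty_manager"  # human sign-off is mandatory before release
        return draft

    print(answer("where is the muster point", model_confidence=0.9))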

What a sensible starting point looks like

For security and resilience teams, the best early use cases are usually the least glamorous ones. Not fully autonomous decisions. Not black-box recommendations. Not open-ended chat with no controls. Instead, the first gains are often found in:

Procedure access under pressure
Helping teams get to the right approved response quickly when an incident is unfolding.

Report triage
Sorting fast-moving incoming information so teams can focus first on what is urgent, credible, and operationally important.

Plan and procedure review
Checking whether documents are clear, aligned, and usable in real conditions, rather than simply existing on paper.

Lessons learned and memory
Making sure previous incidents, debriefs, and decisions are available when similar issues arise again.

This is where AI starts to become operationally useful.

Why the surrounding design matters

A model on its own is not enough. In serious operational settings, the surrounding design matters just as much as the model itself. That means thinking about:

  • Repeatability
  • Evidence
  • Workflow
  • Assurance
  • Memory
  • Failure handling

In other words, the question is not only whether the AI is capable. It is whether it is being used inside a controlled system that helps people work better without asking them to trust it blindly.

The real opportunity

The strongest takeaway for me from AO Arena was this: The gap is no longer mainly awareness. It is practical confidence. The opportunity now is to help organisations move from curiosity to safe adoption. That means showing security professionals not just why AI matters, but where it fits into the work they already do, what good looks like, and how to start without losing control.

If the industry does that well, confidence will follow. If it does not, AI will remain something people accept in principle but keep at arm’s length operationally.

That would be a mistake. The environment is not standing still. As threats, systems, and information flows become more complex, the ability to combine human judgement with safe machine support will matter more, not less.

The organisations that do best will not simply be the ones that adopt AI fastest. They will be the ones that adopt it in a way that is controlled, practical, and genuinely useful.

SIRV AI is built around that principle. Not AI for its own sake, but operational AI designed to help security teams act with more speed, clarity, and control.

Frequently asked questions

1) What are safe first use cases for AI in security?

Start with bounded support (see question 3 below) for familiar workflows where the output can be checked and where you can define clear rules. In practice, that usually means:

  • Procedure access under pressure: helping a team find the right approved procedure quickly during an incident.
  • Report triage: sorting fast-moving updates so teams focus on what is urgent and credible.
  • Briefing drafts for human review: producing an initial summary that a duty manager can edit and approve.
  • Plan and procedure review: highlighting gaps, clashes, or unclear wording before they cause problems.
  • Lessons learned: making previous incidents and decisions searchable so teams do not repeat mistakes.

These are valuable because they reduce load and improve consistency without asking teams to outsource judgement.
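
As one concrete illustration of the triage item above, here is a minimal rule-based sketch in Python. The keywords, weights, and credibility bonus are invented for the example; real rules would come from your own risk assessments and SOPs, and a deployed system would be considerably richer.

    # Illustrative rule-based triage: score incoming reports so the most
    # urgent and credible surface first. Terms and weights are invented.
    URGENCY_TERMS = {"suspect package": 10, "unattended item": 8,
                     "aggressive behaviour": 6, "lost property": 1}

    def triage_score(report: dict) -> int:
        text = report["text"].lower()
        score = sum(w for term, w in URGENCY_TERMS.items() if term in text)
        if report.get("source_verified"):  # credibility matters as much as urgency
            score += 3
        return score

    reports = [
        {"text": "Lost property handed in at gate 4", "source_verified": True},
        {"text": "Unattended item reported near concourse", "source_verified": True},
    ]
    for r in sorted(reports, key=triage_score, reverse=True):
        print(triage_score(r), r["text"])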

2) Why do many security teams still feel distant from practical AI use?

Because interest is ahead of operational confidence. People can see AI matters, but they often do not see:

  • What it is safe to use first
  • What needs human approval
  • How outputs remain traceable and defensible
  • How failures are handled when the system is uncertain or unavailable

Until those questions have credible answers, AI stays interesting but not operational.

3) What does “bounded use” mean in practice?

It means defining the workflow and its limits:

  • The sources the system is allowed to use
  • What the output format must be
  • What uncertainty looks like and how it is flagged
  • When the system should refuse or escalate
  • Where human approval is mandatory
  • What gets stored as an audit record

Bounded use reduces surprises. It makes performance more consistent, and it makes outcomes easier to trust.
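
One way to make those limits explicit is to write them down as configuration rather than leave them implicit in people's heads. A minimal sketch, with invented field names rather than any standard schema:

    # "Bounded use" expressed as configuration. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class WorkflowBounds:
        allowed_sources: list           # the only documents the system may draw on
        output_format: str              # what the output must look like
        refuse_below_confidence: float  # when to flag uncertainty and escalate
        human_approval_required: bool   # where sign-off is mandatory
        audit_every_output: bool        # what gets stored as a record

    procedure_lookup = WorkflowBounds(
        allowed_sources=["site-sops-v12", "evac-plan-2025"],
        output_format="verbatim procedure extract with document ID",
        refuse_below_confidence=0.8,
        human_approval_required=True,
        audit_every_output=True,
    )
    print(procedure_lookup)

Writing the bounds down like this also gives reviewers and auditors something concrete to challenge, which is harder when the limits live only in a prompt or in someone's head.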

4) What should remain human-led in protective security and counter-terrorism contexts?

Anything that involves high-consequence decisions or sensitive judgement should remain human-led. For example:

  • Declaring a threat level
  • Deciding whether to evacuate or lock down
  • Regulatory notifications
  • Communications that create legal, reputational, or safety exposure
  • Decisions based on incomplete or contested information

AI can support those decisions by surfacing relevant procedures, summarising inputs, and logging what was known, but it should not be treated as the decision-maker.

5) How do you prevent over-reliance on AI in high-consequence settings?

You design for it explicitly:

  • Require evidence for important claims
  • Make uncertainty visible
  • Keep clear approval points
  • Record what sources and versions were used
  • Provide safe fallback when the system is weak or unavailable
  • Train teams on when not to use the tool

Most over-reliance problems come from unclear boundaries and poor operating discipline, not from the model alone.

6) What does “assurance” mean for AI outputs?

Assurance means you can justify using the output. Typically that requires:

  • Evidence linkage to approved sources
  • Repeatability (similar questions produce consistent results)
  • Audit trail (what was asked, what was used, what was produced, who approved)
  • Controls appropriate to the stakes (stricter checks for higher-impact questions)
  • Failure handling that is safe, not silent

In security, assurance is what separates “useful” from “usable”.
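
As an illustration, an audit record answering those questions can be very small. The field names and values below are invented for the example:

    # An illustrative audit record: what was asked, what was used,
    # what was produced, who approved. In practice this would feed an
    # append-only log rather than print to screen.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AuditRecord:
        question: str
        sources_used: list     # evidence linkage to approved documents
        source_versions: dict  # higher stakes demand stricter provenance
        output: str
        approved_by: str
        timestamp: str

    record = AuditRecord(
        question="What is the lockdown procedure for the east stand?",
        sources_used=["lockdown-sop"],
        source_versions={"lockdown-sop": "v4"},
        output="Per lockdown-sop v4: secure doors E1-E6, then notify control.",
        approved_by="duty_manager_on_shift",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(asdict(record), indent=2))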

7) What are “gaps and clashes” in security procedures?

“Gaps” are missing coverage, such as no approved guidance for a scenario, site, or role.
“Clashes” are conflicts, such as two documents giving different instructions, or a newer procedure contradicting an older one that is still in circulation.

Both create risk under pressure, because operators either improvise or follow the wrong version.

8) How should memory work if AI is used in security operations?

Memory should be treated as part of operational discipline, not a convenience feature. The system should store:

  • Key decisions, actions, and outcomes
  • The evidence relied on
  • Who approved what
  • What uncertainty existed at the time

The goal is to make “lessons learned” usable later, and to reduce key-person dependency.
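
A minimal sketch of such a memory record, with invented fields and a deliberately naive search, might look like this:

    # Illustrative durable memory: incidents stored with their evidence,
    # approvals and known unknowns, then searchable later.
    from dataclasses import dataclass

    @dataclass
    class LessonRecord:
        incident: str
        decisions: list        # key decisions, actions, and outcomes
        evidence: list         # what the decision relied on
        approved_by: str
        open_uncertainty: str  # what was NOT known at the time

    MEMORY = [
        LessonRecord(
            incident="Partial evacuation, north gate, 2024-11",
            decisions=["Held crowd at muster point B for 12 minutes"],
            evidence=["CCTV clip 114", "radio log 22:04-22:18"],
            approved_by="event_safety_officer",
            open_uncertainty="Alarm trigger source unconfirmed at the time",
        ),
    ]

    def recall(term: str) -> list:
        # Naive keyword search; a real system would index and rank.
        return [r for r in MEMORY if term.lower() in r.incident.lower()]

    for r in recall("north gate"):
        print(r.incident, "->", r.decisions)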

9) How do you handle model updates or drift over time?

Assume behaviour will change and design around it:

  • Test high-stakes workflows regularly
  • Monitor for performance changes and retrieval failures
  • Keep the system anchored to approved sources
  • Use assurance checks and human approval points for high-impact outputs
  • Maintain versioning for both documents and prompts/workflows

Operational reliability comes from the full system, not just the model.
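
One practical expression of "test high-stakes workflows regularly" is a set of golden questions, each tied to the approved source its answer must cite. A minimal sketch, where ask_system is a hypothetical stand-in for the deployed workflow:

    # A recurring regression check: golden questions paired with the
    # approved source each answer must cite. Run on a schedule and after
    # every model or document update.
    GOLDEN_QUESTIONS = [
        ("Where is the east stand muster point?", "evac-plan-2025"),
        ("Who authorises a full lockdown?", "lockdown-sop-v4"),
    ]

    def ask_system(question: str) -> dict:
        # Stand-in: always cites the evacuation plan, so the second golden
        # question below will fail, showing what a drift catch looks like.
        return {"answer": "...", "cited_sources": ["evac-plan-2025"]}

    def run_drift_check() -> list:
        failures = []
        for question, required_source in GOLDEN_QUESTIONS:
            cited = ask_system(question)["cited_sources"]
            if required_source not in cited:
                failures.append(f"Drift: {question!r} no longer cites {required_source}")
        return failures

    print(run_drift_check() or "All golden questions still anchored to approved sources")

The value of a check like this is less the code than the habit: it turns "the model changed" from a surprise into a scheduled finding.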

10) What is a sensible way to start if we are cautious?

Pick one workflow where value is obvious and boundaries are clear, for example procedure retrieval during incidents or report triage. Run a short pilot or sprint with:

  • Defined scope and approved sources
  • Clear success measures (time saved, consistency, fewer missed issues)
  • Mandatory approval for higher-stakes outputs
  • An audit trail for any output used operationally

Then expand only once the workflow is stable and trusted.

Author bio: Andrew Tollinton

Andrew Tollinton, Co-Founder of SIRV and author

Andrew Tollinton is CEO and Co-Founder of SIRV, which builds operational AI for safety, security and resilience teams. He focuses on practical, controlled AI use in serious environments, with particular interest in evidence, accountability and human judgement. Andrew chairs the Institute of Strategic Risk Management’s AI in Risk Management Special Interest Group and speaks regularly on AI governance and operational resilience.

More compliance, risk and resilience case studies:

Improving operations at DLR with SIRV
Enhancing workplace safety and journey confidence at UCB by SIRV Case Study
Reducing compliance risk at AO

"SIRV helped us move beyond basic reporting into a system that actively supports decision-making". Les O'Gorman, Director of Facilities, UCB - Pharma and Life Sciences
