How AI helps security practitioners

Association of Security Consultants

📍Online

📆 28 January 2025

Overview

AI is here, but it is not magic. In this ASC webinar, Andrew Tollinton (CEO, SIRV) explained how AI supports security teams today and why governance and verification are essential. The core message: AI is only as good as the data it receives, and human oversight remains non-negotiable.

“Always verify AI outputs. Treat it like an intelligent autocorrect: it predicts, it does not know.”

Contents

  • How AI works: a practical primer

  • Can we predict incidents reliably?

  • Boosting intelligence analysis with AI

  • Social media signals: a shrinking window

  • Visualising AI insights clearly

  • Ethical risks and safe adoption

  • Key takeaways

How AI works: a practical primer

Neural networks learn from data to recognise patterns and make predictions. Unlike fixed software that returns the same output for the same input, models evolve: that makes them powerful, but less predictable.
Implication for teams: treat outputs as drafts to verify, especially for high-stakes decisions.
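
As a toy illustration (not any particular product or model), the snippet below contrasts a fixed lookup with a sampled, model-like prediction: the same input can produce different outputs, which is why verification matters.

```python
import random

def fixed_lookup(incident_type: str) -> str:
    """Traditional software: the same input always gives the same output."""
    playbook = {"intruder": "Dispatch guard", "fire": "Evacuate and call 999"}
    return playbook.get(incident_type, "Escalate to supervisor")

def model_like_prediction(incident_type: str) -> str:
    """Stand-in for a learned model: it samples a plausible answer,
    so repeated calls with the same input can differ."""
    candidates = ["Dispatch guard", "Monitor via CCTV", "Escalate to supervisor"]
    weights = [0.6, 0.3, 0.1]  # invented probabilities, for illustration only
    return random.choices(candidates, weights=weights, k=1)[0]

print(fixed_lookup("intruder"))           # always "Dispatch guard"
print(model_like_prediction("intruder"))  # usually, but not always, the same answer
```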

Can we predict incidents reliably?

Predictive analytics is attractive, but sparse data undermines accuracy. In a case with 7,000 antisocial behaviour reports spread over years and 50 sites, signals were too thin to be truly predictive. What helps:

  • Adopt positive reporting (logging when nothing happens) to expand the dataset; a sketch follows this list.

  • Start with trends and narrow scopes before attempting full AI prediction.

  • Standardise reporting across locations and time windows.
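
A minimal sketch of the positive-reporting point above, using pandas and hypothetical sites, dates and counts: recording zero-incident days turns a sparse log into a complete site-by-day series that a model can actually learn from.

```python
import pandas as pd

# Hypothetical raw log: only days on which incidents were reported appear.
incidents = pd.DataFrame({
    "site": ["Site A", "Site A", "Site B"],
    "date": pd.to_datetime(["2024-01-03", "2024-01-10", "2024-01-05"]),
    "count": [2, 1, 3],
})

# Positive reporting: build the full site x day grid and fill the quiet days with 0.
sites = ["Site A", "Site B", "Site C"]
days = pd.date_range("2024-01-01", "2024-01-14", freq="D")
grid = pd.MultiIndex.from_product([sites, days], names=["site", "date"]).to_frame(index=False)

balanced = grid.merge(incidents, on=["site", "date"], how="left").fillna({"count": 0})
print(balanced.groupby("site")["count"].sum())  # every site now has a complete series
```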

Boosting intelligence analysis with AI

Large language models can parse free-text incident reports and extract who, what, where, when, why and how, producing daily summaries. This saves analysts hours and makes historic analysis easier.
Guardrails: keep human-in-the-loop, label AI-generated syntheses, and store structured outputs for trend analysis in Internal reports.
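
A minimal sketch of the extraction step, assuming an OpenAI-compatible chat API and a made-up report string; the model is asked for who/what/where/when/why/how as JSON, and the result is reviewed by an analyst before anything is actioned.

```python
import json
from openai import OpenAI  # assumes an OpenAI-compatible endpoint and API key

client = OpenAI()

report = "22:40, loading bay 3: two males attempted to force the shutter, left on seeing patrol."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Extract who, what, where, when, why and how from the incident report. "
                    "Reply with JSON only; use null for anything not stated."},
        {"role": "user", "content": report},
    ],
)

extracted = json.loads(response.choices[0].message.content)
print(extracted)  # reviewed by an analyst before any action is taken
```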

Social media signals: a shrinking window

Open social data has become fragmented, gated and costly. Rather than chasing volume, follow fewer, verified sources and monitor sentiment for your official channels.
Tip: use governed workflows and Cal AI Agent to route trusted updates to the right teams.
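
A product-agnostic sketch of the governed routing idea (this is not the Cal AI Agent interface, which is not described here); the allow-list, account names and team names are invented for illustration.

```python
from typing import Optional

# Hypothetical allow-list of verified accounts, mapped to the team that owns them.
VERIFIED_SOURCES = {"@MetPoliceUK": "control_room", "@TfL": "facilities"}

def route_update(author: str, text: str) -> Optional[dict]:
    """Drop posts from unverified accounts; route the rest to the owning team."""
    team = VERIFIED_SOURCES.get(author)
    if team is None:
        return None  # unverified source: ignored rather than chased for volume
    return {"team": team, "author": author, "text": text}

posts = [
    {"author": "@randomaccount", "text": "Huge disturbance on the high street!!"},
    {"author": "@MetPoliceUK", "text": "Road closures around the stadium from 18:00."},
]

for post in posts:
    update = route_update(post["author"], post["text"])
    if update:
        print(f"-> {update['team']}: {update['text']}")
```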

Visualising AI insights clearly

LLMs can draft on-the-fly charts and maps, but the output changes with prompts and versions. For management reporting and audits, use standardised dashboards with stable formats and Maps & visualisations for geospatial context.

Rule: ad-hoc visualisations for exploration; standardised visuals for communication.
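
A minimal sketch of the "standardised visuals for communication" rule, using matplotlib and made-up monthly figures: one fixed chart function so every report carries the same layout, labels and ordering regardless of who generates it.

```python
import matplotlib.pyplot as plt

def monthly_incident_chart(counts: dict, site: str, period: str, path: str) -> None:
    """Fixed-format bar chart so reports look the same month after month."""
    months = list(counts.keys())
    fig, ax = plt.subplots(figsize=(8, 4))
    ax.bar(months, [counts[m] for m in months], color="#1f77b4")
    ax.set_title(f"Incidents: {site}, {period}")
    ax.set_xlabel("Month")
    ax.set_ylabel("Reported incidents")
    fig.tight_layout()
    fig.savefig(path)
    plt.close(fig)

# Hypothetical figures, for illustration only.
monthly_incident_chart({"Oct": 12, "Nov": 9, "Dec": 15}, "Site A", "Q4 2024", "site_a_q4.png")
```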

Ethical risks and safe adoption

  • Deepfakes and impersonation raise the bar for verification.

  • Data security and privacy: prefer governed, closed deployments; consider open-source models hosted privately (a sketch follows this list).

  • Skills and prompts: training improves quality and reduces risk.

  • Practical steps: start small, standardise reporting (including positive data), and run a contained pilot with clear success criteria.
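
A minimal sketch of the privately hosted open-source option flagged in the list above, assuming the Hugging Face transformers library and an illustrative open-weight model name; inference runs on infrastructure you control, so report text never leaves the organisation.

```python
from transformers import pipeline  # assumes transformers installed and model weights cached locally

# Illustrative open-weight model; substitute whatever your organisation has approved.
summariser = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")

report = "03:15 patrol found fire door to plant room wedged open; secured and logged."
prompt = f"Summarise this incident report in one sentence for the daily briefing:\n{report}\n"

result = summariser(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])  # stays on your own hardware; still reviewed by a human
```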

Key takeaways

  • AI helps with incident analysis, triage and visualisation today.

  • Verification and governance are critical for trust.

  • Positive reporting improves model performance.

  • Use verified sources over volume for public signals.

  • AI should augment, not replace, practitioners.

Call to action

See how SIRV supports governable AI for security and risk.

Transcript: How AI helps security practitioners

Association of Security Consultants Webinar – 28 January 2025


Rick Mounfield (ASC Director):
Welcome everyone, and good afternoon if you’re joining from the UK. We have over 200 people registered for this first ASC webinar on AI in security. I’m joined by Andrew Tollinton and Calum Doran from SIRV, who will take us through how AI can support security practitioners. Please save your questions for the Q&A.

Andrew Tollinton (CEO, SIRV):
Thanks Rick, and hello everyone. My brother and I founded SIRV back in 2010, originally working with mobile apps for incident management. Over the years we’ve focused on data, decision trees and now AI. Our aim is simple: to help organisations anticipate, manage, and recover from incidents and disruption. Today I’ll show real-world use cases of AI in security – what it can do now, and where it still needs human oversight.

Calum Doran (Head of AI, SIRV):
Hi all, I studied maths and statistics before completing an MSc in Data Science. At SIRV I lead AI projects, deploying models to analyse large datasets from security operations. My role is to ensure the technology is applied effectively and securely.


How AI works

Andrew:
Traditional software always produces the same output for the same input. AI, particularly neural networks, is different: it learns and evolves. That makes it powerful, but also unpredictable. Always verify AI outputs – think of it as a very smart autocorrect that predicts rather than knows.


Predicting incidents

Andrew:
We tested predictive analytics on 7,000 antisocial behaviour reports. Spread across 50 sites and several years, the data was too sparse to be predictive. The lesson: positive reporting (logging when nothing happens) is as important as reporting incidents. It gives AI more balanced data and makes predictions more accurate.


Supporting intelligence analysts

Andrew:
AI can digest free-text incident reports, extract who/what/where/when/why, and produce clear summaries. This saves analysts hours, especially when processing thousands of daily entries. But AI summaries must always be reviewed by humans before action.

Calum:
Exactly – these tools are great at triage, but final judgement needs a trained analyst.


Social media monitoring

Andrew:
In 2018, researchers could predict protests with 75% accuracy using open Twitter data. Today, access is restricted and expensive ($5,000 per million tweets), and users are moving to closed networks like Telegram. Our advice: follow fewer, verified sources rather than relying on mass scraping.


Communicating AI insights

Andrew:
LLMs can produce charts and dashboards on the fly. That’s useful for quick analysis, but visualisations may change depending on prompts or model updates. For formal reporting, standardise outputs with platforms like Maps & visualisations. Consistency builds trust.


Ethical and security considerations

Calum:
AI is a tool. It doesn’t harm people, but bad actors can misuse it for deepfakes, propaganda, or cybercrime. The safest route is to use open-source AI models hosted in closed environments: you keep control of the data without sending sensitive information outside your organisation.


Closing thoughts

Andrew:
Start small, iterate, and don’t oversell AI. Change your reporting practices, add positive data, and try running a contained AI project. You’ll learn faster and see where AI truly adds value.

“Trust AI less than a person — but don’t ignore it.”

Rick Mounfield:
Thank you Andrew and Calum. That was excellent. We’ve had over 80 participants from across the world today. This conversation will continue, and members can access the recording afterwards. Thanks all for joining.
