AI reshapes the security industry – lessons from the SJUK Leaders in Security conference

Association of Security Consultants

📍Manchester, UK

📆 25 June 2025

Introduction

Artificial intelligence (AI) is no longer a future concept. It is already transforming how security teams assess risk, manage incidents and make decisions. At the SJUK Leaders in Security conference in Manchester (25 June 2025), experts from across the industry (including SIRV Co-Founder Andrew Tollinton) discussed how AI is moving from hype to operational reality, and what leaders must do to ensure trust, compliance and resilience.

Contents

  • From hype to reality: AI in action

  • Understanding AI in a security context

  • Building trust with governance and oversight

  • Preparing the workforce for AI adoption

  • Learning from failures and successes

  • Final thoughts: augment, don’t replace

  • Key takeaways

From hype to reality: AI in action

The panel emphasised that AI has already moved beyond theory. It is being used to: 

  • Analyse crowd behaviour in public spaces

  • Interpret security footage with vision systems

  • Enhance incident reporting workflows

Rosie Richardson (Createc) described how AI models detect behavioural patterns to improve safety in large venues. Katie Baccouche (Bays Consulting) highlighted how predictive modelling helps forecast risks and support proactive security planning.

Understanding AI in a security context

Panel members defined AI in this domain as software that improves over time using large datasets, with applications ranging from anomaly detection to predictive analytics.
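Of those applications, anomaly detection is the easiest to picture: flag readings that sit far outside the historical norm. A minimal Python sketch follows; the data, threshold and scenario are illustrative assumptions, not a production detector.

```python
# Minimal anomaly detection sketch: flag readings far from the historical mean.
# The data and threshold are illustrative assumptions only.

from statistics import mean, stdev

hourly_door_events = [41, 38, 45, 40, 43, 39, 44, 42, 37, 40]  # baseline
new_reading = 95

mu, sigma = mean(hourly_door_events), stdev(hourly_door_events)
z_score = (new_reading - mu) / sigma

if abs(z_score) > 3:  # a common rule of thumb; tune for the environment
    print(f"Anomaly: {new_reading} events (z = {z_score:.1f})")
else:
    print(f"Normal: {new_reading} events (z = {z_score:.1f})")
```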

Andrew Tollinton (SIRV) introduced the idea of AI agents: systems that do not simply support users but can autonomously select tools and execute tasks. This makes governance and oversight essential as AI becomes embedded in workflows.
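To make that distinction concrete, the sketch below shows the core of an agentic loop: the system, not the user, picks which tool to run and executes it. The tool names and the route() policy are hypothetical, standing in for the model-driven choices a real agent would make.

```python
# Minimal sketch of an agentic loop: the system chooses a tool and executes it.
# Tool names and the route() policy are invented for illustration only.

from typing import Callable, Dict

def check_cctv_feed(query: str) -> str:
    return f"CCTV: no anomalies matching '{query}'"

def search_incident_log(query: str) -> str:
    return f"Incident log: 2 past reports matching '{query}'"

TOOLS: Dict[str, Callable[[str], str]] = {
    "cctv": check_cctv_feed,
    "incident_log": search_incident_log,
}

def route(task: str) -> str:
    # A real agent would use a model here; a keyword rule stands in for it.
    return "cctv" if "camera" in task or "footage" in task else "incident_log"

def run_agent(task: str) -> str:
    tool_name = route(task)          # the agent, not the user, picks the tool
    return TOOLS[tool_name](task)    # ...and executes it autonomously

print(run_agent("review footage near gate 4"))
print(run_agent("unattended bag reports this month"))
```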

Building trust with governance and oversight

Trust was a consistent theme.

  • Andrew explained the value of grounded AI: training systems only on pre-authorised, verifiable sources to reduce hallucinations.

  • Rosie stressed that human verification remains critical, especially in intrusion detection or high-stakes scenarios.

  • Katie reminded leaders that defining AI’s limits is not just a compliance exercise, but a way to manage risk responsibly.

Audit trails and explainability were highlighted as non-negotiable for regulators and stakeholders.
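As a rough illustration of how grounding and auditability fit together, the sketch below answers only from a whitelist of pre-authorised documents and logs every lookup for later review. The source list, matching rule and log format are assumptions for illustration, not any speaker's implementation.

```python
# Sketch of "grounded" retrieval: answer only from pre-authorised sources,
# and keep an audit trail of every lookup. Names and format are illustrative.

from datetime import datetime, timezone

APPROVED_SOURCES = {
    "sop-evacuation": "Evacuate via stairwells A and B; assemble at point C.",
    "sop-intrusion": "On intrusion alert, dispatch a guard and verify on CCTV.",
}

audit_log = []  # in production: persistent, append-only storage

def grounded_answer(question: str) -> str:
    # Return text only if it comes from an approved source; otherwise refuse.
    for source_id, text in APPROVED_SOURCES.items():
        if any(word in text.lower() for word in question.lower().split()):
            audit_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "question": question,
                "source": source_id,  # explainability: which document answered
            })
            return f"[{source_id}] {text}"
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "source": None,  # no approved source: refuse rather than guess
    })
    return "No pre-authorised source covers this; escalate to a human."

print(grounded_answer("What is the intrusion procedure?"))
print(len(audit_log), "audit entries recorded")
```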

Preparing the workforce for AI adoption

Education emerged as the most urgent task for leaders.

  • Empower staff with approved tools before they adopt AI unsafely on their own.

  • Encourage experimentation at leadership level to build understanding.

  • Train teams to know both capabilities and limitations of AI.

Dave Marsh (Strixx) warned that failing to lead AI adoption means organisations will soon be reacting to change rather than directing it. Andrew encouraged attendees to try low-code tools such as Lovable to understand AI’s potential hands-on.

Learning from failures and successes

The panel balanced optimism with caution.

  • Horror stories: hallucinations, data misuse and poor governance have led to reputational and financial damage.

  • Success stories: AI in medical imaging now saves lives by analysing scans faster and more accurately than humans.

The lesson: AI is powerful but must be deployed with verification, transparency and clear limits.
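One simple way to operationalise that lesson is to gate AI findings behind a confidence threshold, so anything uncertain is routed to a person before action is taken. A minimal sketch, with the threshold and messages as assumptions:

```python
# Sketch of human-in-the-loop gating: act on high-confidence AI findings,
# route everything else to an operator. The threshold is an assumption.

CONFIDENCE_THRESHOLD = 0.90

def triage(finding: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO: log '{finding}' (confidence {confidence:.2f})"
    return f"REVIEW: send '{finding}' to an operator (confidence {confidence:.2f})"

print(triage("crowd density above limit, zone 2", 0.97))
print(triage("possible intrusion, north fence", 0.62))
```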

Final thoughts: augment, don’t replace

Speakers agreed that AI should support human teams, not replace them. It is a force multiplier that strengthens decision-making when integrated responsibly.

“Think big, but start small.” Begin with pilot projects, define governance, and expand as confidence builds.

Key takeaways

  • AI is already delivering measurable value in risk and security.

  • Trust, transparency and governance are central to adoption.

  • Human-in-the-loop verification is essential for high-stakes use.

  • Leaders must engage and educate their teams early.

  • AI should augment, not supplant, human expertise.

SIRV builds governable AI agents and resilience platforms for enterprise security and risk teams.

Below is the full transcript of the SJUK Leaders in Security panel on how AI is reshaping the security industry.

Transcript: SJUK Leaders in Security – The AI Chapter

Moderator:
Welcome everyone to our panel this afternoon, The AI chapter: how the rules of risk are being rewritten.

We are at a moment where AI has moved beyond flashy presentations into practical, operational applications. This is not just about tools and technology; it is about how decisions are made and how incidents and risks are assessed, managed and understood.

Crucially, it is also about trust and establishing guardrails to ensure AI can be deployed safely in the security industry. Let’s begin with some introductions.


Panel introductions

Katie Baccouche (Bays Consulting):
I’m the operations and governance manager at Bays Consulting. We work with mathematical modelling, data analytics and AI solutions. My job is to ensure innovations are implemented safely, scalably and responsibly, without getting blocked by red tape.

Dave Marsh (Strixx):
I’m co-founder of Strixx, an AI startup focused on supply chain security. Our aim is to improve how incident reports are gathered and converted into actionable intelligence.

Rosie Richardson (Createc):
I’m product and strategy director at Createc, an R&D company specialising in robotics and sensing. We deploy sensing and robotic systems to make environments safer.

Andrew Tollinton (SIRV):
I lead SIRV, Systematic Intelligent Risk Valuation. We began with decision-tree systems over ten years ago, and now we are developing AI agents that act as connective tissue for risk data.


Defining AI in the security context

Moderator:
How would you define AI in a security context?

Rosie Richardson:
At Createc, we use AI in two ways. First, to analyse crowd behaviour and detect suspicious activity or violence. Second, in robotics, using video analytics and image recognition so patrol robots can perceive and learn from their environment.

Katie Baccouche:
AI is broad. It includes detection, predictive modelling, and governance. My role focuses on setting the limits of what AI can and should do in security, ensuring responsible use.

Dave Marsh:
AI started with machine learning models, then large language models (LLMs), and now we see agentic AI bringing both together. This makes AI accessible in low-code or no-code environments.

Andrew Tollinton:
Agentic AI goes further: it executes tasks autonomously and chooses which tools to use. That is why governance and oversight are essential.


Governance and trust

Andrew Tollinton:
We promote the idea of grounded AI: systems trained only on pre-authorised sources such as SOPs or verified third-party data. This reduces hallucinations and increases reliability.

Rosie Richardson:
Human verification is still essential. For intrusion detection, for example, you need people to validate what AI systems report. Systems must also provide an audit trail to explain how decisions were reached.

Katie Baccouche:
Defining boundaries is not just about compliance; it is about managing risk responsibly and maintaining trust.


Preparing the workforce

Dave Marsh:
Leaders need to embrace AI. If you don’t lead adoption, your organisation will be reacting instead of directing. Staff are already experimenting with AI, sometimes without safeguards.

Andrew Tollinton:
Try AI tools for yourself. Platforms like Lovable allow you to build applications in natural language, which helps leaders understand the potential first-hand.

Katie Baccouche:
It is not about preparing for AI; it is already here. Leaders should engage, educate, and ensure staff understand both capabilities and risks.


Failures and successes

Katie Baccouche:
There are horror stories of AI misuse: hallucinations, mishandled data, and the reputational and financial damage that follow.

Rosie Richardson:
But there are also success stories. In healthcare, AI analyses medical images faster and more accurately than humans, saving lives. The security sector can learn from this: with proper oversight, AI can elevate performance.

Dave Marsh:
The key is “trust but verify.” AI outputs should be treated as a starting point, not a final answer, unless they are validated.


Final thoughts

Moderator:
To close, what is your advice to security leaders?

Andrew Tollinton:
Think big but start small. Begin with pilot projects, establish governance, and build trust step by step.

Rosie Richardson:
AI should augment, not replace, human teams.

Katie Baccouche:
Transparency and education are essential.

Dave Marsh:
Leaders must take the initiative, or risk being left behind.


Key message

AI is already reshaping security and risk management. With trust, transparency and oversight, it can become a powerful ally, a tool that augments human decision-making rather than replacing it.

 
