Artificial intelligence – The incredible rise of the AI model

Food and Drink Security Association

📍Online

📆 5 June 2025

Overview

Andrew Tollinton, CEO of SIRV, presented to the Food and Drink Security Association on ‘Artificial intelligence – The incredible rise of the AI model’.

The slideshow is an excerpt from the closed group webinar, held on 5 June 2025. Confidential information has been redacted.

Introduction

Artificial intelligence continues to transform the security sector. What started with narrow, task-specific tools has expanded into large language models (LLMs) and now AI agents. These technologies bring opportunities for productivity, but also new risks for organisations to manage.

This update, delivered to FDSA in June 2025, highlights:

  • Current AI trends and their impact on work

  • Security applications and emerging use cases

  • Risks, governance considerations and next steps for organisations

Industry trends

Three kinds of AI

  • Narrow AI: The systems we use today for defined tasks.

  • General AI: Active research, with prototypes emerging.

  • Super AI: Still in the realm of science fiction.

The rise of large language models

The growth of LLMs such as ChatGPT, Claude, Gemini, Llama, Grok and DeepSeek R1 has been extraordinary. Adoption has spread across industries, changing how people search, write, and analyse data.

Impact on the job market

LLMs are altering expectations in the workplace:

  • Easy tasks are being automated.

  • Hard tasks are becoming easier with AI assistance.

  • “Impossible” tasks may now be within reach.

Employers increasingly expect teams to show how they use AI before requesting new headcount or resources.

Security applications

AI is now used widely across six categories:

  1. Content creation: Drafting reports, strategy papers, training material.
  2. Research: Scanning open sources and competitor data.
  3. Coding: Even non-coders can automate processes through natural language.
  4. Data analysis: Harmonising information from different systems.
  5. Ideation and strategy: Brainstorming and developing plans.
  6. Automations: Carrying out repeatable, routine tasks.

AI agents

Moving beyond single-task LLMs, AI agents can:

  • Associate data from multiple sources

  • Instruct actions

  • Visualise insights

  • Query datasets

  • Notify stakeholders

  • Simulate scenarios

These agents can interact with the Internet of Things, integrating with video surveillance, mobile devices, and sensors.
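The agent capabilities listed above can be sketched in a few lines of code. This is a minimal, illustrative example only: the sensor readings and helper functions are hypothetical stand-ins, not a real agent framework or IoT integration.

```python
# A minimal sketch of agent-style behaviour: associate data from multiple
# sources, then notify stakeholders. All functions are hypothetical stand-ins.

def read_sensors() -> dict:
    """Stand-in for querying IoT devices (door sensors, cameras, etc.)."""
    return {"door_7_open": True, "camera_7_motion": True}

def associate(readings: dict) -> bool:
    """Combine readings from multiple sources into a single judgement."""
    return readings["door_7_open"] and readings["camera_7_motion"]

def notify(message: str) -> None:
    """Stand-in for alerting stakeholders (email, SMS, dashboard)."""
    print(f"ALERT: {message}")

readings = read_sensors()
if associate(readings):
    notify("Motion detected at open door 7 - review camera feed")
```

The point is the shape, not the detail: unlike a single-prompt LLM call, an agent chains several steps (query, associate, act) without a human driving each one.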

What next?

Risks and challenges

The deployment of AI in security raises important concerns:

  • Bias and discrimination

  • Lack of transparency

  • Misinformation and manipulation

  • Security and cyber risk

  • Job displacement

  • Loss of human control and autonomy

Misidentification in surveillance, deepfake access requests, or hacked AI systems could all undermine trust and safety.

Data governance

Compliance with GDPR and other frameworks means defining personally identifiable information (PII), minimising unnecessary data storage, and keeping audit trails of decisions.
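Those three practices can be illustrated in code. The sketch below is a toy example, not a compliance tool: the regex patterns cover only two obvious PII types, and real GDPR work needs far broader coverage (names, addresses, identifiers) plus a proper records system.

```python
import re
from datetime import datetime, timezone

# Illustrative PII patterns only; real-world redaction needs much more.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders and report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

def log_decision(action: str, pii_found: list[str], trail: list[dict]) -> None:
    """Append a timestamped record, supporting later audits of AI use."""
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "pii_types_redacted": pii_found,
    })

audit_trail: list[dict] = []
clean, found = redact("Contact john.smith@example.com about the incident.")
log_decision("prompt_sent_to_llm", found, audit_trail)
print(clean)  # Contact [EMAIL REDACTED] about the incident.
```

Redacting before a prompt leaves the organisation, and logging what was removed and when, covers data minimisation and auditability in one small pattern.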

Practical steps for organisations

  • Define a narrow job to be done and success metrics.

  • Align projects with leadership priorities.

  • Review data availability and quality.

  • Select an appropriate technology partner.

  • Consider risks and governance.

  • Run short pilots (<3 months).

  • Share results and scale successful use cases.

Model selection

Different models suit different use cases:

  • Low-stakes everyday tasks → faster, cheaper models.

  • High-stakes work (e.g. risk assessments) → more accurate, reasoning-focused models. (See Cal, a safe, governable AI agent.)

  • Creative or strategic tasks → models optimised for ideation and planning.
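The matching above amounts to a simple routing rule. The sketch below makes that explicit; the model names and tier labels are placeholders, not recommendations of specific products.

```python
from enum import Enum

class Stakes(Enum):
    LOW = "low"            # everyday drafting, summaries
    HIGH = "high"          # risk assessments, compliance decisions
    CREATIVE = "creative"  # ideation, strategy planning

# Placeholder names: substitute whatever models your provider offers.
MODEL_TIERS = {
    Stakes.LOW: "fast-cheap-model",
    Stakes.HIGH: "accurate-reasoning-model",
    Stakes.CREATIVE: "ideation-optimised-model",
}

def select_model(stakes: Stakes) -> str:
    """Route a task to a model tier based on its stakes."""
    return MODEL_TIERS[stakes]

print(select_model(Stakes.HIGH))  # accurate-reasoning-model
```

Even this trivial router forces the useful question: before any AI task runs, someone has classified its stakes.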

Conclusion

AI is no longer experimental; it is embedded in daily work and security operations. Leaders must balance opportunity and risk, building governance that is both realistic and adaptive.

The next stage is not just about tools, but about how organisations sponsor pilots, measure outcomes, and scale what works.

Frequently asked questions

Q1. What are the main types of AI today?
Narrow AI (task-specific), general AI (in research), and super AI (conceptual).

Q2. How are large language models changing security work?
They automate routine tasks, support research, and enable new ways to analyse risk data.

Q3. What are AI agents and how do they differ from LLMs?
AI agents can perform multi-step tasks, associating data, simulating outcomes, and interacting with devices, rather than only generating text.

Q4. What are the top risks of AI in security?
Bias, misinformation, cyber risk, job displacement, and loss of human control.

Q5. How should organisations pilot AI?
Define a narrow scope, align with leadership goals, review data, choose a partner, set a short timeline, and share results before scaling.

Q6. Which AI model should I choose?
Match the model to the task: fast/cheap for everyday jobs, high-accuracy reasoning models for risk-critical work.

Speaker bio: Andrew Tollinton

Andrew Tollinton, Co-Founder of SIRV and author

Andrew Tollinton is Co-Founder of SIRV, the UK’s enterprise resilience platform. A leader in risk management technology, he chairs the Institute of Strategic Risk Management’s AI in Risk Management group and regularly speaks on AI and resilience at global conferences. A London Business School alumnus, Andrew brings 20+ years’ experience at the intersection of technology, compliance and security.

"SIRV helped us move beyond basic reporting into a system that actively supports decision-making". Les O'Gorman, Director of Facilities, UCB - Pharma and Life Sciences
