AI in Risk Management – Special Interest Group Launch Webinar

📍Online

📆 July 2024

Andrew Tollinton

Group Chair

Co-founder, SIRV, 🇬🇧 London, United Kingdom

Pauline Norstrom

Co-Chair

CEO, Anekanta AI, 🇬🇧 London, United Kingdom 

Peter Senkus

Member

Professor, University of Warsaw, 🇵🇱 Warsaw, Poland 

Douglas Gray

Member

Strategic Risk Consultant, 🇫🇷 Paris, France

Hart S Brown

Co-Chair

CEO, Future Point of View, 🇺🇸 Oklahoma City, United States 

Mads Paerregaard

Member

CEO, Human Risks, 🇩🇰 Aalborg, Denmark 

Overview

Andrew Tollinton, CEO of SIRV, chaired the Institute of Strategic Risk Management's AI in Risk Management Special Interest Group launch webinar in July 2024.

Understanding AI risk and opportunity: Key takeaways from our expert panel discussion


Artificial Intelligence (AI) continues to reshape business, governance, and security at an extraordinary pace. In our recent AI in Risk Management webinar, we gathered a panel of global experts to unpack the most pressing risks, opportunities, and practical considerations of AI adoption across industries. This session was anchored by a key resource: the newly launched AI Glossary, designed to help organisations demystify the technology and navigate its responsible use.


Launch of the AI glossary

Pauline Norstrom opened the session by introducing the AI Glossary, developed collaboratively with Andrew Tollinton. The glossary is designed to:

  • Help organisations recognise and manage AI risks

  • Offer non-technical definitions of 15 core AI terms

  • Map those terms across strategic, financial, legal, operational, and reputational domains

A standout feature is the dual definition of an “AI System”—both a high-level explanation (“a machine system designed to simulate human intelligence”) and the legal definition adopted by the EU, based on OECD principles.

Pauline emphasised the real-world impact of AI across use cases like fraud detection, facial recognition, and medical diagnostics—while also warning against over-reliance, lack of transparency, and insufficient governance.

“AI should augment human work—not replace it. And it needs to be treated differently from cybersecurity or data privacy. These are adjacent, but not identical concerns.”


AI in local contexts: Lessons from Poland

Peter shared an insightful example from Poland, where a locally developed LLM, “BIK,” is tailored to recognise national context for governmental applications. But he raised concerns about data security:

“A recent Polish newspaper survey revealed over 90% of users were unknowingly sharing sensitive data with public LLMs like ChatGPT and Claude.”

This reflects a growing risk: unintentional oversharing of proprietary or personal information, often due to a lack of organisational guardrails.


Organisational readiness: Barriers and questions

Doug explored the challenges organisations face when trying to become “AI-ready.” These include:

  • Assessing digital maturity and human capital

  • Clarifying financial ROI

  • Establishing governance structures (e.g., do you need an AI committee or Chief AI Officer?)

  • Designing clear plans, policies, and procedures

He also noted that tech adoption is easy until people get involved: if staff don't understand, trust, or use the tools, value isn't realised.


AI’s real value: Efficiency or effectiveness?

Mads added a critical perspective on the true value of AI: should it reduce headcount or improve effectiveness?

“Buying better tools doesn’t necessarily mean laying people off. The real gain is in removing manual tasks and reallocating human talent to higher-value work.”

He also emphasised the gap between risk managers and AI teams, especially in physical security. AI engineers are often focused on supply chains or finance, leaving risk professionals to advocate alone for their own needs.


What happens when AI goes wrong?

Pauline brought forward a chilling case study: the Dutch childcare benefits scandal, where a biased algorithmic system automatically targeted dual-nationality families for benefit clawbacks, resulting in widespread harm.

“This persisted for five years before being challenged, largely because staff weren’t trained to understand or question the system.”

The lesson? Unchecked AI systems built on biased data, with poor transparency, can cause systemic discrimination. This incident inspired the EU’s AI Act.


Everyday AI: Silent integration & risk for risk managers

AI is becoming ubiquitous, often invisibly. As Andrew pointed out, staff already use tools like ChatGPT in their workflows, even in regulated sectors.

“Risk managers must ensure internal systems offer the same ease, access, and usefulness as the tools people already have in their pockets, but in a secure, auditable environment.”

This requires aligning internal LLMs with organisation-specific data, not just generic web-based models.


Deepfakes: The new face of fraud

Doug demonstrated how real-time deepfake video can now mimic executives convincingly, posing serious security risks. While voice synthesis is more complex, visual impersonation is already accessible.

“Most say they wouldn’t be fooled. But in large organisations, staff may not know what a senior executive looks or sounds like. That’s where the danger lies.”


🔮 Where is AI headed?

Each panelist shared their forward-looking thoughts:

  • Peter: We’re moving toward Augmented General Intelligence, a human-machine symbiosis, not replacement.

  • Mads: We may not see massive leaps, but rather steady refinement, integration, and acceptance.

  • Doug: AI will soon be seamlessly embedded into everyday tools, driving competition around access to talent and infrastructure.

  • Andrew: The next 2–10 years will test how we balance investment across human capital, digital infrastructure, and energy, from the demands of quantum computing to nuclear power.

“AI is no longer optional—it’s inevitable. But we must embed it ethically, safely, and transparently.”


Final thoughts

This session confirmed that AI adoption is not just about technology; it's about people, purpose, and process.

  • The AI Glossary is a great starting point for organisations new to the space.

  • Internal governance, training, and communication are critical for safe AI use.

  • Privacy, bias, and explainability must remain front and center.

AI can be a force for tremendous good, but only when we understand its boundaries, capabilities, and risks.


Download the AI glossary

Visit https://www.theisrm.org/ai-in-risk-management/ to access the AI Glossary and start equipping your organisation with the knowledge it needs to use AI responsibly.


Stay connected

To learn more or join our AI in Risk Management Special Interest Group, contact:

Email: andrew.tollinton@theisrm.org
Website: https://www.theisrm.org/ai-in-risk-management/ 


Thank you to Pauline, Hart, Peter, Mads, Doug, and everyone who contributed to this vital conversation. Special thanks to David Rubens and the ISRM team for making this series possible.
