The AI glossary we've produced, is that right?

Yes, that's right. What the glossary intends to address is helping organizations recognize whether they're using AI and manage the risks. This is not a technical publication; it essentially walks you through the basics. When we were putting it together, Andrew and I worked on it jointly and agreed the 15 terms that we felt were probably the most commonly used, so as to focus on that top 15 and then provide a summary of the impacts across a number of business functions and domains: strategic, financial, reputational, operational and legal considerations. We have covered the crucial definition of AI, and I've picked one of the definitions to give you an example of what is in here. Picking the obvious one, the "AI system" terminology, because that's the most important: what we've covered is a high-level definition, so it's a machine system, a computer program, designed to simulate human intelligence. But we've also included the legal definition which has been adopted in the EU, and that originated from the OECD AI Principles, which are a wealth of resources; if you are just learning about AI, go and have a look at oecd.ai. We've also described the use cases: in risk you've got fraud detection, there's medical diagnosis, facial recognition software. And we look at the key risk considerations, such as the strategic ones: over-reliance on AI, lack of transparency, maybe not enough governance or understanding of the technology. Finally, the purpose here is to encourage the use of AI to augment the work of humans, to show that it can work across functions, that it can work proactively and that it can be part of a business product or service, but also to show it needs to be treated differently to cyber security or data privacy; those are side effects of using technology. For the key considerations that need to be looked at if you're starting out and, you know, procuring or designing in AI, the glossary should be your go-to place, just to explain some of those terms you may not have heard before. So that's a potted overview, and I would encourage the audience to go and download it, have a look, and certainly feed back.

Fantastic, thank you, Pauline.
And Peter, if I start with you, could I ask: what do you see as the risks and opportunities for AI this year?

Yes, thank you, Andrew. As we can see from the latest reports on organizational use, AI is not yet so widely adopted; it could be democratized, though. We have an example here from Poland: in our community we developed a Polish LLM that recognizes the country context, and it can therefore be used better for, for example, government purposes, because it understands the issues. What is a risk? Yesterday one of the biggest Polish daily newspapers published a piece reporting that over 90% of people share company and sensitive data with public LLMs, meaning ChatGPT or Claude.

Thanks, Peter.
There's a lot there around the idea that one of the greatest risks is that we lose our privacy, that we overshare without knowing we're oversharing. Any reflections on the risks and opportunities, and on that idea of oversharing?
The opportunities, which everybody will readily understand, are immense, but the challenge for most organizations right now is understanding how to actually move forward. A number of organizations are trying to become AI-ready. Within that there's everything from understanding their digital footprint and digital maturity to understanding their human capital. As we all know, in the world of technology it's easy to unplug or plug in new pieces of technology, but if the people themselves don't actually use it, or don't know how to use it, or refuse to use the new technology, you don't get the benefit from it. We see a lot of really interesting challenges with organizations that say, well, I want to use it for this, but the technology isn't there yet, or they don't understand their workflow well enough to apply the LLM or whatever system it is they're trying to use, and then to start thinking through the data element: what does that look like? The next big challenge, which a lot of organizations are really hesitant about, is understanding what the financials are within the AI space: if I try to use some of these systems, do I get a financial benefit? And then really the last one is the risk and governance piece, and that comes down very much to roles and tasks, at least for us right now: who has to do what within your organization, what committees need to be formed, is this a risk committee, is it an audit committee, is it an AI committee, do you need a chief AI officer, for example, to own all of this? And then going through the organization to make sure you have all the plans, policies and procedures in place to use it. Those are some of the really significant challenges we see at the moment.

But when you're putting AI into an organization, how do you stack up the financial case, and what do you see as the opportunities and risks for AI?
That's a really good question, Andrew. I think we can maybe draw on previous experience of going from pen and paper to more technology-based tools: okay, our risk management programme is now on paper and in Excel spreadsheets, and now you want me to buy this fancy tool. It's the same considerations. But it's not my experience that buying better tools and efficiency actually leads organizations to say, oh, then we're going to cut effectiveness across our organization and let go part of our team members. So are we talking efficiency, where I'm going to reduce my team, or effectiveness, where I'm going to spend my existing time better, removing manual tasks and getting more out of the team I have? Or maybe I have to build the team in a different way, with different capabilities than today, so I can root out some of the more manual tasks. That would be my approach, rather than saying, oh, you're buying AI so you can save a lot of money. Talking about challenges or risks in AI, seen from my perspective in the security, and particularly the physical security, realm or industry: the lack of insight and knowledge among risk managers is typically a huge barrier, meaning we're not going to get the best out of this technology. I can also see that some of the organizations we work with do not necessarily address the challenges the risk manager has, but lean more into finance and operations, because that's where the money goes. So risk managers are sometimes left a little bit alone to navigate this space: how can we use it, and if we identify a reason to do so, how do we get the money to buy the skills? Because the existing AI engineers within the company are busy optimizing the supply chain or whatever. So I think there's a huge opportunity. Very briefly, in my space, where we're talking physical security: how do we analyze vast amounts of information, sometimes called intelligence, and translate that into action?
Yeah, have you got any examples you can think of, from your experience, of AI doing well, doing badly, screw-ups, that kind of thing?

All the examples I'm going to give you are actually based on classic AI, not even generative AI, just to underline the reason why action was taken in Europe to create the EU AI Act. The origins of the law partly lie in an issue that arose in the Dutch government, and that issue crystallized during the pandemic. In a nutshell, they had used classic AI techniques to develop an automated system within the child welfare department, and that system automatically sent out reclaims of child welfare benefits to people who were of dual nationality. It singled out people who weren't Dutch nationals, clawed back all this money, and caused untold harm to these people. It prevailed for about five years without being fully challenged, and it was actually Amnesty International, amongst other human rights organizations, that raised the issue; they wrote a report called "Xenophobic Machines" which really highlighted what had happened. In effect, the system had been developed on top of biased data, so there were already issues in terms of reviewing these claims; the people who were using the software were not fully trained; and the developers didn't explain how it made its decisions, so complaints couldn't be addressed. And what was the compelling reason to use this software? Their compelling reason was that they would be incentivized if they created more demands. It was not a virtuous circle, should we say, and that's probably one of the worst cases.
The market is moving really rapidly at the moment, and ChatGPT obviously made a really large splash in the last few years. As that matures, what we're seeing more and more is AI tools becoming less of a bespoke thing you access and more something that's part and parcel of what you already have in your pocket every day, and that effect is only going to increase: you can simply ask. To make it very practical for most risk managers: you've been asked to do a risk assessment, you've sent out a request to do a risk identification project. Nine times out of ten, we have to expect that human nature means individuals are going to be using these tools in their pockets to help them in their workflow, to help them answer that question. What was typically a whiteboarding and paper-based process in years gone by is going to become that. The challenge for risk managers is how to make sure their internal tools, systems and access are equivalently accessible to staff in a secure environment, so staff can reach the kind of knowledge and workflow they have in their pocket securely, but also that the models they're using are informed by internal data, so that the baseline information they take from AI systems to answer basic questions and improve their workflows is based on internal data from the organization, rather than just the generic answers ChatGPT is giving them, and that the likes of Perplexity and Google's and Microsoft's products will soon be giving them.

I suppose there's a danger there: there's a lot of silent integration going on, right, and we'll soon lose the ability to identify where AI is doing things, what is a baseline, etc.
Yeah, one of the interesting things we looked at was deepfake technology, and there are some now really widely known cases. Where we used to see phishing attempts on an email basis, we can now actually start to do that over a call. To give you an example: in the system I'm running right now, I will make a change, and you'll see my camera stop for just a second. It will come back on, and as it does, I am now, in real time on the call with you, somebody else. You may recognize this person, but it is not difficult for me to change over and over. Now, a couple of things you'll recognize pretty quickly: my voice is still the same; I haven't changed my voice. That can be done, it's a little more complicated than we want to get into today, but syncing a different voice and a different face together is now possible, and it is doable with a very small delay. But my hair hasn't changed, my clothing hasn't changed, so the system itself really still needs to map to some facial structure that's similar to the person you're trying to mimic. It's helpful if you can mimic their motions, the way they smile, the way they act, the way they talk, in order to convince somebody to do something. Now, most people will tell me about deepfakes, "well, I wouldn't be fooled by that, because I'm going to recognize it's a deepfake." But put yourself in the position of a very large organization, where you may not personally know some of the senior executives, so you don't know their mannerisms or the kinds of contextual clues you would be able to pick up; and if they tell you or ask you to do something on a call, you're in some cases likely to do so.

Can I ask each panelist where they think AI is headed?
Definitely AGI, which everybody is talking about: artificial general intelligence. I hate that name; I would call it augmented general intelligence, because I think the human will still be there. I'm more into Licklider's philosophy, that it will be a symbiosis between man and machine as we progress.

A very real risk and danger, from what I'm hearing and understanding from smarter people, is that these huge computational leaps might be leveling off a little. The computing power that was necessary to produce ChatGPT or the big LLMs might be plateauing, meaning we won't see those giant leaps in the future, but maybe instead increased development, adaptability and adoption of these technologies. Also, as Peter is indicating, we may have the existing toolbox, but these tools will become more and more advanced and more and more integrated into our daily life, becoming tools we use at work, privately and everywhere else. We might also see a reduction in the skepticism towards new technology; it has always been there, and maybe that will flatten out a little as people see that, okay, we can actually use the tool in our everyday life.
In the near term, over the next two years or so, we're going to see a lot of organizations trying to figure out how to use this technology, which is going to drive costs up; as an industry we don't have what we need to know how much the demand is going to be. We're going to see organizations really have to figure out: am I going to invest in more digital capital, or am I going to invest in human capital? And every day will be a new balancing act as we start to lean into quantum computing, which will again change things over the next five-year horizon. Over a ten-year horizon, we'll start to see energy become a bit more abundant, with things like fusion and others, so we'll solve some of the capacity constraints we're starting to experience now. But I think a lot of the biggest challenges right now are just hitting those capacity constraints, as was mentioned already with this plateau: how are we going to manage those, organization by organization, and how are we going to fund the continued development, whether it's nuclear energy or something else, of these technologies that we all want to use?
The competition is very quickly switching to organizations getting access, whether that's talent, or key partnerships and strategic partnerships to lock down exclusive access to that kind of model against competitors, or building it internally, with the resources that's going to take. What we're already seeing is a little bit of a knee-jerk reaction around implementation, towards efficiency and headcount reduction; that's the broader economic environment as well. My sense is that one of the things we're going to see is an increasing realization that these tools augment what your staff are doing, and the knowledge you have across your sites and your staff network, rather than replacing it.
I would like to say thank you very much to Pauline, Hart, Mads, Peter and Doug, and thank you, David Rubens, for setting up the network, the ISRM, in the first place and for helping us get this conversation out into the wider world. Thank you, one and all, those of you who joined us, and I hope you find the recording very useful.