The checkbox trap: Why CCTV is a warning for enterprise AI

Reflections from a guest lecture on the University of Portsmouth MSc Risk programme.

I recently gave a guest lecture on the University of Portsmouth’s MSc Risk, Crisis and Resilience Management
programme. Portsmouth has built a strong, openly stated profile in defence, risk and resilience research, spanning themes such as risk management, security, disaster response, and AI and autonomy.

[Image: Grenfell Tower fire at dawn]

The cohort discussion kept returning to a familiar organisational pattern: the moment a technology becomes cheap and easy to deploy, many organisations deploy it faster than they can govern it. That is how “capability” becomes a checkbox.

One slide I used was intentionally blunt.

“Install cameras, prevent crime.”
“Deploy AI, automate everything.”

Cameras everywhere, nobody watching.
Recording incidents, not preventing them.
Too many feeds to monitor.
“We have CCTV” became a checkbox.
False sense of security.

AI everywhere, nobody verifying.
Generating outputs, not validating them.
Too many outputs to review.
“We use AI” became a checkbox.
False sense of capability.

The point was not to dismiss CCTV or to dismiss AI. Both are valuable. The point is that enterprises repeatedly confuse deployment with control.

Instrumentation scales faster than oversight

CCTV is a useful analogy because it is physical and intuitive. Cameras are visible. Coverage can be measured. Procurement can be completed. A board can be told: “We have CCTV.”

But CCTV does not automatically produce prevention. At best it produces visibility and evidence. The outcomes people actually care about (deterrence, interruption, response) depend on an operating model: monitoring, triage, escalation paths, on-site response, incident learning, and periodic testing. We see this with facial recognition: a shopping centre’s CCTV spots a person of interest, so what? If staff cannot apprehend the person and the police are elsewhere, the facial recognition alarm becomes just more noise.

AI is now on the same trajectory, but faster and in more places at once. In many organisations, it has become trivial to generate outputs at industrial scale: summaries, recommendations, classifications, drafts, and “next step” lists. Generation is not the hard part anymore. The hard part is what comes next: whether anyone can trust the output at the moment it is used.

When output volume scales faster than an organisation’s capacity to verify, the organisation does not become more capable. It becomes more confident than it should be. That is the checkbox trap.

The checkbox trap is not a technology problem

“We have CCTV.”
“We use AI.”

Both statements can be true while the organisation becomes less safe:

  • CCTV can create a false sense of security that reduces attention to other controls.

  • AI can create a false sense of capability that reduces scrutiny and weakens evidential standards.

In risk terms, the most dangerous controls are the ones that look like controls. They satisfy an audit question while quietly hollowing out real assurance.

This is also why AI is, in some ways, harder than CCTV. CCTV footage is obviously “just footage”. AI outputs look like reasoning. They are very persuasive and come packaged in fluent language. That makes them easy to accept, especially under time pressure.

AI is a pattern-learning machine, not a truth machine

Another idea I used in the lecture was: AI is best thought of as a pattern-learning machine.

That framing is useful because it prevents category errors. Pattern learners can be powerful assistants. They can accelerate search, synthesis, drafting, translation, and classification. They can also be confidently wrong in ways that look coherent. Hallucinations are pattern completions.

Once you see AI as pattern learning, the governance question becomes clearer: what patterns is the system allowed to learn from, and what checks exist before those patterns are turned into decisions?

This is also where the CCTV analogy tightens. Cameras generate a stream. AI generates a stream. Streams do not create outcomes by themselves. Outcomes come from the controls and workflows around the stream.

“Superhuman” users and the crutch effect

One of the most practical parts of the Portsmouth discussion was about how AI changes the distribution of capability inside teams.

In almost every organisation, there is a group of people with strong domain knowledge and genuine curiosity. When those people use AI, they can become “superhuman” in a specific sense: AI multiplies their ability to explore, compare, draft, interrogate, and iterate. They ask better questions, spot gaps, and push the tool harder because they know what “good” looks like. The technology amplifies expertise.

The opposite pattern is uncomfortable but real. When someone uses AI as a crutch, without deepening their own understanding, they risk becoming less employable over time. Their output may increase, but their judgement does not. They may not notice when the model is wrong, or when it has missed a key constraint. They can become dependent on a system they cannot interrogate.

This matters for leaders because it implies that AI adoption is not just a tooling decision. It is a capability strategy. If you deploy AI without changing how people are trained, evaluated, and held accountable, you may widen variance: your best people get even better, and everyone else becomes more fragile.

Governed AI, done properly, narrows that gap by embedding verification into the workflow so that “good practice” is not dependent on a few exceptional users.

The scale of automation, and the scale of risk

The lecture also touched on why the hype persists: the prize is genuinely large.

The McKinsey Global Institute has argued that currently demonstrated technologies could, in theory, automate activities accounting for about 57% of US work hours today. The careful wording matters. “In theory” and “activities” are not the same as “jobs”. But the direction is clear: there is a large automation surface area, and new businesses will often exploit it better than existing ones.

That is exactly why the checkbox trap is so dangerous.

When potential is large, organisations feel pressure to move quickly. Speed is not the enemy. Un-governed speed is. If you push AI into high-consequence workflows without scaling assurance, you create a system that can produce wrong outputs at scale, and produce them persuasively.

CCTV scaled visibility. AI scales decisions and decision-adjacent outputs. The governance burden is correspondingly higher.

On AGI: disputed timelines, shared operational reality

I also showed two quotes in Portsmouth that often sit uncomfortably together, but they are useful as a pair because they highlight uncertainty.

Yann LeCun has argued that current LLM approaches are fundamentally limited as a route to human-level intelligence. Whether one agrees or not, the key lesson for risk professionals is: expert views diverge on what is coming and when.

At the same time, Sam Altman has been quoted as saying: “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”

You do not need to take either quote as prophecy to extract something operationally useful:

  • The upside is large enough that adoption will continue.

  • The uncertainty is large enough that governance cannot be an afterthought.

Enterprises should not wait for the AGI debate to settle. They should build systems that remain safe, auditable, and accountable as capabilities improve.

What “good” looks like: Governed AI, not ubiquitous AI

So what is the practical alternative to the checkbox trap? The answer is not “use less AI”. It is “use governed AI”, meaning AI embedded into a defined operating model with explicit constraints, evidence rules, and approval checkpoints.

A simple way to express this is: scale verification and accountability at the same rate you scale output.

In practice, that means designing for:

1. Bounded use cases

Clear decisions and workflows where AI is allowed to assist, and where it is not.

2. Evidence-linked outputs

Outputs anchored in approved sources (policies, procedures, regulations, internal records) with provenance, access control, and traceability.

3. Approval checkpoints

Confidence thresholds, contradiction checks, and mandatory citations for high-consequence questions, plus escalation to human review when evidence is weak (a minimal code sketch of such a checkpoint follows this list).

4. Oversight that matches risk

Not “human in the loop” as a slogan, but specific review roles, sampling rates, and responsibilities tied to consequence.

5. Continuous evaluation

Testing against real scenarios and known historical cases, monitoring drift, and measuring error modes, not just usage.
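
To make the approval checkpoint concrete, here is a minimal, illustrative sketch in Python. Every name, field, and threshold in it is hypothetical, not a prescribed schema; it is a thinking aid for what “escalate when evidence is weak” could look like inside a workflow, not an implementation of any particular product.

    # Illustrative only: a minimal approval checkpoint for AI outputs.
    # All names and thresholds here are hypothetical, not a prescribed schema.
    from dataclasses import dataclass, field

    @dataclass
    class AIOutput:
        # One AI-generated answer plus the evidence metadata the gate needs.
        question: str
        answer: str
        confidence: float                 # score between 0.0 and 1.0
        citations: list = field(default_factory=list)   # references to approved sources
        contradicts_policy: bool = False  # set by an upstream contradiction check

    def review_gate(output, high_consequence, min_confidence=0.8):
        # Return a routing decision plus the reasons, so every outcome is auditable.
        reasons = []
        if output.confidence < min_confidence:
            reasons.append("confidence below threshold")
        if high_consequence and not output.citations:
            reasons.append("no citations to approved sources for a high-consequence question")
        if output.contradicts_policy:
            reasons.append("output contradicts an approved policy or procedure")
        decision = "escalate to human review" if reasons else "release with audit record"
        return {"decision": decision, "reasons": reasons, "question": output.question}

    draft = AIOutput(
        question="Can contractors work at height without a permit?",
        answer="Yes, for tasks under 30 minutes.",
        confidence=0.62,
        citations=[],
    )
    print(review_gate(draft, high_consequence=True))
    # Escalates: low confidence and no citations for a high-consequence question.

The detail that matters is not the particular threshold value but that the routing decision and its reasons are recorded, so the oversight and continuous evaluation points above have something to sample and audit.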

This is where the CCTV analogy becomes constructive. CCTV works when it is part of a system: design, monitoring, escalation, response, and learning. AI works the same way. Without that system, you get instrumentation, not assurance.

The takeaway

The slide that resonated at Portsmouth was not a condemnation of AI. It was a warning about organisational behaviour. Deploying AI is not deploying capability.

Capability is what happens when outputs are interrogable, auditable, and tied to accountable decisions. Without that, “we use AI” becomes a checkbox, and a checkbox is a poor substitute for real assurance.

If you want AI to deliver real value in security, risk, and resilience, treat governance as the enabling layer, not the brake. Governed AI unlocks AI’s potential because it makes outputs usable in high-consequence environments: evidence-linked, bounded, monitored, and accountable.

If you are rolling out AI and want a practical framework to avoid the checkbox trap, the starting point is simple: define the decisions, define the evidence, define the approval checkpoints, and measure performance in the workflows that matter.
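
As an illustration only, that starting point can be written down as plain data. Every field name and value below is hypothetical; the point is that the decisions, the evidence sources, the checkpoints, and the metrics exist as an explicit, reviewable artefact rather than as assumptions inside a tool.

    # Illustrative only: one governed use case written down as reviewable data.
    # Field names and values are hypothetical, not a prescribed schema.
    use_case = {
        "decision": "prioritise overnight security incidents for follow-up",
        "ai_may": ["summarise incident reports", "suggest a priority"],
        "ai_may_not": ["close incidents", "notify regulators"],
        "approved_evidence": ["incident records", "site procedures", "duty rosters"],
        "checkpoints": {
            "citations_required": True,
            "min_confidence": 0.8,
            "escalate_to_human_if": ["low confidence", "missing citations", "contradiction flagged"],
        },
        "metrics": ["override rate", "error modes found in sampling", "time to escalate"],
    }

    # The rules live outside the model, so a reviewer or an automated gate can
    # apply them before any output is acted on.
    for rule in use_case["checkpoints"]["escalate_to_human_if"]:
        print("Escalate when:", rule)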

Author bio: Andrew Tollinton

[Image: Andrew Tollinton, Founder of SIRV and author]

Andrew Tollinton is Co-Founder of SIRV, the UK’s enterprise resilience platform. A leader in risk management technology, he chairs the Institute of Strategic Risk Management’s AI in Risk Management group and regularly speaks on AI and resilience at global conferences. A London Business School alumnus, Andrew brings 20+ years’ experience at the intersection of technology, compliance and security.

More compliance, risk and resilience case studies:

Improving operations at DLR with SIRV
Enhancing workplace safety and journey confidence at UCB by SIRV Case Study
Reducing compliance risk at Ao

"SIRV helped us move beyond basic reporting into a system that actively supports decision-making". Les O'Gorman, Director of Facilities, UCB - Pharma and Life Sciences
