After Grenfell: How AI “living memory” could strengthen building safety
The Grenfell Inquiry exposed deep failures in how information about building safety risks was recorded, shared and used. This article looks at how AI “living memory” systems could help residents and their advocates preserve complaints and evidence over time, while also helping duty-holders and regulators see patterns earlier and act on them.
On 28 November 2025 I delivered a guest lecture on AI in risk management to MSc Insurance and Risk Management students at Bayes Business School. During the session I referred to the tragic Grenfell Tower fire and suggested that, in future, AI could help tenants and other parties articulate their case more effectively. One way this might be achieved is through “living memory”. That comment prompted some debate, and this article expands on that idea.
On 14 June 2017, the Grenfell Tower fire exposed serious failures across the organisations responsible for the building’s design, refurbishment, regulation and management. The Inquiry is clear on one central point: Residents were not adequately protected by the systems meant to keep them safe. This article looks at one specific lesson for the future: How better information and AI “living memory” could support safety duties and reduce the chance that similar warnings are overlooked again.
For anyone working in risk, safety or governance, Grenfell is now a reference point. It also raises a specific question for AI: If we’re building “intelligent” systems to help manage risk, what exactly should they remember, and how?
One answer is “living memory”. Not as a slogan, but as a specific facet of AI: systems that continuously ingest, structure and recall information over long timeframes, across many sources, in a governed way.
Nothing in this article makes new factual allegations. It summarises published findings of the Grenfell Tower Inquiry and subsequent public reports, and explores how future information and AI systems could support better governance and resident protection.
Grenfell as an information failure
The Inquiry and subsequent parliamentary debates highlight several themes that are, at root, failures of information – how risks were known, recorded, shared and acted upon.
1. Known risks not translated into action
Central government and regulators had access to evidence about fire risks in high-rise residential buildings: earlier fires, coroners’ recommendations, expert warnings. That information existed, but it did not turn into timely regulatory change or robust enforcement. The system knew more than it used.
2. Residents’ signals not accumulated or weighted properly
The Royal Borough of Kensington and Chelsea and its Tenant Management Organisation received complaints, letters, surveys, blogs and meeting minutes from residents about safety and the refurbishment. The Inquiry found that residents’ concerns were not given sufficient weight in decision-making. In information terms, their signals were not accumulated into a clear picture of risk.
3. Product safety information fragmented and confusing
The Inquiry found that certain manufacturers had given misleading assurances about the performance of some cladding and insulation products, and that the testing and certification regime did not produce a clear, reliable picture of risk. Designers, contractors and building control often worked from partial or confusing information.
4. Critical design information never assembled into a clear risk view
On paper, the external wall system at Grenfell (ACM cladding with a polyethylene core plus other combustible materials) was documented in drawings, specifications, test reports and approvals. In practice, information about that system was spread across different documents and organisations, and was not brought together into a clear assessment of the overall fire risk.
Seen this way, Grenfell is about more than engineering faults or isolated errors. It is also about how information was handled over many years.
AI is more than a chatbot: Where “living memory” fits
Most public discussion of AI centres on chatbots and content generation. That is the conversational surface.
Underneath, useful AI in safety-critical domains has at least four structural facets:
1. Ingestion and structuring
Pulling information from many sources – emails, forms, reports, fire risk assessments (FRAs), tests, sensors – and turning it into structured data.
2. Linking and “living memory”
Continuously attaching that data to stable entities (buildings, products, duties, people, contracts) with timestamps and provenance.
3. Analysis and pattern detection
Using models to surface patterns, anomalies and trends that matter for risk, with statistical and contextual checks.
4. Decision support under governance
Providing recommendations, summaries and alerts with citations, access controls and audit trails.
Living memory sits in the second layer. It is the part of AI that ensures the organisation does not keep starting from zero; that complaints, incidents, tests and recommendations accumulate into something coherent and queryable over years.
Without that, the rest of the AI stack has little reliable foundation. With it, AI can help people build a case for action that is rooted in evidence rather than just opinion.
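As a rough illustration of the second layer, a living-memory record can be sketched as a small data structure that ties each ingested item to a stable entity, with a timestamp and provenance. This is a minimal sketch, not a real system; all names and records are invented:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class MemoryRecord:
    """One ingested item, permanently tied to an entity with provenance."""
    entity_id: str      # stable identifier, e.g. a building or product
    kind: str           # "complaint", "test_report", "recommendation", ...
    summary: str
    recorded_on: date
    source: str         # where the item came from (provenance)

class LivingMemory:
    """Append-only store: records accumulate, nothing is overwritten."""
    def __init__(self):
        self._records: list[MemoryRecord] = []

    def ingest(self, record: MemoryRecord) -> None:
        self._records.append(record)

    def history(self, entity_id: str) -> list[MemoryRecord]:
        """Everything ever recorded about one entity, oldest first."""
        return sorted(
            (r for r in self._records if r.entity_id == entity_id),
            key=lambda r: r.recorded_on,
        )

memory = LivingMemory()
memory.ingest(MemoryRecord("block-a", "complaint", "Smoke in corridor",
                           date(2014, 3, 1), "resident email"))
memory.ingest(MemoryRecord("block-a", "inspection", "Fire door check",
                           date(2015, 6, 9), "contractor report"))
print([r.kind for r in memory.history("block-a")])  # oldest first
```

The append-only design is the point: the organisation can add context, but it cannot quietly lose the original signal.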
“Living memory is the part of AI that ensures organisations do not keep starting from zero.”
How AI-driven living memory could help in Grenfell-type situations
If we replay the Grenfell story with this lens, there are several points where an AI-driven living memory would change what was possible for residents, advocates and regulators.
1. Complaints and weak signals that do not disappear
The Inquiry found that residents’ concerns were not given sufficient weight. Many residents felt that their warnings were dismissed or minimised.
An AI-backed living memory would not, by itself, change attitudes. It would, however, make neglect harder to hide:
- Plugged into sources: Every complaint, survey response, casework email and meeting note relating to a building is ingested automatically from inboxes, portals and document stores.
- Structured and tagged: Natural-language models classify each item (complaint, query, incident, survey response, minute), tag themes (for example “fire door”, “smoke in corridors”, “alarm fault”, “damp and mould”) and link it to the specific building and, where possible, to floors or cores.
- Aggregated into a visible history: Simple queries and dashboards can show:
  - The volume and themes of safety-related complaints over time.
  - Issues that recur without closure.
  - Actions promised and whether they were actually completed.
For residents and advocates, this shifts the conversation from having to reconstruct a history of concern themselves to being able to point to an institutional record that shows repeated safety complaints, unclosed actions and unresolved risks over several years.
AI here is not replacing humans; it is doing the heavy lifting of remembering, grouping and surfacing patterns.
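The aggregation step can be sketched very simply, assuming complaints have already been classified and tagged by the ingestion layer. The records and themes below are invented for illustration:

```python
from collections import Counter

# Hypothetical classified complaints: (year, theme, resolved?)
complaints = [
    (2013, "fire door", False),
    (2014, "fire door", False),
    (2014, "smoke in corridors", True),
    (2015, "fire door", False),
    (2016, "alarm fault", False),
]

# Volume of safety-related complaints by theme.
volume = Counter(theme for _, theme, _ in complaints)

# Themes that recur without closure: raised more than once, never resolved.
unresolved = Counter(theme for _, theme, resolved in complaints if not resolved)
recurring_unclosed = [
    theme for theme, count in unresolved.items()
    if count > 1 and not any(r for _, t, r in complaints if t == theme)
]

print(volume.most_common())
print(recurring_unclosed)  # themes raised repeatedly with no closure
```

Even this trivial grouping turns five scattered items into a pattern: one theme raised three times over three years and never closed.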
2. Product safety evidence that stays joined up
The Inquiry describes a long record of misleading or incomplete information around certain cladding and insulation products, and weaknesses in the testing and certification regime.
A living memory for products, driven by AI, would:
- Ingest everything relevant: Test reports, classification documents, assessments, certificates, marketing sheets and technical bulletins for high-risk products.
- Normalise the facts: Models extract key fields such as configurations tested, substrates, fixings, use conditions, limits, failures and caveats.
- Link tests to real-world use: Each certificate is tied to the exact test arrangement. Each building’s external wall system is described as a composition of products and configurations.
- Make queries straightforward: When a cladding system is proposed for a tall building, AI can help answer:
  - “Where else has this configuration been used on high-rise residential buildings?”
  - “Have there been previous adverse findings, fires or enforcement actions involving this configuration?”
  - “Is this proposed use clearly within the scope of any successful test, or is it an extrapolation?”
The human decision-maker remains responsible. The difference is that they are not relying on a handful of PDFs and sales presentations; they can interrogate the broader evidence base.
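The third question, whether a proposed use sits within the scope of any successful test, can be sketched as a strict match against recorded test arrangements. All configurations here are invented; the point is that anything short of an exact match is flagged as an extrapolation for human assessment:

```python
# Hypothetical record of successful large-scale tests: each entry is the
# exact configuration that passed, as a frozenset of (attribute, value) pairs.
passed_tests = [
    frozenset({("panel", "ACM-FR"), ("insulation", "mineral wool"),
               ("cavity_barriers", "yes")}),
    frozenset({("panel", "solid aluminium"), ("insulation", "mineral wool"),
               ("cavity_barriers", "yes")}),
]

def within_tested_scope(proposed: dict) -> bool:
    """True only if the proposal exactly matches a passed configuration.

    Anything else is an extrapolation, which a competent person must assess;
    the system's job is to make the gap visible, not to approve it.
    """
    return frozenset(proposed.items()) in passed_tests

proposal = {"panel": "ACM-PE", "insulation": "PIR foam", "cavity_barriers": "no"}
print(within_tested_scope(proposal))  # False -> flag as extrapolation
```

The deliberately strict rule mirrors the Inquiry's lesson: partial resemblance to a tested system is not evidence of safety.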
3. Turning the “golden thread” into something people can use
Post-Grenfell reforms in England introduced the idea of a “golden thread” of safety information for higher-risk buildings: Accurate, reliable, accessible records maintained through design, construction and occupation.
AI makes that thread usable in practice:
- By ingesting and reconciling information scattered across asset systems, FRAs, drawings, certificates, site reports and email.
- By keeping it current as works are done, inspections completed, defects raised and remediations signed off.
- By allowing natural-language queries from duty-holders, regulators and, within limits, residents and their representatives.
A resident does not need to know where documents sit. They, or an advocate, can ask:
- “What fire-safety defects have been recorded for this block in the last five years, and which are still open?”
- “Have there been any formal notices or enforcement actions for this building or others with the same cladding system?”
Here AI is acting as an interface to memory: Translating messy, multi-system records into answers.
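Once golden-thread records sit in one structured store, the first question above reduces to a straightforward filter. A minimal sketch, with an invented defect log:

```python
from datetime import date

# Hypothetical golden-thread defect log for one block.
defects = [
    {"raised": date(2019, 5, 2), "description": "Damaged fire door, level 4",
     "closed": None},
    {"raised": date(2021, 9, 14), "description": "Faulty smoke vent",
     "closed": date(2022, 1, 10)},
    {"raised": date(2012, 3, 8), "description": "Obstructed escape route",
     "closed": date(2012, 4, 1)},
]

def defects_in_last_five_years(today: date):
    """Recent defects, plus the subset still open (no closure date)."""
    cutoff = today.replace(year=today.year - 5)
    recent = [d for d in defects if d["raised"] >= cutoff]
    still_open = [d for d in recent if d["closed"] is None]
    return recent, still_open

recent, still_open = defects_in_last_five_years(date(2024, 1, 1))
print(len(recent), len(still_open))  # 2 recent, 1 still open
```

The natural-language layer only needs to translate the question into this kind of query; the hard part, as the golden-thread reforms recognise, is keeping the underlying records accurate and current.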
4. Tracking whether recommendations actually turn into change
After earlier fires such as Lakanal House, coroners made recommendations on high-rise fire safety. The Grenfell Inquiry has been critical of how slowly and incompletely some recommendations were implemented.
A national or sector-level living memory, driven by AI, would:
- Treat each Inquiry or coroner recommendation as an object with:
  - An owner (department, regulator, industry body).
  - Specific required actions.
  - Deadlines.
  - Evidence of implementation (regulation changes, guidance, inspection programmes).
- Continuously ingest policy documents, consultation responses, enforcement statistics and programme reports.
- Answer straightforward oversight questions, such as:
  - “Which recommendations affecting high-rise landlords remain partially or wholly unimplemented several years after acceptance?”
  - “Where are we seeing new incidents that map to previously identified failure modes?”
For residents and campaigners, this moves the discussion from chasing vague assurances to pointing at a formal gap between commitment and delivery.
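Treating a recommendation as a trackable object can be sketched directly. Everything below is invented for illustration; a real register would link each evidence item back to source documents:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Recommendation:
    """A coroner or Inquiry recommendation treated as a trackable object."""
    ref: str
    owner: str                    # department, regulator or industry body
    required_actions: list[str]
    deadline: date
    evidence: list[str] = field(default_factory=list)  # e.g. regulation changes

    def implemented(self) -> bool:
        return bool(self.evidence)

recs = [
    Recommendation("R1", "Housing department",
                   ["Issue sprinkler retrofit guidance"], date(2014, 1, 1)),
    Recommendation("R2", "Regulator",
                   ["Review statutory fire-safety guidance"], date(2015, 6, 1),
                   evidence=["Consultation published 2016"]),
]

def overdue_unimplemented(today: date) -> list[str]:
    """Recommendations past deadline with no recorded implementation evidence."""
    return [r.ref for r in recs if not r.implemented() and r.deadline < today]

print(overdue_unimplemented(date(2017, 6, 14)))  # ['R1']
```

The output is exactly the "formal gap between commitment and delivery": a named owner, a passed deadline and an empty evidence list.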
The limits: AI cannot replace ethics, law or courage
There is a risk of over-claiming for AI. The Inquiry describes serious failures of regulation, corporate behaviour, competence and leadership.
No model can fix:
- Commercial behaviour that misrepresents product risks.
- Policy decisions that delay necessary change.
- Organisational cultures that do not put residents’ safety at the centre of decision-making.
What AI-driven living memory can do is narrower and more concrete:
- Reduce selective forgetting by keeping a dense, time-stamped record of concerns, incidents and evidence.
- Lower the technical barrier for residents, journalists, regulators and honest insiders to reconstruct what has happened.
- Make it harder to maintain a gap between what is technically known and what is officially acknowledged.
It does not guarantee action. It makes inaction more visible.
Designing governed AI so living memory genuinely helps people
For living memory to serve future residents, it has to sit inside a governed AI approach, not be deployed as an opaque black box. That implies:
Clear scope
- Define what the AI does: Ingest, classify, link, summarise, alert.
- Define what it does not do: Make legal findings, replace engineering judgement, override professional responsibility.
Data protection and proportionality
- Ingest only what is necessary for safety and governance.
- Apply role-based access controls and retention schedules.
- Separate personal data from aggregated risk patterns wherever possible.
Transparency and audit
- Log how each item was classified and linked.
- Make it possible to see the underlying documents behind any pattern or dashboard.
Human oversight and challenge
- Give residents’ panels, safety committees and regulators the ability to interrogate the system and contest its outputs.
- Treat AI outputs as prompts for scrutiny, not as final answers.
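The access-control and audit points can be sketched together: every query passes a role check and leaves an audit entry tying any answer back to source documents. Roles, record kinds and documents here are all invented:

```python
audit_log = []

# Hypothetical role-based access: which record kinds each role may see.
ACCESS = {
    "regulator": {"complaint", "defect", "enforcement"},
    "resident_panel": {"complaint", "defect"},
}

records = [
    {"id": 1, "kind": "complaint", "source_doc": "email-2015-031.pdf"},
    {"id": 2, "kind": "enforcement", "source_doc": "notice-2016-007.pdf"},
]

def query(role: str, kind: str):
    """Return matching records the role may see, logging who asked for what."""
    if kind not in ACCESS.get(role, set()):
        audit_log.append((role, kind, "denied"))
        return []
    hits = [r for r in records if r["kind"] == kind]
    # The audit entry records the underlying documents behind the answer.
    audit_log.append((role, kind, [r["source_doc"] for r in hits]))
    return hits

print(len(query("resident_panel", "complaint")))  # 1 record returned
print(query("resident_panel", "enforcement"))     # [] and a denied entry logged
```

Because every answer carries its source documents into the log, any pattern or dashboard can be challenged by walking back to the originals, which is the substance of the transparency requirement above.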
In that frame, living memory is one facet of AI: The part that remembers faithfully and helps others to see.
“For residents and advocates, this shifts the conversation from having to prove there was a long-running problem to being able to point to an institutional record.”
After Grenfell
The Grenfell Tower fire killed 72 people and reshaped the conversation about building safety in the UK. The Inquiry’s findings are feeding into new regulation, a new building safety regime and new expectations of landlords, developers and manufacturers.
If AI is going to play a role in this landscape, it should start with something simple but demanding: better memory. Systems that are plugged into real sources, disciplined about entities and time, and open to scrutiny.
For residents trying to build a case for action in the future, that matters. It means that when they say “we have been raising this for years”, there is a living record that can show, clearly and specifically, whether the system listened and when it chose not to.
Author bio: Andrew Tollinton
Andrew Tollinton is Co-Founder of SIRV, the UK’s enterprise resilience platform. A leader in risk management technology, he chairs the Institute of Strategic Risk Management’s AI in Risk Management group and regularly speaks on AI and resilience at global conferences. A London Business School alumnus, Andrew brings 20+ years’ experience at the intersection of technology, compliance and security.