Impact methodology
Here we explain what we measure, how we measure it, and how results are published, so you can trust our impact claims.
What we measure (KPIs)
We publish median improvements. Our core KPI families are (pick those relevant to your domain):
Operations & throughput
Task review minutes, cycle times, first‑time‑right rate, mean time to decision, mean time to resolution/recovery.
Quality & compliance
Exception rate, defect/leakage rate to audit, risk assessment and method statement (RAMS) / standard operating procedure (SOP) adherence where applicable, policy conflicts detected.
Safety & risk
Incident/near‑miss review minutes, recurrence of top themes, control‑gap closures, leading‑indicator scores.
Customer/stakeholder communications
Time‑to‑publish notices/updates, re‑open rate/second‑touch rate, template deviation errors.
We agree the exact KPI list and definitions up‑front.
How we measure
- Baseline window: Last 3–6 months of comparable work or the most recent items.
- Design: Randomised A/B or phased rollout by team/site/case type.
- Statistic: We report medians with interquartile ranges (IQR) and sample size (n); means only where the distribution is symmetric.
- Definition of done: Documented acceptance criteria per sprint; only completed, reviewed items count.
- Exclusions: Non‑comparable records (e.g., missing timestamps, training data) are disclosed and excluded from primary stats.
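The statistic choice above can be sketched in a few lines of Python. This is an illustrative example only (the function name and sample data are hypothetical, not part of the published methodology): it reports median, IQR, and n for a KPI sample such as task review minutes.

```python
import statistics

def summarize(minutes):
    """Report median, IQR bounds, and n for a KPI sample.

    Non-comparable records (missing timestamps, training data) should be
    excluded before calling, and the exclusions disclosed alongside.
    """
    values = sorted(minutes)
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    return {
        "n": len(values),
        "median": statistics.median(values),
        "iqr": (q1, q3),
    }

baseline = [42, 38, 55, 61, 40, 47, 300]  # 300 is an outlier
trial = [20, 25, 18, 30, 22, 26, 24]
print(summarize(baseline))  # the median is robust to the 300-minute outlier
```

Note how the single 300‑minute outlier barely moves the median, which is exactly why medians rather than means are the default headline statistic.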
Privacy and governance
- Personally Identifiable Information (PII) is redacted by default; reversible tokens are used only where follow‑up requires identity.
- Human‑in‑the‑Loop approvals for high‑impact actions.
- Retrieval‑Augmented Generation (RAG) answers with citations to your authorised sources; all actions and prompts are logged.
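The redact-by-default approach with reversible tokens can be sketched as follows. This is a minimal illustration, not the production implementation: the key, function names, and vault structure are all hypothetical, and in practice the token→value map sits behind role‑based access control with audit logging.

```python
import hashlib
import hmac

SECRET = b"demo-key"  # hypothetical; in practice a managed secret, never a literal

def tokenize(value: str, vault: dict) -> str:
    """Replace a PII value with a deterministic token; store the mapping separately."""
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    token = "PII_" + digest[:8]
    vault[token] = value  # vault access would be RBAC-gated and audit-logged
    return token

def detokenize(token: str, vault: dict) -> str:
    """Restore identity only for callers whose vault holds the mapping."""
    return vault[token]

vault = {}
record = {"name": tokenize("Jane Doe", vault), "review_minutes": 42}
# Downstream analytics see only the token; follow-up can reverse it via the vault.
```

The design point is separation: analytic records carry only tokens, so KPI reporting never needs to touch raw identities.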
What we publish
- A single, conservative headline median on product pages (once validated).
- A downloadable methods note (PDF) per claim covering timeframe, n, inclusion/exclusion rules, statistic choice, and reproducibility details.
Frequently asked questions
What KPIs does Cal report?
We co‑define KPIs for your domain; common ones include review minutes, cycle times, adherence to policies/Standard Operating Procedures (SOPs), time‑to‑publish communications, incident/near‑miss recurrences, and time‑to‑resolution.
Why medians and interquartile ranges (IQR)?
They’re robust to outliers and reflect typical performance; we include sample size (n) and design notes.
How do you protect privacy?
Personally Identifiable Information (PII) is redacted by default; when follow‑up is required, we use reversible tokens under role‑based access control (RBAC).
Do you guarantee results?
No. We publish measured outcomes from your environment and methods notes you can audit.
Can we run this on‑premises or in a Virtual Private Cloud (VPC)?
Yes – hosting options include dedicated tenants; expect separate commercials and security review.
"SIRV helped us move beyond basic reporting into a system that actively supports decision-making." – Les O'Gorman, Director of Facilities, UCB (Pharma and Life Sciences)