Security Incident Frequency Stats Are (Mostly) Useless.
Security incident frequency stats are (mostly) useless. Unless we agree on - or know - what a security incident actually is.
Every few months, a new report drops with dramatic headlines:
- “Cyber incidents up 300%!”
- “Companies face an average of 5 incidents per week!”
But here’s the catch: what counts as an incident?
- A phishing email caught by a spam filter?
- A failed login attempt?
- Malware blocked at the endpoint?
- A confirmed data breach?
Without a shared definition - or at least a clear one in context - incident frequency becomes a useless metric. (Unless your goal is to grab attention. In that case, move along.)
Until we standardize - or, more realistically, define clearly within a specific report - what we’re measuring, incident counts are just noise dressed up as signal.
Obviously, I don’t have a catch-all solution.
A first step might be to count only incidents that caused actual damage to the target. But even that gets messy.
Are we talking only primary losses (in the FAIR sense: lost revenue, replacement costs, remediation, etc.)? Or do we include secondary losses (PR fallout, legal costs, regulatory penalties, etc.)?
And even if we settle that, let’s not forget - we’re talking frequency.
High-volume, low-impact incidents dilute the stat - the rare, high-impact events get buried in the count, and the number ends up telling us even less.
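To make that dilution concrete, here’s a toy sketch - entirely made-up numbers and hypothetical categories, not real telemetry - showing how the same week of data yields “thousands of incidents” or “one incident” depending on which definition you apply:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    kind: str
    primary_loss: float    # direct costs: lost revenue, replacement, remediation
    secondary_loss: float  # PR fallout, legal costs, regulatory penalties

# One week of hypothetical telemetry (made-up volumes).
week = (
    [Incident("phishing_blocked_by_spam_filter", 0, 0)] * 240
    + [Incident("failed_login", 0, 0)] * 1_800
    + [Incident("malware_blocked_at_endpoint", 0, 0)] * 35
    + [Incident("confirmed_data_breach", 50_000, 120_000)] * 1
)

# Three plausible definitions of "incident".
definitions = {
    "anything an alert fired on": lambda i: True,
    "caused any damage at all":   lambda i: i.primary_loss + i.secondary_loss > 0,
    "caused primary loss (FAIR)": lambda i: i.primary_loss > 0,
}

for name, counts_as_incident in definitions.items():
    n = sum(1 for i in week if counts_as_incident(i))
    print(f"{name}: {n} incidents this week")

# anything an alert fired on: 2076 incidents this week
# caused any damage at all: 1 incidents this week
# caused primary loss (FAIR): 1 incidents this week
```

Same week, same data - and the headline number swings by three orders of magnitude purely on definition.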
Which brings me to my two main points:
- Be wary when using a stat. If you’re justifying a security initiative and your stat includes phishing emails caught by a spam filter… expect (and deserve) pushback - unless you’re pitching anti-spam solutions.
- Show your work when publishing a stat. If your number isn’t just marketing fluff and you want people to use it - give us the context. Let us engage with your data in good faith.
(And next time, maybe I’ll rant about [why incidents that don’t cause damage are still important]({% post_url 2025-04-24-near-misses-are-worth-talking-about %}) - and how underdeveloped near-miss reporting is.)