Tips for Measuring Program Success with SpyCloud Consumer ATO Prevention

📊 Metrics & KPIs for Consumer ATO Prevention (CAP)

Use this guide to define success, build dashboards, and report impact. Start with a short baseline, set targets, and iterate.


🎯 What to measure

| Category | KPI | Why it matters |
| --- | --- | --- |
| Outcomes | ATO Incidents | The core “bad thing” you’re trying to stop. |
| Outcomes | ATO Prevented | Proactive blocks/resets that stopped credential-based account abuse. |
| Efficiency | Time to Detect (TTD) | Time from new exposure observed → user flagged. |
| Efficiency | Time to Remediate (TTR) | Time from user flagged → action completed (reset/step-up). |
| Coverage | Population Evaluated | % of active users evaluated by CAP controls. |
| Coverage | Identifier Coverage | % of users with matchable identifiers (email/phone/username/IP). |
| Quality | Policy Precision | % of CAP actions on accounts with confirmed exposure. |
| Quality | False Positive Rate | % of CAP actions later reversed as “not risky.” |
| User Experience | Friction Rate | % of logins/registrations impacted by CAP actions. |
| User Experience | Password Reset Completion | % of forced resets completed within SLA (e.g., 24h). |

🧪 Baseline, targets, and cadence

  • Baseline: 2–4 weeks pre-enforcement to record ATO incidents, fraud losses, and login/registration conversion.
  • Targets (first 90 days):
    • ATO incidents: ↓ 20–40% (credential-based)
    • TTR: ↓ 30–50% (from flag to reset/step-up)
    • Friction rate: ≤ 1–3% of total logins (tune to business tolerance)
  • Cadence: Weekly ops review; monthly KPI roll-up; quarterly ROI update.

Aim for steady improvements, not perfection on day one. Tighten thresholds as precision improves.


🔢 Core definitions & formulas

Adjust to your data model; keep formulas stable over time for trend integrity.

1) ATO Incidents (credential-based)
   Number of confirmed account takeovers that used exposed/reused credentials during the period.

2) ATO Prevented
   Prevented = Forced_Resets_PreAccess + High_Risk_Login_Blocks + Registration_Blocks

3) Time to Detect (TTD)
   TTD (median) = time(user_flagged_by_CAP) − time(exposure_available)

4) Time to Remediate (TTR)
   TTR (median) = time(action_completed) − time(user_flagged_by_CAP)

5) Population Evaluated
   Evaluated % = distinct(users_evaluated_by_CAP) / distinct(active_users)

6) Policy Precision
   Precision % = CAP_Actions_On_Exposed / Total_CAP_Actions

7) False Positive Rate (FPR)
   FPR % = Reversed_CAP_Actions / Total_CAP_Actions

8) Friction Rate
   Friction % = (Forced_Resets + Step_Up_Auth_Prompts) / Total_Logins

9) Password Reset Completion
   Completion % = Resets_Completed_within_SLA / Resets_Triggered
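
To make these concrete, here is a minimal Python sketch that computes the KPIs from already-joined event records. The input shapes and field names (user_flagged_at, confirmed_exposure, etc.) are assumptions for illustration, not a SpyCloud schema.

```python
from statistics import median

def kpi_summary(flags, cap_actions, total_logins,
                active_users, evaluated_users,
                resets_triggered, resets_completed_within_sla):
    """Compute the core CAP KPIs from pre-joined event records (field names are assumptions)."""
    # 3) TTD and 4) TTR, as medians over flagged users
    ttd = median(f["user_flagged_at"] - f["exposure_available_at"] for f in flags)
    remediated = [f for f in flags if f.get("action_completed_at")]
    ttr = median(f["action_completed_at"] - f["user_flagged_at"] for f in remediated)

    # 6) Precision, 7) FPR, 8) Friction
    total_actions = len(cap_actions)
    precision = sum(a["confirmed_exposure"] for a in cap_actions) / total_actions
    fpr = sum(a["reversed"] for a in cap_actions) / total_actions
    friction_events = sum(a["type"] in ("forced_reset", "step_up") for a in cap_actions)

    return {
        "ttd_median": ttd,
        "ttr_median": ttr,
        "evaluated_pct": len(evaluated_users) / len(active_users),              # 5
        "precision_pct": precision,
        "fpr_pct": fpr,
        "friction_pct": friction_events / total_logins,
        "reset_completion_pct": resets_completed_within_sla / resets_triggered,  # 9
    }
```

The sketch assumes non-empty inputs for brevity; guard the divisions against empty periods in production.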


🧱 Suggested events to log (for dashboards)

Capture these events (or equivalents) to power KPIs:

  • ExposureEvaluated (user_id, identifiers, exposure_source: breach/malware/phished, exposure_count, severity, ts)
  • LoginEvaluated (user_id, credential_risk, action_applied, ts)
  • RegistrationEvaluated (user_id/email, action_applied, ts)
  • ActionApplied (type: forced_reset | step_up | registration_block, reason, ts)
  • ResetCompleted (user_id, ts)
  • ATOConfirmed (user_id, ts, vector: credential)
  • Conversion (type: login_success | registration_complete, ts)

Keep identifiers consistent so you can join SpyCloud results with app/fraud/IdP logs.
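
If it helps to picture the plumbing, the events above could be emitted as JSON lines. The log_event helper and its fields are illustrative, not part of any SpyCloud SDK; adapt them to your own logging pipeline.

```python
import json
import time

def log_event(event_type: str, **fields) -> None:
    """Emit one structured CAP event as a JSON line (stdout here; ship to your pipeline)."""
    record = {"event": event_type, "ts": time.time(), **fields}
    print(json.dumps(record, default=str))

# Examples mirroring the list above; identifiers and values are illustrative.
log_event("ExposureEvaluated", user_id="u-123", exposure_source="malware",
          exposure_count=3, severity="high")
log_event("ActionApplied", user_id="u-123", type="forced_reset",
          reason="high_severity_recent_exposure")
log_event("ResetCompleted", user_id="u-123")
```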


🗺️ Dashboard layout

📈 Executive top line (always-on)
  • ATO incidents (credential-based) – 13-week trend
  • ATO prevented – weekly count & cumulative
  • TTR (median) – by action (reset vs step-up)
  • Friction rate – % of logins impacted
  • Reset completion – % within SLA (24h, 72h)

🧰 Operations (daily)
  • New exposed users – by source (breach/malware/phished) & severity
  • Actions applied – by policy route (reset/step-up/block)
  • Open remediations – aging buckets (0–24h, 24–72h, 3–7d)
  • Precision & FPR – weekly trend
  • Population evaluated – by user segment (region, product line, risk tier)

🧪 Policy tuning
  • Exposure count bands (e.g., 1, 2–5, 6+) → action outcomes (see the sketch after this list)
  • Credential risk score → conversion/abandon comparisons
  • A/B threshold tests – effect on precision, FPR, friction
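
Here is a small sketch of the exposure-count banding idea, assuming you have joined per-user exposure counts with action outcomes from your own logs; the field names are placeholders.

```python
from collections import defaultdict

def band(exposure_count: int) -> str:
    """Map an exposure count to the dashboard bands: 1, 2-5, 6+."""
    if exposure_count <= 1:
        return "1"
    return "2-5" if exposure_count <= 5 else "6+"

def outcomes_by_band(rows):
    """rows: dicts with 'exposure_count' and 'action_outcome' (assumed, pre-joined fields)."""
    buckets = defaultdict(lambda: defaultdict(int))
    for r in rows:
        buckets[band(r["exposure_count"])][r["action_outcome"]] += 1
    return {b: dict(counts) for b, counts in buckets.items()}

print(outcomes_by_band([
    {"exposure_count": 1, "action_outcome": "reset_completed"},
    {"exposure_count": 4, "action_outcome": "step_up_passed"},
    {"exposure_count": 7, "action_outcome": "reset_completed"},
]))
```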

🧭 Segmentation you’ll want

  • Source: breach vs malware vs phished
  • Lifecycle: new registration vs returning login
  • Tier: high-value accounts vs standard
  • Region/geo & device class
  • Credential hygiene: prior resets, repeat offenses

Segment reports to see where to tighten and where to lighten controls.


🧪 Example KPI scorecard (monthly)

| KPI | This Month | Prev Month | Δ | Target | Status |
| --- | --- | --- | --- | --- | --- |
| ATO Incidents (cred) | 42 | 61 | −31% | −20% | ✅ |
| ATO Prevented | 3,210 | 2,780 | +15% | +10% | ✅ |
| TTR (median) | 3.6h | 6.1h | −41% | ≤ 4h | ✅ |
| Precision | 92.4% | 88.7% | +3.7pp | ≥ 90% | ✅ |
| FPR | 1.3% | 1.9% | −0.6pp | ≤ 2% | ✅ |
| Friction | 1.8% | 1.7% | +0.1pp | ≤ 2% | ⚠️ |
| Reset Completion (24h) | 74% | 69% | +5pp | ≥ 75% | ⚠️ |

pp = percentage points
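
If you automate the scorecard, keep the Δ column honest: percent change for counts and durations, percentage points for KPIs that are already rates. A tiny sketch (the function and formatting are illustrative):

```python
def delta(current: float, previous: float, unit: str) -> str:
    """Month-over-month change: percent change for counts/times, pp for rate KPIs."""
    if unit == "pp":                      # KPI is already a percentage
        return f"{current - previous:+.1f}pp"
    return f"{(current - previous) / previous:+.0%}"

print(delta(42, 61, "%"))       # ATO incidents: -31%
print(delta(92.4, 88.7, "pp"))  # Precision: +3.7pp
```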


🧩 Translating SpyCloud results into actions

Map exposure severity/counts to actions (tune to your risk tolerance):

| Condition | Suggested Action |
| --- | --- |
| High severity or multiple recent exposures | Force password reset immediately |
| Medium severity, first-time exposure | Step-up auth + optional reset nudge |
| Low severity or stale exposure | Monitor; nudge user to refresh password |
| Repeat exposure within 30 days | Mandatory reset + education banner |

Start conservative; widen enforcement as precision stabilizes.
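
In code, the mapping might look like the sketch below; the severity labels, the 30-day window, and the route names mirror the table but are illustrative and should be tuned to your own risk model.

```python
from datetime import datetime, timedelta, timezone

def choose_action(severity: str, exposure_count: int,
                  last_exposure_at: datetime, repeat_within_30d: bool) -> str:
    """Pick a CAP policy route for one user, mirroring the condition table above."""
    recent = datetime.now(timezone.utc) - last_exposure_at <= timedelta(days=30)

    if repeat_within_30d:                                  # repeat exposure within 30 days
        return "mandatory_reset_plus_education"
    if severity == "high" or (exposure_count > 1 and recent):
        return "forced_reset"                              # high severity / multiple recent exposures
    if severity == "medium" and exposure_count == 1:
        return "step_up_plus_reset_nudge"                  # first-time, medium severity
    return "monitor_and_nudge"                             # low severity or stale exposure

print(choose_action("medium", 1, datetime.now(timezone.utc) - timedelta(days=3), False))
```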


🧮 Simple ROI sketch (optional)

ROI = (Avg Incident Cost × ATO Prevented) − (Ops Cost + Tooling Cost)

  • Avg Incident Cost: fraud loss + recovery + support time
  • Ops Cost: time to handle resets/exceptions
  • Tooling Cost: incremental platform/infra spend
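
A worked example of the formula, with purely illustrative numbers (swap in your own costs):

```python
def roi(avg_incident_cost: float, ato_prevented: int,
        ops_cost: float, tooling_cost: float) -> float:
    """ROI = (Avg Incident Cost * ATO Prevented) - (Ops Cost + Tooling Cost)."""
    return avg_incident_cost * ato_prevented - (ops_cost + tooling_cost)

# Illustrative inputs only; replace with your own figures.
print(roi(avg_incident_cost=290.0, ato_prevented=3_210,
          ops_cost=45_000.0, tooling_cost=120_000.0))
```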

Report quarterly; socialize wins with Security, Fraud, and CX.


🧑‍🔧 Implementation tips

  • Fail open on asynchronous checks at login; run synchronous checks at registration where policy requires (see the sketch after this list).
  • Track attempt → success for forced resets (this powers TTR and completion).
  • Log policy route and reason for every action to enable precision/FPR reporting.
  • Revisit thresholds monthly; keep a changelog for experiments.
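
A minimal sketch of the fail-open login check, assuming a hypothetical check_exposure lookup against the exposure data you have ingested; the timeout and route names are illustrative.

```python
import asyncio

async def check_exposure(user_id: str) -> bool:
    """Hypothetical lookup against your ingested exposure data."""
    await asyncio.sleep(0.05)  # simulate I/O latency
    return False

async def evaluate_login(user_id: str, timeout_s: float = 0.25) -> str:
    """Fail open: if the exposure check is slow or errors, let the login proceed and log the miss."""
    try:
        exposed = await asyncio.wait_for(check_exposure(user_id), timeout=timeout_s)
    except Exception:  # timeout or lookup failure: fail open, review later
        return "allow"
    return "step_up" if exposed else "allow"

print(asyncio.run(evaluate_login("u-123")))
```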

📅 Reporting cadence & owners

  • Weekly ops: CAP owner + Fraud + SRE/Platform
  • Monthly KPI: Security leadership + Product + Support
  • Quarterly ROI: Finance + Exec sponsor

Keep it boring and repeatable. The program wins by consistency.