A risk dashboard is not a report. A report answers a question that was asked in the past. A dashboard enables a decision that needs to be made now. The design distinction matters: risk dashboards must be organized around decisions and actions, not around data availability or organizational convenience.
This article describes six risk dashboard archetypes, each oriented toward a specific audience, decision context, and update cadence. For each, it specifies the key metrics, layout considerations, widget types, and the data sources and calculations that underlie them. For the KPIs these dashboards display, see Risk KPIs. For the analytical techniques that produce the underlying numbers, see Risk Techniques. For the data sources required, see Risk Data Sources.
Dashboard Design Principles for Risk Analytics
Before turning to the individual archetypes, here are five design principles that apply across all risk dashboards:
Threshold anchoring. Every KPI should be displayed in relation to its threshold or target - not as a raw number but as a position on a defined scale. A fraud rate of 12 bps displayed against a target of 10 bps and a red-line of 15 bps communicates urgency that the raw number does not. Use traffic-light indicators (RAG status: Red/Amber/Green) consistently, but always show the underlying number alongside the indicator - a status color without the underlying number encourages decisions made without evidence.
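The threshold-anchoring rule can be sketched as a small classifier. This is a minimal illustration: the target and red-line values are the example figures from the text, and the lower-is-better orientation is an assumption for metrics like fraud rate.

```python
# Minimal sketch of threshold anchoring for a lower-is-better metric.
# Target and red-line values below are illustrative, not real thresholds.
def rag_status(value: float, target: float, red_line: float) -> str:
    """Return the RAG (Red/Amber/Green) status for a metric."""
    if value <= target:
        return "Green"
    if value < red_line:
        return "Amber"
    return "Red"

# Fraud rate of 12 bps against a 10 bps target and a 15 bps red-line:
status = rag_status(12.0, target=10.0, red_line=15.0)  # "Amber"
```

Displaying both the status and the number means the dashboard shows `Amber (12 bps)` rather than a bare color.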
Trend over snapshot. Risk metrics displayed as a single point in time are almost meaningless. Display 13 months of trend data as the default time horizon - this preserves year-over-year comparison while showing monthly movement. For operational risk metrics where the current period is incomplete, distinguish confirmed values from projections.
Separation of leading and lagging indicators. Lagging indicators (historical losses, closed findings, resolved incidents) tell you what happened. Leading indicators (open risk items, pending remediation actions, compliance deadline proximity) tell you what is likely to happen. Risk dashboards that show only lagging indicators are autopsy reports. Mix both on every dashboard with clear visual distinction.
Drill-down paths. Executive-level risk dashboards should allow click-through from composite metrics to the underlying detail - from a declining Compliance Rate alert to the specific controls failing, from a fraud rate spike to the transaction-level data driving it. Design the drill-down path before designing the summary metric; the summary is only useful if the supporting detail is accessible.
Attribution and accountability. Every risk item should have a named owner visible in the dashboard context. Aggregate metrics without ownership attribution produce diffuse reporting in which nobody is accountable for the numbers.
Dashboard 1: Enterprise Risk Dashboard
Primary audience: CRO, CFO, CEO, Board Risk Committee
Update cadence: Monthly for trend views; near-real-time for threshold breach alerts
Decision context: Is our overall risk position within appetite? Where is it deteriorating and which domains require escalation?
Layout and Sections
Header band (top strip): Enterprise Risk Appetite Utilization gauge - a single semi-circular gauge showing Portfolio Unexpected Loss as a percentage of defined risk appetite capital, with Red/Amber/Green zones. Current period value displayed numerically alongside the gauge. Month-over-month change arrow.
Domain scorecard (primary section, tabular): One row per risk domain (Credit, Market, Operational, Compliance, Third-Party, Reputational). Columns: Domain Name, Risk Score (0-100), RAG Status, Change from Prior Period, Risk Appetite Limit, Appetite Utilization %. Color-code rows by RAG status. This section replaces a traditional risk register summary with a quantified, comparable view.
Trend section (middle): Two panels side by side.
- Left: Risk Exposure Index (REI) 13-month trend line with appetite threshold marked as a horizontal reference line.
- Right: Incident Cost 13-month stacked bar, segmented by risk domain (credit losses, operational losses, fraud losses, compliance fines).
Top risks panel (lower left): Top 5 open risk items by residual risk score, showing: Risk Name, Domain, Risk Owner, Target Mitigation Date, Days Open, Residual Score. Sort by residual score descending. This section drives immediate attention to the highest-priority items, not the most recently opened ones.
Forward-looking section (lower right): Three widgets:
- Upcoming regulatory deadlines in the next 60 days - count by urgency tier
- Audit findings pending remediation by due-date proximity - count in 0-30, 31-60, 61-90 day buckets
- Stress test summary: under the organization’s primary adverse scenario, what is the projected capital impact as a percentage of Tier 1 capital?
Alert panel (right sidebar): Rolling list of threshold breaches in the past 7 days, with domain, metric, current value, threshold, and breach severity. Auto-cleared when values return to within-threshold range.
Key Metrics Displayed
- Risk Exposure Index (REI)
- Value at Risk (VaR) - market and credit
- Compliance Rate (aggregate)
- Open Risk Items by severity
- Incident Cost (trailing 12 months)
- Risk Mitigation Effectiveness
Dashboard 2: Financial Risk and Audit Dashboard
Primary audience: CFO, VP Internal Audit, Controller, External Audit Liaison
Update cadence: Weekly for audit metrics; monthly for financial risk metrics
Decision context: Are our financial controls operating effectively? Where are audit findings concentrated? What is the status of remediation commitments?
Layout and Sections
Audit activity summary (top row, three KPI tiles):
- Total Findings (current audit cycle): count with year-over-year comparison
- Repeat Finding Rate: percentage with prior-period trend arrow
- Average Days to Remediation: current period vs. target vs. prior period
Finding inventory (primary section): Detailed grid of all open audit findings with columns: Finding ID, Audit Engagement, Control Domain, Severity (Critical/High/Medium/Low), Business Unit Owner, Due Date, Days Until Due, Current Status, Notes. Color-code rows by severity. Enable filtering by severity, business unit, and due-date proximity. Sort by due date ascending by default to surface overdue items first.
Remediation trend (middle left): Stacked bar chart showing findings opened, findings remediated, and findings overdue for each of the past 13 months. A rising “overdue” segment indicates remediation velocity is insufficient relative to new finding volume.
Finding heatmap (middle right): Two-dimensional heat map with business units on one axis and control domains (Financial Reporting, IT General Controls, Operational Controls, Compliance Controls) on the other. Cell color intensity represents finding count or aggregate severity score. This immediately identifies which business units have systemic control weaknesses vs. isolated failures.
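As a sketch, each heatmap cell reduces to a count per (business unit, control domain) pair. The finding records below are hypothetical.

```python
from collections import Counter

# Hypothetical open-finding records: (finding ID, business unit, control domain).
findings = [
    ("F-101", "Retail", "IT General Controls"),
    ("F-102", "Retail", "IT General Controls"),
    ("F-103", "Retail", "Financial Reporting"),
    ("F-104", "Treasury", "Compliance Controls"),
]

# Heatmap cells: finding count per (business unit, control domain) pair.
cells = Counter((bu, domain) for _, bu, domain in findings)

cells[("Retail", "IT General Controls")]  # 2
```

The same tally keyed on severity weight instead of a constant 1 yields the aggregate-severity variant of the heatmap.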
SoD violation tracking (lower left): Count of active segregation-of-duties violations by risk tier (Critical SoD, High SoD, Medium SoD). Trend for the past 6 months. Link to violation detail showing user, conflicting roles, and last review date. SoD violations are a foundational SOX control requirement and persistent violations warrant immediate escalation.
Journal entry anomalies (lower right): Count of GL journal entries flagged by automated anomaly monitoring in the current period, segmented by flag type (unusual amount, after-hours posting, bypassed approval, unusual account combination). Trend for the past 6 months. Flagged entry volume as a percentage of total journal entries.
Financial VaR panel (far right): Simplified VaR display for market risk and liquidity risk, showing 1-day and 10-day VaR at 99% confidence. Days-since-VaR-breach count. Position relative to regulatory and internal VaR limits.
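The 1-day and 10-day figures are conventionally related by square-root-of-time scaling - an approximation that holds only under i.i.d. return assumptions, but is the standard shorthand when a separate 10-day model is not run. A minimal sketch:

```python
import math

def scale_var(var_1d: float, horizon_days: int) -> float:
    """Square-root-of-time scaling from 1-day VaR to a longer horizon.
    An approximation valid only under i.i.d. return assumptions."""
    return var_1d * math.sqrt(horizon_days)

# A $1M 1-day VaR scales to roughly $3.16M at a 10-day horizon.
ten_day = scale_var(1_000_000.0, 10)
```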
Dashboard 3: Fraud Detection Dashboard
Primary audience: Chief Fraud Officer, Head of Financial Crime, Fraud Operations Manager
Update cadence: Near-real-time for operational metrics; daily summaries for management view
Decision context: Is fraud occurring at an elevated rate? Where is it concentrated? Are alert queues being worked at adequate velocity?
Layout and Sections
Real-time ticker (header): Live statistics for the current day:
- Transactions processed (count and volume)
- Alerts generated (count and as percentage of transactions)
- Cases in queue (count by priority)
- Cases confirmed fraudulent (count and as percentage of alerts)
- Estimated fraud losses prevented (dollar value from declined/held transactions)
Fraud rate trend (primary section, full width): 90-day daily fraud rate (bps) trend line. Overlay: volume line (secondary axis). This dual-axis view distinguishes genuine fraud rate increases from volume-driven alert count changes. Mark significant events (new fraud type detected, rule change deployed, system outage) as annotations on the timeline.
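The basis-point fraud rate plotted here is a simple ratio; a minimal sketch (function name is illustrative):

```python
def fraud_rate_bps(fraud_count: int, transaction_count: int) -> float:
    """Fraud rate in basis points (1 bp = 0.01% of transactions)."""
    if transaction_count == 0:
        return 0.0
    return fraud_count / transaction_count * 10_000

# 120 confirmed fraudulent transactions out of 1M processed:
fraud_rate_bps(120, 1_000_000)  # ≈ 1.2 bps
```

Computing the rate daily and plotting it alongside raw volume is what lets the dual-axis view separate a genuine rate increase from a volume-driven alert increase.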
Alert volume by channel (left panel): Stacked bar chart by day, segmented by channel (card-present, card-not-present, ACH, wire, mobile). Channel mix shifts in the fraud alert distribution indicate where active fraud campaigns are targeting.
Alert disposition funnel (center panel): Funnel chart: Transactions → Alerts Generated → Alerts Reviewed → Cases Created → Confirmed Fraud → Loss Recovered. Each stage shows count and conversion rate. Bottlenecks at the review stage indicate queue backlogs; bottlenecks at case creation indicate alert precision problems.
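The stage-to-stage conversion rates can be computed directly from the stage counts; the figures below are hypothetical:

```python
# Hypothetical stage counts for one day of the disposition funnel.
funnel = [
    ("Transactions", 1_000_000),
    ("Alerts Generated", 4_000),
    ("Alerts Reviewed", 3_600),
    ("Cases Created", 900),
    ("Confirmed Fraud", 300),
]

# Conversion rate of each stage relative to the stage before it.
conversions = {
    name: count / funnel[i - 1][1]
    for i, (name, count) in enumerate(funnel)
    if i > 0
}
# e.g. conversions["Cases Created"] == 0.25 (900 of 3,600 reviewed alerts)
```

A low Alerts Reviewed rate flags a queue backlog; a low Cases Created rate flags alert precision problems, matching the diagnostic reading in the text.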
Fraud type breakdown (right panel): Donut chart or sortable table: fraud cases by type (card fraud, account takeover, first-party fraud, synthetic identity, insider fraud, vendor fraud). Period comparison (current month vs. prior month, current quarter vs. prior quarter).
Geographic distribution (lower left): Transaction volume and fraud rate by geography (country or state level, depending on business). Heat map coloring by fraud rate. Rising fraud rates in new geographies often indicate test activity by organized fraud rings before scaling.
Merchant category analysis (lower right): Top merchant categories by fraud rate and by fraud volume. A category with a moderate fraud rate but high transaction volume may represent larger total losses than a high-rate low-volume category.
Analyst performance metrics (bottom strip): Cases assigned per analyst, average review time, alert-to-case conversion rate, accuracy rate (percentage of confirmed cases correctly classified at triage). This section enables capacity planning and identifies analysts who may need additional training.
Dashboard 4: Compliance Dashboard
Primary audience: Chief Compliance Officer, Head of Regulatory Affairs, Business Unit Compliance Officers
Update cadence: Daily for attestation and deadline tracking; monthly for control effectiveness metrics
Decision context: Are our compliance obligations being met? Where are controls failing? What regulatory deadlines are at risk?
Layout and Sections
Compliance health summary (header, four KPI tiles):
- Overall Compliance Rate: percentage with trend arrow
- Controls Tested This Quarter: count vs. plan
- Critical Control Failures: count (drives immediate attention)
- Regulatory Deadlines in Next 30 Days: count
Regulatory deadline calendar (primary section): Timeline view showing upcoming regulatory filing deadlines for the next 90 days. Each deadline shown as a bar: deadline name, jurisdiction, responsible owner, current preparation status (Not Started / In Progress / Ready / Submitted). Color-code by preparation status. Overdue items shown in red above the timeline. This section drives immediate action from compliance teams more effectively than any KPI alone.
Control effectiveness heatmap (middle): Matrix with regulatory frameworks (SOX, GDPR, PCI DSS, HIPAA, AML, local frameworks as applicable) on one axis and control domains (IT General Controls, Financial Controls, Data Privacy Controls, Access Controls) on the other. Cell values represent percentage of controls with effective operation ratings. Color-grade from green (>95%) through amber (80-95%) to red (<80%). Clicking a cell drills to the individual control list.
Policy attestation tracker (lower left):
- Attestation completion rate for the current cycle: percentage with progress bar toward 100%
- Overdue attestations: count by organizational unit
- Trend of completion rate by day within the attestation window (S-curve chart) - a completion rate that is growing slowly early in the window may indicate a deadline risk
Training compliance (lower center):
- Required training completion rate by course
- Employees with overdue training by business unit
- Days until next compliance training deadline
Compliance findings by framework (lower right): Bar chart: compliance audit findings in the trailing 12 months, grouped by regulatory framework. Enables comparison of which frameworks are generating the most control deficiencies and may require targeted remediation investment.
Regulatory examination status panel (sidebar): Current open regulatory examinations or inquiries - regulator name, examination type, period under examination, requests outstanding, response due dates. This panel ensures that exam management status is visible to compliance leadership without requiring separate tracking in email or manual systems.
Dashboard 5: Credit Risk Dashboard
Primary audience: Chief Credit Officer, Head of Portfolio Risk, Credit Risk Analysts
Update cadence: Daily for concentration metrics; monthly for PD/LGD model performance
Decision context: Is portfolio credit quality deteriorating? Where is concentration risk building? Are credit models performing as expected?
Layout and Sections
Portfolio health summary (header, five KPI tiles):
- Portfolio Expected Loss (EL): dollar value with prior-period comparison
- Weighted Average PD: percentage with trend arrow
- Weighted Average LGD: percentage with trend arrow
- Portfolio VaR (99%, 1-year): dollar value and as percentage of regulatory capital
- Concentration HHI: score with color coding (green <1,500 / amber 1,500-2,500 / red >2,500)
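Two of these tiles rest on standard calculations: EL as the sum of PD × LGD × EAD per obligor, and HHI as the sum of squared exposure shares, with shares in percent so the score spans 0-10,000 and matches the bands above. A sketch with hypothetical obligors:

```python
# Hypothetical obligor records: (exposure EAD in dollars, PD, LGD).
portfolio = [
    (10_000_000, 0.02, 0.45),
    (5_000_000, 0.01, 0.40),
    (2_000_000, 0.05, 0.60),
]

# Portfolio Expected Loss: sum of PD x LGD x EAD across obligors.
el = sum(ead * pd * lgd for ead, pd, lgd in portfolio)  # ≈ $170,000

# Concentration HHI: sum of squared exposure shares, shares in percent,
# so the score ranges 0-10,000 (the 1,500 / 2,500 bands use this scale).
total = sum(ead for ead, _, _ in portfolio)
hhi = sum((ead / total * 100) ** 2 for ead, _, _ in portfolio)
```

This three-obligor portfolio lands deep in the red band - unsurprising, since concentration is what HHI punishes and a tiny portfolio is maximally concentrated.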
Credit quality migration (primary section): Migration matrix showing movement of obligors between internal credit grades in the past quarter. Row = start grade, Column = end grade. Cell values = count and percentage of obligors. Diagonal cells (no migration) shown in neutral color; downgrades shown in progressively warmer colors; upgrades shown in progressively cooler colors. A matrix with heavy concentration below the diagonal indicates broad portfolio deterioration.
PD distribution (middle left): Histogram of obligor PD scores across the portfolio, segmented by vintage (origination year). Shifts in the distribution toward higher PD values indicate credit quality deterioration - especially meaningful when the shift is concentrated in a specific vintage, pointing to origination period underwriting weakness.
Concentration risk panel (middle right): Three views available via tab:
- Top 10 obligors by exposure (name/ID, exposure amount, PD, expected loss, percentage of portfolio)
- Sector concentration: exposure by industry sector vs. concentration limits
- Geographic concentration: exposure by country/region with HHI by geography
Model performance monitoring (lower left): For each production credit model:
- Model name and version
- Actual default rate vs. model-predicted default rate (back-test comparison)
- Gini coefficient on out-of-time validation data
- Last validation date
- Model status (Approved/Under Review/Watch List)
A significant gap between actual and predicted default rates triggers model re-validation. Rising divergence is an early warning signal of model degradation often detectable quarters before formal validation would otherwise occur.
Watch list (lower right): Obligors on credit watch - deteriorating financial metrics, covenant breaches, ratings under review, or elevated fraud signals. Columns: obligor name, watch reason, initial watch date, exposure amount, assigned analyst, next review date. Sorted by exposure amount descending.
Stress test impact summary (footer): Under the organization’s primary adverse scenario, show the projected impact on: Portfolio EL, Expected Credit Losses (ECL) under IFRS 9/CECL, Capital Ratio, Required Loan Loss Reserve. This connects credit portfolio analytics directly to regulatory capital implications.
Dashboard 6: Operational Risk Dashboard
Primary audience: Chief Operating Officer, Head of Operational Risk, Business Continuity Director
Update cadence: Daily for open incident tracking; monthly for loss trend analysis
Decision context: Are operational failures occurring at elevated rates? Are incidents being identified and resolved quickly? Where are our process controls breaking down?
Layout and Sections
Operational risk health (header, four KPI tiles):
- Open Incidents: count, segmented by severity
- MTTI Risk (rolling 30-day): average hours with trend arrow
- MTTR Risk (rolling 30-day): average hours with trend arrow
- Month-to-Date Operational Loss: dollar value vs. budget and vs. prior month
Incident trend (primary section): 13-month bar chart of incident counts by severity tier (P1/Critical, P2/High, P3/Medium, P4/Low). Overlay the monthly loss line on a secondary axis. This view immediately shows whether incident volume trends and loss trends are correlated - they should be, but divergences (rising volume with flat loss) may indicate improved loss containment, while flat volume with rising loss indicates severity escalation.
Category breakdown (middle left): Donut chart or horizontal bar chart: operational loss events in the trailing 12 months by event type using Basel II operational risk categories (Internal Fraud, External Fraud, Employment Practices and Workplace Safety, Clients Products and Business Practices, Damage to Physical Assets, Business Disruption and System Failures, Execution Delivery and Process Management). This categorization enables benchmarking against industry loss data from the ORX operational risk consortium.
Process heat map (middle right): Matrix of business processes (rows) vs. risk event types (columns). Cell values represent incident count or aggregate loss for the current year. Color intensity represents severity. This view identifies which process-risk combinations account for the largest share of operational losses and should receive the most remediation investment.
Near-miss tracking (lower left): Count of near-miss reports submitted in the current and prior three months. Near-miss reporting rate (near-miss reports per 100 employees or per 100,000 transactions) as a leading indicator - a high near-miss reporting culture catches precursors before they become loss events. A declining near-miss rate in an organization without improving loss trends often indicates reduced reporting rather than reduced risk.
Business continuity status (lower right): For each critical business process:
- Current availability status (Operational/Degraded/Unavailable)
- Last BCP test date
- RTO (Recovery Time Objective) and RPO (Recovery Point Objective) targets
- Actual recovery time from most recent unplanned outage
Vendor-related incidents (footer panel): Incidents attributable to third-party vendors - count, total loss, and percentage of total operational losses attributable to vendor failures. Top 5 vendors by incident count. This connects operational risk measurement to the third-party risk analytics described in Risk Techniques and creates accountability for vendor management outcomes.
Implementation Sequencing
Organizations building risk analytics dashboards for the first time should sequence implementation in order of decision impact and data readiness:
Phase 1 (Foundation): Compliance Dashboard and Operational Risk Dashboard. These rely primarily on internally generated data (GRC systems, RMIS, ticketing systems) and produce immediate decision value by replacing manual spreadsheet tracking. Expected implementation time: 6-10 weeks.
Phase 2 (Financial Controls): Financial Risk and Audit Dashboard. Requires ERP integration and audit management system connection. Produces audit finding analytics and financial anomaly monitoring that directly reduce financial statement risk. Expected implementation time: 8-14 weeks.
Phase 3 (Credit and Fraud): Credit Risk Dashboard and Fraud Detection Dashboard. These require the deepest data integration (core banking or lending systems, transaction feeds, credit bureau data) and are most technically demanding. Expected implementation time: 12-20 weeks each.
Phase 4 (Enterprise): Enterprise Risk Dashboard. This dashboard aggregates outputs from all other domains and is meaningless without the underlying domain metrics operating reliably. Implement last, after the domain-level analytics have been validated and are trusted by stakeholders.
The sequencing is deliberate: each phase produces independent decision value and builds stakeholder trust in the analytical infrastructure that the next phase will depend on. An integrated analytics platform such as Plotono can accelerate this progression by providing a shared dashboard and data pipeline layer that carries consistent metric definitions from one phase into the next.