
Dashboards & Reporting

Customer dashboards for retention, engagement, and satisfaction metrics.

By D-LIT Team

A dashboard is only as valuable as the decisions it enables. The most common failure in customer analytics reporting is building dashboards that display data without creating a clear connection between the data and the actions it should trigger. This guide describes seven customer analytics dashboard types, the audience each serves, the specific metrics each should contain, and the design principles that determine whether a dashboard drives action or becomes another report that teams glance at and ignore.

The seven dashboards in this guide address the full spectrum of customer analytics consumers: executive leadership reviewing company health, customer success managers overseeing account portfolios, product teams evaluating feature engagement, finance tracking revenue retention, and support teams managing service quality.


Executive Customer Health Dashboard

Purpose. This dashboard gives the CEO, CRO, and CCO a single view of the overall health and trajectory of the customer base. It is designed to answer the question “How is the customer business doing this month compared to last month, and compared to a year ago?” without requiring the viewer to synthesize information from multiple reports.

Primary audience. Chief Executive Officer, Chief Revenue Officer, Chief Customer Officer, board and investor reporting.

Key metrics.

  • Net Revenue Retention (NRR): The single most important metric on this dashboard. Shows what proportion of last period’s revenue is retained this period after accounting for churn, contraction, and expansion. Target above 100 percent for a healthy subscription business.
  • Gross Revenue Retention (GRR): NRR without expansion. Isolates the retention story from the growth story.
  • Monthly Recurring Revenue (MRR) waterfall: New MRR, Expansion MRR, Contraction MRR, and Churned MRR displayed as a waterfall chart showing the composition of Net New MRR.
  • Customer count movement: Total customer count at start and end of period, with new, churned, and net change counts displayed.
  • NPS trend: Rolling 90-day NPS with comparison to prior period and prior year.
  • LTV:CAC Ratio by segment: The efficiency of customer acquisition relative to expected return, segmented by customer tier or acquisition channel.
  • Average Revenue Per Account (ARPA): Trend over time to identify whether pricing power is increasing or eroding.
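The retention metrics above all derive from the same MRR movement components. A minimal sketch of how NRR and GRR fall out of the waterfall, using hypothetical dollar figures:

```python
# Hypothetical MRR movements for one period, in dollars.
starting_mrr = 100_000
expansion_mrr = 8_000    # upsells and cross-sells to existing accounts
contraction_mrr = 3_000  # downgrades within retained accounts
churned_mrr = 5_000      # revenue from lost accounts

# NRR: revenue retained from the existing base including expansion.
nrr = (starting_mrr + expansion_mrr - contraction_mrr - churned_mrr) / starting_mrr

# GRR: the same calculation with expansion excluded, isolating pure retention.
grr = (starting_mrr - contraction_mrr - churned_mrr) / starting_mrr

print(f"NRR: {nrr:.1%}")  # NRR: 100.0%
print(f"GRR: {grr:.1%}")  # GRR: 92.0%
```

Note how expansion can mask churn: this hypothetical business shows a healthy-looking 100 percent NRR while quietly losing 8 percent of its base revenue, which is exactly why the dashboard shows both numbers.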

Design principles. This dashboard should contain fewer than fifteen metrics. Every metric displayed should have a comparison (prior period, prior year, or target). Color coding should consistently represent performance direction: green for favorable movement, red for unfavorable, regardless of whether the metric is increasing or decreasing in absolute terms. (Churn going down is green; NPS going up is green.) Include brief narrative annotations for any metric that moved more than ten percent in the period, explaining the driver.

Update cadence. Monthly primary review with weekly refresh available for MRR-related metrics.


Customer Churn and Retention Dashboard

Purpose. This dashboard provides the operational depth behind the retention numbers surfaced in the executive view. It is designed for the team responsible for retention outcomes, typically the Head of Customer Success or VP of Customer Experience, to understand where churn is occurring, which segments are most at risk, and whether retention initiatives are working.

Primary audience. Head of Customer Success, VP of Customer Experience, Revenue Operations.

Key metrics.

  • Cohort retention curves: Month-over-month retention percentage for each acquisition cohort, displayed as a line chart. This is the single most important analysis for understanding whether retention is improving.
  • Churn by segment: Customer churn rate broken down by account tier (enterprise, mid-market, SMB), acquisition channel, industry vertical, and product tier. Identifies where churn is concentrated.
  • Churn by reason: Where exit surveys or sales-logged churn reasons are available, show the distribution of stated churn reasons (price, competitive loss, product fit, budget cut, champion departure, etc.).
  • Time-to-churn distribution: How long customers typically survive before churning, broken down by segment. Identifies critical lifecycle moments where intervention is most needed.
  • At-risk account count: Current number of accounts flagged as at-risk by the health scoring model, with MRR at risk.
  • Churn rate trend: Rolling 12-month monthly customer churn rate and revenue churn rate plotted over time.
  • Involuntary churn rate: Churn due to payment failure specifically, which is recoverable through dunning and should be tracked separately from voluntary churn.
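The cohort retention curves above can be computed directly from signup cohorts and churn timing. A simplified sketch, using hypothetical records where each account carries its cohort month and the month (relative to signup) in which it churned, or None if still active:

```python
from collections import defaultdict

# Hypothetical records: (cohort_month, month_of_churn), where month_of_churn is
# the number of months after signup at which the account churned, None if active.
customers = [
    ("2024-01", 2), ("2024-01", None), ("2024-01", None), ("2024-01", 5),
    ("2024-02", 1), ("2024-02", None), ("2024-02", 3),
]

def retention_curves(customers, horizon=6):
    """For each cohort, the fraction of accounts still active at months 0..horizon."""
    cohorts = defaultdict(list)
    for cohort, churn_month in customers:
        cohorts[cohort].append(churn_month)
    return {
        cohort: [
            sum(1 for c in churned if c is None or c > month) / len(churned)
            for month in range(horizon + 1)
        ]
        for cohort, churned in sorted(cohorts.items())
    }

for cohort, curve in retention_curves(customers, horizon=3).items():
    print(cohort, [f"{r:.0%}" for r in curve])
```

Plotting one line per cohort makes improvement visible: if newer cohorts' curves sit above older ones at the same month offset, retention initiatives are working.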

Design principles. Cohort retention should be the primary visual on this dashboard. It should occupy a large portion of the view and be readable without clicking through. Segmentation filters should allow the viewer to drill into any breakdown (by tier, vertical, or product) without navigating to a separate view. At-risk account lists should link directly to the relevant CRM or customer success platform records.

Update cadence. Weekly for operational metrics; cohort analysis monthly.


Account Health and Risk Dashboard

Purpose. This dashboard is the operational tool for individual customer success managers, showing the health status of each account in their portfolio and flagging accounts that require attention. It sits between the strategic overview of the executive view and the granular behavioral data of the product usage view.

Primary audience. Customer Success Managers, Customer Success Team Leads.

Key metrics (per account).

  • Overall health score: Composite score derived from product engagement, support history, NPS, and contract signals. Displayed as a color-coded indicator (green/yellow/red) with the score breakdown visible on hover or click.
  • Product engagement trend: Login frequency and active feature usage over the past 30 days versus the prior 30 days, expressed as a percentage change.
  • Days since last meaningful activity: A simple signal that degrades rapidly for inactive accounts.
  • Open support tickets: Count and age of unresolved support issues.
  • NPS classification: Whether the account’s most recent respondent was a Promoter, Passive, or Detractor, and the date of last response.
  • Renewal date and ARR at risk: Highlights accounts approaching renewal within 90 days.
  • Next scheduled touchpoint: Date of next planned CSM interaction.
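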

At a portfolio level, the dashboard should display:

  • Distribution of accounts by health tier (green/yellow/red) and the MRR weight of each tier.
  • Accounts whose health score declined by more than ten points in the past 30 days.
  • Accounts with no CSM touchpoint scheduled in the next 30 days that are in yellow or red health.
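The composite health score described above is typically a weighted sum of normalized sub-scores mapped to a tier. The weights and thresholds below are hypothetical; a real model would calibrate them against historical churn outcomes:

```python
# Hypothetical weights over the sub-scores named in the text; each sub-score
# is assumed to be normalized to a 0-100 scale upstream.
WEIGHTS = {
    "product_engagement": 0.40,
    "support_history": 0.20,
    "nps": 0.20,
    "contract_signals": 0.20,
}

def health_score(sub_scores: dict) -> float:
    """Weighted composite of normalized (0-100) sub-scores."""
    return sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

def health_tier(score: float) -> str:
    """Map the composite score to the green/yellow/red indicator (example cutoffs)."""
    if score >= 70:
        return "green"
    if score >= 40:
        return "yellow"
    return "red"

account = {"product_engagement": 55, "support_history": 80,
           "nps": 30, "contract_signals": 60}
score = health_score(account)
print(score, health_tier(score))  # 56.0 yellow
```

Exposing the per-component breakdown, not just the composite, is what makes the score actionable: a yellow driven by falling engagement calls for a different CSM play than a yellow driven by a detractor NPS response.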

Design principles. This dashboard is used daily by CSMs and must be fast, scannable, and actionable. Every red or yellow account should show a clear call to action or next step. The most critical view is sorted by a combination of health risk and MRR at risk, so that the accounts deserving the most immediate attention are at the top of the list. Bulk actions (e.g., schedule QBR for all red accounts above $10K MRR) are valuable if the dashboard can integrate with the CSM’s workflow tools.

Update cadence. Daily refresh, with near-real-time updates for critical triggers (e.g., champion departure alert, significant engagement drop).


NPS and Customer Satisfaction Dashboard

Purpose. This dashboard tracks satisfaction metrics over time and across segments, enabling the team responsible for customer experience to understand where experience is strong, where it is deteriorating, and what factors are driving sentiment.

Primary audience. Chief Customer Officer, Customer Experience team, Product leadership.

Key metrics.

  • Overall NPS trend: Rolling 90-day NPS plotted over time, with volume of responses displayed alongside the score to contextualize reliability.
  • NPS by segment: NPS broken down by account tier, product line, vertical, and lifecycle stage (0-90 days post-onboarding, 90-365 days, 365+ days). This segmentation reveals whether satisfaction issues are concentrated in specific cohorts.
  • NPS by touchpoint: If NPS is collected at multiple moments (post-onboarding, post-support, annual relationship), show each series separately rather than blending them.
  • Promoter, Passive, Detractor breakdown: Volume and trend for each category, not just the net score.
  • Verbatim theme summary: Top five positive and top five negative themes from open-ended responses in the period, with volume for each theme.
  • CSAT by channel: Post-support CSAT and post-onboarding CSAT tracked separately.
  • CES trend: Customer Effort Score for support interactions, with breakdown by issue type.
  • Closed-loop metrics: For organizations with a closed-loop NPS process, show the percentage of Detractors contacted, average time to first contact, and re-survey NPS for previously contacted Detractors.
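The Promoter/Passive/Detractor breakdown and the net score come from the same 0-10 survey responses. A minimal sketch with hypothetical responses:

```python
def nps(scores: list[int]) -> float:
    """NPS = % Promoters (9-10) minus % Detractors (0-6), on a -100..100 scale.
    Passives (7-8) count toward the denominator but neither group."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical batch of survey responses for one rolling window.
responses = [10, 9, 9, 8, 7, 7, 6, 4, 10, 9]
print(nps(responses))  # 30.0
```

Because Passives vanish from the net score, two periods can show identical NPS while the underlying mix shifts substantially, which is why the dashboard tracks the three category volumes separately.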

Design principles. NPS is a lagging indicator; its value is maximized when displayed in context with leading behavioral data. Where possible, show correlation analysis between NPS scores and product engagement signals, so that teams can see that “accounts with fewer than two active users per month score 15 NPS points lower on average.” Verbatim themes should be linkable to the underlying responses so that readers can see qualitative examples behind the quantitative summaries.

Update cadence. Monthly for relationship NPS; weekly or daily for transactional CSAT and CES.


Product Engagement and Adoption Dashboard

Purpose. This dashboard gives the product team and customer success leadership a view of how customers are using the product: which features are adopted broadly, which are underutilized, where users drop off in key workflows, and how engagement is evolving over time.

Primary audience. Chief Product Officer, Head of Customer Success, Product Managers.

Key metrics.

  • DAU, WAU, MAU with stickiness ratio (DAU/MAU): Core engagement health metrics tracked over time.
  • Activation rate: Percentage of new users or customers who reached the activation milestone within the first 14 and 30 days, tracked as a trend.
  • Feature adoption heatmap: For each major feature or module, the percentage of active accounts using it in the past 30 days. Displayed as a heatmap or ranked list to identify underperforming features.
  • Engagement funnel: Conversion rates through key product workflows from first action to completion.
  • Time-to-value: Distribution of time elapsed between account creation and activation milestone, tracked monthly.
  • Power users per account: For B2B, the percentage of licensed users who qualify as “power users” (using the product at a defined frequency threshold). Low power user ratios in high-value accounts are a churn risk signal.
  • Feature correlation with retention: For the top ten features, the correlation between feature adoption in the first 30 days and 90-day retention. This identifies the features most predictive of long-term engagement.
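The feature-retention relationship above can be approximated simply as a retention lift: the 90-day retention rate of accounts that adopted a feature in their first 30 days minus the rate of those that did not. A sketch with hypothetical account records:

```python
# Hypothetical records: (features adopted in first 30 days, retained at day 90).
accounts = [
    ({"reports", "api"}, True),
    ({"reports"}, True),
    ({"api"}, False),
    (set(), False),
    ({"reports", "alerts"}, True),
    ({"alerts"}, False),
]

def retention_lift(accounts, feature):
    """90-day retention of early adopters minus non-adopters, for one feature."""
    adopters = [retained for feats, retained in accounts if feature in feats]
    others = [retained for feats, retained in accounts if feature not in feats]
    if not adopters or not others:
        return None  # no comparison group; lift is undefined
    return sum(adopters) / len(adopters) - sum(others) / len(others)

for feature in ("reports", "api", "alerts"):
    print(feature, retention_lift(accounts, feature))
```

As with any correlation, lift does not prove causation; a feature may simply be adopted by accounts that were already well-fitted. It is nonetheless a useful ranking signal for where onboarding should steer new users.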

Design principles. The product engagement dashboard should be filterable by account tier, acquisition cohort, and plan type. Trend comparisons are essential: feature adoption percentages are meaningless without the context of whether they are increasing or decreasing. The most actionable view is usually “features with declining adoption among high-value accounts,” which focuses the product team on the risks most likely to affect revenue.

Update cadence. Weekly, with daily data available for teams managing active onboarding programs.


Revenue Retention and Expansion Dashboard

Purpose. This dashboard gives the finance team and revenue leadership a view of how MRR is changing within the existing customer base, not through new acquisition, but through retention, expansion, and contraction. It is the financial partner to the churn and retention dashboard.

Primary audience. Chief Financial Officer, VP of Finance, Chief Revenue Officer, Revenue Operations.

Key metrics.

  • MRR waterfall: Starting MRR, plus New, plus Expansion, minus Contraction, minus Churned MRR = Ending MRR. The waterfall format makes the composition of MRR change visible at a glance.
  • Net Revenue Retention (NRR) by cohort: NRR calculated for each acquisition cohort, showing how each cohort’s revenue has evolved since acquisition.
  • Expansion MRR by type: Upsell (seat expansion), cross-sell (new product modules), and price increase, tracked separately.
  • Logo retention rate: The percentage of accounts from the prior period that renewed in the current period, regardless of contract value.
  • Dollar retention rate: The percentage of prior-period ARR retained in the current period before expansion.
  • Payback period trend: Average months to recover CAC from gross margin, tracked over time and by acquisition cohort.
  • ARR by segment: Current ARR distributed across customer tiers, showing the concentration of revenue in enterprise versus mid-market versus SMB accounts.
  • Renewal forecast: ARR up for renewal in the next 30, 60, and 90 days, segmented by risk tier based on health scoring.
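The payback period metric above follows directly from CAC, ARPA, and gross margin. A minimal sketch with hypothetical inputs:

```python
def payback_months(cac: float, arpa_monthly: float, gross_margin: float) -> float:
    """Months to recover acquisition cost from gross-margin-adjusted revenue."""
    return cac / (arpa_monthly * gross_margin)

# Hypothetical inputs: $6,000 CAC, $500/month ARPA, 80% gross margin.
print(payback_months(6_000, 500, 0.80))  # 15.0
```

Tracking this by acquisition cohort, as the dashboard specifies, reveals whether acquisition efficiency is improving or whether rising CAC is quietly stretching payback even as ARPA holds steady.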

Design principles. This dashboard should be built with finance’s data governance standards in mind: definitions should be documented and reconciled with the financial close process. Revenue figures should tie to the general ledger. Variance explanations should be attached to any metric that moved more than a defined threshold.

Update cadence. Monthly aligned with financial close; weekly for renewal pipeline forecast.


Customer Support Quality Dashboard

Purpose. This dashboard tracks the efficiency and quality of the customer support function. It serves both the operational goal of managing support throughput and the strategic goal of identifying where product and process improvements can reduce support volume and customer effort.

Primary audience. Head of Customer Support, VP of Customer Experience, Customer Success Operations.

Key metrics.

  • Ticket volume trend: Total support tickets created per period, segmented by issue type, severity, and customer tier.
  • First Response Time (FRT): Average time from ticket creation to first human response, by channel and severity level.
  • Time to Resolution (TTR): Average time from ticket creation to resolution, by issue type and severity.
  • First Contact Resolution Rate (FCR): Percentage of tickets resolved on the first contact without escalation or reopening. A primary driver of Customer Effort Score.
  • Post-support CSAT: Average satisfaction score following ticket resolution, tracked over time and by support tier.
  • Escalation rate: Percentage of tickets escalated to a senior tier, by issue type. High escalation in specific categories indicates training or tooling gaps.
  • Self-service deflection rate: Percentage of users who found their answer in documentation or the knowledge base without submitting a ticket. A leading indicator of self-service investment effectiveness.
  • Issue category distribution: Ranking of issue types by volume, showing whether certain problem categories are growing or shrinking over time.
  • Support volume by account health tier: Are red-health accounts generating disproportionate support volume? This connection between support load and churn risk informs resource allocation.
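Several of the metrics above are straightforward aggregates over ticket records. A sketch of FRT and FCR over a hypothetical batch of tickets:

```python
# Hypothetical tickets: (hours_to_first_response, resolved_on_first_contact).
tickets = [
    (1.5, True), (0.5, True), (4.0, False), (2.0, True), (8.0, False),
]

# FRT: mean time from ticket creation to first human response.
frt_hours = sum(hours for hours, _ in tickets) / len(tickets)

# FCR: share of tickets resolved without escalation or reopening.
fcr_rate = sum(1 for _, fcr in tickets if fcr) / len(tickets)

print(f"FRT: {frt_hours:.1f}h  FCR: {fcr_rate:.0%}")  # FRT: 3.2h  FCR: 60%
```

In practice these should be segmented by channel, severity, and issue type as the metric definitions above require; a single blended average hides exactly the category-level gaps the dashboard exists to surface.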

Design principles. Support quality metrics degrade rapidly if not monitored frequently. This dashboard should be refreshed daily and reviewed in weekly team meetings. Benchmarks should be set for each metric and deviations should trigger escalation reviews. The link between high support volume or poor CSAT and downstream churn should be surfaced explicitly by showing customer-level support data alongside account health scores.

Update cadence. Daily for operational metrics; weekly for trend analysis; monthly for strategic review.


Design Principles Common to All Customer Dashboards

One audience, one dashboard. Dashboards that try to serve every audience serve none. The executive health view and the CSM account health view have fundamentally different requirements: the executive needs breadth and comparability; the CSM needs depth and actionability. Resist the temptation to build a single dashboard with everything.

Comparisons are mandatory. A metric without comparison is a data point, not an insight. Every metric on a customer dashboard should display alongside at least one comparison: prior period, prior year, plan or target, or benchmark. The comparison is what tells the viewer whether the metric represents progress or regression.

Lead with the alert, not the aggregate. Most users of operational dashboards need to know what requires their attention. Design for the exception case first: what has changed significantly, what has crossed a threshold, and what requires action. Display this information at the top of the dashboard before presenting the full metric suite.

Metrics should link to actions. For each metric on a dashboard, there should be a defined response protocol: if NRR drops below a threshold, who is notified and what do they do? If an account health score enters red, what is the expected CSM response within what timeframe? Dashboards without linked response protocols are reporting systems; dashboards with linked protocols are management systems.

Governance and data trust. A customer dashboard that executives distrust produces no value. Invest in data governance: document metric definitions, establish a single source of truth for each metric, reconcile customer counts and revenue figures against source systems regularly, and make the calculation methodology accessible within the dashboard. Platforms such as Plotono that combine data pipeline management with visualization can help enforce consistent metric definitions across all customer-facing dashboards. Data trust is earned through consistency and transparency, not through visual polish.


For the KPIs featured in these dashboards, see the Customer KPIs guide. For the data sources that feed these dashboards, see the Customer Data Sources guide. For the analytical techniques that produce the more advanced metrics featured here, see the Techniques and Models guide.
