A reliable sales dashboard does one thing: it tells you whether you will hit your number before it is too late to do something about it. Most dashboards fail that test. They show last month's closed revenue, rep activity tallies, and a pipeline bar chart that looks fine right up until it isn't.
This guide gives you the full architecture: a four-layer metrics hierarchy, complete formulas, role-specific views, tool recommendations, review cadences, and a target-setting methodology that actually holds. If you want to pressure-test your current setup, run the ROI calculator alongside this guide.
- Four-layer metrics hierarchy: activity → output → outcome → efficiency
- Complete metric list with definitions and formulas
- Dashboard design by role: CEO, VP Sales, AE/SDR
- Tool recommendations and cadence structure
- Leading vs lagging indicators and how to use them
- Vanity metrics, gaming behaviors, and how to prevent both
- Target-setting methodology for each layer
The four-layer metrics hierarchy
Every metric in your stack belongs to one of four layers. The layers are ordered by how quickly they give you signal and how directly they reflect cause vs effect.
Activity metrics are fully within your control. If reps send 50 sequences per week and you want 75, you can change that today. Output metrics are the immediate results of that activity, such as meetings booked and pipeline created; they lead revenue but lag activity by days to weeks. Outcome metrics are the end result, won revenue and attainment, and they lag activity by weeks or months. Efficiency metrics tell you whether your system is healthy or burning cost to produce the same output.
The hierarchy matters because you need all four layers to manage a revenue engine. Outcome metrics tell you what happened. Output metrics tell you whether activity is converting. Activity metrics tell you what will happen. Efficiency metrics tell you what it cost.
Layer 1: Activity metrics
Activity metrics measure what your team does, not what results from it. They are the only metrics reps fully control, which makes them the right inputs to daily coaching and short-cycle optimization.
Sequences initiated
Definition: New personalized sequences started per rep per week.
Formula: Count of sequences with first-touch sent in the period.
Target range: Varies by role and segment; establish a floor based on historical pipeline creation, not gut feel.

Dials attempted
Definition: Total outbound call attempts logged per rep.
Formula: Count of call activities in CRM by rep, within the date range.
Note: Dials alone are a weak signal. Pair with connect rate to determine whether volume is generating conversations.

LinkedIn touches
Definition: Connection requests sent plus messages sent plus comments on prospect content.
Formula: Sum of LinkedIn activities logged in your sales engagement platform.

Tasks completed on time
Definition: Percentage of scheduled CRM tasks completed within the defined SLA window.
Formula: (Tasks completed on time / Total tasks due) × 100
This is a discipline metric, not a volume metric. Low on-time completion rates indicate either overloaded reps or poor prioritization systems.

Email volume per sequence
Definition: Average number of emails delivered per initiated sequence before the sequence ends.
Formula: Total emails delivered / Sequences initiated
A proxy for sequence design quality. Very low values (1-2 touches) often mean premature abandonment.
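These Layer 1 formulas are simple counts and ratios. A minimal sketch of two of them, with illustrative inputs (none of the values below are benchmarks):

```python
def on_time_rate(completed_on_time: int, total_due: int) -> float:
    """(Tasks completed on time / Total tasks due) x 100."""
    return 100.0 * completed_on_time / total_due if total_due else 0.0

def emails_per_sequence(emails_delivered: int, sequences_initiated: int) -> float:
    """Total emails delivered / Sequences initiated."""
    return emails_delivered / sequences_initiated if sequences_initiated else 0.0

# Illustrative weekly numbers for one rep (assumptions, not targets)
discipline = on_time_rate(42, 50)       # 84.0 -- below a 90% discipline bar
depth = emails_per_sequence(130, 60)    # ~2.2 touches: possible premature abandonment
```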
Layer 2: Output metrics
Output metrics measure the results of activity—meetings booked, opportunities created, pipeline added. They are leading indicators for revenue but lag activity by days to weeks.
Connect rate
Definition: Percentage of dials that result in a live conversation.
Formula: (Connects / Dials attempted) × 100
Benchmark: 6-12% for cold outbound; varies significantly by target segment and list quality.

Meetings booked
Definition: Total qualified first meetings scheduled in the period.
Formula: Count of meetings flagged as booked/held in CRM.
Do not conflate booked with held. A calendar invite is not pipeline.

Meetings held rate
Definition: Percentage of booked meetings that actually occur.
Formula: (Meetings held / Meetings booked) × 100
Low held rates (below 70%) indicate poor lead quality, weak qualification, or a discovery call structure that lets prospects cancel without friction.

Meeting-to-opportunity conversion
Definition: Percentage of held meetings that result in a qualified opportunity created in CRM.
Formula: (Opportunities created from meetings / Meetings held) × 100
This is where ICP discipline shows up. Low conversion usually means the meeting was booked with an unqualified prospect.

Opportunity creation rate
Definition: Net new opportunities added to the pipeline per rep per week.
Formula: Count of opportunities created with a stage ≥ 1 in the period.

Pipeline created ($)
Definition: Total dollar value of opportunities created in the period.
Formula: Sum of opportunity amounts at creation date.
Note: Use at-creation value, not current value. If you update opportunity amounts during the cycle, you lose the ability to track pipeline input quality separately from pipeline management quality.
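The Layer 2 metrics chain together into a funnel. A quick sketch with illustrative weekly numbers for one rep (the figures are assumptions, not benchmarks):

```python
def pct(numer: int, denom: int) -> float:
    """Simple percentage helper used by all three conversion formulas."""
    return 100.0 * numer / denom if denom else 0.0

# Illustrative weekly funnel for one rep
dials, connects = 400, 32
booked, held = 12, 10
opps_from_meetings = 3

connect_rate = pct(connects, dials)             # 8.0 -- inside the 6-12% cold range
held_rate = pct(held, booked)                   # ~83.3 -- above the 70% warning line
meeting_to_opp = pct(opps_from_meetings, held)  # 30.0
```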
For a deeper analysis of how pipeline creation connects to revenue timing, see the sales pipeline velocity formula breakdown.
Layer 3: Outcome metrics
Outcome metrics are the scoreboard. They are lagging indicators—by the time they move, the inputs that caused them are weeks or months in the past.
Pipeline velocity
Definition: The speed at which opportunities move through your pipeline and generate revenue.
Formula: (Number of opportunities × Average deal size × Win rate) / Average sales cycle length
This is the single most important compound metric in sales operations. A 10% improvement in any numerator variable increases velocity by exactly 10%; a 10% reduction in cycle length increases it by roughly 11%. Improving two or three variables simultaneously compounds the gains.
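The velocity formula is easy to sanity-check in a few lines. The inputs below are illustrative, and the example shows how numerator and denominator improvements differ:

```python
def pipeline_velocity(opps: int, avg_deal_size: float,
                      win_rate: float, cycle_days: float) -> float:
    """(Opportunities x ADS x Win rate) / Cycle length -> revenue per day."""
    return opps * avg_deal_size * win_rate / cycle_days

# Illustrative inputs: 100 opps, $50K ADS, 25% win rate, 90-day cycle
base = pipeline_velocity(100, 50_000, 0.25, 90)     # ~13,889 revenue per day
more_opps = pipeline_velocity(110, 50_000, 0.25, 90)  # +10% opps -> exactly +10%
shorter = pipeline_velocity(100, 50_000, 0.25, 81)    # -10% cycle -> ~+11.1%
```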
Win rate
Definition: Percentage of closed opportunities that close as won.
Formula: (Closed won / (Closed won + Closed lost)) × 100
Always segment win rate by lead source, segment, rep, and competitor. An aggregate win rate obscures the story.

Average deal size (ADS)
Definition: Average dollar value of won opportunities.
Formula: Total closed won revenue / Number of closed won deals
Track the ADS trend over time. Declining ADS often signals either market pressure or reps discounting to hit quota.

Sales cycle length
Definition: Average number of days from opportunity creation to close.
Formula: Sum of (close date − creation date) for won deals / Number of won deals
Segment by deal size. Larger deals have longer cycles; conflating them distorts your forecast model.

Revenue attainment
Definition: Percentage of quota achieved in the period.
Formula: (Closed won revenue / Quota) × 100
Track this at the rep level. At the team level, also track quota coverage, the ratio of total addressable pipeline to total team quota. If coverage is below 3:1, you are already behind.

Forecast accuracy
Definition: Absolute percentage error of the called revenue versus actual closed revenue; lower is better.
Formula: (|Forecasted revenue − Actual revenue| / Forecasted revenue) × 100
Measure monthly. High forecast error (above 15%) is a system health problem, not a rep problem. It indicates either poor opportunity hygiene or a broken stage definition.

Revenue per rep (RPR)
Definition: Closed won revenue per quota-carrying rep in the period.
Formula: Total closed won revenue / Number of quota-carrying reps
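The win rate and forecast error formulas above, as a minimal sketch with illustrative figures:

```python
def win_rate(won: int, lost: int) -> float:
    """(Closed won / (Closed won + Closed lost)) x 100."""
    total = won + lost
    return 100.0 * won / total if total else 0.0

def forecast_error_pct(forecasted: float, actual: float) -> float:
    """Absolute deviation of actual from forecast, as a percent of forecast."""
    return 100.0 * abs(forecasted - actual) / forecasted

wr = win_rate(18, 54)                          # 25.0
err = forecast_error_pct(1_000_000, 880_000)   # 12.0 -- under the 15% alarm line
```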
For a full breakdown of how per-meeting economics connect to these outcome metrics, see true cost per meeting in B2B outbound.
Layer 4: Efficiency metrics
Efficiency metrics tell you what it costs to produce your outcomes. They are essential for budget decisions, capacity planning, and identifying system-level problems.
Customer acquisition cost (CAC)
Definition: Total sales and marketing cost divided by new customers acquired.
Formula: Total S&M spend / New customers acquired
Calculate monthly and trailing twelve months. Use fully loaded costs: salaries, tools, contractors, ad spend.

Cost per meeting (CPM)
Definition: Fully loaded cost to produce one held qualified meeting.
Formula: Total outbound S&M costs / Meetings held
This is one of the most actionable efficiency metrics in outbound. If CPM is rising, either your conversion funnel is degrading or your costs are increasing faster than output.

Magic number
Definition: Revenue efficiency ratio measuring how much new ARR is generated per dollar of S&M spend.
Formula: Net new ARR in period / S&M spend in prior period
A magic number above 0.75 is generally considered efficient. Below 0.5 suggests growth is becoming expensive.

Sales cycle ROI
Definition: Return on investment per deal relative to the cost to close it.
Formula: ((Deal value − Cost to acquire) / Cost to acquire) × 100

CRM data completeness
Definition: Percentage of opportunities with required fields populated.
Formula: (Opportunities with all required fields / Total active opportunities) × 100
Not a revenue metric, but it directly predicts forecast accuracy and coaching quality. Target 90%+.
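The Layer 4 ratios, sketched with illustrative (assumed) spend and volume figures:

```python
def cac(sm_spend: float, new_customers: int) -> float:
    """Fully loaded S&M spend per new customer acquired."""
    return sm_spend / new_customers

def cost_per_meeting(outbound_sm_costs: float, meetings_held: int) -> float:
    """Fully loaded outbound cost per held qualified meeting."""
    return outbound_sm_costs / meetings_held

def magic_number(net_new_arr: float, prior_period_sm_spend: float) -> float:
    """Net new ARR per dollar of prior-period S&M spend."""
    return net_new_arr / prior_period_sm_spend

quarterly_spend = 300_000  # illustrative fully loaded quarterly S&M
unit_cac = cac(quarterly_spend, 25)            # 12,000.0 per customer
cpm = cost_per_meeting(120_000, 160)           # 750.0 per held meeting
mn = magic_number(260_000, quarterly_spend)    # ~0.87 -- above the 0.75 bar
```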
Dashboard design: three views, three audiences
One dashboard does not serve all audiences. The CEO needs momentum and exception alerts. The VP Sales needs pipeline health and team-level variance. Reps need daily priorities and their own performance versus targets.
CEO: what they need to see
Revenue attainment vs plan, pipeline coverage ratio, new logo vs expansion split, CAC and payback period, forecast vs prior forecast trend.
VP Sales: what they need to see
Pipeline velocity by segment, stage conversion rates, win rate by rep and source, forecast accuracy trend, quota coverage, rep ramp progress.
AE/SDR: what they need to see
Daily task queue, meetings booked vs target, pipeline owned by stage, personal win rate, activity vs benchmark, next actions on top opportunities.
CEO view design principles: Monthly trend lines, not weekly noise. Flag exceptions—pipeline coverage below 3x, forecast deviation above 15%, CAC payback trending beyond 18 months. Three to five metrics maximum. If it requires explanation, it is not a CEO metric.
VP Sales view design principles: Team-level variance is more valuable than team averages. Show the distribution of win rates, deal sizes, and cycle lengths. A team average of 25% win rate could mean everyone is at 25% or it could mean two reps are at 45% and three are at 10%. The distribution drives coaching decisions.
Rep view design principles: Daily, actionable, ranked by priority. The rep's dashboard should answer "what do I do first today?" not "how am I doing this quarter?" Stack-rank open opportunities by close date and engagement recency. Surface tasks overdue. Show meetings booked this week vs target.
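The stack-ranking rule above can be sketched as a simple two-key sort; the opportunity records here are hypothetical:

```python
from datetime import date

# Hypothetical open opportunities: (name, close_date, days_since_last_activity)
opps = [
    ("Acme renewal", date(2025, 7, 15), 2),
    ("Globex new logo", date(2025, 6, 30), 9),
    ("Initech expansion", date(2025, 6, 30), 1),
]

# Rank by close date first, then by engagement recency (freshest first),
# so the top of the queue answers "what do I do first today?"
ranked = sorted(opps, key=lambda o: (o[1], o[2]))
# Initech (closes soonest, touched yesterday), then Globex, then Acme
```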
Tool recommendations
Salesforce Reports and Dashboards: Best for complex organization hierarchies and custom stage definitions. Native reports are powerful but require disciplined field hygiene. Use joined reports for cross-object analysis (e.g., activity-to-opportunity correlation). Weakness: default dashboards are noisy; almost every team needs a custom build.
HubSpot Dashboards: Faster to configure than Salesforce, strong for teams under 50 reps. The deal pipeline and activity reports are sufficient for most Layer 1-2 metrics. Revenue attribution reporting requires Sales Hub Enterprise.
Looker (Google Cloud): Best for teams with a data warehouse and a RevOps or BI analyst. Enables cross-system analysis: CRM data plus marketing automation plus product usage in one view. Learning curve is real; do not deploy without a dedicated owner.
Metabase: Open source, cost-effective alternative to Looker. Good for smaller teams with basic SQL capability. Excellent for ad-hoc analysis and exporting custom datasets for quarterly reviews.
Gong / Chorus: Call intelligence platforms that add a Layer 1-2 overlay—talk/listen ratio, competitor mentions, objection patterns. Feed these into your VP Sales view to connect activity quality (not just quantity) to output metrics.
For how these tools connect to your broader revenue operations architecture, see Revenue Operations 101 for mid-market.
Metric cadences
Metrics reviewed at the wrong frequency create either false confidence or alert fatigue.
Daily: Task completion rate, meetings booked today, open sequences active, priority opportunity next steps. For SDRs, also: sequences initiated, connects, and meetings booked vs daily target.
Weekly: Pipeline created vs weekly target, meetings held rate, stage conversion rates for deals moving this week, forecast call (manager review), rep-level activity vs benchmark. Weekly is the primary coaching cadence—not monthly.
Monthly: Win rate trend, ADS trend, sales cycle length trend, CAC, CPM, forecast accuracy for the prior month, quota attainment distribution. Monthly reviews should answer: is the system working, or are we seeing structural drift?
Quarterly: Pipeline velocity components, magic number, RPR, cohort analysis of rep performance, ICP analysis of won vs lost deals, target-setting calibration for next quarter.
Leading vs lagging indicators
Every metric in your stack falls on a lead-lag spectrum. Leads predict outcomes; lags confirm them.
Strong leading indicators: Sequences initiated, meetings booked, pipeline created, connect rate trend. These move first. If sequences initiated drops 30% in week one, you will see pipeline creation decline in weeks three through six. Act on leads immediately.
Strong lagging indicators: Win rate, ADS, revenue attainment, CAC. These confirm whether system changes worked. Do not use lagging indicators for weekly coaching—the data is too old to inform specific rep behavior.
Mixed signals: Meetings held rate, meeting-to-opportunity conversion, and stage conversion rates are semi-leading. They lag activity by days to two weeks but lead revenue by weeks to months. These are the highest-leverage metrics for VP Sales weekly reviews.
For a framework connecting leading indicators to pipeline velocity specifically, see revenue automations and pipeline mechanics.
Common metric mistakes
Vanity metrics: Total emails sent, total calls logged, LinkedIn connection count. These measure motion, not progress. An SDR who sends 200 unqualified emails achieves nothing. Replace volume-only metrics with conversion-rate companions: not just dials, but dials and connect rate.
Gaming behaviors: When you measure only what is easy to count, reps optimize for the count. Common patterns: logging a call that lasted 15 seconds as a "connect," creating low-quality opportunities to inflate pipeline coverage, sandbagging deals to hit quota in a safe quarter. Prevent gaming by pairing every activity metric with an outcome metric. If calls go up but connects do not, investigate.
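The pairing rule lends itself to an automated check. A minimal sketch, with an illustrative 10% threshold (tune it to your team's normal week-over-week variance):

```python
def gaming_flag(activity_delta_pct: float, outcome_delta_pct: float,
                threshold: float = 10.0) -> bool:
    """Flag when an activity metric rises materially while its paired
    outcome metric stays flat or falls. The 10% threshold is an
    illustrative assumption, not a benchmark."""
    return activity_delta_pct > threshold and outcome_delta_pct <= 0.0

flag_a = gaming_flag(35.0, -2.0)  # True: dials up 35%, connects down -- investigate
flag_b = gaming_flag(35.0, 12.0)  # False: output moved with activity
```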
Single-metric dashboards: Optimizing win rate in isolation inflates cycle length. Optimizing cycle length deflates deal size. Optimizing deal size depresses win rate. Pipeline velocity is the correct compound metric precisely because improving it requires balancing all four variables simultaneously.
Ignoring cohort analysis: Aggregate win rates hide rep development patterns. A rep hired six months ago and closing at 18% needs different coaching than a veteran closing at the same rate. Cohort by hire date, segment, and lead source.
Target-setting methodology
Targets set from top-down quota allocation without bottom-up validation fail consistently. Use this four-step process:
Step 1 — Baseline from historical conversion rates. Pull your last 12 months of data. Calculate average conversion at each stage (sequence → meeting, meeting → opportunity, opportunity → close). These are your baseline rates. Do not assume you can improve all of them simultaneously.
Step 2 — Model the activity requirement. Work backward from revenue target. If you need $500K in new ARR per quarter, at a 20% win rate and $50K ADS, you need 50 new opportunities. At a 30% meeting-to-opportunity rate, you need 167 meetings held. At an 80% hold rate, you need 208 meetings booked. At a 5% booking rate from sequences, you need 4,160 sequences initiated. That is your activity floor.
Step 3 — Stress-test against capacity. Can your team actually deliver 4,160 sequences? At 5 reps, that is 832 per rep per quarter or 64 per week. Is that realistic given other responsibilities? If not, you need more reps, a higher conversion rate, or a lower revenue target—not more pressure on existing reps.
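Steps 2 and 3 are mechanical enough to script. The sketch below applies a ceiling at each funnel stage, so it lands a touch above the hand-rounded figures in the worked example (a deliberately conservative floor); all inputs are the illustrative numbers from Step 2:

```python
import math

def activity_floor(revenue_target: float, ads: float, win_rate: float,
                   meeting_to_opp: float, hold_rate: float,
                   booking_rate: float) -> dict:
    """Work backward from a revenue target to a sequence-volume floor.

    Ceiling at each stage is conservative: you can only hold whole
    meetings and run whole sequences."""
    deals = math.ceil(revenue_target / ads)
    opps = math.ceil(deals / win_rate)
    held = math.ceil(opps / meeting_to_opp)
    booked = math.ceil(held / hold_rate)
    sequences = math.ceil(booked / booking_rate)
    return {"deals": deals, "opps": opps, "meetings_held": held,
            "meetings_booked": booked, "sequences": sequences}

plan = activity_floor(500_000, 50_000, 0.20, 0.30, 0.80, 0.05)
# {'deals': 10, 'opps': 50, 'meetings_held': 167,
#  'meetings_booked': 209, 'sequences': 4180}
```

The capacity stress-test in Step 3 follows directly: 4,180 sequences across 5 reps is 836 per rep per quarter, roughly 64 per week, the same order of magnitude as the hand-worked figures above.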
Step 4 — Set targets with range bands, not single numbers. A target of exactly 50 opportunities creates cliff-edge psychology. Set a floor (minimum acceptable), a target (planned), and a stretch (what exceptional looks like). Review weekly against the floor. Celebrate consistently reaching target. Reserve stretch for bonus mechanics.
FAQ
How many metrics should be on a weekly sales dashboard?
Five to eight metrics per view, segmented by audience. More than ten metrics on a single dashboard guarantees that the important ones get ignored. The discipline is deciding what not to measure, not what to add. Start with pipeline velocity components and one activity metric per role. Add metrics only when you have a specific decision they will inform.
What is a healthy pipeline coverage ratio?
3x your quarterly revenue target is the standard floor for most B2B sales organizations. At a 30-35% win rate on qualified opportunities, 3x coverage gives you roughly 90-105% attainment under normal conditions. At win rates below 25%, target 4x. Coverage below 2.5x with less than six weeks in the quarter is a miss-risk situation requiring escalation and an accelerated pipeline creation plan.
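As a tiny sketch of that escalation rule (thresholds taken from the answer above):

```python
def coverage_ratio(open_pipeline: float, quota: float) -> float:
    """Total addressable pipeline divided by the revenue target."""
    return open_pipeline / quota

def coverage_floor(win_rate: float) -> float:
    """Higher floor when win rates are weaker, per the guidance above."""
    return 4.0 if win_rate < 0.25 else 3.0

ratio = coverage_ratio(2_600_000, 1_000_000)  # 2.6x
at_risk = ratio < coverage_floor(0.30)        # True: below the 3x floor, escalate
```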
Should SDRs and AEs share the same dashboard?
No. SDR metrics are activity- and output-focused: sequences, connects, meetings booked, meetings held. AE metrics are output- and outcome-focused: pipeline owned, stage distribution, win rate, ADS, forecast. Combining them into one view either overwhelms SDRs with irrelevant data or obscures the AE metrics that drive coaching. Build two views on the same underlying data model.
How do you prevent reps from gaming CRM metrics?
Pair every activity metric with a downstream outcome. Log a call? Connect rate must move. Create an opportunity? It must advance stages within a defined window or be flagged for review. Build a stage hygiene audit into your weekly manager process: any opportunity without an updated next step or activity in seven days gets reviewed in the pipeline call. Visibility and accountability reduce gaming faster than policy.
What is the difference between a forecast and a pipeline report?
A pipeline report shows what exists. A forecast is a commitment about what will close. Pipeline reports are objective; forecasts require judgment. Your forecast should be built from stage-weighted probability plus rep-level commit/upside call, not from multiplying total pipeline by a blanket win rate. The gap between your weighted pipeline and your rep-level forecast is your risk or upside buffer—track it explicitly every week.
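A stage-weighted forecast is a few lines once weights exist. The weights below are placeholders; calibrate them to your own historical stage-to-close rates, not these numbers:

```python
# Placeholder stage probabilities -- assumptions, not benchmarks
STAGE_WEIGHTS = {"discovery": 0.10, "evaluation": 0.30,
                 "proposal": 0.60, "negotiation": 0.80}

def weighted_pipeline(opps: list) -> float:
    """Sum of amount x stage probability across open opportunities."""
    return sum(amount * STAGE_WEIGHTS[stage] for stage, amount in opps)

open_opps = [("discovery", 200_000), ("evaluation", 150_000),
             ("proposal", 100_000), ("negotiation", 50_000)]
weighted = weighted_pipeline(open_opps)  # ~165,000
rep_commit = 140_000                     # illustrative rep-level commit call
buffer = weighted - rep_commit           # ~25,000 risk/upside buffer to track weekly
```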
The dashboard is not the system
A well-designed metrics dashboard surfaces the truth fast enough to act on it. But dashboards do not fix broken processes—they reveal them. If your connect rate has been declining for six weeks and you have only just noticed it in your monthly review, the problem is your cadence, not your metrics.
Build the four-layer hierarchy. Review it at the right frequency. Set targets from the bottom up. And use what you see to drive specific decisions, not general concern.
If you want to see how these metrics connect to a full revenue automation architecture, start with revenue automations and the solutions for sales leaders. When you are ready to model the ROI of improving specific conversion rates, the ROI calculator runs the math.
Talk to us about building a metrics architecture for your revenue team.