The GTM Metrics Framework: What to Measure at Every Funnel Stage
A full-funnel GTM metrics framework covering awareness through expansion, metric trees, leading vs. lagging indicators, and reporting cadence.
GTMStack Team
What Most People Get Wrong About GTM Metrics
The average B2B go-to-market team tracks somewhere between 40 and 120 metrics across their tools. Dashboards overflow with charts. Weekly reports stretch to 15 pages. And yet, when the CEO asks “why did we miss the quarter?”, nobody has a clear answer.
The problem isn’t a lack of data. It’s a lack of structure. Without a framework that connects individual metrics to business outcomes, you end up with a spreadsheet graveyard: numbers that get reported but never acted on.
In our 2026 State of GTM Ops survey of 847 B2B professionals, 34% identified pipeline generation as their top priority. But when we asked about their measurement practices, the answers revealed a disconnect. Most teams track activity metrics (emails sent, calls made) and outcome metrics (revenue, win rate) but have almost nothing connecting the two. The middle of the funnel, where leading indicators live, is a measurement dead zone.
We’ve found this pattern across dozens of GTMStack accounts. Teams with 80+ metrics on their dashboards make slower decisions than teams with 15-25 well-chosen metrics. More data doesn’t mean better decisions. It often means worse ones because signal gets buried in noise.
Over the past two years, we’ve worked with revenue teams building their measurement practices from scratch. The ones that succeed share a common trait: they start with a framework before they start with a dashboard. They define what matters at each stage of the funnel, separate leading indicators from lagging ones, and build a metric tree that makes it possible to diagnose problems in minutes instead of days.
This post walks through that framework in detail.
The Full-Funnel Metrics Model
A GTM metrics framework needs to cover six distinct stages. Each stage has different goals, different owners, and different metrics. Mixing them together, which is what most teams do, creates confusion about who owns what and what actions to take when something goes wrong.
Stage 1: Awareness
Goal: Get your brand and message in front of the right audience.
Key metrics:
- Impressions by channel. Total reach across paid, organic, social, and content. Break this down by ICP-fit audience vs. general audience when possible.
- Share of voice. What percentage of category-relevant conversations include your brand? For B2B, track this through branded search volume relative to competitors, social mentions, and analyst coverage.
- Website traffic by source. Not just total visitors, but traffic quality. Segment by source and track the percentage that matches your ICP firmographics.
- Content reach. Downloads, views, and shares of top-of-funnel content.
Target-setting guidance: Awareness metrics are inherently noisy. Set targets on 90-day rolling averages rather than week-over-week changes. A 10-15% quarter-over-quarter growth rate in ICP-fit traffic is a strong benchmark for Series A through Series C companies.
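If you want to operationalize that guidance, a trailing average is only a few lines of code. Here’s a minimal Python sketch, assuming you can export a daily series of ICP-fit session counts from your analytics tool (the sample data below is a made-up placeholder):

```python
from collections import deque

def rolling_average(daily_counts, window=90):
    """Yield a 90-day trailing average for each day once a full
    window of history is available."""
    buf = deque(maxlen=window)
    for count in daily_counts:
        buf.append(count)
        if len(buf) == window:
            yield sum(buf) / window

# Placeholder daily ICP-fit session counts; substitute your analytics export.
icp_sessions = [100 + day // 3 for day in range(180)]  # gently trending series
smoothed = list(rolling_average(icp_sessions))
qoq_growth = (smoothed[-1] - smoothed[0]) / smoothed[0]
print(f"Growth on the smoothed series: {qoq_growth:.1%}")
```

Set targets against the smoothed series, not the raw daily numbers, and the week-to-week noise stops triggering false alarms.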
We discovered that tracking ICP-fit traffic separately from total traffic changed how we thought about content strategy. Our total traffic grew 30% one quarter, but ICP-fit traffic was flat. We were attracting the wrong audience. That insight wouldn’t have surfaced if we were only tracking the aggregate number.
Stage 2: Interest
Goal: Convert anonymous visitors into known contacts who have shown intent.
Key metrics:
- Marketing Qualified Leads (MQLs). Contacts who meet your scoring threshold. Be ruthless about your scoring model. If more than 40% of MQLs get rejected by sales, your threshold is too low.
- Content engagement depth. Pages per session, time on site, and return visit rate for contacts who have identified themselves.
- Email opt-in rate. What percentage of visitors subscribe to your content?
- Demo request rate. The most direct signal of interest. Track this as a percentage of total website sessions and as an absolute number.
Target-setting guidance: Demo request rates vary wildly by industry. For horizontal B2B SaaS, 1-3% of website sessions converting to a demo request is typical. For vertical SaaS with a narrower audience, 3-7% is achievable.
In our survey, 41% of respondents cited tool sprawl as a challenge. Tool sprawl is particularly damaging at the interest stage because lead data gets fragmented across marketing automation, CRM, and analytics platforms. A lead that fills out a form on your website, subscribes to your newsletter, and views your pricing page might generate three separate records in three tools. That fragmentation makes interest-stage metrics unreliable.
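If you’re stitching this together yourself rather than relying on a platform, the core of the fix is identity resolution: collapse records from every tool onto one normalized key. A minimal Python sketch with a simple email-based merge; the field names and sources are hypothetical, not any particular tool’s schema:

```python
def normalize_email(email):
    """Lowercase and strip plus-aliases so Jane@acme.com and
    jane+news@acme.com collapse to one identity."""
    local, _, domain = email.strip().lower().partition("@")
    return f"{local.split('+', 1)[0]}@{domain}"

def merge_lead_records(records):
    """Collapse records from multiple tools into one contact per person,
    keeping the union of sources and intent signals."""
    contacts = {}
    for rec in records:
        key = normalize_email(rec["email"])
        contact = contacts.setdefault(key, {"sources": set(), "signals": set()})
        contact["sources"].add(rec["source"])
        contact["signals"].update(rec["signals"])
    return contacts

records = [
    {"email": "Jane@acme.com", "source": "marketing_automation", "signals": ["form_fill"]},
    {"email": "jane+news@acme.com", "source": "email_platform", "signals": ["newsletter_signup"]},
    {"email": "jane@acme.com", "source": "analytics", "signals": ["pricing_page_view"]},
]
merged = merge_lead_records(records)
print(merged["jane@acme.com"])  # one contact, three sources, three signals
```

Three records become one contact with the full signal history, which is what your interest-stage metrics should be counting.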
Stage 3: Consideration
Goal: Move interested contacts toward an active buying process.
Key metrics:
- Sales Accepted Leads (SALs). MQLs that sales agrees are worth pursuing. The MQL-to-SAL acceptance rate is one of the most important alignment metrics between marketing and sales. In our survey, 38% of respondents identified alignment as a top challenge. This metric is where that alignment either works or breaks.
- First meeting booked rate. What percentage of SALs result in a discovery call? If this is below 60%, you have a handoff problem.
- Opportunity creation rate. SALs that convert to pipeline. Track the time from SAL to opportunity creation. If it exceeds 14 days on average, deals are stalling in early qualification.
- Content consumption during evaluation. Which case studies, comparison pages, and technical docs are prospects viewing? This tells you what objections they’re trying to resolve.
Target-setting guidance: SAL-to-opportunity conversion between 40-60% indicates healthy qualification. Below 40% means marketing is sending unqualified leads. Above 70% might mean your criteria are too strict and you’re leaving pipeline on the table.
We tested adjusting our SAL criteria after finding a 35% acceptance rate. We tightened the MQL threshold (requiring two intent signals instead of one), and acceptance rose to 52%. Total MQL volume dropped by about 30%, but pipeline from those MQLs actually increased by 15% because sales was spending time on better-qualified contacts.
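The gate itself is trivial to implement once you’ve agreed on what counts as an intent signal. A sketch of the two-signal threshold described above (the signal names are illustrative, not our production scoring model):

```python
# Signal names are illustrative; use whatever your scoring model defines.
INTENT_SIGNALS = {"demo_request", "pricing_page_view", "webinar_attended", "form_fill"}

def qualifies_as_mql(contact_signals, min_signals=2):
    """Gate MQLs on the number of distinct intent signals observed."""
    return len(set(contact_signals) & INTENT_SIGNALS) >= min_signals

print(qualifies_as_mql({"form_fill"}))                       # False: one signal
print(qualifies_as_mql({"form_fill", "pricing_page_view"}))  # True: two signals
```

The hard part isn’t the code; it’s getting marketing and sales to agree on the signal list and the threshold.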
Stage 4: Decision
Goal: Win deals and close revenue.
Key metrics:
- Win rate. Closed-won divided by total opportunities that reached a decision stage. Segment it by deal size, customer segment, and source to find patterns.
- Average deal size. Track the trend over time. Declining deal sizes often signal a shift in buyer mix or discounting pressure.
- Sales cycle length. Days from opportunity creation to close. Measure the median, not the mean. Outlier deals will skew the average.
- Competitive win rate. When you’re in a competitive deal, how often do you win? Track this by competitor.
Target-setting guidance: Win rates between 20-35% are common for B2B SaaS with average deal sizes under $50K ARR. Above $100K ARR, win rates often compress to 15-25% due to longer evaluation cycles and more stakeholders.
We analyzed win rate data across our platform and found one pattern worth highlighting: win rate is strongly correlated with multi-threading. Deals with 3+ contacts engaged have a win rate roughly 2.4x higher than single-threaded deals. If your win rate is declining, check your multi-threading depth before assuming it’s a competitive or pricing problem. For more on this, see our guide on account-based lead generation.
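If you want to run that check on your own closed deals, the segmentation is straightforward. A Python sketch with made-up data, bucketing win rate by contacts engaged and using the median (not the mean) for cycle length:

```python
from statistics import median

# Hypothetical closed opportunities: (contacts_engaged, won, cycle_days)
deals = [
    (1, False, 45), (1, False, 80), (1, True, 60), (2, False, 70),
    (3, True, 55), (4, True, 62), (3, False, 90), (5, True, 58),
]

def win_rate(subset):
    return sum(1 for _, won, _ in subset if won) / len(subset)

single_threaded = [d for d in deals if d[0] < 3]
multi_threaded = [d for d in deals if d[0] >= 3]
print(f"single-threaded win rate: {win_rate(single_threaded):.0%}")   # 25%
print(f"multi-threaded (3+) win rate: {win_rate(multi_threaded):.0%}")  # 75%

# Median, not mean, for cycle length: outlier deals skew the average.
print(f"median sales cycle: {median(c for _, _, c in deals)} days")  # 61.0
```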
Stage 5: Closed
Goal: Ensure successful onboarding and time-to-value.
Key metrics:
- Time to first value. How long until the customer achieves their initial success milestone? Define this clearly per product and track it religiously.
- Onboarding completion rate. What percentage of customers complete all onboarding steps within the expected window?
- Support ticket volume (first 90 days). High volume here signals product or onboarding gaps.
- NPS/CSAT at 30, 60, 90 days. Early satisfaction scores predict retention better than any other metric.
We found that time to first value is the single strongest predictor of retention in our own data. Customers who achieved their first success milestone within 14 days had a 92% one-year retention rate. Those who took longer than 30 days had a 67% retention rate. That 25-point gap made time-to-value our top priority for the customer success team.
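The analysis behind that finding is easy to reproduce on your own data: bucket customers by time to first value and compare retention across buckets. A sketch with illustrative numbers (not our actual cohort data):

```python
# Hypothetical customers: (days_to_first_value, retained_at_one_year)
customers = [
    (10, True), (12, True), (25, True), (35, False),
    (8, True), (40, True), (31, False), (13, True),
]

def retention(bucket):
    return sum(1 for _, retained in bucket if retained) / len(bucket)

fast = [c for c in customers if c[0] <= 14]
slow = [c for c in customers if c[0] > 30]
print(f"TTFV <= 14 days: {retention(fast):.0%} retained")  # 100%
print(f"TTFV > 30 days:  {retention(slow):.0%} retained")  # 33%
```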
Stage 6: Expansion
Goal: Grow revenue from existing customers.
Key metrics:
- Net Revenue Retention (NRR). The single most important metric for any SaaS business. Includes expansion, contraction, and churn. Top-performing B2B companies maintain NRR above 115%. (A calculation sketch follows this list.)
- Expansion revenue as % of new ARR. Healthy companies generate 30-40% of new ARR from existing customers.
- Product usage trends. Feature adoption, seat utilization, and API call volume. Declining usage is the earliest warning sign of churn.
- Customer health score. A composite metric combining usage, engagement, support sentiment, and payment history.
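If you compute NRR yourself rather than pulling it from a billing tool, the key detail is cohort discipline: only accounts active at the start of the period count, and new logos are excluded. A minimal sketch, assuming you have per-account ARR snapshots:

```python
def net_revenue_retention(start_arr, end_arr):
    """NRR over the cohort of accounts active at the start of the period.
    start_arr / end_arr map account_id -> ARR. New logos are excluded;
    churned accounts contribute zero at the end."""
    cohort = set(start_arr)
    starting = sum(start_arr.values())
    ending = sum(end_arr.get(account, 0.0) for account in cohort)
    return ending / starting

start = {"acme": 50_000, "globex": 30_000, "initech": 20_000}
end = {"acme": 65_000, "globex": 30_000, "hooli": 40_000}  # initech churned; hooli is a new logo
print(f"NRR: {net_revenue_retention(start, end):.0%}")  # (65k + 30k + 0) / 100k = 95%
```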
For a deeper look at how these metrics connect across teams, our Revenue Ops Playbook covers the data architecture required to make cross-stage measurement work.
Leading vs. Lagging Indicators
Every metric falls into one of two categories, and confusing them is one of the most expensive mistakes a GTM team can make.
Lagging indicators tell you what already happened. Revenue, win rate, churn rate, NRR. These are outcomes. By the time a lagging indicator moves, the underlying cause happened weeks or months ago. You can’t manage a business by watching lagging indicators alone. That’s like driving by looking in the rearview mirror.
Leading indicators predict what will happen. Pipeline creation rate, first meeting booked rate, content engagement, and product usage trends. These give you early warning. When a leading indicator drops, you have time to intervene before it shows up in your revenue numbers.
The practical rule: for every lagging indicator you report to leadership, identify at least two leading indicators your team monitors daily.
Here’s an example from our own operations. Our lagging indicator is quarterly win rate (currently 28%). Our leading indicators are:
- Discovery call quality score. We track this by recording and scoring the first 50 discovery calls each month. A drop in quality score today predicts a drop in win rate 60-90 days from now.
- Multi-threaded deal percentage. Deals with 3+ contacts engaged have a 2.4x higher win rate than single-threaded deals. If our multi-threading rate drops, our future win rate will follow.
We initially tracked 6 leading indicators for win rate. It was too many. Nobody could keep them all in their head, and none of them got the attention they deserved. Cutting to 2 leading indicators per lagging indicator forced us to pick the ones with the strongest predictive signal. That constraint improved our response time to problems.
SDR-specific leading indicators deserve their own treatment. We cover those in detail in our post on SDR metrics that actually matter.
The Metric Tree: From North Star to Tactical Metrics
A metric tree is the structural backbone of your framework. It answers the question: “When this number moves, what caused it?”
Level 1: North Star Metric
Pick one. For most B2B SaaS companies, this is ARR growth rate or net new ARR per quarter. Everything else exists to explain and drive this number.
Level 2: Branch Metrics (3-4 maximum)
These are the major components that sum to your north star. For a company targeting $2M net new ARR per quarter:
- New business ARR target of $1.4M (70% of total)
- Expansion ARR target of $800K (40% of total)
- Churn/contraction target of -$200K (keeping losses to 10% of the quarterly target)
Note: these intentionally sum to more than 100% because churn offsets part of the new and expansion revenue.
Level 3: Driver Metrics
Each branch metric breaks down into the factors that drive it. For new business ARR of $1.4M:
- Pipeline created: $5.6M (assuming a 25% win rate)
- Average deal size: $35K ARR
- Closed-won deals needed: 40
- Sales cycle length: 62 days (median)
For pipeline created of $5.6M:
- Inbound pipeline: $2.8M (50% of total)
- Outbound pipeline: $1.7M (30% of total)
- Partner/channel pipeline: $1.1M (20% of total)
Level 4: Tactical Metrics
These are the daily and weekly activity metrics that individual contributors control. For inbound pipeline of $2.8M:
- Website sessions: 11,500 per quarter (roughly 3,800/month)
- Session-to-MQL rate: 2.2%
- MQL-to-SAL rate: 55%
- SAL-to-opportunity rate: 48%
- Average pipeline value per opportunity: $42K
Multiply the chain out and it lands on target: 11,500 sessions × 2.2% ≈ 253 MQLs, × 55% ≈ 139 SALs, × 48% ≈ 67 opportunities, × $42K ≈ $2.8M in inbound pipeline.
Now, when the VP of Sales asks “why is pipeline light this month?”, you can trace the tree. Website sessions are on track. MQL conversion is on track. But SAL-to-opportunity dropped from 48% to 31%. Sales reps are rejecting more leads. That’s a specific, actionable diagnosis.
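You can encode the tree directly as data and let a small script do the first pass of that diagnosis. A sketch using the inbound targets above; the "actual" values and the 10% tolerance threshold are assumptions you’d tune:

```python
# Each node: (actual, target, children). Targets mirror the inbound
# example above; the "actual" values are hypothetical.
TREE = {
    "inbound_pipeline": (2_100_000, 2_800_000, {
        "website_sessions": (11_400, 11_500, {}),
        "session_to_mql": (0.022, 0.022, {}),
        "mql_to_sal": (0.55, 0.55, {}),
        "sal_to_opportunity": (0.31, 0.48, {}),  # the culprit
        "pipeline_per_opportunity": (42_000, 42_000, {}),
    }),
}

def diagnose(tree, tolerance=0.10, path=""):
    """Walk the tree and flag any node more than `tolerance` below target."""
    for name, (actual, target, children) in tree.items():
        if actual < target * (1 - tolerance):
            print(f"{path}{name}: {actual:,} vs. target {target:,}")
        diagnose(children, tolerance, path + name + " > ")

diagnose(TREE)
# inbound_pipeline: 2,100,000 vs. target 2,800,000
# inbound_pipeline > sal_to_opportunity: 0.31 vs. target 0.48
```

The script isn’t the point; the structure is. Once the tree exists as data, the "why is pipeline light?" conversation starts from the flagged node instead of from scratch.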
We built this exact tree for our own operations and it took about two weeks to get right. The hardest part wasn’t the math. It was getting agreement across teams on the metric definitions. “What counts as an MQL?” had four different answers depending on who you asked. Getting to a single definition was more valuable than any dashboard we built afterward.
In our survey, 62% of respondents had teams of 3 or fewer. For teams that small, a full metric tree might seem like overkill. It’s not. Small teams benefit more from clarity because each person wears multiple hats and needs to know exactly which numbers matter for each hat.
The analytics capabilities you choose should support this kind of drill-down natively, without requiring an analyst to build custom queries every time someone asks a question.
Setting Targets That Drive Behavior
Bad targets create bad behavior. Here are the principles that work.
Start with historical data, not aspirations. Pull 6-12 months of conversion rates, cycle times, and activity volumes. Your targets should reflect what’s achievable based on evidence, not what the board wants to see. We learned this the hard way in our first year. We set ambitious targets based on “where we wanted to be” rather than “where the data said we could get.” The team missed every target for two quarters, morale suffered, and the targets became meaningless. Resetting to evidence-based targets and then gradually stretching them fixed the problem.
Set targets at each level of the metric tree. A revenue target without corresponding pipeline, conversion, and activity targets is just a wish. Work backward: if you need $2M in new ARR and your win rate is 25%, you need $8M in pipeline. If your average deal is $35K, that’s 229 opportunities. If your opportunity creation rate is 12% of SQLs, you need 1,908 SQLs.
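That backward calculation is worth scripting so nobody fat-fingers it in a spreadsheet. A sketch using the numbers above:

```python
def backward_targets(new_arr_target, win_rate, avg_deal_size, sql_to_opp_rate):
    """Work a revenue target backward through the funnel,
    rounding to whole deals and leads at each step."""
    pipeline_needed = new_arr_target / win_rate
    opportunities = round(pipeline_needed / avg_deal_size)
    sqls = round(opportunities / sql_to_opp_rate)
    return pipeline_needed, opportunities, sqls

pipeline, opps, sqls = backward_targets(2_000_000, 0.25, 35_000, 0.12)
print(f"${pipeline:,.0f} pipeline -> {opps} opportunities -> {sqls} SQLs")
# $8,000,000 pipeline -> 229 opportunities -> 1908 SQLs
```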
Use ranges, not point estimates. Instead of “40 deals this quarter,” set a target range: “36-44 deals (commit: 36, target: 40, stretch: 44).” This gives your team clarity about what’s expected versus what’s exceptional, and it makes forecasting discussions more productive.
Revisit quarterly. Markets shift. Product changes affect conversion rates. A target set in January based on December data may be irrelevant by April. Build a quarterly target review into your operating cadence.
Never set a target without an owner. Every metric in your framework should have one person accountable for it. Not a team. A person. Shared ownership is no ownership. In our experience, assigning metric ownership to individuals rather than teams improved response time to metric changes by about 50%. When one person’s name is on the metric, they pay attention to it daily.
Reporting Cadence: When and What
Different metrics require different reporting frequencies. Getting this wrong either creates noise (daily reports on metrics that barely move) or blindness (monthly reports on metrics that needed intervention last week).
Daily (operational teams only):
- Activity metrics: calls made, emails sent, meetings booked
- Pipeline created (running total)
- Website traffic and conversion rates
- Support ticket volume
Weekly (team leads and managers):
- Pipeline movement (new, advanced, slipped, lost)
- Conversion rates at each funnel stage
- Leading indicator trends
- Forecast updates
Monthly (leadership and cross-functional):
- Full metric tree review
- Leading vs. lagging indicator trends
- Cohort analysis: how is this month’s pipeline performing vs. prior months at the same age? (See the sketch after this list.)
- Experiment results and operational changes
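The cohort comparison sounds fancier than it is: snapshot each month’s pipeline at fixed ages and compare cohorts at the same age. A sketch with hypothetical snapshot data:

```python
# Hypothetical snapshots: cumulative pipeline ($) generated by each monthly
# cohort, measured when the cohort was 30, 60, and 90 days old.
cohorts = {
    "2026-01": {30: 400_000, 60: 900_000, 90: 1_300_000},
    "2026-02": {30: 350_000, 60: 700_000},
    "2026-03": {30: 200_000},
}

age = 30  # compare every cohort at the same age
baseline = cohorts["2026-01"][age]
for month, snapshots in cohorts.items():
    if age in snapshots:
        print(f"{month} at {age} days: {snapshots[age] / baseline:.0%} of the Jan cohort")
# 2026-01 at 30 days: 100% of the Jan cohort
# 2026-02 at 30 days: 88% of the Jan cohort
# 2026-03 at 30 days: 50% of the Jan cohort
```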
Quarterly (board and executive):
- North star and branch metrics vs. targets
- Year-over-year and quarter-over-quarter trends
- Strategic metric changes (NRR, CAC payback, LTV:CAC)
- Target recalibration
We tested moving from monthly to weekly metric tree reviews. It was too much. The metrics didn’t change fast enough to warrant weekly full-tree reviews, and the meetings ate into execution time. Monthly full reviews with weekly leading indicator check-ins turned out to be the right cadence.
Revenue Operations teams typically own the reporting cadence and are responsible for maintaining the metric tree. If you’re building a RevOps function, establishing this cadence should be one of your first 30-day priorities.
Avoiding Metric Overload
More metrics is not better. Here are the warning signs that your measurement practice has become counterproductive:
Your weekly report takes more than 15 minutes to review. If your team spends more time discussing the report than discussing what to do about the numbers, you have too many metrics.
Nobody can name the top 3 metrics from memory. Ask five people on your GTM team what the three most important metrics are. If you get five different answers, your framework isn’t working. We run this test quarterly with our own team. The first time we did it, we got six different answers from five people. After implementing the metric tree, we got the same three answers from everyone.
Metrics create conflicting incentives. Marketing optimizes for MQL volume. Sales complains about lead quality. This classic conflict happens because the metrics are disconnected. MQL volume is rewarded without a corresponding quality gate.
You’re measuring things you can’t influence. Every metric should have a clear action associated with it. If a metric moves and nobody knows what to do differently, remove it from your active dashboard. It can live in an analysis tool for periodic deep investigation, but it doesn’t belong in your operating metrics.
The fix is subtraction, not addition. When something goes wrong, the instinct is to add more metrics. Resist it. Instead, ask: “Which existing metric, if we paid closer attention to it, would have told us about this problem earlier?”
We went through this exercise ourselves. We cut our active dashboard from 47 metrics to 22. Decision speed improved immediately. Two months later, we cut further to 18. We haven’t added a metric back since.
A strong metrics framework for a $5-50M ARR company should have 15-25 metrics in active use. The metric tree might contain 40-60 total, but most of those are diagnostic. You only look at them when a higher-level metric signals a problem.
Building the Framework in Practice
Here’s the sequence that works for teams implementing this from scratch:
Week 1: Audit. List every metric currently tracked across all tools. For each one, note who owns it, how often it’s reviewed, and what action it triggers. Most teams find that 60-70% of their metrics fail the “what action does this trigger?” test. We found that 65% of ours failed.
Week 2: Define the tree. Start with your north star and work down through branches, drivers, and tactical metrics. Get sign-off from every team lead on the metrics that affect their team.
Week 3: Set baselines. Pull historical data for every metric in the tree. Calculate trailing 6-month averages and identify trends. This becomes your baseline for target-setting.
Week 4: Build dashboards. Create three dashboards: executive (north star + branches), team lead (drivers + leading indicators), and individual contributor (tactical metrics + daily activities). Our post on building revenue dashboards covers the design principles for each.
Weeks 5-6: Operationalize. Establish the reporting cadence. Run the first full metric tree review. Identify gaps in data collection and prioritize fixing them.
Ongoing: Iterate. Every quarter, review the framework. Remove metrics nobody acts on. Add metrics that would have helped diagnose recent problems. Adjust targets based on new data.
The framework is not a one-time project. It’s a living system that evolves with your business. But the structure itself (the funnel stages, the metric tree, the leading/lagging distinction, the reporting cadence) should remain stable. It gives your team a shared language for talking about performance, diagnosing problems, and making decisions.
For a practical look at how to apply structured measurement to GTM experiments, see our guide on data-driven experimentation. The metrics framework and the experimentation practice reinforce each other: the framework tells you what to test, and experiments tell you how to improve the metrics.
That shared language is worth more than any individual metric.