Net Promoter Score (NPS) is a single-question loyalty metric that asks customers how likely they are to recommend your product or company to a colleague, scored 0 to 10 and then converted into a number between -100 and +100.
At a glance
- Used by customer success, revenue, and product teams to gauge account health and loyalty.
- Scores of 30 or above are a reasonable benchmark for B2B SaaS companies.
- Calculated as: percentage of Promoters (9-10) minus percentage of Detractors (0-6).
- Passives (7-8) count toward the response total but toward neither group, so they dilute both percentages.
- A single company-level score is rarely actionable without segmentation by tier or role.
How is NPS actually calculated?
Survey respondents fall into three groups. Anyone scoring you 9 or 10 is a Promoter. Scores of 7 or 8 are Passives, who count toward the total responses but toward neither group. Scores of 0 through 6 are Detractors. The final NPS is the percentage of Promoters minus the percentage of Detractors.
A company with 60% Promoters and 25% Detractors lands at an NPS of 35. Enterprise software companies routinely score lower than consumer apps, partly because procurement friction and implementation pain drag sentiment down before customers have fully realized value.
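The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a reference implementation; the function name is my own.

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Passives (7-8) stay in the denominator but add to neither group.
    return round(100 * (promoters - detractors) / len(scores))

# 60% Promoters, 25% Detractors, 15% Passives, as in the example above:
sample = [9] * 60 + [3] * 25 + [7] * 15
print(nps(sample))  # 35
```

Note that a response set with many Passives pulls the score toward zero even though Passives belong to neither group, which is why the bullet above calls out their dilution effect.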
Why does NPS matter for B2B revenue teams?
NPS works as a proxy for expansion and retention probability. Promoters renew at higher rates, expand into adjacent seats or modules, and generate referrals that shorten sales cycles. One mid-market SaaS company found that accounts with NPS above 50 had 2.3x the net revenue retention of accounts below 20, a gap that compounds quickly across a portfolio of 200 to 500 accounts.
Customer success teams use NPS to triage risk. A sudden score drop from a previously high-scoring account is often the first signal of a champion leaving or a competing evaluation starting, and it frequently appears weeks before any formal churn conversation surfaces.
When does NPS break down?
In B2B accounts, you often have multiple contacts at a single company. An end user might score you a 6 while the economic buyer scores you a 9. Averaging those into one account-level NPS loses the signal entirely. Tracking by contact role, then weighting or flagging accordingly, produces more useful data.
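One way to avoid the averaging trap is to keep scores keyed by contact role and flag accounts where roles diverge sharply. A rough sketch, assuming a simple list-of-dicts shape for survey responses (the field names, account names, and the 3-point gap threshold are all illustrative assumptions):

```python
from collections import defaultdict

# Hypothetical response records; the schema is an assumption for illustration.
responses = [
    {"account": "Acme", "role": "end_user", "score": 6},
    {"account": "Acme", "role": "economic_buyer", "score": 9},
    {"account": "Globex", "role": "end_user", "score": 9},
    {"account": "Globex", "role": "economic_buyer", "score": 8},
]

def flag_divergent_accounts(responses, gap=3):
    """Return accounts whose contact roles disagree by at least `gap` points,
    rather than collapsing them into one misleading account-level average."""
    by_account = defaultdict(dict)
    for r in responses:
        by_account[r["account"]][r["role"]] = r["score"]
    return {
        account: roles
        for account, roles in by_account.items()
        if max(roles.values()) - min(roles.values()) >= gap
    }

print(flag_divergent_accounts(responses))
# {'Acme': {'end_user': 6, 'economic_buyer': 9}}
```

Here Acme surfaces for review because its end user (a Detractor at 6) and economic buyer (a Promoter at 9) disagree, exactly the split an account-level average would hide.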
Survey timing creates its own distortions. Sending NPS immediately after onboarding measures first impressions, not loyalty. Sending it right after a support ticket closes measures support quality. Quarterly or semi-annual surveys, timed away from support interactions, tend to produce more honest signals about whether a customer would actually put their reputation behind recommending you.
What are the most common NPS mistakes?
- Treating it as a performance target rather than a diagnostic signal. A company-level score of 42 tells you almost nothing actionable on its own.
- Skipping segmentation. Slice by account tier, cohort age, product line, or the CSM owning the account before drawing conclusions.
- Leaving NPS data siloed in a survey tool. Disconnected from the CRM, it never informs account prioritization or renewal forecasting.
- Ignoring contract masking. If Detractor rates rise but churn stays flat, long-term contracts may be hiding real dissatisfaction.
- Missing the referral capture step. A high Promoter rate with flat expansion revenue often signals there is no formal channel to act on referral intent.
How does NPS connect to adjacent metrics?
NPS works best alongside Customer Effort Score (CES), which captures friction in specific interactions, and Churn Rate data, which lets you validate whether NPS movement actually predicts revenue outcomes in your business. Pairing NPS with net revenue retention data shows whether score changes lead or lag actual expansion and contraction.
When Promoter rates are high but expansion revenue is low, the referral motion likely has no formal channel to capture demand. When Detractor rates climb but churn holds steady, dig into whether multi-year contracts are suppressing the true health picture.
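Validating whether NPS movement predicts revenue outcomes can start as simply as bucketing accounts by score band and comparing average net revenue retention across bands. A minimal sketch with made-up data; the band thresholds and field names are assumptions, not a standard:

```python
# Hypothetical account records pairing latest account NPS with net revenue
# retention (NRR expressed as a multiplier, e.g. 1.20 = 120%).
accounts = [
    {"nps": 60, "nrr": 1.30},
    {"nps": 55, "nrr": 1.20},
    {"nps": 35, "nrr": 1.05},
    {"nps": 15, "nrr": 0.85},
    {"nps": 10, "nrr": 0.90},
]

def nrr_by_band(accounts):
    """Average NRR within coarse NPS bands to check whether score
    differences line up with real expansion and contraction."""
    bands = {"high (>50)": [], "mid (20-50)": [], "low (<20)": []}
    for a in accounts:
        if a["nps"] > 50:
            bands["high (>50)"].append(a["nrr"])
        elif a["nps"] >= 20:
            bands["mid (20-50)"].append(a["nrr"])
        else:
            bands["low (<20)"].append(a["nrr"])
    return {band: sum(v) / len(v) for band, v in bands.items() if v}

print(nrr_by_band(accounts))
```

If the bands show no spread in NRR, the survey may be measuring something other than loyalty (support sentiment, onboarding friction), or contract terms may be masking the real signal, as noted above.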
