CSAT (Customer Satisfaction Score) is a point-in-time survey metric that captures how satisfied a customer was with a specific interaction or experience, expressed as the percentage of respondents who gave a positive rating.
At a glance
- Calculated as: positive responses divided by total responses, multiplied by 100.
- Used by Customer Success, Support, and occasionally Product teams after key touchpoints.
- Measures a moment, not a relationship. It does not predict loyalty on its own.
- Low response rates introduce self-selection bias, skewing scores toward happier customers.
- Only useful when paired with a defined process for acting on negative scores.
How is CSAT actually calculated?
After a defined touchpoint, such as a support ticket closing, an onboarding session finishing, or a new feature shipping, customers receive a one-question survey: “How satisfied were you with this experience?” Respondents choose a score on a 1-5 or 1-10 scale. The positive responses (typically the top one or two options) are counted, divided by total responses, and multiplied by 100. A score of 78 means 78% of respondents rated the experience positively.
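For teams that want to sanity-check their tooling, the arithmetic is simple enough to express in a few lines. Here is a minimal Python sketch assuming a 1-5 scale where 4 and 5 count as positive; the function name and the threshold are illustrative conventions, not a standard.

```python
def csat(responses: list[int], positive_threshold: int = 4) -> float:
    """CSAT as the percentage of responses at or above the threshold.

    Assumes a 1-5 scale where 4 and 5 count as positive; adjust the
    threshold for a 1-10 scale per your own convention.
    """
    if not responses:
        raise ValueError("no responses collected")
    positive = sum(1 for r in responses if r >= positive_threshold)
    return 100 * positive / len(responses)


# 7 of 9 respondents rated the experience 4 or 5, so CSAT is about 77.8
print(csat([5, 4, 3, 5, 4, 2, 5, 4, 4]))
```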
Where most teams go wrong is treating CSAT as a continuous monitor rather than a targeted probe. Sending it after every touchpoint creates survey fatigue and produces noisy data that tells you very little about overall account health.
Why does it matter for B2B revenue teams?
CSAT functions as an early-warning signal in B2B accounts. A customer who rates onboarding 2 out of 5 is telling you something useful three months before their renewal conversation. CS teams that route low scores to an account manager within 24 hours have a real chance of course-correcting before the relationship deteriorates.
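What routing a low score within 24 hours looks like depends on your stack, but the core logic is small. A minimal sketch, assuming a 1-5 scale where 1 and 2 trigger escalation; the threshold, task shape, and owner field are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

LOW_SCORE_THRESHOLD = 2   # assumption: 1-5 scale, scores of 1-2 escalate
FOLLOW_UP_SLA = timedelta(hours=24)


@dataclass
class FollowUpTask:
    account_id: str
    score: int
    owner: str          # the account manager on record
    due_by: datetime


def route_low_score(account_id: str, score: int, owner: str) -> FollowUpTask | None:
    """Create a follow-up task when a CSAT response falls at or below the threshold."""
    if score > LOW_SCORE_THRESHOLD:
        return None
    return FollowUpTask(
        account_id=account_id,
        score=score,
        owner=owner,
        due_by=datetime.now(timezone.utc) + FOLLOW_UP_SLA,
    )


task = route_low_score("acct-1042", score=2, owner="am.jordan")
if task:
    print(f"Escalate {task.account_id} to {task.owner} by {task.due_by:%Y-%m-%d %H:%M} UTC")
```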
CSAT also connects to expansion revenue. Accounts with consistently high satisfaction scores are statistically easier to upsell. One mid-market SaaS company tracked a 34-point difference in net revenue retention between accounts scoring above 80 CSAT and those scoring below 60, measured across a 12-month cohort. The score itself did not drive retention. Acting on it did.
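To make the cohort comparison concrete, here is a sketch of net revenue retention split by CSAT band. The account records are invented for illustration; they are not the company's data.

```python
# Hypothetical records: (csat_score, arr_at_cohort_start, arr_12_months_later)
accounts = [
    (92, 50_000, 61_000),
    (85, 120_000, 132_000),
    (74, 80_000, 76_000),
    (55, 60_000, 31_000),
    (48, 40_000, 0),  # churned during the period
]


def nrr(cohort: list[tuple[int, int, int]]) -> float:
    """Net revenue retention: ending ARR over starting ARR for the same accounts."""
    start = sum(a[1] for a in cohort)
    end = sum(a[2] for a in cohort)
    return 100 * end / start


high = [a for a in accounts if a[0] > 80]
low = [a for a in accounts if a[0] < 60]
print(f"NRR, accounts above 80 CSAT: {nrr(high):.0f}")  # 114
print(f"NRR, accounts below 60 CSAT: {nrr(low):.0f}")   # 31
```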
Common mistakes and misconceptions
- Conflating CSAT with loyalty. A customer can be satisfied with last week’s support call and still churn at renewal.
- Ignoring response rate. A 90 CSAT from 8% of your customer base is not a reliable signal. Self-selection bias skews results toward happier customers (see the sanity-check sketch after this list).
- No closed loop. Collecting scores without a defined process for acting on negative responses trains customers to believe feedback disappears.
- Comparing across contexts. A post-implementation CSAT and a post-support CSAT measure different things. Averaging them into one number strips out the signal.
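The response-rate problem is easy to quantify. A rough sanity check in Python, using the normal approximation for a proportion; keep in mind that a confidence interval only captures sampling noise, not the self-selection bias itself.

```python
from math import sqrt


def csat_reliability(positive: int, responses: int, surveyed: int) -> dict:
    """Response rate plus a 95% confidence interval on the CSAT proportion.

    Uses the normal approximation; fine as a rough check at moderate
    sample sizes, not a substitute for correcting selection bias.
    """
    p = positive / responses
    margin = 1.96 * sqrt(p * (1 - p) / responses)
    return {
        "csat": round(100 * p, 1),
        "response_rate": round(100 * responses / surveyed, 1),
        "ci_95": (round(100 * (p - margin), 1), round(100 * (p + margin), 1)),
    }


# A 90 CSAT from 8% of the base: 36 positive of 40 responses, 500 surveyed
print(csat_reliability(36, 40, 500))
# {'csat': 90.0, 'response_rate': 8.0, 'ci_95': (80.7, 99.3)}
```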
How does CSAT connect to adjacent metrics?
CSAT sits alongside CES (Customer Effort Score), which measures how much work a customer had to do to get something resolved. The two are complementary. CES tends to predict short-term behavior; CSAT captures emotional response. Used together, they give CS teams a more complete picture than either metric alone.
On the commercial side, CSAT data should feed into churn rate analysis and CLV modeling. If low-CSAT accounts churn at 2.4 times the rate of high-CSAT accounts, that ratio belongs in revenue forecasting, not just a CS dashboard. Keeping CSAT siloed away from finance and sales leaves predictive signal unused.
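Putting that churn ratio into a forecast can be as simple as weighting ARR by segment-level churn. A sketch assuming a hypothetical 5% baseline quarterly churn rate alongside the 2.4x multiplier above.

```python
BASE_CHURN = 0.05          # assumption: 5% quarterly churn for high-CSAT accounts
LOW_CSAT_MULTIPLIER = 2.4  # low-CSAT accounts churn at 2.4x the baseline


def expected_churned_arr(high_csat_arr: float, low_csat_arr: float) -> float:
    """Blend segment-level churn rates into one expected ARR-at-risk figure."""
    return high_csat_arr * BASE_CHURN + low_csat_arr * BASE_CHURN * LOW_CSAT_MULTIPLIER


# $4M in high-CSAT accounts, $1M in low-CSAT accounts
print(f"${expected_churned_arr(4_000_000, 1_000_000):,.0f} ARR at risk this quarter")
# $320,000 ARR at risk this quarter
```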
When does CSAT break down?
CSAT loses reliability when survey volume is too low, when it is sent indiscriminately rather than at defined moments, or when scores are averaged across fundamentally different interaction types. A single aggregate CSAT number across onboarding, support, and QBR touchpoints obscures more than it reveals.
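Reporting per touchpoint instead of in aggregate is a small change. A sketch with illustrative data showing how one blended number hides the spread between touchpoints.

```python
from collections import defaultdict

# (touchpoint, score) pairs on a 1-5 scale; data is illustrative
responses = [
    ("onboarding", 5), ("onboarding", 4), ("onboarding", 2),
    ("support", 5), ("support", 5), ("support", 4), ("support", 3),
    ("qbr", 4), ("qbr", 2), ("qbr", 3),
]

by_touchpoint: dict[str, list[int]] = defaultdict(list)
for touchpoint, score in responses:
    by_touchpoint[touchpoint].append(score)

for touchpoint, scores in by_touchpoint.items():
    positive = sum(1 for s in scores if s >= 4)
    print(f"{touchpoint}: {100 * positive / len(scores):.0f} (n={len(scores)})")
# onboarding: 67 (n=3) / support: 75 (n=4) / qbr: 33 (n=3)

# The blended number masks that spread entirely
all_scores = [s for _, s in responses]
print(f"aggregate: {100 * sum(1 for s in all_scores if s >= 4) / len(all_scores):.0f}")  # 60
```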
It also breaks down when there is no internal owner for acting on results. Scores collected without a response workflow become a reporting exercise rather than a management tool, and customers notice the silence.
