Time to Drop NPS?

Kim Larsen
4 min read · Apr 4, 2019

Ever since The Ultimate Question was published, the Net Promoter Score (NPS) has taken the business world by storm. Most large companies today use NPS for customer loyalty tracking.

Not surprisingly, people have divided opinions on the efficacy of NPS as a metric. This is to be expected when a framework reaches this level of adoption.

But, whether or not you believe that NPS is based on the “ultimate question,” there’s one bipartisan problem that cannot be ignored: NPS is a survey-guzzling metric that requires large samples in order to track trends at any level of granularity. In other words, it’s an impractical metric — IMHO.

What is NPS?

  1. Ask survey recipients to answer this question on a scale from 0–10: “how likely is it that you would recommend our company/product/service to a friend or colleague?”
  2. Those who respond with a score of 0–6 are labeled Detractors and those who respond 9–10 are labeled Promoters. The rest are Passives.
  3. NPS = % Promoters - % Detractors.
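The three steps above can be sketched in a few lines of Python (the helper name is mine, not part of the NPS framework):

```python
def nps(scores):
    """Return NPS in points (-100 to 100) from a list of 0-10 ratings."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    return 100 * (promoters - detractors) / n

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 3, 0]))  # -> 30.0
```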

Why is NPS inefficient?

NPS is based on a survey of a random sample of customers. And anytime we deal with sampling, we have to consider the margin of error (MOE) — which essentially quantifies the (sampling) error in the metric we calculate from the survey. This is where NPS runs into trouble.

To see why, let's do the math. Let p = % Promoters, q = % Detractors, and n = number of responders. Each response contributes +1 (Promoter), -1 (Detractor), or 0 (Passive) to the score, so the variance of a single response is p + q - (p - q)^2, and at a 95% confidence level the margin of error is:

MOE = 1.96 x sqrt[(p + q - (p - q)^2) / n]

Compare this with a traditional "top-box" metric like the % of Promoters, whose per-response variance is p(1 - p) and whose margin of error is 1.96 x sqrt[p(1 - p) / n].
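The margin-of-error math can be sketched in a few lines of Python (function names are mine, for illustration):

```python
import math

Z95 = 1.96  # z-score for a 95% confidence level

def moe_nps(p, q, n):
    """Margin of error (in points) for NPS = %Promoters - %Detractors.

    Each response is +1 (prob p), -1 (prob q), or 0, so the
    per-response variance is p + q - (p - q)**2.
    """
    var = p + q - (p - q) ** 2
    return 100 * Z95 * math.sqrt(var / n)

def moe_topbox(p, n):
    """Margin of error (in points) for a single proportion like %Promoters."""
    return 100 * Z95 * math.sqrt(p * (1 - p) / n)

print(round(moe_nps(0.5, 0.5, 1000), 1))  # -> 6.2
print(round(moe_topbox(0.5, 1000), 1))    # -> 3.1
```

Note how, at p = q = 50%, the NPS variance (1.0) is four times the top-box variance (0.25), which doubles the margin of error.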

What does this tell us? The key takeaway is that the variance of NPS — which ultimately determines the margin of error — is (in most cases) more than twice the variance of a traditional “top-box” metric.

How problematic is this, really? Consider the scenario where p = 50% and q = 50% (the worst case from a variance perspective, since p + q - (p - q)^2 = 1). In this case, the margin of error reduces to a simple formula:

MOE = 1.96 / sqrt(n), i.e., roughly ±196 / sqrt(n) points.

Let’s use this formula to evaluate a simple example:

  • A company sends out 10k surveys monthly and gets 1k responses. Sounds like a decent sample size.
  • The NPS margin of error is ±6.2 points — i.e., we can feel confident that the truth is within ±6.2 points of the number we calculate from the survey*. Not exactly a tight range.
  • If we use a simpler metric, such as the % of Promoters, the margin of error is only ±3.1 points. To get a margin of error of ±3.1 points for NPS, we’d need around 4k responders.
  • Now imagine what it takes to track NPS separately across multiple segments.

Ugh.

* yes, I’m using the Bayesian interpretation here, but the conclusion is the same whether you’re a Bayesian, Frequentist or agnostic.

Last words

Whether or not we believe in the theory behind NPS, the math shows that it’s an inefficient metric.

So what should we do?

As a first principle, it’s best to stay away from any survey metric that is constructed by adding or subtracting other metrics. Also, it’s unlikely that a single survey metric can capture everything we care about when it comes to loyalty.

An alternative option is to simply track the % of Promoters and then augment it with the Product-Market Fit (PMF) score. This gets at two key areas that most businesses care about: the willingness of customers to spread the word (word of mouth) and how dependent people are on your product (stickiness) — without the excess variance of NPS.

