
What NOT to do when surveying customer satisfaction (and what to do instead)?

A speedometer-like graphic of a customer satisfaction scale

At a time when ‘customer success’ is trending, you have more reasons than ever to survey customer satisfaction. A low satisfaction rate implies erosion of your customer base, and you must act or fail miserably in your business. But even customers who are “not that unsatisfied, but not satisfied either” can be a threat to your company’s success. High satisfaction, on the other hand, puts you on the right path toward brand advocacy, something ALL brands need in the digital era [1].

But regardless of your reasons for surveying customer satisfaction, it is useless if you cannot do it right. Two days ago I went to the local real estate notary office in my town and found a really good example of how NOT to do it. I will use their example to review most of the topics in this post. Needless to say, I am not criticizing their work: I just don’t know their goals, and my assumption that they are trying to survey customer satisfaction may well be wrong.

1. Fail to make the customer comfortable to answer

The officer looking at you while you may well want to answer the survey on the giant fancy tablet to the right of the desk

The notary’s office failed miserably at that. Their survey consists of a standard 5-point scale represented by giant smiley faces. Really modern. It is also displayed on a fancy giant tablet, placed on the right side of the desk, where the officer can see you answering. And they can not only see that you are answering, but also easily see your answer. Some people may disagree, but I find that way too intimidating. I put together a quick picture in Paint 3D, though it is not intended to show off my drawing skills.

But there is something else to say. I must confess that in 2011, in my former software startup, I did somewhat the same thing. At the end of every support request, an employee would ask the customer to rate their satisfaction with our support team on a 1-5 scale. And if you think that is okay: after any rating lower than 5, the employee would ask what could be improved. No wonder every customer just changed their previous answer to 5, either immediately or on the next call.

The problem with such approaches is that your customer is not likely to provide you any useful feedback this way. At the notary’s office, they won’t even want to answer the survey unless they are upset to the point that they want you to see it. Or perhaps they want to show the officer how nice they are, because maybe they are friends, have a crush, or are doing some other kind of business together. In my “ticket evaluation program”, the customer needs the support team and won’t say “hey, you solved my problem, but you were really rude” or “you are really bad at explaining tech stuff to me”.

HOW TO MAKE IT RIGHT: Every customer satisfaction entry MUST be completely anonymous [2]. In a call center, the evaluation must either be automated (like “press a key between 1 and 5”) or collected by someone other than the person who handled the request. Anonymous electronic surveys work even better for that, if you have the means and take care of some serious issues that I will talk about in the next point [3].

2. Fail to sample your customers properly

What is the problem with the approach commonly used by undergrad students of just sharing a survey link in a Facebook group, or e-mailing everybody and inviting them to the survey? It works, right? You reach 10,000 people and get maybe 50-100 answers, perhaps 200 if you insist and post twice or so. So what’s wrong?

Let’s think for a while. What would make you answer a survey that you saw publicly posted on an internet board? Most people will just skim the title of your post until they see the magic word “survey”, and then they will disappear (that is, just ignore your post and move on to the next item in their feeds). And this is likely what you would do with most surveys you see.

But what makes you act differently? What would make you think, “hey, I really want to answer this survey”? I will tell you what: you have opinions about the topic that you consider important enough to share (or perhaps you have been a student, know how hard it is to reach the sample size your professor requested, and now you answer anyone who looks desperate enough).

So if you are a big fan of technology and you see a survey about some new high-tech product, you won’t think twice: you will answer the survey. The problem is: are big fans of technology representative of your customers if you are launching a product intended for, say, small businesses?

Statisticians call this “self-selection bias”, and it is a really dangerous threat to the validity of your research [3]. It happens because, instead of being selected by the researcher according to the needs of the research, participants select themselves into the survey for reasons of their own, often ideological ones.

Now let’s go back to the notary’s office scenario. If the tablet with the “evaluate our services” prompt is there for everyone, who will actually answer it? I kind of already answered that, but it is worth repeating:

[…] they won’t even want to answer the survey unless they are really upset to the point they want you to see that. Or perhaps […] they are friends, have a crush, or are doing some other kind of business together

(myself, in item 1)

So, when you survey customer satisfaction in the presence of self-selection bias, you will likely get only extreme responses: either people who are highly dissatisfied, or people who really love you, your brand, or the person in front of them. And unless you sell something really polarizing, they are likely not representative of your customers.

Of course, in addition to self-selection, there are several other biases which could undermine your research efforts and leave you with useless information.

HOW TO MAKE IT RIGHT: Get rid of most biases at once by surveying a probabilistic sample [4]. The name might sound “complicated”, but it is quite simple, especially if you have a list of your customers’ activity in the period you want to survey. Keep in mind that sometimes it’s not the

You can use a simple random sampling approach, like randomly selecting a sufficient number of customers and reaching out to them. It is easier, but it does not guarantee that the sample is representative of the target population (your customers); still, any distortion is due to chance rather than to bias [4]. A minimal sketch of this approach in code follows below.
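Just to illustrate, here is a quick sketch of such a simple random draw in Python. It assumes you already have a plain list of identifiers for the customers who were active in the period; the list, the sample size, and the e-mail addresses below are made up for the example.

```python
import random

# Hypothetical list of customers active in the survey period
# (in practice this would come from your CRM or billing data).
customers = [f"customer_{i}@example.com" for i in range(1, 2001)]

SAMPLE_SIZE = 200          # choose this based on the precision you need
rng = random.Random(42)    # fixed seed only to make the draw reproducible

# Each customer has the same probability of being selected.
sample = rng.sample(customers, SAMPLE_SIZE)

print(len(sample), sample[:3])
```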

You can also use a more advanced sampling approach, like systematic sampling (if you have a list of customers ordered by some important characteristic) or stratified sampling, if you know important features of your customers and want to ensure your sample reflects them [5]. A sketch of the stratified case follows below.
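And here is a minimal sketch of proportional stratified sampling, assuming you can tag each customer with the stratum they belong to. The records, segment names, and sampling fraction below are invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical (customer_id, segment) records; 'segment' stands in for
# whatever characteristic you stratify on (plan, region, company size...).
customers = [
    ("c001", "small_business"), ("c002", "enterprise"),
    ("c003", "small_business"), ("c004", "freelancer"),
    ("c005", "enterprise"),     ("c006", "small_business"),
    # ...the rest of your customer base
]

SAMPLING_FRACTION = 0.3   # survey roughly 30% of each stratum
rng = random.Random(42)   # fixed seed only for reproducibility

# Group customers by stratum.
strata = defaultdict(list)
for customer_id, segment in customers:
    strata[segment].append(customer_id)

# Draw a proportional random sample from each stratum.
sample = []
for segment, members in strata.items():
    k = max(1, round(len(members) * SAMPLING_FRACTION))
    sample.extend(rng.sample(members, k))

print(sample)
```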

To keep this post short, I will not cover the sampling details here. But if you cannot find helpful information about this online, leave a comment and I can guide you, and maybe write a future post about it.

I hope you liked it.

References

  1. Kotler, P., Kartajaya, H., & Setiawan, I. (2016). Marketing 4.0. Moving from Traditional to Digital. Hoboken: Wiley.
  2. Kotler, P., & Keller, K. L. (2016). Marketing Management (15 ed.). Harlow: Pearson Education Limited.
  3. Fricker Jr, R. D. (2017). Sampling Methods for Online Surveys. In N. G. Fielding, R. M. Lee, & G. Blank, The SAGE Handbook of Online Research Methods (pp. 162-183). London: SAGE. doi:10.4135/9781473957992.n10
  4. Malhotra, N. K. (2009). Marketing Research. An Applied Orientation (6 ed.). Upper Saddle River: Prentice Hall.
  5. Cooper, D. R., & Schindler, P. S. (2014). Business Research Methods (12 ed.). New York: McGraw-Hill/Irwin.


