Sunday, January 20, 2008

The Problem with Surveys

The pollsters were shocked last week by the results of the New Hampshire primary. Their projections were horrendously off, and, ever since, they have been trying to explain the discrepancy. As they scramble to redeem themselves and their methods, the bottom line is that data collection is not an exact science. Asking someone a question on a survey does not give you all the answers.

Why?

The answers that you receive are subject to interpretation.

In fact, the interpretation begins long before you even ask the questions.

The way the question is formed sets the stage for the interpretation. The words used and the sequence of those words shape the ultimate meaning of the question. The question may have one meaning to the person forming it, another to the person asking it, and a third to the person answering it. All of this has a profound effect upon the data collected and, ultimately, its interpretation.

How the person answering the question understands it affects his or her answer. If the person answering the question has a different understanding of what is being asked than what was intended, his or her answer will skew the data collected. Even worse, if the person does not understand the question and just gives an answer to complete the survey, the data received from that person is bogus and contaminates all the results.

I recently did exactly this. To enter a Best Buy drawing for $10,000, I had to answer several survey questions. “Too many survey questions for an entry,” I thought. Some of the questions either were not relevant to me or did not make sense to me, and, although I knew that I was corrupting their data, I picked answers at random just to get through the survey. Surveys like this create “question fatigue,” prompting respondents to pick any answer, which skews the data even further.

If your data is skewed to start, the conclusions that you draw from that data will be incorrect. Unfortunately, those interpreting data don’t know if it is correct or incorrect. They assume that the data is correct. Further, they assume that how they compile, handle, and review the data is correct. In any part of this process, they could be drawing incorrect conclusions from the data that they review and have no idea that they are doing so. Surveys are subject to a great deal of interpretation and, more accurately, assumptions, which always make me nervous.

That’s why I recommend face-to-face or voice-to-voice data collection. When you ask open-ended questions of your customers, you receive usable information that is relevant to improving your business. If you can’t ask these questions in person, asking them on the telephone is the next best thing. The power of this kind of data collection is that you can ask a follow-up question to clarify the first. The better information often comes from the second question, not the first. Through a second or even a third question, you can correct mistaken assumptions on the spot, on the part of both the person asking and the person answering. These follow-up questions eliminate assumptions and improve interpretation.

The closer you are to the data as it is collected, the better your interpretation of it will be.

The key is talking to your customer, not asking your customer to complete a survey.
