You can never, ever, ever change a question. (Thank you Taylor Swift!)

Why? Because changing even the tiniest thing in a question will change the results of the questionnaire, thereby making the norms and benchmarks collected over the years null and void.

Yes, there is indeed a chance that the results will change, but sadly, that rule is horribly flawed. Let’s consider just some of the reasons why.

The evolution of questionnaires

The first census in the USA was conducted using paper questionnaires in 1790, when interviewers were tasked with providing the paper and writing out each interviewee’s answers themselves. Sometime in the 1930s, telephone questionnaires became possible and quickly became a popular choice, even though by 1940 only 37% of US households had a telephone. (Remember that disastrous presidential poll!) Throughout the 20th century, psychology researchers were bulking up every academic journal with insights about paper-and-pencil and telephone interviewing biases, questionnaire design techniques, and all manner of survey-related issues. In the 1990s, online questionnaires jumped into the toolbox, even though only about 43% of the USA had internet access in 2000. That’s when I too began to help bulk up academic journals with research on how to conduct online questionnaires.

Questionnaires have gone through more than 100 years of major, minor, and minuscule insightful learnings, and researchers continue to recommend fixes as a regular course of action. Should we stick with how questionnaires were written in 1790, for the sake of consistency, or should we do what the experts tell us to do?

Surveys gone mobile

The world is moving on without you. While you fail to approve fixes to a bad survey in order to preserve historical data, the world is actively changing without your permission. Perhaps you’re considering how to accommodate mobile surveys and wondering whether your norms will be negatively affected. Well, somewhere around 30% of research panelists already answer your surveys on their cell phones, and even more will do so next year. Panelists are already choosing mobile questionnaires and affecting your norms without your permission. Your survey programmer is also changing your surveys without your permission: their question formats are now better suited to tiny screens, and you didn’t even get a chance to parallel test the new formats. They too affected your norms without your permission. I can’t begin to count all the ways that your norms are being affected without your permission.

Research error

Every questionnaire in field right now has at least one mistake or bad judgment call. Even your perfect survey. It might be as simple as a typo, but chances are it’s a huge grid, marketing speak, leading or loaded questions, missing answer options, confusing wording, or incorrect logic. Even my surveys have errors – gasp! Though I pride myself on being an awesome baker, I make mistakes there too, the worst one being using a cup of cornstarch instead of a tablespoon in the very first pumpkin pie I ever made. Now, I could have maintained consistency over the last 20 years by using a cup of cornstarch in every pumpkin pie since then, but who wants a pie brick every Thanksgiving? Is there really a right or wrong time to perpetuate mistakes? Baking mistakes, right time; survey mistakes, wrong time?

Don’t fool yourself about the accuracy of your data. In a true probability sample with 300 respondents, the margin of error is about 6 percentage points. Of course, almost no one in the market, social, and opinion research industry uses probability sampling, so that number could actually be anywhere from 6 points to 100 points. (Hopefully you’re using a sample provider that values data quality!) But setting probability sampling aside, let’s consider everything else. The people answering your survey aren’t robots – they have feelings and emotions, and they get distracted and forgetful. So add a bit of measurement error to the margin of error. Oh, and don’t forget that some people who didn’t match your desired target group answered your survey, so add some frame error to that. Now think about what you were trying to measure with your questionnaire and how that got translated into written words; you’d better add a bit of specification error to the list. And of course, as we already discussed, none of us is perfect. Chances are someone made a mistake in the preparation of the data, perhaps a label, the weights, or the tabulation, so you’d better add some points for processing error. In other words, in your refusal to fix a problem for fear of changing a number, you failed to recognize that that number sits within a range of potential truth. It is not THE truth. It is an estimate of the truth. And to be more precise, yours is a wrong estimate of the truth.
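If you want to check the “about 6 percentage points” figure yourself, it comes from the standard margin-of-error formula for a proportion, z·√(p(1−p)/n), at 95% confidence with the worst-case p = 0.5. A minimal sketch (the function name is mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion.

    n: sample size
    p: assumed proportion (0.5 is the worst case, giving the widest margin)
    z: critical value (1.96 for a 95% confidence level)
    """
    return z * math.sqrt(p * (1 - p) / n)

# n = 300 gives roughly 0.057, i.e. about 6 percentage points
print(f"{margin_of_error(300) * 100:.1f} points")
```

With n = 300 this works out to roughly 5.7 points, which rounds to the “about 6” above. Note the square root in the formula: quadrupling the sample to 1,200 only halves the margin, which is why throwing more respondents at a flawed questionnaire is an expensive non-fix.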

So how do we solve the problem of fixing trackers when that action might make the norms or benchmarks less relevant? Well, given that the norms and benchmarks are based on incorrect data and therefore are norms and benchmarks of, well, nothing, you can solve the problem by fixing the mistakes right now. Instead of working with 10 months or 10 years of incorrect norms, maybe you’ll only have to work with 6 months or 6 years of incorrect norms.

There is no better time to fix a bad tracker than right now.