Let’s be honest, most surveys suck. When was the last time you said, “Wow! That was a great survey experience,” as a customer? I’ve never heard those words. I know for a fact that surveys can improve the lives of experience managers and customers alike, and there’s no shortage of data, expert advice, and consultants willing to “help” make your survey “great.” Why is it so hard to get right what should be a relatively simple concept?
I could write a whole book about the worst survey practices. To protect you from my rampage, we’re not discussing bad surveys today. Instead, we’ll explore a few of the reasons bad surveys are born.
Surveys Need a Champion
Just as every team needs a leader, every survey requires a champion. Survey champions must have expertise in surveying, deep customer and business knowledge, and a relentless desire to exhaust the value of experience data. Champions needn’t have broad executive powers or a snobby job title, but their trustworthiness, authority, and discretion within the scope of the survey must be indisputable. I challenge you to name one industry leader whose key differentiator falls under someone’s “other duties as assigned.”
If your survey needs a champion, look for the person who gets the most excited about customer feedback. These champions must be obsessed with continuous improvement and with finding hidden patterns and messages. Maybe you’ve noticed someone who looks at things in unique and unusual ways; they’re probably fairly opinionated themselves. Experience is such a subjective thing that it takes a very subjective person to own it.
Once you find your champion, protect them at all costs. Of course, you’ll lose your survey champion if the employee leaves the organization. There’s only so much an employer can do about that. What we can control are internal constraints, like responsibility creep. Responsibility creep, the gradual addition of unrelated work tasks, consumes the champion’s time and distracts them from their mission as a survey champion. Imagine taking a college admissions exam at a heavy metal concert; you probably won’t enjoy the music or pass the test.
Too Many Helping Hands
Unless you’re a sole proprietor, there are probably a lot of different groups who create what your customers actually experience. As such, usually multiple groups’ performance is rated by a single survey. For instance, I recently completed a hotel survey asking me to rate the front desk, engineering, security, housekeeping, bell services, catering, and over ten different on-property restaurants. Each of these groups is a valid stakeholder, but not every stakeholder needs to be seen.
Fairness is often a justification for mediocrity, and the result of fairness is usually not the best possible outcome for the stakeholders, whether as a group or individually. It’s most fair for every stakeholder group to devise their own list of questions to add to a customer survey. This creates terrifyingly long surveys with questions that often overlap. Asking too many questions wears respondents out, and fatigued respondents rarely put forth the energy needed to rate each attribute accurately.
Like customers, stakeholders are terrible at predicting what they want before they have it. It’s still important to engage these groups, but keep the conversation centered on desired outcomes, not desired inputs (i.e., questions). For example, instead of asking, “Are there any questions you would like included in our survey?” ask, “What do you hope to learn from our customers that you cannot learn any other way?”
Outsourcing Survey Development
It could be said, “if you’ve seen one survey, you’ve seen them all.” This is mostly true. The majority of customer surveys are created by just a handful of companies who provide the software, services, and expert advice. These companies often reuse the same bag of tricks, and develop a strong foothold in a particular industry. In fact, I can often determine which company created a survey even without their name on it. As an experiment, visit the top 5 fast food chains in your area, take their survey, and compare their similarities and differences. You’ll probably find a great deal of overlap, if not completely identical surveys from the same provider.
In some cases, question reuse is helpful for benchmarking purposes, although benchmarking with data from transactional surveys is tricky. Benchmarking allows you to see where you fall in an industry against comparable organizations, and if you’re lagging behind in a particular area it may be a sign that you’re not meeting the expectations set by your competitors. If you’re not careful though, customers’ emotional state immediately after a transaction, combined with poor survey design, can lead to defective data.
The biggest downside of question reuse and survey templates is that they’re often an excuse to bypass critical thought. As a result, the questions may not be relevant to your specific customers or the problems facing your organization, and you may not know how to use the data they produce. This is also how bad survey practices become the norm. You should be able to explain, in great detail, what you’ll learn from a particular question and how you’ll get there.
Survey questions are a lot like federal regulations. Someone is always creating new ones and never removing the obsolete. This is why surveys for extremely simple transactions have grown to ridiculous lengths, and we’re still not moving the needle on what customers actually feel.
I love trying new things. If there’s a better way to learn from our customers, I’m all over it! The difference is, I am equally passionate about tearing things down. For the last survey I developed, I threw the old one out in its entirety. Not a single question was reused, because it was established that they weren’t working. Surveys should be dynamic; they should change periodically to adapt to new business challenges. Questions that aren’t insightful should be removed, and new questions should be added to experiment and find new ways to learn from customers. There are some questions that have proven themselves over time. If you can clearly articulate why they’re effective for your organization, they’re worth keeping. Any question you’re struggling to justify should be removed.
It’s also worth noting that, in the real world, there is a maximum amount of value that can be obtained from a transactional survey. At some point, humans will be fatigued from answering questions and the data simply cannot be mined another way. Every fruit eventually runs out of juice; it’s important to know when to stop squeezing.
Question redundancy, where multiple questions measure very similar attributes, is a problem often found in longer surveys with too many questions. Pressure from multiple stakeholders to ask a particular question their preferred way, a lack of coordination between survey developers, and pressure to deploy surveys quickly all lead to asking too many questions that measure roughly the same idea. Blaise Pascal wrote (in a line often misattributed to Mark Twain), “I didn’t have time to write a short letter, so I wrote a long one instead.” Too often, customers are the victims of unnecessarily long surveys.
Another type of question redundancy happens when respondents are asked to provide information about the experience they’re rating when it could be obtained elsewhere. This might include times, dates, products ordered, services requested, names, and other details. Sometimes we have no choice but to ask the customer for this information, but most of the time it could have been obtained with greater accuracy on the back end by the researcher.
Survey leaders must carefully examine the finished survey, searching for questions that create unnecessary redundancy. It’s also important to examine the data after it’s collected, to determine if each question is really collecting unique results. In most cases, you’ll find that questions can be combined or eliminated to make better use of the customer’s short attention span.
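One common way to check whether questions are really collecting unique results is to look at how strongly their answers correlate: if two rating questions move in lockstep across respondents, they’re likely measuring the same attribute and are candidates for merging. Here’s a minimal sketch of that idea in Python; the question labels, the 1–5 ratings, and the 0.8 cutoff are all hypothetical, and a real analysis would also weigh sample size and question intent before cutting anything.

```python
import numpy as np

def find_redundant_pairs(responses, labels, threshold=0.8):
    """Flag question pairs whose answers are highly correlated.

    responses: 2-D array, one row per respondent, one column per question
    (illustrative 1-5 ratings; the column names in `labels` are made up).
    Returns (label_a, label_b, r) tuples where |r| >= threshold.
    """
    corr = np.corrcoef(responses, rowvar=False)  # pairwise Pearson r
    pairs = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if abs(corr[i, j]) >= threshold:
                pairs.append((labels[i], labels[j], round(float(corr[i, j]), 2)))
    return pairs

# Hypothetical data: "staff_friendly" and "staff_helpful" ratings move
# together, suggesting the two questions measure roughly the same thing.
data = np.array([
    [5, 5, 2],
    [4, 4, 5],
    [2, 3, 4],
    [5, 4, 1],
    [1, 2, 3],
])
labels = ["staff_friendly", "staff_helpful", "wait_time"]
print(find_redundant_pairs(data, labels))
# → [('staff_friendly', 'staff_helpful', 0.94)]
```

A high correlation alone doesn’t prove redundancy, but it tells you exactly where to focus the kind of careful, question-by-question review described above.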