Surveys Are Harder Than They Look
Creating a survey seems easy: write some questions, send them out, and count the answers. But poorly designed surveys produce data that is misleading or useless. A badly worded question can push respondents toward a particular answer. An unclear scale can mean different things to different people. An overly long survey drives most respondents to abandon it partway through, leaving you with biased results from only the most motivated (or bored) completers.
Good survey design is a skill that combines psychology, communication, and statistics. The goal is to collect accurate, honest information that reflects what people actually think, feel, or do -- not what your question wording nudges them to say.
Types of Questions
Closed-ended questions give respondents a fixed set of options to choose from. "How satisfied are you with our service? Very satisfied / Satisfied / Neutral / Dissatisfied / Very dissatisfied." These are easy to analyze statistically but limit what respondents can tell you.
Open-ended questions let respondents answer in their own words. "What could we do to improve your experience?" These can uncover surprising insights but are time-consuming to analyze and hard to quantify.
Most well-designed surveys use a mix of both. Closed-ended questions provide the quantitative backbone, while a few open-ended questions capture nuances that fixed options might miss.
The chart above shows a typical distribution of responses on a five-point Likert scale. Notice how responses tend to cluster around the middle and slightly positive end -- a common pattern known as acquiescence bias, where people tend to agree more than disagree.
Avoiding Leading and Loaded Questions
A leading question steers respondents toward a particular answer. "Don't you agree that our customer service is excellent?" practically begs for a "yes." A neutral version would be: "How would you rate our customer service?" Leading questions are sometimes used deliberately to manufacture favorable results, but they destroy the credibility of any survey.
A loaded question contains a hidden assumption. "How much do you enjoy our new premium features?" assumes the respondent has used them and enjoyed them. Anyone who has not used the features, or who dislikes them, is forced into an awkward answer or skips the question entirely.
Double-barreled questions ask about two things at once. "How satisfied are you with the speed and accuracy of our service?" What if the speed is great but the accuracy is terrible? The respondent cannot give an honest answer. Split these into two separate questions.
Response Bias and How to Minimize It
Social desirability bias occurs when respondents give answers they think are socially acceptable rather than truthful. People over-report exercise, under-report alcohol consumption, and claim to recycle more than they actually do. Anonymous surveys reduce this bias but do not eliminate it entirely.
Order effects can influence responses too. Questions early in a survey can frame how people think about later questions. If you ask about negative experiences first, respondents may be in a more critical mindset when rating overall satisfaction. Randomizing question order can help, though logical flow matters too.
Response fatigue sets in when surveys are too long. After 10-15 minutes, response quality drops sharply. People start selecting the same answer for every question (called "straight-lining") or rushing through without reading carefully. Keep surveys as short as possible. Every question should earn its place by directly serving your research goal.
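Straight-lining can often be flagged after the fact by checking how uniform each respondent's answers are. A minimal Python sketch, where the function name and the 90% cutoff are illustrative assumptions rather than a standard:

```python
def is_straight_liner(responses, threshold=0.9):
    """Flag a respondent whose answers are nearly all identical.

    responses: list of answers (e.g. Likert scores) from one respondent.
    threshold: fraction of answers that must match the most common
               answer for the respondent to be flagged (assumed 0.9).
    """
    if not responses:
        return False
    # Find the most frequently given answer and its share of all answers.
    most_common = max(set(responses), key=responses.count)
    share = responses.count(most_common) / len(responses)
    return share >= threshold


# A respondent who picked "3" for every question is flagged;
# one with varied answers is not.
print(is_straight_liner([3] * 12))            # flagged
print(is_straight_liner([1, 4, 2, 5, 3, 4]))  # not flagged
```

Flagged respondents are usually reviewed rather than dropped automatically, since a genuinely consistent respondent can look identical to a careless one.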
The chart above shows typical completion rates by survey length. A 5-minute survey retains most respondents, while a 30-minute survey loses the vast majority. The respondents who do finish long surveys are often not representative of the broader audience.
Likert Scales and Response Formats
The Likert scale is the most widely used response format in surveys. It presents a statement and asks respondents to indicate their level of agreement, typically on a 5-point or 7-point scale. Five points (Strongly Agree to Strongly Disagree) work well for most purposes. Seven points offer finer distinctions but can overwhelm casual respondents.
One common debate is whether to include a neutral midpoint. Including one lets genuinely neutral respondents express that, but it also gives an easy escape for people who do not want to think hard. Some researchers use an even number of options (4 or 6 points) to force respondents to lean one way or the other.
Whatever scale you choose, be consistent throughout the survey. Switching between 5-point and 7-point scales, or changing the direction (sometimes "1" means best and sometimes "1" means worst), confuses respondents and increases error rates.
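When some items must be worded in the opposite direction (for example, a negatively phrased statement among positive ones), a common remedy is to recode those items before analysis so that a higher score always means the same thing. A minimal sketch, assuming a 1-to-max scale (the function name is illustrative):

```python
def recode_reversed(score, scale_max=5):
    """Recode a reverse-worded item on a 1..scale_max scale so that
    higher always means more agreement: on a 5-point scale,
    1 <-> 5, 2 <-> 4, and 3 stays put.
    """
    return scale_max + 1 - score


# On a 5-point scale, "Strongly Disagree" (1) with a negative
# statement becomes a 5 after recoding.
print(recode_reversed(1))               # 5
print(recode_reversed(3))               # midpoint is unchanged
print(recode_reversed(2, scale_max=7))  # works for 7-point scales too
```

Recoding at analysis time keeps the questionnaire itself readable while ensuring all items point the same direction in the data.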
Sample Size Considerations
How many responses do you need? The answer depends on how precise you want your results to be, how variable your population is, and how much error you are willing to accept. For most practical purposes, 300-400 responses give you a margin of error around 5% at a 95% confidence level. Larger samples give more precision but with diminishing returns -- going from 400 to 1,600 responses only cuts the margin of error in half.
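These figures follow from the standard margin-of-error formula for a proportion, MoE = z * sqrt(p * (1 - p) / n), using the worst case p = 0.5 and z of about 1.96 for 95% confidence. A quick Python check of the numbers above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an estimated proportion.

    n: sample size
    p: assumed proportion (0.5 is the worst case, maximum variance)
    z: z-score for the confidence level (1.96 for ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)


for n in (100, 400, 1600):
    print(f"n = {n:5d}  ->  margin of error = {margin_of_error(n):.1%}")
```

Running this shows roughly 9.8% at n = 100, 4.9% at n = 400, and 2.5% at n = 1,600, which is the diminishing-returns pattern described above: quadrupling the sample only halves the margin of error.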
But sample size only matters if the sample is representative. A million responses from a self-selected online poll are worth less than 500 responses from a properly randomized sample. Quality of sampling always trumps quantity of responses.
Good survey design requires clear, unbiased questions in a well-chosen format. Avoid leading, loaded, and double-barreled questions. Use Likert scales consistently. Keep surveys short to prevent response fatigue. Minimize social desirability bias through anonymity. And remember that a small, representative sample is far more valuable than a large, biased one. The quality of your survey determines the quality of every insight you draw from it.