Definition
Sample size (denoted as n) is the number of individual observations or data points included in a statistical study. It is a critical factor that affects the reliability, precision, and statistical power of your results.
How Sample Size Affects Results
Larger samples lead to more precise estimates and narrower confidence intervals.
Example: suppose you want to know the average commute time in your city.
Survey 10 people: average = 28 minutes; 95% CI: 18 to 38 minutes (a wide range)
Survey 500 people: average = 31 minutes; 95% CI: 29 to 33 minutes (a narrow range)
The larger sample gives a much more precise estimate of the true average.
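The shrinking interval in this example follows directly from the fact that the margin of error scales with 1/√n. The sketch below makes that concrete; the standard deviation of 16 minutes is an assumed illustrative figure (not from the survey above), and it uses a normal approximation to the 95% interval:

```python
import math
from statistics import NormalDist

def ci_half_width(sd, n, confidence=0.95):
    """Half-width (margin of error) of a normal-approximation
    confidence interval for a mean: z * sd / sqrt(n)."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # z is about 1.96 for 95%
    return z * sd / math.sqrt(n)

sd = 16  # assumed sample standard deviation, in minutes

print(round(ci_half_width(sd, 10), 1))   # margin of error with n = 10
print(round(ci_half_width(sd, 500), 1))  # margin of error with n = 500
```

Because precision improves with the square root of n, quadrupling the sample halves the interval width; going from 10 to 500 respondents shrinks the margin by a factor of √50, roughly 7.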
Why It Matters
Sample size is one of the most important decisions in any study. Too small a sample leads to unreliable results, wide confidence intervals, and failure to detect real effects (low statistical power); an unnecessarily large sample wastes time and resources.
Before collecting data, researchers use power analysis to calculate the minimum sample size needed to detect a meaningful effect. This balances precision with practicality. In business, proper sample size calculations prevent companies from drawing conclusions from insufficient A/B test data.
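To illustrate the kind of calculation a power analysis performs, the sketch below uses the standard normal-approximation formula for comparing two group means, n ≈ 2((z_α/2 + z_β) / d)² per group, where d is the standardized effect size. The effect size of 0.5 is an assumed example value, not a universal default:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate minimum n per group to detect a standardized
    effect size d in a two-sided, two-sample comparison of means
    (normal approximation to the t-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)  # round up: sample sizes must be whole people

# Assumed scenario: detect a medium effect (d = 0.5) with 80% power
print(sample_size_per_group(0.5))  # prints 63
```

Note how the required n grows with the inverse square of the effect size: halving the effect you want to detect quadruples the sample needed, which is why studies chasing small effects (or finely tuned A/B tests) demand much larger samples.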
Sample size directly affects the quality of your conclusions. Always calculate the needed sample size before collecting data, and be cautious about results from small samples.