What Is Standard Error?

Definition

The standard error (SE) is a measure of how precisely a sample statistic estimates the corresponding population parameter. Most commonly, it refers to the standard error of the mean: how much the sample mean is expected to vary from the true population mean across repeated samples.

How to Calculate It

Standard error of the mean = standard deviation divided by the square root of the sample size (SE = SD / sqrt(n)).
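The formula can be sketched in a few lines of Python. This is a minimal example, not a library function; the helper name `standard_error` and the test scores are made up for illustration, and the sample standard deviation is computed with Python's standard `statistics` module.

```python
import math
import statistics

def standard_error(sample):
    # SE = sample standard deviation / sqrt(sample size)
    return statistics.stdev(sample) / math.sqrt(len(sample))

scores = [72, 85, 78, 90, 66, 81, 74, 88, 79]  # hypothetical test scores
print(round(standard_error(scores), 2))
```

Note that `statistics.stdev` uses the sample (n − 1) standard deviation, which is the usual choice when the population standard deviation is unknown.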

Example

A dataset of test scores has a standard deviation of 12 points.

Sample of 9 students: SE = 12 / sqrt(9) = 12 / 3 = 4.0

Sample of 36 students: SE = 12 / sqrt(36) = 12 / 6 = 2.0

Sample of 144 students: SE = 12 / sqrt(144) = 12 / 12 = 1.0

Each quadrupling of the sample size halves the standard error: precision improves with the square root of the sample size, not with the sample size itself.
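The three calculations above can be reproduced directly from the formula. A short sketch, using the standard deviation of 12 from the example:

```python
import math

sd = 12.0  # standard deviation of test scores, from the example above
for n in (9, 36, 144):
    se = sd / math.sqrt(n)
    print(f"n = {n:3d}: SE = {se}")
# n =   9: SE = 4.0
# n =  36: SE = 2.0
# n = 144: SE = 1.0
```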

Why It Matters

Standard error is the building block of confidence intervals and hypothesis tests. A 95% confidence interval is roughly the sample mean plus or minus two standard errors. When you see error bars on a graph, they often represent standard errors.

Understanding standard error helps you judge the reliability of research findings. A study reporting a mean of 50 with SE = 1 gives a much more precise estimate than one reporting a mean of 50 with SE = 10. The smaller the standard error, the more you can trust the estimate.
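The difference in precision is easy to see by comparing the approximate 95% confidence intervals the two studies would report. A minimal sketch, using the conventional 1.96 multiplier for a 95% interval (the "roughly two standard errors" above):

```python
mean = 50.0  # reported sample mean from both hypothetical studies
for se in (1.0, 10.0):
    lower = mean - 1.96 * se
    upper = mean + 1.96 * se
    print(f"SE = {se:4.1f}: 95% CI is about ({lower:.2f}, {upper:.2f})")
# SE =  1.0: 95% CI is about (48.04, 51.96)
# SE = 10.0: 95% CI is about (30.40, 69.60)
```

The SE = 1 study pins the mean down to a span of about 4 points; the SE = 10 study leaves a span of nearly 40.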

Key Takeaway

Standard error measures the precision of a sample estimate. Larger samples produce smaller standard errors and more reliable estimates.
