What Is Statistical Significance?

Definition

Statistical significance indicates that an observed result is unlikely to have occurred by chance alone. A result is considered statistically significant when its p-value falls below a predetermined threshold (the significance level, usually 0.05).

How It Is Determined

You set a significance level before your study, run a statistical test, and compare the p-value to your threshold.
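The procedure above can be sketched in code. This is a minimal illustration using a two-sample permutation test on made-up data; the data, group sizes, and the choice of a permutation test are assumptions for demonstration, not from the source.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test: p-value for the observed difference in means."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                      # random relabeling of the groups
        diff = (statistics.mean(pooled[:len(group_a)]) -
                statistics.mean(pooled[len(group_a):]))
        if abs(diff) >= abs(observed):           # at least as extreme as observed
            count += 1
    return count / n_permutations

alpha = 0.05                                     # significance level, chosen in advance
treatment = [42, 38, 45, 40, 37, 43, 39, 41]    # hypothetical minutes to pain relief
placebo   = [55, 52, 58, 50, 54, 57, 53, 56]

p_value = permutation_test(treatment, placebo)
significant = p_value < alpha                    # compare p-value to the threshold
```

The key point is the order of operations: `alpha` is fixed before the test is run, and the decision is a simple comparison of the computed p-value against it.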

Example

A pharmaceutical company tests a new headache pill against a placebo.

Patients on the pill report pain relief 15 minutes faster on average.

The statistical test gives a p-value of 0.002.

Since 0.002 < 0.05, the result is statistically significant: a difference this large is unlikely to be due to chance alone.

But is 15 minutes clinically meaningful? That is a separate question about practical significance.

Why It Matters

Statistical significance is the standard criterion for deciding whether research findings reflect a real effect. Journals, regulators, and businesses all use it to separate signal from noise. Without it, we would have no systematic way to distinguish genuine effects from random variation.

However, statistical significance has limits. It does not measure effect size, practical importance, or clinical relevance. A huge study can make a trivially small effect reach significance. Always report effect sizes and confidence intervals alongside significance to give a complete picture of your findings.
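As a sketch of that advice, the following computes one common effect-size measure (Cohen's d, a standardized mean difference) and an approximate 95% confidence interval for the difference in means, using a normal approximation. The data are hypothetical, reusing the headache-pill framing above.

```python
import math
import statistics

def cohens_d(a, b):
    """Effect size: difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

def mean_diff_ci(a, b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff - z * se, diff + z * se

treatment = [42, 38, 45, 40, 37, 43, 39, 41]    # hypothetical minutes to pain relief
placebo   = [55, 52, 58, 50, 54, 57, 53, 56]

d = cohens_d(treatment, placebo)                # how large is the effect?
low, high = mean_diff_ci(treatment, placebo)    # how precisely is it estimated?
```

Reporting `d` and the interval alongside the p-value shows not just whether an effect exists, but how big it is and how precisely it has been measured.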

Key Takeaway

Statistical significance tells you a result is unlikely to be due to chance alone. It does not tell you the result is large or important. Always consider practical significance too.
