Definition
A p-value is the probability of obtaining results at least as extreme as the observed results, assuming that the null hypothesis is true. In plain language, it answers the question: "If there were really no effect, how surprising would my data be?"
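The phrase "at least as extreme" can be made concrete with a small computation. As an illustration (the coin-flip numbers here are hypothetical, not from the text), this sketch computes an exact two-sided binomial p-value by summing the null probability of every outcome at least as extreme as the observed one:

```python
from math import comb

def binomial_p_value(heads: int, flips: int) -> float:
    """Exact two-sided p-value for a fair-coin null (p = 0.5).

    Sums the null probability of every outcome at least as extreme as
    the observed count (the upper tail), then doubles it because the
    null distribution is symmetric. Assumes heads >= flips / 2.
    """
    upper_tail = sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips
    return min(1.0, 2 * upper_tail)

# 60 heads in 100 flips: surprising, but not significant at the 0.05 level.
p = binomial_p_value(60, 100)
print(round(p, 3))
```

Here the p-value comes out near 0.057: getting 60 or more heads (or, symmetrically, 40 or fewer) is uncommon under a fair coin, but not rare enough to clear the conventional 0.05 bar.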
How to Interpret a P-Value
The smaller the p-value, the stronger the evidence against the null hypothesis.
A company tests whether a new website design increases sign-ups. The old design had a 10% sign-up rate. The new design shows 12% in a test with 1,000 visitors.
Example
A statistical test returns a p-value of 0.03.
This means: if the new design truly had no effect, there would be only a 3% chance of seeing a difference at least this large. Since 0.03 falls below the conventional 0.05 threshold, the result is considered statistically significant.
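The sign-up example can be reproduced with a one-proportion z-test, a normal approximation; the exact figure depends on which test is used, so the quoted 0.03 should be read as illustrative. A minimal sketch:

```python
import math

def one_proportion_ztest(successes: int, n: int, p0: float):
    """Two-sided z-test of an observed proportion against a baseline p0."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)           # standard error under the null
    z = (p_hat - p0) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area
    return z, p_value

# 120 sign-ups out of 1,000 visitors (12%) against a 10% baseline.
z, p = one_proportion_ztest(120, 1000, 0.10)
print(round(z, 2), round(p, 3))  # p comes out near 0.035
```

The normal approximation gives a p-value around 0.035, in the same neighborhood as the 0.03 reported in the example.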
Why It Matters
P-values are the backbone of statistical decision-making in science, medicine, and business. They help researchers decide whether an observed pattern is likely real or just random noise. Nearly every published research study reports p-values.
However, p-values are widely misunderstood. A p-value does not tell you how large or important an effect is. A tiny p-value can come from a trivially small effect tested on a huge sample. Always pair p-values with effect size and confidence intervals for a complete picture.
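The warning about sample size can be demonstrated directly. In this hypothetical, the same negligible lift (10.0% to 10.1%) is nowhere near significant with 1,000 visitors but becomes overwhelmingly "significant" with 10 million, even though the effect size never changes:

```python
import math

def ztest_p(successes: int, n: int, p0: float) -> float:
    """Two-sided one-proportion z-test p-value (normal approximation)."""
    z = (successes / n - p0) / math.sqrt(p0 * (1 - p0) / n)
    return math.erfc(abs(z) / math.sqrt(2))

# Same 0.1-point lift, two very different sample sizes (hypothetical data).
small = ztest_p(101, 1_000, 0.10)             # 10.1% of 1,000 visitors
large = ztest_p(1_010_000, 10_000_000, 0.10)  # 10.1% of 10,000,000 visitors
print(small > 0.05, large < 0.001)  # the effect is 0.1 points either way
```

A confidence interval around the observed rate would make the triviality of the lift explicit, which is exactly why p-values should travel with effect sizes and intervals.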
A p-value measures surprise under the null hypothesis. A small p-value suggests the data would be unlikely if chance alone were at work, but it says nothing about the size or practical importance of the effect.