What Is a Type 2 Error?

Definition

A Type 2 error (also called a false negative or beta error) occurs when you fail to reject a null hypothesis that is actually false. In plain terms, you miss a real effect: you conclude "nothing is happening" when something actually is.

How It Happens

Type 2 errors typically occur when a study lacks statistical power to detect a real effect: the sample is too small, the measurements are too noisy, or the true effect is small relative to the variability.

Example

A new drug actually reduces headache duration by 20 minutes on average. A clinical trial tests it on only 15 patients.

Due to the small sample size, the estimate of the drug's effect is noisy. The test produces p = 0.12, which is above the conventional 0.05 significance threshold, so the null hypothesis is not rejected.

The researchers conclude the drug does not work. This is a Type 2 error: the drug is effective, but the study was too small to detect the effect reliably.
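A quick simulation makes this concrete. The sketch below assumes a true 20-minute mean reduction (from the example), an illustrative 50-minute patient-to-patient standard deviation, and a two-sided z-test at alpha = 0.05; all numbers except the 20-minute effect are hypothetical.

```python
import random
import statistics
from statistics import NormalDist

# Hypothetical trial parameters: the 20-minute effect comes from the example;
# the 50-minute standard deviation and the z-test are illustrative assumptions.
TRUE_EFFECT = 20.0   # true mean reduction in headache duration (minutes)
SD = 50.0            # assumed patient-to-patient standard deviation (minutes)
N = 15               # patients in the trial
ALPHA = 0.05
Z_CRIT = NormalDist().inv_cdf(1 - ALPHA / 2)  # two-sided critical value, ~1.96

def trial_rejects_null(rng):
    """Simulate one trial and test H0: mean reduction = 0 with a z-test."""
    reductions = [rng.gauss(TRUE_EFFECT, SD) for _ in range(N)]
    mean = statistics.fmean(reductions)
    z = mean / (SD / N ** 0.5)  # known-sigma z statistic
    return abs(z) > Z_CRIT

rng = random.Random(42)
trials = 10_000
power = sum(trial_rejects_null(rng) for _ in range(trials)) / trials
print(f"Estimated power with n={N}: {power:.2f}")
print(f"Estimated Type 2 error rate (beta): {1 - power:.2f}")
```

Under these assumptions the trial detects the real effect only about a third of the time, so most runs of this study end in a Type 2 error.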

Why It Matters

Type 2 errors mean missed opportunities. An effective drug might be abandoned. A successful marketing strategy might be discarded. A real safety problem might go unnoticed. These "false negatives" can be just as costly as false positives.

The probability of a Type 2 error is called beta. Statistical power (1 - beta) is the probability of correctly detecting a real effect. Most researchers aim for 80% power, meaning a 20% chance of a Type 2 error. Increasing your sample size is the most straightforward way to boost power and reduce Type 2 errors.

Key Takeaway

A Type 2 error means missing a real effect. The best defense is adequate sample size and a well-designed study with sufficient statistical power.
