New Information Changes Everything
Imagine you are about to leave the house. You check the weather app: there is a 30% chance of rain. Then you look out the window and see dark clouds rolling in. Does that change how likely you think rain is? Of course it does.
That is the core idea of conditional probability: the likelihood of something happening changes when you learn new information. The probability of rain given that you see dark clouds is different from the overall probability of rain on any random day.
The Key Phrase: "Given That"
In probability, the phrase "given that" is the signal that you are dealing with conditional probability. It means you already know something happened, and you want to figure out how that changes the probability of something else.
Mathematicians write it like this:
P(A | B) - read as "the probability of A, given that B has happened."
The vertical bar "|" is just shorthand for "given that." You do not need to memorize fancy notation - just remember that conditional probability answers the question: "Now that I know this one thing, how does it affect the chances of that other thing?"
A Simple Example: Colored Marbles
A jar has 4 red marbles and 6 blue marbles (10 total). You draw one marble without looking. The probability of drawing red is 4/10 = 0.40.
Now suppose someone peeks and tells you: "The marble you drew is NOT blue." Given this information, you know the marble must be red. The probability of red, given "not blue," is now 1.00 - certain.
That is conditional probability at work. The new information ("not blue") completely changed the probability.
Most real-world cases are less extreme than that, but the principle is the same. New information narrows down the possibilities, which changes the probabilities.
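If you like checking this kind of reasoning with a few lines of code, here is a quick Python sketch of the marble example (the variable names are just for illustration, not part of the text):

```python
# Marble example: conditional probability by counting.
# Jar: 4 red marbles, 6 blue marbles. Learning "not blue" rules
# out every blue marble, shrinking the sample space to the reds.

red, blue = 4, 6
total = red + blue

p_red = red / total              # before any information: 4/10
p_red_given_not_blue = red / red # only red marbles remain: 4/4

print(p_red)                 # 0.4
print(p_red_given_not_blue)  # 1.0
```

The key move is in the denominator: once "not blue" is known, we divide by the number of remaining possibilities (4), not the original total (10).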
The Formula (In Plain Language)
The formula for conditional probability is:
P(A | B) = P(A and B) ÷ P(B)
In words: to find the probability of A given that B happened, take the probability that both A and B happen, and divide by the probability that B happens.
Why does this work? When you know B happened, you are no longer looking at all possible outcomes - you are only looking at the ones where B occurred. Dividing by P(B) "zooms in" on that smaller world.
For example: at a school, 60% of students play sports, and 24% of all students both play sports and are on the honor roll. What is the probability that a student is on the honor roll, given that they play sports?
P(honor roll | sports) = P(honor roll and sports) ÷ P(sports)
= 0.24 ÷ 0.60 = 0.40, or 40%.
So among students who play sports, 40% are also on the honor roll.
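The formula translates directly into code. Here is a minimal Python sketch of it, applied to the school example (the function name is my own choice, not standard notation):

```python
def conditional(p_a_and_b, p_b):
    """P(A | B) = P(A and B) / P(B)."""
    return p_a_and_b / p_b

# School example: 24% of students play sports AND make the honor
# roll; 60% of students play sports.
p = conditional(p_a_and_b=0.24, p_b=0.60)
print(round(p, 2))  # 0.4
```

Dividing by p_b is the "zooming in" step: it rescales the joint probability so it is measured relative to the smaller world where B happened.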
Real-World Example: Medical Testing
Medical tests are one of the most important - and most misunderstood - applications of conditional probability. Here is a scenario that trips up even doctors:
A disease affects 1% of the population. A test for this disease is 90% accurate, meaning:
- If you HAVE the disease, the test correctly says "positive" 90% of the time.
- If you do NOT have the disease, the test correctly says "negative" 90% of the time (but gives a wrong "positive" 10% of the time).
You take the test, and it comes back positive. What is the probability you actually have the disease?
Most people guess around 90%. The real answer is much lower. Let us work through it with 1,000 imaginary people:
- 10 people actually have the disease (1% of 1,000).
- Of those 10, the test correctly identifies 9 as positive (90% accuracy).
- 990 people do NOT have the disease.
- Of those 990, the test incorrectly says 99 are positive (10% false positive rate).
Total positive results: 9 + 99 = 108. But only 9 of those 108 actually have the disease.
P(disease | positive test) = 9 ÷ 108 ≈ 0.083, or about 8.3%.
Even with a "90% accurate" test, a positive result means only about an 8% chance you have the disease. This is because the disease is rare, so the false positives from the large healthy group outnumber the true positives from the small sick group.
This is not a flaw in the test - it is how conditional probability works when the base rate (how common the disease is) is low. Understanding this can save you from unnecessary panic after a screening test.
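The 1,000-person walkthrough above can be sketched in Python. This is just the same counting argument written out, with the rates from the scenario as named variables (the names themselves are illustrative):

```python
# Medical test example, worked with 1,000 imaginary people.
population = 1000
base_rate = 0.01      # 1% of people have the disease
sensitivity = 0.90    # P(positive | disease)
false_positive = 0.10 # P(positive | no disease)

sick = population * base_rate               # 10 people
healthy = population - sick                 # 990 people

true_positives = sick * sensitivity         # 9 correct positives
false_positives = healthy * false_positive  # 99 wrong positives

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(round(p_disease_given_positive, 3))  # 0.083
```

Try raising base_rate to 0.20 and rerunning: the same test suddenly looks far more convincing, which is exactly the base-rate effect described above.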
Real-World Example: Weather
In a certain city, it rains on 20% of days overall. On days when it rains, there are clouds in the morning 85% of the time. On days when it does NOT rain, there are morning clouds 30% of the time.
You wake up and see clouds. What is the probability it will rain?
Let us use 100 days as our sample:
- 20 days have rain. Of these, 17 had morning clouds (85% of 20).
- 80 days are dry. Of these, 24 had morning clouds (30% of 80).
Total days with morning clouds: 17 + 24 = 41.
Of those 41 cloudy mornings, 17 led to rain.
P(rain | clouds) = 17 ÷ 41 ≈ 0.415, or about 41.5%.
Clouds raised the rain probability from 20% to about 42%. The information helped, but clouds alone do not guarantee rain.
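The same 100-day tally works as a short Python sketch (again, the variable names are just labels for the numbers in the scenario):

```python
# Weather example with a 100-day sample.
days = 100
p_rain = 0.20
p_clouds_given_rain = 0.85
p_clouds_given_dry = 0.30

rainy = days * p_rain                          # 20 rainy days
dry = days - rainy                             # 80 dry days

cloudy_and_rain = rainy * p_clouds_given_rain  # 17 days
cloudy_and_dry = dry * p_clouds_given_dry      # 24 days

p_rain_given_clouds = cloudy_and_rain / (cloudy_and_rain + cloudy_and_dry)
print(round(p_rain_given_clouds, 3))  # 0.415
```

Notice the structure is identical to the medical test: count the "evidence" days (clouds) in both groups, then ask what fraction of them came from the group you care about (rain).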
Independent vs. Dependent Events - Revisited
In the previous lesson, we mentioned that some events are independent (one does not affect the other) and some are dependent (one changes the probability of the other). Conditional probability gives us a precise way to test this:
If P(A | B) = P(A), then A and B are independent.
In other words, if knowing B happened does not change the probability of A at all, the events do not affect each other. If knowing B does change the probability of A, they are dependent.
Suppose 50% of customers at a coffee shop order coffee. Among customers who arrive before 9 AM, 75% order coffee. Since P(coffee | before 9 AM) = 0.75 is different from P(coffee) = 0.50, the time of arrival and ordering coffee are dependent events. Early customers are more likely to want coffee.
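This independence test is easy to express in code. Here is a small sketch, with the coffee-shop numbers plugged in (the tolerance parameter is a practical touch for comparing floating-point values, not part of the definition):

```python
# Independence check: A and B are independent when knowing B
# leaves the probability of A unchanged, i.e. P(A | B) = P(A).
def is_independent(p_a, p_a_given_b, tol=1e-9):
    return abs(p_a - p_a_given_b) < tol

# Coffee shop: P(coffee) = 0.50, P(coffee | before 9 AM) = 0.75.
print(is_independent(0.50, 0.75))  # False -> dependent events
print(is_independent(0.50, 0.50))  # True  -> independent
```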
Why Conditional Probability Trips People Up
The human brain is not naturally wired for conditional probability. We tend to make two common mistakes:
- Ignoring the base rate. In the medical test example, people focus on the 90% accuracy and forget that the disease is rare. The rarity of the disease is crucial information.
- Confusing the direction. P(positive test | disease) is NOT the same as P(disease | positive test). The probability of testing positive if you are sick is 90%. But the probability of being sick if you test positive is only 8%. The direction matters enormously.
This second mistake is so common it has a name: the prosecutor's fallacy. It shows up in courtrooms, medical offices, and news headlines. Being aware of it makes you a sharper thinker.
Practical Tips for Thinking About Conditional Probability
- Use concrete numbers. Instead of working with abstract percentages, imagine 100 or 1,000 people. It makes the math clearer and the results more intuitive.
- Ask "what changed?" Whenever you learn new information, ask yourself: "Does this change the chances?" If yes, you are in conditional probability territory.
- Watch the direction. Always be clear about which event is "given." P(A | B) and P(B | A) are usually very different numbers.
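The "watch the direction" tip is worth seeing in numbers. Using the counts from the medical test example (9 true positives, 10 sick people, 108 total positives), the two directions give very different answers:

```python
# Same events, opposite directions - using the medical test counts.
p_positive_given_disease = 9 / 10   # if you are sick, how often positive?
p_disease_given_positive = 9 / 108  # if you test positive, how often sick?

print(p_positive_given_disease)           # 0.9
print(round(p_disease_given_positive, 3)) # 0.083
```

Same test, same data: 90% in one direction, about 8% in the other. Swapping them is exactly the prosecutor's fallacy.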
Conditional probability measures how the likelihood of an event changes when you learn new information. The formula P(A | B) = P(A and B) ÷ P(B) captures this, but the most important insight is practical: always consider the base rate (how common something is to begin with), and never confuse P(A | B) with P(B | A). Using concrete numbers - like imagining 1,000 people - makes conditional probability much easier to understand and apply.