“You Tested Positive. But Are You Really Sick?”
- Michael Lee, MBA
- May 24
- 4 min read

Introduction: Why Test Accuracy Isn't the Whole Story
In the age of modern diagnostics, we put immense trust in test results. If a test is 98% accurate, we assume it must be reliable. But is that really true? The answer lies in a simple yet powerful principle: what a positive or negative result actually means depends not just on the quality of the test, but also on how common the disease is in the population being tested. This often-overlooked factor can dramatically change how we interpret a result.
This matters for anyone interpreting medical test results, whether you're a healthcare professional or a curious individual trying to make sense of your own diagnosis. It's not enough to ask "how good is the test?" We must also ask: "how common is the disease where I am?"
To explore this idea, we'll first walk through an example involving tuberculosis (TB), and then connect it to a more familiar case: COVID-19 testing. Together, these examples show how even highly accurate tests can give misleading results in real-world scenarios.
Case Study 1: TB Testing and the Base Rate Fallacy
Imagine testing 100,000 people for TB with a test that:
- Gives a positive result 98% of the time when the person actually has TB (high sensitivity)
- Gives a negative result 99% of the time when the person doesn't have TB (high specificity)
Sounds great, right? But what if we told you that only 142 out of every 100,000 people actually have TB?
That’s just 0.142%.
Now run the numbers:
- 142 people actually have TB → 98% detected ≈ 139 true positives
- 99,858 do not have TB → 1% false positive rate ≈ 999 false positives
That means 1,138 people test positive, but only 139 actually have TB.
So if you test positive, the chance you actually have TB is only: 139 / 1,138 ≈ 12.2%
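If you want to check these figures yourself, here is a minimal Python sketch of the same calculation (the ppv helper and its name are my own for illustration; the 98%, 99%, and 142-per-100,000 figures come straight from the example above):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test).

    Numerator: rate of true positives in the population.
    Denominator: rate of all positives (true + false).
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# TB example: 98% sensitivity, 99% specificity, 142 cases per 100,000
print(f"{ppv(0.98, 0.99, 142 / 100_000):.1%}")  # 12.2%
```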
Despite the high test accuracy, most people who test positive are actually false positives. This is the base rate fallacy: ignoring how rare the condition is skews our understanding of the results.
This is especially important for rare diseases, where the vast majority of the population is healthy. Even small errors in testing can lead to a flood of false positives. And that means a positive result doesn't always mean what you think it means.

Case Study 2: COVID-19 Testing in the Early Pandemic
In the early stages of COVID-19, rapid antigen tests were used widely. Many were advertised with impressive accuracy:
- Sensitivity: around 98%
- Specificity: around 99%
Let’s say you use one of these tests in a city where the actual infection rate is 1%. Test 100,000 people:
- 1,000 actually have COVID-19 → 98% detected = 980 true positives
- 99,000 do not have COVID-19 → 1% false positive rate = 990 false positives
So, 1,970 people test positive, but only 980 actually have COVID.
That means the probability you truly have COVID after a positive test is: 980 / 1,970 ≈ 49.7%
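Plugging the COVID numbers into the same illustrative ppv helper from the TB sketch reproduces this figure:

```python
# COVID example: identical test quality, but 1% prevalence
print(f"{ppv(0.98, 0.99, 0.01):.1%}")  # 49.7%
```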
That’s better than the 12.2% we saw for TB, but it still means more than half of those testing positive may not have COVID. Why the difference?
Because COVID-19 was far more prevalent during its peak. Even a 1% base rate is significantly higher than TB’s 0.142%. That higher prevalence boosts the Positive Predictive Value (PPV), which means a positive test result is more likely to be true.
In statistical terms, the Positive Predictive Value increases with prevalence, even if test quality stays the same. That’s why in times of surge, a positive COVID result became more reliable.
In other words: the more common a disease is in the tested population, the more meaningful a positive result becomes, even though the test itself hasn't changed at all.
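To make the relationship concrete, here is a short sweep (again using the illustrative ppv helper defined earlier) that holds the test's quality fixed and varies only the base rate:

```python
# Same 98%-sensitive, 99%-specific test at different base rates
for prevalence in (0.00142, 0.01, 0.05, 0.20):
    print(f"prevalence {prevalence:6.2%} -> PPV {ppv(0.98, 0.99, prevalence):.1%}")
# prevalence  0.14% -> PPV 12.2%
# prevalence  1.00% -> PPV 49.7%
# prevalence  5.00% -> PPV 83.8%
# prevalence 20.00% -> PPV 96.1%
```

Nothing about the test changes from one line to the next; only the population does.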

Takeaway: Why Context Is Everything in Diagnostics
These case studies show that test accuracy cannot be viewed in isolation. Even excellent tests can lead to misleading conclusions if the disease is rare.
Always ask:
- What's the base rate of the disease in this population?
- What are the consequences of false positives or false negatives?
- How does the disease context (e.g., outbreak, control, or endemic stage) affect interpretation?
Whether you're a doctor, policymaker, or just someone interpreting your own test result, understanding the context behind the numbers is crucial. After all, the right answer in medicine isn’t just about precision—it’s about probability. And probability, in turn, is shaped by the world around us.
This way of thinking also mirrors the broader world of statistical inference: when we make decisions based on data, we need to weigh not just the quality of the evidence, but also the background likelihood of the thing we're testing for. So when evaluating any diagnostic test, ask not only how accurate it is, but how frequently the condition occurs in the population being tested.