Imagine that you have recently been asked to take a Covid-19 test – maybe you are a resident of Lambeth, a school pupil, or about to travel abroad.

You receive a message with your results, which reads as follows: ‘Your recent coronavirus test has come back positive’.

Take a moment to consider how you would interpret this result: what do you think is the percentage chance that you *actually* have Covid-19?

You might think the result is as unequivocal as it sounds: you have it, 100%.

Or you might have heard about the ‘false positive rate’ – the proportion of uninfected people who receive a positive result. For the PCR test, this is estimated to be as low as 0.8%. Perhaps this means your positive result indicates a 99.2% chance of actually having Covid-19?

But the actual answer is that it’s impossible to say without knowing the prevalence of Covid in the population at any given time.

Think about it this way. If a population was completely Covid-free, and the false positive rate of a test was 0.8%, 800 people for every 100,000 tested would receive a positive result – even though no one actually has Covid. In this scenario, you would be 100% sure that you didn’t have the disease, despite getting a positive result.

Neglecting this ‘base rate’ is a common fallacy that has significant implications for policymakers and citizens alike. So what does it mean for our interpretation of a positive test result today, when there are on average 18 cases per 100,000 in England?

The correct answer is that a positive test result today means you have roughly a **1 in 45** (or 2.2%) chance of actually having Covid-19.
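For readers who want to check the arithmetic, here is a minimal sketch of the calculation in Python. It uses the figures from the article – 18 cases per 100,000 and a 0.8% false positive rate – and, for simplicity, assumes the test catches every true case (perfect sensitivity), which the article's rough figure also implicitly does.

```python
# Natural-frequency sketch of interpreting a positive test result.
# Assumptions: base rate of 18 cases per 100,000 (from the article),
# false positive rate of 0.8%, and perfect sensitivity (every true
# case tests positive) -- a simplification for illustration.

population = 100_000
base_rate = 18 / 100_000          # prevalence: 18 cases per 100,000
false_positive_rate = 0.008       # 0.8% of uninfected people test positive

infected = population * base_rate               # 18 people
uninfected = population - infected              # 99,982 people

true_positives = infected                       # perfect sensitivity assumed
false_positives = uninfected * false_positive_rate  # roughly 800 people

# Probability of actually having Covid-19, given a positive result:
ppv = true_positives / (true_positives + false_positives)

print(f"{true_positives:.0f} true positives, {false_positives:.0f} false positives")
print(f"Chance of infection given a positive test: {ppv:.1%}")   # about 2.2%
print(f"Roughly 1 in {round(1 / ppv)}")                          # roughly 1 in 45
```

Out of roughly 818 positive results, only 18 come from people who are actually infected – which is where the counterintuitive 1-in-45 figure comes from.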

If you’re currently scratching your head, you’re not alone. We recently posed a similar question to 4,000 people in the UK and the US. We found that only around **1 in 83** people got it right.

For most people this figure of 2.2% is completely counterintuitive. It’s only when we visually sketch out a scenario of 100,000 people in a ‘natural frequencies tree’ that we can begin to understand how this works – and, in turn, what it implies for how we should interpret a positive result as the number of cases in the population (the base rate) falls.

But seeing information conveyed in this way is no magic bullet for improving comprehension. When we showed a similar graphic (without the maths underneath) to some participants to help them answer the same question, the proportion of people answering correctly increased to 5.4% (around **1 in 18**) – better, but by no means great.

According to the Nobel laureate Daniel Kahneman, this is because most of us are relatively poor ‘intuitive statisticians’. We simply struggle to compute percentages and probabilities.

In practice, this means that many of us are prone to overlooking the inherent uncertainties associated with a seemingly certain test result. And it also means that simply presenting people with information about false positives and negatives is unlikely to promote real understanding.

Communicating risks such as these brings several challenges, which we discuss in more detail in a recently published paper produced in partnership with Finsbury Glover Hering. If you’re interested, you can find the paper here.