Imagine that you are standing beside a train track. A group of five people is crossing the track. Suddenly you notice a train barrelling uncontrollably towards the group.
The track forks before it reaches the group. Beside you is a lever, which you can pull to divert the train at the fork, saving the lives of the five people. However, there is a single person on the other branch of the track, who will be killed if you divert the train. Do you pull the lever to save the group of five, sacrificing one person’s life in the process?
This scenario is a famous thought experiment, first conceived by the philosopher Philippa Foot in 1967 and later dubbed “the trolley problem” by Judith Jarvis Thomson. When asked, around 90% of people say ‘yes’. Most justify their decision on the basis of the resulting outcome: yes, one person dies, but you have saved five people’s lives. The alternative would have been to do nothing, resulting in five people dying.
Now imagine exactly the same scenario: there’s an out-of-control train and a group of five people crossing the tracks. But instead of a lever, there is a person standing right beside the track. The only way to stop the train is to push this person into its path. Doing so will kill them, but it will stop the train from killing the group of five.
What do you do now? In this situation, most people say they would not push the person. And yet in terms of the outcome, the basis on which most people justified their decision in the first scenario, nothing has changed: one person is killed to save the lives of five.
Many versions of the original scenario have been devised by philosophers around the world, and many explanations for our differing reactions have been proposed. But one of the most convincing focuses on a well-known behavioural effect known as ‘omission bias’.
Omission bias describes our inclination to favour omissions (such as withholding the truth) over otherwise equivalent commissions (such as actively deceiving someone). In the first train track scenario, we merely allow someone to die (an omission), whereas in the second scenario we have to actively intervene, thereby killing a person (a commission).
This concept of omission bias doesn’t just elucidate moral dilemmas. It can also help explain many people’s reactions to the recent flood of information about the risks associated with different types of COVID-19 vaccines. This information has often taken the form of comparisons with other activities, intended to put these risks into perspective. The BBC, for example, compared the risk of vaccine side effects to the likelihood of being struck by lightning, or to the far greater risk of a fatal car accident.
And while it may seem intuitive to put such abstract risk figures into context, these comparisons miss a crucial distinction: between omission (the risk we incur passively, by not taking preventative action) and commission (the risk we incur actively, by purposefully engaging in an activity).
As the trolley problem illustrates, we feel far less responsible for the outcomes of our omissions than for those of our commissions.
In the context of COVID-19, this means we are likely to feel less responsible for any outcomes arising from passively contracting the virus through non-vaccination than for any outcomes arising from actively getting vaccinated. This is despite the fact that the risks of contracting the virus are far higher than the risks of vaccination.
The key lesson here is a familiar one for behavioural scientists, designers and data analysts the world over: if we want to encourage people to take action, we first need to understand why they might make particular decisions. Once we have done that, we can speak in a language that resonates, and build programmes that go with the grain of how human beings actually think about those decisions, rather than how we imagine they should.