Tag Archives: cognitive biases

The MWS Podcast 31: Robert M. Ellis on Cognitive Biases

In this latest interview, Robert M. Ellis, the chair of the society, talks about cognitive biases: what they have in common, how you can go about recognising them, and how you can work with them.


MWS Podcast 31: Robert M. Ellis as audio only:
Download audio: MWS_Podcast_31_RobertMEllis

The Trouble with Revisionism

Almost everything we do is in some way an attempt to improve on what went before. Even tidying up a room involves what we see as an improvement on its previous state. When we consider traditions of human thought and activity, too, each new development of a tradition tries to address a new condition of some kind and thus also remedy a defect: for example, the Reformation was a response to dogmatic limitations and perceived abuses in the Catholic church, and new artistic movements respond to what they see as the aesthetic limitations of the previous movements that inspired them.

In many ways, then, it’s not surprising that both individuals and groups gradually evolve new ways of doing things in response to past tradition or custom. What creates a problem, though, is when we essentialise that tradition and try to appropriate its whole moral weight to justify our current approach: believing that we have found the ultimately right solution, the true answer, or the ultimately correct interpretation of that tradition. When we do that, we’re not just contributing to a new development that we acknowledge to be different from what went before, but also imposing that development on the past. In effect, we’re projecting the present onto the past. This is an approach to things for which ‘revisionism’ seems to be a good label, though the term has most typically been used for those who more formally impose their preconceptions on the interpretation of history, such as Holocaust deniers. This photo shows such revisionism in action in the Soviet Union: the executed commissar Yezhov removed from a photo featuring Stalin.

In a sense, we’re all revisionists to some degree, since this tendency to appropriate and essentialise the past is wrapped up in common fallacies and cognitive biases that we might all slip into. We’re especially likely to do this when considering our own past: for example, underestimating the extent to which our mature experience differs from our youth, and projecting the benefit of hindsight onto our judgements in the past. In working on my next book, Middle Way Philosophy 4: The Integration of Belief, I’ve recently been thinking a lot about these cognitive biases around time. There are many concerned with the present and the future, or with non-specific times, as well as the past, so I won’t try to discuss them all here, but just a couple that focus particularly on the past.

In terms of Critical Thinking, the fallacy of absolutising the past is equivalent to the Irrelevant Appeal to History or Irrelevant Appeal to Tradition. This is when someone assumes that because something was the case in the past that necessarily makes it true or justified now. Simple examples might be “We haven’t admitted women to the club in the hundred years of our existence – we can’t start now! It would undermine everything we stand for!” Or “When we go to the pub we always take turns to pay for a round of drinks. When it’s your round you have to pay – it’s as simple as that.”

A common cognitive bias that works on the same basis is the Sunk Cost Fallacy, which Daniel Kahneman writes about. When we’ve put a lot of time, effort, or money into something, even if it’s not achieving what we hoped, we are very reluctant to let go of it. Companies that have invested money in big projects that turn out to have big cost overruns and diminishing prospects of return will nevertheless often pursue them, throwing “good money after bad”. The massively expensive Concorde project in the 1970s is a classic example of governments doing this too. But as individuals we also have an identifiable tendency to fail to let go of things we’ve invested in: whether it’s houses, relationships, books or business ventures. The Sunk Cost Fallacy involves an absolutisation of what we have done in the past, so that we fail to compare it fairly to new evidence in the present. In effect, we also revise our understanding of the present so that it fits our unexamined assumptions about the value of events in the past.

I think the Sunk Cost Fallacy also figures in revisionist attitudes to religious, philosophical and moral traditions. It’s highly understandable, perhaps, that if you’ve sunk a large portion of your life into the culture, symbolism and social context of a particular religious tradition, and then encounter a lot of conflicts between the assumptions that dominate that tradition and the conditions that need to be addressed in the present, there will be a strong temptation to try to revise that tradition rather than to abandon it. Since that tradition provides a lot of our meaning – our vocabulary and a whole set of ways of symbolising and conceptualising – it’s clear that we cannot just abandon what that tradition means to us. We can acknowledge that, but at the same time I think we need to resist the revisionist impulse that is likely to accompany it. The use and gradual adaptation of meaning from past traditions doesn’t have to be accompanied by claims that we have a new, true, or correct interpretation of that tradition. Instead we should just try to admit that we have a new perspective, influenced by past traditions but basically an attempt to respond to new circumstances.

That, at any rate, is what I have been trying to do with Middle Way Philosophy. I acknowledge my debt to Buddhism, as well as Christianity and various other Western traditions of thought. However, I try not to slip into the claim that I have the correct or true interpretation of any of these traditions, or indeed the true message of their founders. For example, I have a view about the most useful interpretation of the Buddha’s Middle Way – one that I think Buddhists would be wise to adopt to gain the practical benefits of the Buddha’s insights. However, I don’t claim to know what the Buddha ‘really meant’ or to have my finger on ‘true Buddhism’. Instead, all beliefs need to be judged in terms of their practical adequacy to present circumstances.

This approach also accounts for the measure of disagreement I have had with three recent contributors to our podcasts: Stephen Batchelor, Don Cupitt and Mark Vernon. I wouldn’t want to exaggerate that degree of disagreement, as our roads lie together for many miles, and in each case I think that dialogue with the society and exploration of the relationship of their ideas to the Middle Way has been, and may continue to be, fruitful. However, it seems to me on the evidence available that Batchelor, Cupitt and Vernon each want to adopt revisionist views of the Buddha, Jesus and Plato respectively. I’m not saying that any of those revisionist views are necessarily wrong, but only that I think it’s a mistake to rely on a reassessment of a highly ambiguous and debatable past as a starting-point for developing an adequate response to present conditions. In each case, we may find elements of inspiration or insight in the ‘revised’ views – but please let’s try to let go of the belief that ‘what they really meant’ is in any sense a useful thing to try to establish. In the end, this attachment to ‘what they really meant’ seems to be largely an indicator of sunk costs on our part.

Critical Thinking 6: Fallacies and Cognitive Biases

There are a great many different fallacies and a great many different cognitive biases: probably enough to keep me going for years if I were to discuss one each week in this blog series. What I want to do here, though, is just to consider the question of what fallacies and cognitive biases actually are, and how they relate to each other. This is a contentious enough subject in itself.

A fallacy is normally described as a flaw in reasoning, or a type of mistake whereby people draw incorrect conclusions from the reasons they start with. This would be correct when applied to formal fallacies, but I think incorrect when applied to the more interesting and practically relevant informal fallacies. Here’s a simple example of a formal fallacy (one known as affirming the consequent):

Good Catholics attend mass regularly.

Bridget attends mass regularly.

Therefore Bridget is a good Catholic.

You might think this was quite a reasonable conclusion to draw from the reasons given. However, it is not a necessary conclusion. You don’t have to be a good Catholic to attend mass regularly, and it’s quite possible that Bridget is an uncertain enquirer, or a bad Catholic trying to keep up appearances, or a Religious Studies scholar doing field research into Catholicism. An argument whose conclusion does not necessarily follow from its premises is invalid, and thus commits a formal fallacy.
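
To make the shape of the mistake explicit, here is a rough schematic, with P standing for ‘Bridget is a good Catholic’ and Q for ‘Bridget attends mass regularly’:

Affirming the consequent (invalid): if P then Q; Q; therefore P.

Modus ponens (valid): if P then Q; P; therefore Q.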

However, it may be reassuring to reflect that the vast majority of arguments we actually use in practice are formal fallacies. Many of them are inductive (see Critical Thinking 2), and even those that are not inductive may be deductions (like the example above) of a kind that is formally invalid but actually reasonable enough most of the time. Formal fallacies are thus of little interest from the practical point of view. Informal fallacies, on the other hand, tell us much more about unhelpful thinking, even though they may actually be formally valid in some instances.

Informal fallacies are just unjustified assumptions: for example, the assumption that some objectionable personal attribute in the arguer refutes their argument (ad hominem); or that there are only two choices in a situation where there is actually a spectrum of options (false dichotomy); or the assumption that using your conclusion as a reason provides an informative argument (begging the question). What is objectionable about arguments involving these moves often depends on the circumstances, and it requires thoughtful judgement rather than just applying black-and-white rules. But that’s also the indication that these fallacies actually matter in everyday life.

Informal fallacies are unjustified assumptions identified by philosophers. The only genuine difference between informal fallacies and cognitive biases, as far as I can see, is that cognitive biases are unjustified assumptions identified by psychologists and often tested through experiment. Psychologists may explain our tendency to make these particular kinds of unhelpful assumptions in terms of the physical, social and evolutionary conditions we emerge from, but in the end these kinds of explanations are less central than the identification of the bias itself. Usually cognitive biases can be ‘translated’ into fallacies and vice-versa. For example, the in-group bias (tendency to favour the judgements of your own group) is equivalent to the irrelevant appeal to the authority of the group (or its leaders), irrelevant appeal to popularity within the group, or irrelevant appeal to tradition in the group, all of which are recognised informal fallacies. The outcome bias, whereby we judge a past decision by its outcome rather than its quality at the time, involves an irrelevant appeal to consequences.

Philosophers and psychologists thus both have very useful things to tell us about what sorts of mistakes we are likely to make in our thinking, and insofar as their different contributions are practically useful, they tend to converge. I would also argue that this convergence of useful theory relates closely to the avoidance of metaphysics (see cognitive biases page). Despite the widespread idea that fallacies are faults in reasoning, the fallacies that matter in practice really have nothing to do with reasoning in the strict sense of logical validity. They are all about the unhelpful assumptions we often tend to make.

Exercise

See if you can identify and describe the types of unhelpful assumptions being made in these video clips. You don’t necessarily need to know the formal titles of the fallacies involved to identify why they are a problem.


The rider and the elephant

Can we actually change our moral responses? Much debate about moral issues is fruitless because, however well-justified the reasons given for one position or another, they make no difference to our position. Rather than changing our position in response to strong evidence or argument seen overall, we tend to focus on minor weaknesses in views we intuitively oppose (or minor strengths in views we support) and blow them out of proportion. I’ve recently been reading Jonathan Haidt, who encapsulates this situation in the image of the rider and the elephant. The rider thinks he’s in charge, but most of the time he’s just pretending to direct an elephant that is going where it wants to go. Haidt gives lots of psychological evidence for the extent to which we rationalise things we’ve already judged, rather than making decisions on the basis of reasoning. This is the whole field of cognitive bias. For example, people experiencing a bad smell are more likely to make negative judgements, and judges grant fewer parole applications when they’re tired in the afternoon than they do when they’re fresh in the morning.

However, too many people draw a cheap moral determinism out of this. That’s a determinism expressed by Hume in his famous line “reason is, and ought only to be, the slave of the passions, and can never pretend to any other office than to serve and obey them.” Fortunately Haidt, who’s a professor of moral psychology at the University of Virginia, recognises that Hume’s position is over-stated, and that it is based on a simplistic false dichotomy between reason and emotion. Just because the rider tends to over-estimate his influence on the elephant doesn’t mean he has no influence at all. Rather, someone controlling a beast with as much bulk and momentum as an elephant needs to develop skills to encourage it in one direction rather than another, and to set up the conditions that will nudge it that way, rather than just telling it what to do and expecting it to obey instantaneously. Nor are the rider and the elephant to be equated with “reason” and “emotion”: each uses reasoning, and each has motives and starting points for reasoning, making a complex mixture of the two in both rider and elephant. The elephant would be better described as an elephant of intuition, and the rider as conscious awareness.

Haidt, like most scientific or academic commentators on this kind of issue, also makes certain questionable assumptions. One of these is the essential unity of the rider and of the elephant (particularly the elephant). If it were true that the elephant definitely wanted to do one thing, and the rider another, it would be pretty much impossible to steer the elephant in any sense. But this is an over-simplification of the physical intuitions we get from our bodies. We may intuitively judge one way, but there is also an intuitive sympathy, to some extent, for the opposing approach. Perhaps we should think of the rider not so much as astride one elephant, but as leading a herd of them. To lead the herd you find the elephant that is predisposed to act in the most objective way, encourage it, and (because elephants are herd animals) the rest are likely to follow.

That’s where integrative practice comes in. In the over-specialised world of academia, it often seems to be the case that those engaged in crucial and ground-breaking research in psychology (such as, say, Daniel Kahneman and Jonathan Haidt) have evidently never experienced meditation, and completely ignore the potential of meditation and of other integrative practices to modify our responses. There are, of course, other academics investigating meditation, but these usually show little interest in ethics or judgement. Cheap determinism seems to rule, for the most part, because over-specialised people don’t join up the evidence in different quarters, and the incentive system could hardly be better geared to discourage synthetic thinking.

In meditation, one can become more aware of that variety of possible responses. Meditation is, in effect, a close scrutiny of the elephant on the part of the rider, from a sympathetic inside viewpoint. The better he knows the elephant, the more skilfully he can manage it. He can rein in the elephant a little so it has more time to listen to the other members of the herd. He can make it more aware of ambiguity using humour, art or poetry. He can give the elephant a wider range of options by educating its sensibility. He can help the elephant become more aware of the rider, and cultivate its sympathy for the rider.

That doesn’t mean that the elephant will ever cease to go where it wants to go. The question is just what it means to ‘want’. Wanting is never simple. We have lots of wants, and it is the integration of those wants that helps us steer the elephant in a more acceptable direction – indeed that helps us see what that better direction is. The rider really needs the elephant, and no merely abstract morality can be justified that does not take that elephant into account.

 

Picture: Rider and elephant by Dennis Jarvis (Wikimedia Commons)

The MWS Podcast: Episode 1, The skill of critical thinking with Robert M. Ellis

In this inaugural episode of the Middle Way Society podcast, Robert M. Ellis, philosopher and chair of the society, talks to Barry Daniel about the skill of critical thinking and how it relates to Middle Way Philosophy. Robert was asked a variety of questions about the subject, including what critical thinking is, what its origins are, its practical applications, its relationship with the Middle Way, the psychological benefits it can bring, and how you can go about acquiring such a skill.

There is also a slide show version of the podcast available on YouTube.