
Critical Thinking 20: Appeal to consequences

To point out the likely consequences of a course of action usually seems like a helpful thing to do: for example, discouraging your friend from making themselves ill by drinking, or considering how much the recipient will really value the gift you are preparing. However, there are some cases where appealing to a particular consequence is a form of distraction or manipulation. Perhaps the consequence is frightening or flattering, but not nearly as important or probable as it is presented as being; yet because our minds have been focused on that consequence, we miss more important factors. An appeal to consequences needs to be distinguished from merely alerting people to consequences as possibilities.

I came across a striking example recently in an article about ‘essay mill’ sites where students can pay to have essays written for them. This included a clip from an essay mill site, in which its authors tried to persuade students of the morality of using it. It is quite cleverly done. The moral idea of cheating is ambiguously conflated with the idea of getting caught, so the unlikelihood of getting caught may well be confused with the justifiability of using the essay mill (even though the very idea of ‘getting caught’ implies cheating!). The student’s likely feelings are then sympathetically anticipated, making it more likely that the student will feel that the author understands their situation and can guide them wisely. But the clinching argument is where the appeal to consequences comes in: “In the long run, your success will be all that matters. Trivial things like ordering an essay will seem too distant to even be considered cheating”.

“Your success will be all that matters” is a matter of the end justifying the means. In order to persuade the student of this, the author invites the student to think ahead to when they’ve got their qualification and succeeded in their goals, and the importance of those goals to them will doubtless outweigh every other consideration. This is an appeal to consequences because it invites us to assume that this consequence is necessarily the one that trumps all other considerations – in this case the normal social and academic rules about cheating. But just because it may contribute to the achievement of a goal that may be of great importance to you does not necessarily mean that this form of cheating is justified.

Another form of appeal to consequences is the type that seeks to persuade people to change beliefs that are justified by evidence because of the political, social, or economic benefits of doing so. Thus, for example, a climate change scientist might be urged by a politician or administrator to distort their findings to follow an official anti-climate-change line, despite the weight of evidence for climate change. At an extreme, this might amount to a form of blackmail (change your beliefs or you might lose your job) or bribery (change your beliefs and you’ll get promoted), which also involves an appeal to consequences. The reason that we should reject such appeals to change our beliefs about the ‘facts’, in my view, is not that the ‘facts’ are incontrovertible, or that we do not at some level accept certain ‘facts’ because of the pragmatic consequences of doing so. Rather, from a wider and more integrated perspective, the long-term consequences of supporting beliefs that fit the evidence better are far more important than the short-term reasons for rejecting them.

Why should the student resist the temptation to cheat? Not just because there are social rules against cheating, because those social rules are not necessarily correct just because they are social rules. Rather, because a more integrated perspective, in which the student remained fully in touch with a desire for integrity both in their own lives and in the academic system, should motivate the avoidance of cheating. A student tempted to cheat, or a climate change scientist tempted to abandon the integrity of their research for political reasons, might be better able to resist that temptation if they reflected on the situation not as just a conflict between social rules and individual inclination, or even between rival ‘facts’, but rather between different desires that they themselves possess – desires that can only be reconciled by taking the more integral and sustainable path. The alternative is not just a danger of being ‘caught’, but also a danger of long-term guilt and conflict.

The problem with appeals to consequences is thus the narrow absolutisation of the particular consequences that are being appealed to. The Middle Way, which asks us to return to the middle ground between positive and negative types of absolutisation, would point out that neither the social rules against cheating nor the rationalisations we might give for cheating are absolute. By freeing ourselves from both sets of extreme assumption, we are in a better position to make a judgement that is actually based on both evidence and values that are sustainable in the long-term.


Link to a list of other posts in the critical thinking series


Critical Thinking 19: Straw men

The image of a straw man comes from past military training, where soldiers would apparently practise their combat skills by attacking a man made of straw. Since I doubt if the soldiers ever attacked a woman made of straw, the politically correct “straw person” alternative seems to be based on a misunderstanding of this metaphor (much as I am generally in favour of gender-neutral universals). The straw man is a fallacy in critical thinking, and refers to a target of argument that is set up so as to be easy to attack. Generally it means a misrepresentation or over-simplification of someone else’s claims that you argue against, using justifications that would not be effective against a more realistic or sophisticated account of what they have said.

Here’s a classic example of a straw man from Margaret Thatcher in the UK parliament:

Here Thatcher attacks not ‘Socialism’ as any Socialist would describe it, but the idea she attributes to Socialism: that Socialists “would rather the poor were poorer as long as the rich were less rich”, i.e. that they are only concerned with the gap between rich and poor rather than with how well off the poor are. She also misrepresents Simon Hughes (the first male speaker) as a ‘Socialist’ at all, since he is a Liberal Democrat who would probably describe himself as a Liberal rather than a Socialist.

Does that seem like a clear example? Well, imagine what would happen if you offered it to Thatcher herself, or to one of her supporters. Almost undoubtedly, they would contest the claim that Socialism has been misrepresented. They’d probably say that they had detected a basic assumption of socialism, or an implication of socialism, even if socialists themselves were not willing to acknowledge it. You can imagine the fruitless argument that could then ensue between a Thatcherite and a Socialist, probably ending in a standoff and offence, with one side claiming that a straw man had been committed and the other denying it. Unfortunately that’s a fairly typical example of what can easily happen when a straw man is pointed out.

As someone who is very interested in assumptions, I find that I quite often get accused of producing straw men myself (and, of course, I usually think this is unfair!). Anyone who seeks to point out an assumption made by someone else is in danger of this. Part of the problem is that people are often only willing to recognise as assumptions what they already consciously believe, so that the pointing out of an assumption of which they have been unconscious just seems wrong. “This doesn’t apply to me” they then think, “I don’t think that: it’s a straw man.” But in the wider analysis, it may still be the case that they do make that assumption. It needs further investigation. However, in the press of debate, we are most unlikely to take the time out to reflect on whether we really do assume what we have been accused of assuming. What Daniel Kahneman calls ‘fast thinking’ is the shortcut we rely upon for social survival, and ‘slow thinking’, where we might reconsider our assumptions, is reserved for occasions when we are feeling more relaxed and secure.

We can only try to come to terms with this condition, I think. We’re not likely to get people to examine their assumptions in most circumstances, unless the circumstances are sufficiently relaxed and (probably) face-to-face, or the people concerned trust each other and are used to examining assumptions. The best we can expect in normal discussion, I think, is that we will stimulate people with opposing beliefs to go off and reconsider them later. But that does quite often happen too, so discussion should not be written off as useless.

In the meantime, I think it might be helpful to have a holding position on straw men, whether you feel someone else is misrepresenting your point of view, or they have accused you of misrepresenting theirs. It’s helpful to know if someone feels this, even if we are unable to resolve it on the spot. There are some reasonably obvious cases where someone has misunderstood or misrepresented the explicit and publicly stated views of someone else, but most cases are probably not like this. If it can’t be easily resolved at that level, it might be worth noting that the alleged misrepresentation is about implicit things that need more thought, not explicit ones. It might also be helpful to indicate provisionality around straw man accusations. For example, you might say “I feel you’re misrepresenting my position there” and then say why, rather than just “That’s a straw man”. It might be possible to at least agree about how people feel and whether you’re referring to their explicit position. Both sides may then agree to go away and think about it. That’s a much better outcome than merely trading accusations about straw men on the basis of misunderstanding.


Are these examples of straw men? How should we respond to them? Feel free to discuss these in comments.

1. (Draft bill presented to Louisiana state legislature)

Whereas, the writings of Charles Darwin, the father of evolution, promoted the justification of racism, and his books On the Origin of Species and The Descent of Man postulate a hierarchy of superior and inferior races. . . .
Therefore, be it resolved that the legislature of Louisiana does hereby deplore all instances and all ideologies of racism, does hereby reject the core concepts of Darwinist ideology that certain races and classes of humans are inherently superior to others, and does hereby condemn the extent to which these philosophies have been used to justify and approve racist practices.

2. Jeremy Corbyn, leader of the UK Labour Party, argues for the non-renewal of the Trident submarine-based nuclear weapons system of the UK. He argues that we should leave the UK defenceless against nuclear attack.

3. Free market capitalism is founded on one value: the maximisation of profit. Other values, like human dignity and solidarity, or environmental sustainability, are disregarded as soon as they limit potential profit. (Naomi Klein, ‘No Logo’)


Click here for index of other Critical Thinking blogs in this series.

Picture: Ratomir Wilkowski (Wikimedia) CCA 3.0



Critical Thinking 16: Appeal to Ignorance

It’s over a year now since I let the ‘Critical Thinking’ series of blogs lapse (the last one being no. 15, the ‘Texas Sharpshooter Fallacy’). However, that wasn’t at all because I’d run out of interesting material. Inspired by work I’m doing at the moment on Critical Thinking issues and by the David McRaney podcast, I thought it might be time to revive and continue the series. A full list of the previous instalments can be found here.

The Appeal to Ignorance is the claim that a belief can be justified as completely true or false just by pointing out a lack of information. It is assumed that because we are ignorant of the evidence for or against a particular claim, it must therefore be true or false. A classic example is the ‘Nessie Argument’:

Nobody has yet disproved the existence of the Loch Ness Monster. So the Loch Ness Monster must exist.

This kind of example might be reasonably obvious to many people as fallacious. However, I am often astonished at how widespread and, indeed, institutionalised appeals to ignorance often are. The most common arguments for atheism, and for philosophical idealism, are usually based on an appeal to ignorance. Here’s an example of each:

The existence of God cannot be proved. All the supposed proofs of God’s existence, for example based on religious experience or design, are based on doubtful assumptions. So we must conclude that God does not exist.

The ultimate existence of material objects can never be proved, because all we have access to is our experience of those objects. Thus there can be no material objects out there, only minds and mental constructions.

These are also examples of what I have sometimes called ‘sceptical slippage’. People begin with a justifiable doubt about something, but then they turn that doubt into a negative claim, in many cases apparently not noticing that they have gone from one to the other. But the distinction between uncertainty and negative claims is a large and important one with lots of practical implications. For example, those who go around stating that belief in God is ‘false’ cause a lot of unnecessary conflict that would probably not be created by those who merely point out that we don’t know whether or not God exists (especially when religions themselves often offer mystical perspectives that underline this point).

These sorts of examples show how important avoiding the appeal to ignorance is in Middle Way Philosophy. If we are to apply the Middle Way carefully and reflectively, we need to be willing to take a critical stance towards traditions that may be trying to avoid positive absolutes, but have failed to avoid negative absolutes after falling into an appeal to ignorance. That’s the thing that seems to divide Middle Way Philosophy from naturalism, which on most interpretations rejects the ‘supernatural’ as false because it is uncertain.

Like most such unjustified assumptions, the appeal to ignorance also has an equally erroneous opposite: the assumption that, due to ignorance, we cannot make even provisional assertions. Not only can we not justify absolute assertions based on ignorance (whether positive or negative), but we also cannot justify the avoidance of assertions. A degree of ignorance is our embodied state, but it is only a degree of ignorance, not absolute ignorance. We can always assert that, based on our experience so far and subject to amendment from further experience, such-and-such is the case. For example, I can assert that the sun will rise tomorrow. I do not prepare for the rapidly plunging temperatures and planetary ruin that would occur if it didn’t, even though I can’t be absolutely certain that it will. Practically speaking, I am justified in relying on that very high probability.


Are these arguments guilty of an appeal to ignorance, or are they just taking our ignorance reasonably into account?

1. We don’t know when civilisation is going to fall apart, so we should all stockpile food and weapons to be ready for that eventuality.

2. We don’t know when the next outbreak of ebola will occur, and a worldwide pandemic has only been marginally headed off for now. Every child in the world should thus be given a vaccination against ebola.

3. You never know when a disabling virus is going to strike your computer. Back up all your files in at least two different places.

4. We don’t know for sure that the tooth fairy does not exist, so it’s quite OK to tell your children stories about the tooth fairy.

Critical Thinking 15: The Texas Sharpshooter Fallacy

The Texas sharpshooter fallacy is one of the most amusing fallacies in Critical Thinking: perhaps because it is based on a story. The Texas sharpshooter is a man who practises shooting by putting bullet-holes in his barn wall: then, when there is a cluster of holes in the wall, he draws a target around them. To commit the Texas Sharpshooter Fallacy, then, is to fit your theory to a pre-existing pattern of coincidences.

This video is an advert for a book, but presents the fallacy rather well:

The cognitive bias that might lead us into the Texas Sharpshooter Fallacy is called the Clustering Illusion. If we see a cluster of something (bullet holes, cancer cases, high grades in exams, letters used in a text) we have a tendency to assume that this cluster must be significant. Of course, it might be, but then it might not be. I think the fallacy becomes a metaphysical assumption about a ‘truth’ in the world around us when we assume that the pattern must be there, rather than just holding it provisionally as a possibility. Usually we need to look at a lot more evidence, and continue to see the same pattern, before we can justifiably accept the theory that explains it.
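The clustering illusion is easy to demonstrate for yourself. Here is a small Python sketch of my own (the numbers are arbitrary, chosen just for illustration): it scatters 100 ‘bullet holes’ completely at random across a wall divided into 100 cells, yet chance alone still produces dense ‘targets’ and empty patches.

```python
import random
from collections import Counter

rng = random.Random(42)  # fixed seed so the run is repeatable

# Scatter 100 "bullet holes" uniformly at random over a 10x10 grid of cells.
holes = [(rng.randrange(10), rng.randrange(10)) for _ in range(100)]
counts = Counter(holes)

# An even spread would put exactly one hole in each cell, but chance
# reliably produces both crowded "targets" and empty patches instead.
densest_cell, n = counts.most_common(1)[0]
empty_cells = 100 - len(counts)
print(f"Densest cell {densest_cell} holds {n} holes; "
      f"{empty_cells} of the 100 cells are empty.")
```

A sharpshooter who drew his target around the densest cell would look impressively accurate, even though every shot was random.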

The Middle Way is applicable here because we need to avoid either, on the one hand, jumping to absolute conclusions about the significance of patterns we encounter, or, on the other, assuming that everything is necessarily random and that any theories used to explain a pattern must be false. It may be important to accept consistent patterns of evidence even if they don’t amount to a certainty: the evidence for global warming is one example of that. On the other hand, those who argued that 9/11 was a conspiracy set up by the US or Israeli governments (see Wikipedia article) could point only to much more limited evidence. For example, the claim that Israeli agents were discovered filming the 9/11 scene without apparently being disturbed by it is consistent with an Israeli plot, but it is only a very small part of a pattern for which all the other elements are missing. A great deal more has to be assumed to support any of the 9/11 conspiracy theories: plausibility within a limited sphere is not enough.

This fallacy links with a number of others: for example, the similar ad hoc reasoning (also known as the ‘No True Scotsman’ fallacy), where someone refuses to give up a theory that conflicts with evidence but keeps moving the goalposts instead, and post hoc reasoning, which assumes that when one thing follows another the first must cause the second. Post hoc reasoning can be seen as a version of the Texas Sharpshooter, because a pattern of correlation is being identified that is assumed to be causally significant when it may be a matter of coincidence. See the Spurious Correlations website for some hilarious examples of this. The divorce rate in Maine correlates with the per capita consumption of cheese: are depressed Maine divorcees bingeing on cheese?
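Spurious correlations of the cheese-and-divorce kind arise especially easily between trending time series. As a rough illustration (my own sketch, not taken from the Spurious Correlations site), the following Python snippet shows that two completely independent random walks quite often correlate strongly by chance alone:

```python
import random
import statistics

def random_walk(n, rng):
    """A random walk of n steps, each step +1 or -1."""
    x, walk = 0.0, []
    for _ in range(n):
        x += rng.choice([-1, 1])
        walk.append(x)
    return walk

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
trials = 1000
# Count how many pairs of unrelated walks correlate with |r| > 0.5.
strong = sum(
    abs(pearson(random_walk(50, rng), random_walk(50, rng))) > 0.5
    for _ in range(trials)
)
print(f"{strong} of {trials} pairs of independent random walks "
      f"show |r| > 0.5")
```

Neither series in any such pair ‘causes’ the other: the apparent relationship is the clustering illusion applied to time.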


How would you judge the following patterns? Are they evidence that could be used to support a theory, or just a small pattern of coincidences?

1. A number of bird droppings on the roof of your car appear to form a letter ‘F’.

2. A number of unsolved disappearances of ships and planes have occurred within the area of Atlantic known as the Bermuda Triangle. See Wikipedia.

3. During the Apollo 8 mission to the moon, astronaut Jim Lovell announced “Please be informed, there is a Santa Claus”.

4. The first peak of the Spanish Flu outbreak occurred at almost exactly the same time as the armistice of 11/11/18 ending the First World War (see graph).


5. Deaths by shark attack tend to peak and trough at the same time as ice cream sales.

Critical Thinking 11: Fallacies of Composition and Division

The fallacies of composition and division are concerned with the relationship between the whole and the parts. If you attribute a certain quality to a part of something, it will not necessarily apply to the whole, and if you attribute a certain quality to the whole, it will not necessarily apply to the parts.

For example, suppose you are building a house. Each of the bricks weighs 2 kg. However, the house as a whole obviously does not weigh 2 kg. You could pick up the bricks and throw them, but you couldn’t do that to the house, and so on. That is the fallacy of composition. For the fallacy of division you just need to turn this round. The house as a whole makes a good shelter from the rain – but that doesn’t mean that an individual brick makes a good shelter from the rain.

This example may seem obvious and absurd, but there are other instances where fallacies of composition and division are less obvious. It can even apply to colours. You might think it obvious that a whole made up of parts will be the same colour as its parts: but if the parts are blue and yellow, they may blur into green from a distance. A house that is white on the outside may also be built of bricks that are black on every side except the one that shows on the outside of the house. You might think that your body is alive, but it contains dead cells as well as live ones: what is true of the whole is not necessarily true of all the parts.

The reverse of either of these two fallacies is also fallacious. I can no more assume that parts necessarily do not share the properties of the whole than I can assume that they do.

One interesting application of this fallacy is that it seems to offer a good refutation of those who take either kind of metaphysical position on the mind-body problem, whether they are reductionists who think that our minds must be entirely material, or essentialists who think that the mind must be irreducible and essentially different from the body (for example, if it is a non-physical soul). Reductive materialists seem to be subject to the fallacy of composition: the fact that the components of the mind can all be understood as material objects does not necessarily mean that the mind as a whole can be understood in that way. On the other hand, essentialists who want to insist that the mind is special and different are subject to the negation of the fallacy of division: the fact that the mind has certain special characteristics, such as consciousness, does not necessarily imply that its parts do not share those characteristics.

These fallacies can be similarly employed to point out the kind of mistake made whenever metaphysical conclusions have been drawn about a higher level of explanation being essentially different from or reducible to a lower one (in philosophy this is called supervenience). For example, whether life is or is not essentially different from mere chemical compounds, or whether reasoned behaviour is or is not essentially different from instinctual behaviour. I think we just have to live with the vagueness of these divisions in our ways of understanding the world, but it is too easy to rush into assumptions about rigid divisions.

To identify these fallacies in practice, you need to identify what the whole is and what the parts are. There may be good reasons in experience for believing that the whole either does or does not have the same characteristics as the parts, but a fallacy is taking place if it is being assumed that they necessarily have the same characteristics without further evidence.


Are the following examples of the fallacy of composition or of division (or not)?

1. “Should we not assume that just as the eye, hand, the foot, and in general each part of the body clearly has its own proper function, so man too has some function over and above the function of his parts?” Aristotle, Nicomachean Ethics

2. Manchester United are likely to lose this match. Two of their strikers and several midfield players have chronic injury problems, and are likely to put in a disappointing performance.

3. Communism in the Soviet Union was a failure. Universal state employment meant that nobody was motivated to make an effort in economic life, and it was the economy that destroyed the Soviet system in the end.

4. I’ve tried one strand of spaghetti and it’s cooked, so the whole pan must be ready.

Picture: Fractal (Romanesco) Broccoli. In fractals, the parts do have the same qualities as the whole!