Category Archives: Objectivity training

Network Stimulus 12: Practice – Integration of Belief

The next main meeting of the Middle Way Network will be on Sun 25th October 2020 at 7pm UK time on Zoom. This is the last of a series of three talks and discussions focusing on the nature of Middle Way practice: that is, how we can create the conditions for better judgement, overcoming conflict in the long term. We will be looking in turn at the integration of desire, meaning and belief as interdependent aspects of practice, linked to a potentially wide range of specific practices including meditation, the arts, and critical thinking.

There’ll be a short talk on practice as integration of belief, followed by questions, then discussion in regionalised breakout groups. Some other regionalised groups will meet at other times. If you’re interested in joining us but are not already part of the Network, please see the general Network page to sign up. To catch up on the previous session, on integration of meaning, please see this post.

Integration of Belief

Integration of belief is the most important and most lasting form of integration, and the basis of wisdom and compassion. To develop these qualities we need to be able to avoid absolute beliefs but engage with provisional ones. We need to sift absolute beliefs from provisional ones in areas such as religion and politics, but there are also many other everyday ways of practising integration of belief in relation to cognitive biases. A range of practices can help us to develop integration of belief, but especially those that cultivate wider awareness of our beliefs specifically – such as individual reflection, or study and discussion in which critical thinking is applied.

Other resources

There is already an introductory video (21 mins) on integration of belief as part of Middle Way Philosophy, which is embedded below. You might like to watch this for an initial orientation before the session. This is relatively long and detailed in comparison to some of the other introductory videos we have used. A somewhat different approach will be taken in the session.

Here is the video from the actual Network talk:

Some suggested reflection questions:

1. Think of an example of an absolute belief that you have found difficult to integrate (it has caused conflict for you).

2. How can you apply the five principles of Middle Way Philosophy to this belief and its opposite (scepticism, agnosticism, provisionality, incrementality, integration)?

3. When you have avoided absolute beliefs in this example, what are the associated provisional beliefs and meaning, and how might they be used to develop a more integrated belief?

Suggested further reading

Migglism section 4 (5 in the e-book): ‘Critical Thinking’

Middle Way Philosophy 4: The Integration of Belief (especially section 2). See this page for a summary of the sections, and ResearchGate for the full text as a PDF, as part of the Omnibus. From the end of this book you may also find the glossary of biases, fallacies and metaphysical beliefs useful.

Other video resources

The ‘mistakes we make in thinking’ video series goes into six key areas of bias where we might develop absolute beliefs, and how we could respond to them.

Critical Thinking 21: Credibility of Sources

I’ve been moved to revive my critical thinking series by the acuteness of the problems people seem to have with credibility judgements in current political debate. Russia has been implicated in the recent use of a nerve agent for attempted murder in Salisbury, England, and in the use of chemical weapons in Syria. In both cases the Russian government denies involvement. Most of us have no direct knowledge of these issues or situations, so we rely entirely on information about them that we get through the media. That means that we have to use credibility judgements – and we need to use them with great care. My own judgement is that the Russian government has very low credibility compared to most Western sources – but to see why, you need to look at the kinds of credibility criteria that can be applied and think about each one of them, rather than jumping to conclusions that may be based on your reaction to past weaknesses in Western objectivity. I’d like to invite you to consider the account of credibility below and apply it to this example (and similar ones) for yourself.

This post connects strongly to Critical Thinking 7: Authority and Credibility, which you might also like to look at.

Credibility is an estimation of how much trust to place in a source of information – e.g. a person, an organisation, or a book. Most of the information we actually encounter that is used to support arguments has to be taken on trust, because we are not in a position to check it ourselves. For example, if I’m reading an article about the Large Hadron Collider (a famous massive physics experiment), I am entirely reliant on the physicists who are giving the information to accurately explain their evidence.

There are two extreme attitudes to credibility which would be equally unhelpful: to take everything on trust without question on the one hand, or to believe nothing on the other. If we believed nothing that anyone else told us, then we could not make use of the vast majority of information we take for granted. For example, I have never been to Australia, so without believing other people I would have no grounds for believing that Australia exists at all. On the other hand, if we believe everything, then we become prey to unscrupulous advertisers, email hoaxes such as “phishing” for bank account details, and sincere but deluded extremists of all kinds in both religion and politics. We need a method of judging others’ credibility. In fact we have all already developed our judgements about this: we believe some people more than others. However, examining the subject carefully may help you to refine and justify these judgements.

Credibility issues must be carefully distinguished from issues of argument. Credibility is a way of judging the information that feeds into an argument when you have no other way of judging it – not the argument itself. So whilst deductive arguments are either valid or invalid, credibility is always a matter of degree, and judging it is an extension of inductive reasoning in relation to sources. Credibility is just a way of judging assumptions, where those assumptions consist in claims from certain sources, and we’re not in a position to assess the evidence for those claims ourselves.

An example of a scenario needing credibility assessment
Suppose you are a teacher in a primary school on playground duty, and you hear distressed yells. You turn and see two eight-year-old boys fighting. One has thumped the other, who is crying. The crying boy says he was picked on, whilst the thumping boy says the other boy hit him first. Two other boys were witnesses, but they disagree about who was to blame.

Perhaps it would be quite common in such a scenario for a teacher to punish both boys due to doubts about who started it: but would this be fair? It is difficult to decide, because both boys (and their respective friends) all have a strong interest in maintaining their side of the story. The witnesses are also divided, so you can’t rely on the weight of their testimony. One possible way out would be to rely on reputations. Are either of the boys known to have lied, or to have been bullies, in the past? If one boy has a record of being involved in lots of fights in the past and the other does not, this might well sway the teacher’s judgement. But of course if this assumption is made too readily it could also reconfirm the “known trouble maker” as such, because even if he is innocent people will assume that he is guilty. Judgements about credibility are always made under uncertainty.

Factors of credibility
When judging the credibility of a person, or any other sort of human source, it is helpful to have a checklist of factors in mind. We are going to consider a list of five credibility factors here, which can be easily remembered using the mnemonic RAVEN.
Reputation
Ability to get information
Vested interest
Expertise
Neutrality or bias

We will now look more closely at these five factors.

Reputation
Reputation is what we know about a person or organisation’s track record for trustworthiness. This will often come from the assessments of others, whether they are experts or ordinary people. For example, restaurants seek to get a good reputation by being given stars in the Michelin guide. Reputation has also been democratised because it can be so easily shared on the internet, with different book suppliers being rated for reliability on Amazon or different hotels being rated by people who have stayed there on websites like Booking.com.

Apart from an individual or organisation, you might need to consider the reputation of a newspaper, other publication, broadcaster, or website. Generally, for example, the BBC has a good reputation as an objective provider of news coverage, whereas the Sun is well known for being more interested in selling newspapers and pleasing its readers than providing objective reports. This will remain generally the case even if you feel that certain reports have tarnished the reputation of the BBC or improved that of the Sun. All credibility judgements need to be proportional, so you need to think carefully about what proportion of the BBC’s vast output is generally acknowledged as credible, rather than just about a small number of negative instances, in order to arrive at a fair judgement of reputation.

Ability to get information
This covers a range of ways that people may or may not have been able to access what they claim to know through experience: ability to observe, ability to gain access, and ability to recall. If someone claims to have observed a foul at a football game that the referee judged wrongly, their testimony is of less weight if they were five times further away from the incident than the referee was and could only see it distantly. If someone claims to have seen documents that their company or government would never have actually given them access to, this would also reduce credibility. If someone is known to have an unreliable memory, or only remembers something in a vague way, this would also affect the credibility of their claims.

The ability to observe is also relevant to the distinction (often used in history) between primary sources and secondary sources. A primary source is one which records a person’s experiences directly, but a secondary source gets the information second hand. So, for example, if an officer wrote a memoir of his experiences in the Battle of Waterloo, this would become a primary historical document in gaining information about that battle, but a historian who used that document, together with others, to write a book about the battle would be producing a secondary source. On average, primary sources tend to be more worthy of credibility in reporting an event than secondary ones, but primary sources can be unreliable (the officer might not have been in a good position to see what was happening in the whole battle, for example) and secondary sources may sometimes give a more comprehensive picture with greater expertise and neutrality (see below).

Vested interest
A vested interest is something that a person has to gain or lose from a certain outcome. For example, a salesman has a vested interest in getting you to buy his company’s double glazing, because they will give him extra commission if he sells it to you. This gives him a reason to give you a possibly misleading impression of its high quality, low price etc. Vested interests can cut both ways, though: there can be a vested interest to deceive (as in the case of a salesman), but also a vested interest to tell the truth, for example where someone’s job depends on them maintaining a good reputation for reliability. As well as an incentive for stretching the truth a little bit, a double glazing salesman also has a vested interest in keeping close enough to the truth not to be subject to legal action for grossly misrepresenting his product.

It’s important to keep vested interests in perspective, because most people have some vested interests in both directions. Nearly everyone has something to gain from getting your money or your support or even your friendship, but on the other hand they also have the incentive of maintaining a social reputation as reliable, and – if they are a professional – for maintaining their career prospects, which depend on that reputation. However, in cases like advertising or political campaigning it’s obvious that the vested interests lie strongly in one direction.

Expertise
If someone is an expert on the topic under consideration, then this normally adds substantially to their credibility, because they will know a lot more of the facts of the matter and also understand the relationship between them. We all rely on expertise constantly: the doctor, the computer technician, the academic on TV or writing a book. You can look for formal academic qualifications (BAs, MAs and PhDs) as evidence of expertise, or it may just be a question of professional experience or life experience (e.g. someone has worked 20 years as a gardener, or practised meditation for 10 years, or whatever). People who hold university posts in a subject, or who have written books on it, are often the starting-point in the media when an expert is needed.

Apart from whether expertise is genuine, the other thing you might want to consider when deciding whether to trust it is whether it is relevant. Someone with a Ph.D. in physics may know a bit about biology, but not necessarily that much. The fact that someone is an Olympic gold medal winner may give them expertise in how to train in their sport, but not necessarily about, say, politics or business. ‘Celebrities’, who are largely famous for being famous, may assume expertise on subjects that they don’t actually know more than average about.

From the Middle Way point of view, it is also worth considering that expertise in modern society often results from over-specialisation that may lead people into making absolute assumptions that are specific to their highly specialised expert groups. This means that whilst highly specialised experts may be very reliable on very specific points within their expertise, the moment their judgement starts to involve synthesis or comparison with other areas it may actually become less reliable, because they may have effectively sacrificed their wider objectivity for the sake of specialisation. For example, when well-known specialised scientists start talking about ethics or religion I often have this impression – not that they are not entitled to express their views on these topics, but that their views are very narrowly based. On the other hand, there are also other people whose expertise is more broadly based.

Neutrality or bias
Finally, you can assess someone’s claims according to their overall approach to the topic and the kind of interpretation they make of it. Some people may clearly set out to be as objective as possible, whereas others adopt a deliberately biased approach in order to promote a particular point of view. Honest bias is probably better than false neutrality, but you need to be aware of the ways that the biased approach will limit people’s interpretation of the facts. For example, the comments of a politician arguing for their policies are going to be biased in favour of promoting those policies – compared to, say, a political journalist from the BBC who sets out to analyse the issue in a more objective way that explains the different views of it.

Bias should not be confused with vested interest, although they may go together in many cases. Someone can have a vested interest, yet take an objective and balanced tone when explaining the facts as they see them. On the other hand, someone without a lot of vested interests may be inspired by sympathy with one side or the other to weigh strongly into a debate: for example, the actor Joanna Lumley got involved in the campaign to give UK immigration rights to the Nepalese Gurkha soldiers in the British army. She clearly had nothing much to gain from this herself, but nevertheless was a passionate advocate of the cause.

Conclusion

So, do you believe the Russian government? The judgement needs to be incremental and comparative. So, compare it to another source – say, the British government on the Skripal case. What are their reputations, their abilities to get information, their vested interests, expertise, and records on bias? These all need to be put together, with none of them used as an absolute either to grant a source total authority or to dismiss it completely.
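Purely as an illustrative aside (the original post contains no code, and all names and scores below are hypothetical), the incremental and comparative character of this judgement can be sketched in a few lines of Python:

```python
from dataclasses import dataclass

# Hypothetical sketch: each RAVEN factor is scored on a 0-1 scale,
# reflecting that credibility is a matter of degree, never all-or-nothing.
@dataclass
class Credibility:
    reputation: float
    access: float           # ability to get information
    vested_interest: float  # higher = fewer distorting interests
    expertise: float
    neutrality: float

    def overall(self) -> float:
        # A plain average: no single factor is decisive, so none can
        # absolutely confirm or dismiss a source on its own.
        factors = (self.reputation, self.access, self.vested_interest,
                   self.expertise, self.neutrality)
        return sum(factors) / len(factors)

def compare(a: Credibility, b: Credibility) -> float:
    """Comparative judgement: positive means a looks more credible than b."""
    return a.overall() - b.overall()

# Illustrative scores only, not a verdict on any real source.
source_a = Credibility(0.7, 0.8, 0.4, 0.6, 0.5)
source_b = Credibility(0.3, 0.6, 0.2, 0.5, 0.3)
print(compare(source_a, source_b) > 0)  # prints True: a judged more credible
```

The averaging here is deliberate: it mirrors the point that no single factor should be used as an absolute, and the output is a comparison between two sources rather than a verdict on one in isolation.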

 

For an index of all critical thinking blogs see this page.

Picture: Franco Atirador (Wikimedia Commons)

 

Critical Thinking 20: Appeal to consequences

To point out the likely consequences of a course of action usually seems like a helpful thing to do: for example, discouraging your friend from making themselves ill by drinking, or considering how much the recipient will really value the gift you are preparing. However, there are some cases where appealing to a particular consequence is a form of distraction or manipulation. Perhaps the consequence is frightening or flattering, but not nearly as important or probable as it is made out to be; yet because we have had our minds focused on that consequence, we miss more important factors. An appeal to consequences needs to be distinguished from merely alerting people to consequences as possibilities.

I came across a striking example recently in an article about ‘essay mill’ sites where students can pay to have essays written for them. This included a clip from an essay mill site, in which its authors tried to persuade students of the morality of using it. This is quite cleverly done. The moral idea of cheating is ambiguously conflated with the idea of getting caught, so the unlikelihood of getting caught may well be confused with the justifiability of using the essay mill (even though the very idea of ‘getting caught’ implies cheating!). The student’s likely feelings are then sympathetically anticipated, making it more likely that the student will feel that the author understands their situation and can guide them wisely. But the clinching argument is where the appeal to consequences comes in: “In the long run, your success will be all that matters. Trivial things like ordering an essay will seem too distant to even be considered cheating”.

“Your success will be all that matters” is a matter of the end justifying the means. In order to persuade the student of this, the author invites the student to think ahead to when they’ve got their qualification and succeeded in their goals, and the importance of those goals to them will doubtless outweigh every other consideration. This is an appeal to consequences because it invites us to assume that this consequence is necessarily the one that trumps all other considerations – in this case the normal social and academic rules about cheating. But just because it may contribute to the achievement of a goal that may be of great importance to you does not necessarily mean that this form of cheating is justified.

Another form of appeal to consequences is the type that seeks to persuade people to change beliefs that are justified by evidence because of the political, social, or economic benefits of doing so. Thus, for example, a climate change scientist might be pressed by a politician or administrator to distort their findings to follow an official anti-climate change line, despite the weight of evidence for climate change. At an extreme, this might amount to a form of blackmail (change your beliefs or you might lose your job) or bribery (change your beliefs and you’ll get promoted), which also involves an appeal to consequences. The reason that we should reject such appeals to change our beliefs about the ‘facts’, in my view, is not that the ‘facts’ are incontrovertible, or that we do not at some level accept certain ‘facts’ because of the pragmatic consequences of doing so, but that from a wider and more integrated perspective the long-term consequences of supporting beliefs that fit the evidence better are far more important than the short-term reasons for rejecting them.

Why should the student resist the temptation to cheat? Not just because there are social rules against cheating, because those social rules are not necessarily correct just because they are social rules. Rather, because a more integrated perspective, in which the student remained fully in touch with a desire for integrity both in their own lives and in the academic system, should motivate the avoidance of cheating. A student tempted to cheat, or a climate change scientist tempted to abandon the integrity of their research for political reasons, might be better able to resist that temptation if they reflected on the situation not as just a conflict between social rules and individual inclination, or even between rival ‘facts’, but rather between different desires that they themselves possess – desires that can only be reconciled by taking the more integral and sustainable path. The alternative is not just a danger of being ‘caught’, but also a danger of long-term guilt and conflict.

The problem with appeals to consequences is thus the narrow absolutisation of the particular consequences that are being appealed to. The Middle Way, which asks us to return to the middle ground between positive and negative types of absolutisation, would point out that neither the social rules against cheating nor the rationalisations we might give for cheating are absolute. By freeing ourselves from both sets of extreme assumption, we are in a better position to make a judgement that is actually based on both evidence and values that are sustainable in the long-term.

 

Link to a list of other posts in the critical thinking series

 

Everyday absolutisation

On retreat this summer, Willie Grieve asked a question well worth asking: “What would be a Middle Way first aid kit?” I take this question to mean: what are the most immediate applications of the Middle Way in a variety of everyday situations? I didn’t really have an effective answer at the time, but I wonder if I’ve found at least one possible answer now, after stumbling on a series of very simple, practical videos by Senseability.

These videos describe six basic types of thinking error and also then go on to offer ways of addressing them – all in an extremely accessible format. This could be described as ‘critical thinking’, or indeed as ‘cognitive behavioural therapy’. To make it into practical Middle Way Philosophy, all one needs to add is the recognition that all six of these errors are different types of absolutisation.  Here’s the first video, that introduces them:

The six errors introduced here are as follows. For each one I’ll give the label used on the video, plus some other labels in terms of biases or fallacies, plus what it absolutises:

  1. All or nothing: false dilemma or false dichotomy. Absolutises a limitation on the number of options and/or boundaries between them.
  2. Over-generalisation: sweeping generalisation fallacy. Absolutises one or a few examples into a universal truth about a whole category.
  3. Mind reading: projection. Absolutises an idea we have about someone else or their motives by assuming it must be true.
  4. Fortune-telling: forecast illusion, or pessimism/optimism about oneself. Absolutises a particular idea about what will happen to us in the future by assuming it must be true.
  5. Magnification/minimisation: ad hominem. Absolutisation of a view of oneself as good or bad regardless of the arguments. Anything else might also be magnified or minimised as a general feature of absolutisation.
  6. Catastrophising: slippery slope. Assuming one bad event will necessarily lead to a worsening situation.

It’s worth noting that all of these also have opposites. In the case of 1 and 5 two extremes are already noted in the presentation, so it’s already obvious that a Middle Way is needed. In the case of 2, the opposite would be denying generalisation, in 3 denying all knowledge about others, in 4 and 6 denying all knowledge of the future.

The next video gives some examples of these thinking errors without comment. It’s a helpful exercise to ask yourself which is occurring in each.

The third video suggests solutions to these thinking errors.

Here are the five solutions suggested on the video, all of which could be helpful for quite a wide range of absolutisations.

  1. Consider the evidence
  2. Is there an alternative?
  3. What would you say to a friend who was thinking like that?
  4. What is the likelihood?
  5. Is there a more helpful way I can think about this?

All of these strategies in some way prompt wider awareness beyond the absolutised belief you’re holding onto.

Of course, these responses to everyday absolutisation are only a start to the practice of the Middle Way. There are a great many more biases and fallacies (all discussed in my book Middle Way Philosophy 4: The Integration of Belief). In order to make strategies like these effective in the longer term, you may also need longer-term integration practices such as meditation, the arts and (a fuller training in) critical thinking. But this is a first aid kit. It might help patch you up in the face of immediately overwhelming absolutisations so you can then start to think in a longer-term way.

Critical Thinking 19: Straw men

The image of a straw man comes from past military training, where soldiers would apparently practise their combat skills by attacking a man made of straw. Since I doubt if the soldiers ever attacked a woman made of straw, the politically correct “straw person” alternative seems to be based on a misunderstanding of this metaphor (much as I am generally in favour of gender-neutral universals). The straw man is a fallacy in critical thinking, and refers to a target of argument that is set up so as to be easy to attack. Generally it means a misrepresentation or over-simplification of someone else’s claims that you argue against, using justifications that would not be effective against a more realistic or sophisticated account of what they have said.

Here’s a classic example of a straw man from Margaret Thatcher in the UK parliament:

Here Thatcher attacks not ‘Socialism’ as any Socialist would describe it, but the idea that she attributes to Socialism that Socialists “would rather the poor were poorer as long as the rich were less rich”, i.e. that they are only concerned with the gap between rich and poor rather than with how well off the poor are. She also misrepresents Simon Hughes (the first male speaker) as ‘Socialist’ at all, as he is a Liberal Democrat who would probably describe himself as a Liberal rather than a Socialist.

Does that seem like a clear example? Well, imagine what would happen if you offered it to Thatcher herself, or one of her supporters. Almost undoubtedly, they would contest the claim that Socialism has been misrepresented. They’d probably say that they had detected a basic assumption in socialism, or an implication of socialism, even if socialists themselves were not willing to acknowledge it. You can imagine the fruitless argument that could then ensue between a Thatcherite and a Socialist, probably ending up in standoff and offence, with one claiming a straw man had been committed, and the other denying it. Unfortunately that’s a fairly typical example of what can easily happen when a straw man is pointed out.

As someone who is very interested in assumptions, I find that I quite often get accused of producing straw men myself (and, of course, I usually think this is unfair!). Anyone who seeks to point out an assumption made by someone else is in danger of this. Part of the problem is that people are often only willing to recognise as assumptions what they already consciously believe, so that the pointing out of an assumption of which they have been unconscious just seems wrong. “This doesn’t apply to me” they then think, “I don’t think that: it’s a straw man.” But in the wider analysis, it may still be the case that they do make that assumption. It needs further investigation. However, in the press of debate, we are most unlikely to take the time out to reflect on whether we really do assume what we have been accused of assuming. What Daniel Kahneman calls ‘fast thinking’ is the shortcut we rely upon for social survival, and ‘slow thinking’, where we might reconsider our assumptions, is reserved for occasions when we are feeling more relaxed and secure.

We can only try to come to terms with this condition, I think. We’re not likely to get people to examine their assumptions in most circumstances, unless the circumstances are sufficiently relaxed and (probably) face-to-face, or the people concerned trust each other and are used to examining assumptions. The best we can expect in normal discussion, I think, is that we will stimulate people with opposing beliefs to go off and reconsider them later. But that does quite often happen, so discussion should not be written off as useless.

In the meantime, I think it might be helpful to have a holding position on straw men, whether you feel someone else is misrepresenting your point of view, or whether they have accused you of misrepresenting theirs. It’s helpful to know if someone feels this, even if we are unable to resolve it on the spot. There are some reasonably obvious cases where someone has misunderstood or misrepresented the explicit and publicly stated views of someone else, but most cases are probably not like this. If it can’t be easily resolved at that level, it might be worth noting that the alleged misrepresentation is about implicit things that need more thought, not explicit ones. It might also be helpful to indicate provisionality around straw man accusations. For example, you might say “I feel you’re misrepresenting my position there” and then say why, rather than just “That’s a straw man”. It might be possible to at least agree about how people feel and whether you’re referring to their explicit position. Both sides may then agree to go away and think about it. That’s a much better outcome than merely trading accusations about straw men on the basis of misunderstanding.

Examples

Are these examples of straw men? How should we respond to them? Feel free to discuss these in comments.

  1. (Draft bill presented to Louisiana state legislature)

Whereas, the writings of Charles Darwin, the father of evolution, promoted the justification of racism, and his books On the Origin of Species and The Descent of Man postulate a hierarchy of superior and inferior races. . . .
Therefore, be it resolved that the legislature of Louisiana does hereby deplore all instances and all ideologies of racism, does hereby reject the core concepts of Darwinist ideology that certain races and classes of humans are inherently superior to others, and does hereby condemn the extent to which these philosophies have been used to justify and approve racist practices.

2. Jeremy Corbyn, leader of the UK Labour Party, argues for the non-renewal of the Trident submarine-based nuclear weapons system of the UK. He argues that we should leave the UK defenceless against nuclear attack.

3. Free market capitalism is founded on one value: the maximisation of profit. Other values, like human dignity and solidarity, or environmental sustainability, are disregarded as soon as they limit potential profit. (Naomi Klein, ‘No Logo’)

 

Click here for index of other Critical Thinking blogs in this series.

Picture: Ratomir Wilkowski (Wikimedia) CCA 3.0