Category Archives: Critical Thinking

Critical Thinking 21: Credibility of Sources

I’ve been moved to revive my critical thinking series by the acuteness of the problems people seem to have with credibility judgements in current political debate. Russia has been implicated in the recent use of a nerve agent for attempted murder in Salisbury, England, and in the use of chemical weapons in Syria. In both cases they deny it. Most of us have no direct knowledge of these issues or situations, so we rely entirely on information about them that we get through the media. That means that we have to use credibility judgements – and we need to use them with great care. My own judgement is that the Russian government has very low credibility compared to most Western sources – but to see why you need to look at the kinds of credibility criteria that can be applied and think about each one of them, rather than jumping to conclusions that may be based on your reaction to past weaknesses in Western objectivity. I’d like to invite you to consider the account of credibility below and apply it to this example (and similar ones) for yourself.

This post connects strongly to Critical Thinking 7: Authority and Credibility, which you might also like to look at.

Credibility is an estimation of how much trust to place in a source of information – e.g. a person, an organisation, or a book. Most of the information we actually encounter that is used to support arguments has to be taken on trust, because we are not in a position to check it ourselves. For example, if I’m reading an article about the Large Hadron Collider (a famous massive physics experiment), I am entirely reliant on the physicists who are giving the information to accurately explain their evidence.

There are two extreme attitudes to credibility which would be equally unhelpful: to take everything on trust without question on the one hand, or to believe nothing on the other. If we believed nothing that anyone else told us, then we could not make use of the vast majority of information we take for granted. For example, I have never been to Australia, so without believing other people I would have no grounds for believing that Australia exists at all. On the other hand, if we believe everything, then we become prey to unscrupulous advertisers, email hoaxes such as “phishing” for bank account details, and sincere but deluded extremists of all kinds in both religion and politics. We need a method of judging others’ credibility. In fact we have all already developed our judgements about this: we believe some people more than others. However, examining the subject carefully may help you to refine and justify these judgements.

Credibility issues must be carefully distinguished from issues of argument. Credibility is a way of judging the information that feeds into an argument when you have no other way of judging it – not a way of judging the argument itself. So whilst deductive arguments are either valid or invalid, credibility is always a matter of degree, and judging it is an extension of inductive reasoning in relation to sources. Credibility is just a way of judging assumptions, where those assumptions consist in claims from certain sources, and we’re not in a position to assess the evidence for those claims ourselves.

An example of a scenario needing credibility assessment
Suppose you are a teacher in a primary school on playground duty, and you hear distressed yells. You turn and see two eight-year-old boys fighting. One has thumped the other, who is crying. The crying boy says he was picked on, whilst the thumping boy says the other boy hit him first. Two other boys were witnesses but they disagree about who was to blame.

Perhaps it would be quite common in such a scenario for a teacher to punish both boys due to doubts about who started it: but would this be fair? It is difficult to decide, because both boys (and their respective friends) all have a strong interest in maintaining their side of the story. The witnesses are also divided, so you can’t rely on the weight of their testimony. One possible way out would be to rely on reputations. Is either of the boys known to have lied, or to have been a bully, in the past? If one boy has a record of being involved in lots of fights in the past and the other does not, this might well sway the teacher’s judgement. But of course if this assumption is made too readily it could also reconfirm the “known trouble maker” as such, because even if he is innocent people will assume that he is guilty. Judgements about credibility are always made under uncertainty.

Factors of credibility
When judging the credibility of a person, or any other sort of human source, it is helpful to have a checklist of factors in mind. We are going to consider a list of 5 credibility factors here, which can be easily remembered using the mnemonic RAVEN.
Reputation
Ability to get information
Vested interest
Expertise
Neutrality or bias

We will now look more closely at these 5 factors.

Reputation
Reputation is what we know about a person or organisation’s track record for trustworthiness. This will often come from the assessments of others, whether they are experts or ordinary people. For example, restaurants seek to get a good reputation by being given stars in the Michelin guide. Reputation has also been democratised because it can be so easily shared on the internet, with different book suppliers being rated for reliability on Amazon or different hotels being rated by people who have stayed there on websites like Bookings.com.

Apart from an individual or organisation, you might need to consider the reputation of a newspaper, other publication, broadcaster, or website. Generally, for example, the BBC has a good reputation as an objective provider of news coverage, whereas the Sun is well known for being more interested in selling newspapers and pleasing its readers than providing objective reports. This will remain generally the case even if you feel that certain reports have tarnished the reputation of the BBC or improved that of the Sun. All credibility judgements need to be proportional, so you need to think carefully about what proportion of the BBC’s vast output is generally acknowledged as credible, rather than just about a small number of negative instances, in order to arrive at a fair judgement of reputation.

Ability to get information
This covers a range of ways that people may or may not have been able to access what they claim to know through experience: ability to observe, ability to gain access, and ability to recall. If someone claims to have observed a foul at a football game that the referee judged wrongly, their testimony is of less weight if they were five times further away from the incident than the referee was and could only see it distantly. If someone claims to have seen documents that their company or government would never have actually given them access to, this would also reduce credibility. If someone is known to have an unreliable memory, or only remembers something in a vague way, this would also affect the credibility of their claims.

The ability to observe is also relevant to the distinction (often used in history) between primary sources and secondary sources. A primary source is one which records a person’s experiences directly, but a secondary source gets the information second hand. So, for example, if an officer wrote a memoir of his experiences in the Battle of Waterloo, this would become a primary historical document in gaining information about that battle, but a historian who used that document, together with others, to write a book about the battle would be producing a secondary source. On average, primary sources tend to be more worthy of credibility in reporting an event than secondary ones, but primary sources can be unreliable (the officer might not have been in a good position to see what was happening in the whole battle, for example) and secondary sources may sometimes give a more comprehensive picture with greater expertise and neutrality (see below).

Vested interest
A vested interest is something that a person has to gain or lose from a certain outcome. For example, a salesman has a vested interest in getting you to buy his company’s double glazing, because they will give him extra commission if he sells it to you. This gives him a reason to give you a possibly misleading impression of its high quality, low price etc. Vested interests can cut both ways, though: there can be a vested interest to deceive (as in the case of a salesman), but also a vested interest to tell the truth, for example where someone’s job depends on them maintaining a good reputation for reliability. As well as an incentive for stretching the truth a little bit, a double glazing salesman also has a vested interest in keeping close enough to the truth not to be subject to legal action for grossly misrepresenting his product.

It’s important to keep vested interests in perspective, because most people have some vested interests in both directions. Nearly everyone has something to gain from getting your money or your support or even your friendship, but on the other hand they also have the incentive of maintaining a social reputation as reliable, and – if they are a professional – for maintaining their career prospects, which depend on that reputation. However, in cases like advertising or political campaigning it’s obvious that the vested interests lie strongly in one direction.

Expertise
If someone is an expert on the topic under consideration, then this normally adds substantially to their credibility, because they will know a lot more of the facts of the matter and also understand the relationship between them. We all rely on expertise constantly: the doctor, the computer technician, the academic on TV or writing a book. You can look for formal academic qualifications (BAs, MAs and PhDs) as evidence of expertise, or it may just be a question of professional experience or life experience (e.g. someone has worked 20 years as a gardener, or practised meditation for 10 years, or whatever). People who hold university posts in a subject, or who have written books on it, are often the starting-point in the media when an expert is needed.

Apart from whether expertise is genuine, the other thing you might want to consider when deciding whether to trust it is whether it is relevant. Someone with a Ph.D. in physics may know a bit about biology, but not necessarily that much. The fact that someone is an Olympic gold medal winner may give them expertise in how to train in their sport, but not necessarily about, say, politics or business. ‘Celebrities’ who are largely famous for being famous may assume expertise on subjects that they don’t actually know more than average about.

From the Middle Way point of view, it is also worth considering that expertise in modern society often results from over-specialisation that may lead people into making absolute assumptions that are specific to their highly specialised expert groups. This means that whilst highly specialised experts may be very reliable on very specific points within their expertise, the moment their judgement starts to involve synthesis or comparison with other areas it may actually become less reliable, because they may have effectively sacrificed their wider objectivity for the sake of specialisation. For example, when well-known specialised scientists start talking about ethics or religion I often have this impression – not that they are not entitled to express their views on these topics, but that their views are very narrowly based. On the other hand, there are also other people whose expertise is more broadly based.

Neutrality or bias
Finally, you can assess someone’s claims according to their overall approach to the topic and the kind of interpretation they make of it. Some people may clearly set out to be as objective as possible, whereas others adopt a deliberately biased approach in order to promote a particular point of view. Honest bias is probably better than false neutrality, but you need to be aware of the ways that the biased approach will limit people’s interpretation of the facts. For example, the comments of a politician arguing for their policies are going to be biased in favour of promoting those policies – compared to, say, a political journalist from the BBC who sets out to analyse the issue in a more objective way that explains the different views of it.

Bias should not be confused with vested interest, although they may go together in many cases. Someone can have a vested interest, yet take an objective and balanced tone when explaining the facts as they see them. On the other hand, someone without a lot of vested interests may be inspired by sympathy with one side or the other to weigh strongly into a debate: for example, the actor Joanna Lumley got involved in the campaign to give Nepalese Gurkha soldiers in the British army the right to settle in the UK. She clearly had nothing much to gain from this herself, but nevertheless was a passionate advocate of the cause.

Conclusion

So, do you believe the Russian government? The judgement needs to be incremental and comparative. So, compare it to another source – say, the British government on the Skripal case. What are their reputations, their abilities to get information, their vested interests, expertise, and record on bias? These all need to be put together, with none of them used as an absolute ground either for accepting a source’s total authority or for dismissing it completely.
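
To make that incremental and comparative point more concrete, here is a minimal, hypothetical sketch (in Python, not from the original post) of how the five RAVEN factors might be treated as graded scores and then compared between two sources, rather than any single factor being used as an absolute test. All the names, the equal weighting and the example numbers below are illustrative assumptions only, not a method the author proposes.

```python
# Illustrative sketch only: each RAVEN factor is a matter of degree (0.0 to 1.0),
# and the overall estimate is a combination, never a proof or an absolute verdict.
from dataclasses import dataclass


@dataclass
class CredibilityEstimate:
    reputation: float       # track record for trustworthiness
    access: float           # ability to get the information claimed
    vested_interest: float  # 1.0 = little incentive to deceive, 0.0 = strong incentive
    expertise: float        # relevant expertise on this specific topic
    neutrality: float       # 1.0 = balanced approach, 0.0 = openly partisan

    def overall(self) -> float:
        """Combine the five factors (equal weights assumed here for simplicity)."""
        factors = [self.reputation, self.access, self.vested_interest,
                   self.expertise, self.neutrality]
        return sum(factors) / len(factors)


def compare(a: CredibilityEstimate, b: CredibilityEstimate) -> float:
    """Comparative judgement: a positive result means source a scores higher than b."""
    return a.overall() - b.overall()


# Hypothetical scores for two unnamed sources, purely to show the comparative use.
source_x = CredibilityEstimate(0.3, 0.8, 0.2, 0.7, 0.2)
source_y = CredibilityEstimate(0.7, 0.7, 0.5, 0.7, 0.6)
print(compare(source_y, source_x))  # positive: y estimated more credible, not "proven right"
```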


For an index of all critical thinking blogs see this page.

Picture: Franco Atirador (Wikimedia Commons)


Ten reasons why discussions fail

I’ve recently been involved in a discussion that still feels like the most depressing I’ve ever had. It started with me trying to gather perspectives for the book I’m writing about S, a well-known Buddhist teacher (I’m going to deliberately avoid specific details and names here), by posting a question on a Buddhist forum on Facebook. I wanted to find arguments that his critics would use against him so as to make sure that I took them into account. But the critics only wanted to discuss one thing: certain practical shortcomings in the behaviour of that teacher, which they believed justified them in dismissing everything he might have said or written. They wanted me to agree with them, and regarded me as biased if I didn’t. Even after I’d quit the discussion of this on Facebook, one of the critics contacted me to continue it by email. What I find most depressing is that this was a very highly-educated, thoughtful person, an expert in mindfulness – but we nevertheless made no progress at all, instead having a protracted, ill-natured, tedious wrangle that ended up, not with agreement, but with him shutting down the discussion because he was bored with it.

This has stimulated further more general reflections about why discussions fail, even when everyone involved in them sincerely believes that they are pursuing ‘truth’, or ‘rationality’, or being ‘reasonable’, and when they do have some self-awareness of the kind you might expect from a regular meditator. Even the supposed experts fail. I am thinking primarily of text-based discussions on the internet, though many of the same points can also apply even in face-to-face discussions. I’ve come up with ten things that people do that cause discussions to fail – not based only on this example, but also on many others that I’ve experienced, together with ongoing thinking about the difficulties in applying the Middle Way.

These ten are:

  1. Provisionality markers are ignored
  2. Impersonal points are read personally
  3. Language is essentialised
  4. Motives are ascribed, or assumed to be ascribed
  5. Unrealistic expectations are based on someone else’s position
  6. Unrealistic expectations of one’s own position inhibit learning
  7. Cognitive bias awareness becomes another weapon
  8. Helpful information is interpreted as condescending
  9. Analysis and reflective summarising are boring
  10. Mediating intentions are not adopted or respected

You might ask why these take a negative form, rather than ten ways that discussions succeed. The reason for this is that I think reasons for success are much harder to identify and generalise about, whereas the reasons for failure are all, in the end, attributable to absolutisation as a general human tendency. Everything is grist for the mill of absolutisation. Nevertheless, reasons for success can emerge out of the avoidance of reasons for failure.

Here they are explained in more detail:

1. Provisionality markers are ignored

Provisionality markers are bits of language that can be used to try to signal that you’re talking provisionally, that you are just raising possibilities for other people to consider, you respect the autonomy of their judgement, and are not in any way trying to force them to adopt your perspective by absolutising. For example, the use of ‘seems’ or ‘appears’ to hedge what you’re saying. There’s a previous more detailed blog about provisionality markers, including lots of examples, here. The big problem that I find with using these, as noted towards the end of that blog, is that people routinely ignore or discount them, and thus I find that I have a tendency to over-rely on them. Sometimes I resort to putting them in bold or surrounding them with *asterisks* to try to draw people’s attention to them, but that’s more of a mark of my frustration than anything else. The only solution to this that I can envisage is either that more people are trained to use them, until a critical mass of social expectation is reached, or (even more idealistically) that they become unnecessary because everyone expects provisionality anyway. In the meantime, I will continue to use them just because they might help in some cases.

2. Impersonal points are read personally

We’re all probably familiar with the phenomenon of people ‘taking things personally’. What that seems to mean in practice is that people absolutise something as a threat to themselves that you merely offered as a possibility for their consideration. Nothing is actually impersonal, because our entire perspective is personally embodied, but impersonal language is nevertheless another way of being provisional, by prompting awareness of a wider context to what one is saying.  For example, instead of saying “You’re defending cruelty and exploitation”, you can say “There’s a danger that people who adopt this kind of view may end up defending cruelty and exploitation”. In the fruitless discussion I referred to above, I twice pointed out that something my interlocutor had taken personally was deliberately phrased impersonally, but he ignored this, apparently not believing in my sincerity in doing so. He believed that the context justified him in continuing to interpret it personally, but of course this all depended on the framing assumptions with which he viewed the context and purpose of the discussion (see 10). As with provisionality markers, I shall continue to do this just because it might help – but very often it doesn’t – unless, perhaps, more people are consciously trained in using it and accepting when others use it. The principle of charity is another approach that can help to prompt more helpful readings here.

3. Language is essentialised

A lot of people seem to have strong implicit beliefs that words have an essential meaning that one can look up in a dictionary. I’m not criticising the use of dictionaries: they can be useful prompts both to clarity and recognising the different ways a word is used. However, the use of a word in one particular sense may continue to send our thoughts down one particular familiar track, and make it impossible to communicate an alternative possibility. A typical example of this is people’s belief that ‘religion’ must mean absolute beliefs, which forces all discussion of the topic down certain unproductive and polarised tracks. Discussion may then fail because people are using words in different ways and fail to recognise it, or because they believe the meaning of the word is ‘obvious’ and assume that the attempt to use a word differently must have some discreditable motive.  To be able to think afresh, the recognition of a right to stipulation (defining a word for oneself) with a practically helpful motive, and an awareness of the need for clarification of language, are often crucial. That applies not only to big, baggy monsters like ‘nature’ or ‘religion’, but also often to words that can be used for evaluation, like ‘responsibility’, ‘reasonable’, ‘free’, or ‘right’. However, the attempt to do this in practice often runs into number  9 below – it’s tedious. For more on this point, see this previous blog.

4. Motives are ascribed, or assumed to be ascribed

A very common scenario in discussion is that one person feels or intuits another’s motives, believes too strongly in that intuition, and interprets everything they say in that light. A simple example might be if someone says something positive about someone you dislike  – say, Donald Trump. It’s then incredibly easy to assume that they’re motivated by support for or bias towards Trump, when actually this doesn’t necessarily follow at all from their remark. This tendency is hardly surprising given how much our embodied heritage involves sizing others up as a threat or an opportunity. In face-to-face discussion ascribing motives to others might have more justification, because body language contributes to our impression of others’ motives, but people even do this on the internet with people they know nothing about, based on a few ambiguous words. The full avoidance of this is really difficult, and I’ve often found myself that although I might have avoided ascribing motives too directly, assumptions about others’ motives can easily creep in. The reverse problem is when someone thinks you have unfairly ascribed motives to them, even though you were only offering a provisional point for their consideration about something they said. Trying to separate views from the people who hold them (avoiding ad hominem) is the traditional cure for this, but there’s understandably a lot of debate about how far it may actually be necessary to take more of the person into account when addressing their views. Again, we can’t actually be impersonal, but the use of ‘impersonal’ perspectives is a useful awareness-raising device.

5. Unrealistic expectations are based on someone else’s position

I’ve found myself doing this a lot, particularly having unrealistic expectations of philosophers, psychologists, Buddhists, and academics. Others might have such expectations about teachers (of any kind), doctors, priests or any other kind of profession or position of responsibility. Such signals about someone’s past experience and training often lead me to assume that they will be aware at least of the cruder biases and fallacies, and able to engage in a discussion with a fair amount of awareness. Because of these expectations I am then inclined to push them harder than I would others. But these are also often unrealistic expectations – not because these kinds of training don’t make a difference to people’s ability to engage in productive discussion, but because they don’t necessarily make enough of a difference. There are a variety of possible reasons for that, whether it’s the limitations of their training, the possibly unhelpful elements in their traditions, or just their personal limitations. Obviously, it’s necessary to respond to the person you encounter and what they actually say, rather than just the labels, before inappropriate expectations lead to deadlock as the person fails to fulfil them.

6. Unrealistic expectations of one’s own position can inhibit learning

The flip side of having unrealistic expectations of others is having unrealistic expectations of oneself. Personally I can find that these are often based on formalistic assumptions about what I know. I can assume that I know about biases, for example, because I’ve studied them, reflected on them and written about them. But that doesn’t necessarily stop me being subject to them. At the same time, when I refuse to acknowledge to others that I’ve been biased in a particular way, I find that people often assume that’s because of a hypocritical absolute position, rather than because I’ve weighed it up and concluded that I’m probably not actually subject to that bias (false modesty is another trap). It’s very easy to continue to rationalise biases because one substitutes intellectual understanding for practical reflection, and that can inhibit me in learning from others. But embracing such apparent learning to placate others in their preconceptions can be just as problematic. Only the organic development of confidence (in oneself) together with trust (from others) seems to offer a way round the failure of discussion for this kind of reason.

7. Cognitive bias awareness becomes another weapon

I’m thoroughly convinced of the usefulness of being aware of cognitive biases, but there seems little doubt that the use of that awareness in discussion can backfire spectacularly. This can often combine with point 2: even if you mention the possibility of a bias in an impersonal way, or ask a question about it, people can easily assume that it is just another weapon in your ‘biased’ argument, directed against them. Because they don’t experience the bias directly themselves, but only experience their view and the reasons they can give for it, they are likely to reject the idea that they have the bias out of hand, and to interpret your language in a way that makes the very suggestion a threat to their ‘rational’ autonomy. Phrasing it carefully, so as to try to make it clear that you respect their autonomy and are just offering a possibility they might think about, seldom works. The effective raising of cognitive bias awareness in discussion seems to depend on prior relationships of trust, shared goals and a degree of psychological training and awareness.

8. Helpful information is interpreted as condescending

The offering of helpful information, whether that’s about cognitive biases or any other point, can also backfire and disable the discussion, even if that information is crucially necessary to help the discussion proceed anywhere productive. I find personally that I’m particularly subject to charges of condescension. Presumably my critics would tell me that this is because I’m condescending, but I’m often not even sure what that means, beyond indicating the defensiveness of the accuser. It doesn’t seem to be based on my use of language that makes claims of personal superiority or anything of that kind: it is just that most language is ambiguous, and people interpret it in that way. When one offers information one takes a risk, firstly that it won’t be interpreted ‘personally’ (see 2), secondly that the recipient doesn’t already ‘know’ what you’re offering, in which case you may be accused of ‘insulting their intelligence’, and thirdly that they can make some use of this information. But if you are telling people something they either don’t ‘know’, or theoretically ‘know’ but haven’t really applied, then you are unavoidably reminding them of their limitations, which may trigger a reaction due to dissonance with their view of themselves. Again, I’m not convinced that phrasing this carefully makes a great deal of difference to this problem, even though I will carry on trying to do so.

9. Analysis and reflective summarising are boring

There is only one way that I’ve discovered out of a deadlocked discussion in which people are entrenched in mutual misunderstanding. That is to go through everything very carefully, line by line, making it totally clear what is meant so as to unpick misunderstandings. This is related to another practice often used in mediation, which is getting people to summarise and agree the positions of others so they’re not being misrepresented. Both of these practices, though, have the drawback of being tedious, laborious and time-consuming. Usually, in a discussion, people are not prepared to spend the time necessary to overcome the deadlock through sheer analysis, even if they are able to engage in it. This was the issue in the recent discussion I mentioned in my introduction. Although we did reach a bit more mutual understanding through such analysis, it did not go far enough before the other person called time on it. It’s only if people consider the issue important enough, or perhaps if they’re philosophically trained, that they’re prepared to go down this route.

10. Mediating intentions are not adopted or respected

This last reason is in my view the most important and underlying one. All of the above causes of deadlock in discussion are due to absolutisation of one kind or another: we assume that our language, our interpretation of that language, our interpretation of others and ourselves, or our assumptions about the best use of our time, are the whole story. The Middle Way offers the perspective that merely opposing that absolutisation with its negative counterpart is unhelpful. Instead we need to avoid the absolutisation altogether, by re-framing the situation in non-absolute terms that avoid the polarising assumptions. Where there are two polarised sides to a debate, it is only through clear mediating intentions that I think we can achieve this, but if others’ perspective is that their view is just correct and their goal is thus to ‘win’ rather than to mediate, that mediating intention can be rapidly undermined. I’ve experienced that happening, for example, in trying to question the acceptability of personal attacks on political opponents in a group of people who I mainly agree with politically. It takes some determination and autonomy to stand up to the group pressure that’s involved here, particularly when everyone else is determined to see you as on ‘the other side’ in some sense (or alternatively appropriate you to their side). In my experience you can try to make your mediating intentions clear, even to people who in theory ought to be sympathetic to them, but the discussion will still fail unless to some extent they share them. They need, at least to some extent, to recognise that their own view may not be the whole story, and to trust you sufficiently to believe that you’re not just trying to convert them to the other side’s case by underhand means.

Conclusion

This list is written in a mood of some pessimism, in which it’s been brought home to me yet again just how difficult it is to make discussion work as a way of engaging with or resolving disagreement, particularly on the internet. I’m aware that many people will respond that these are precisely the sorts of reasons why they don’t engage in internet discussion. I fully understand their reasons for staying away: there’s no doubt that it can be time-consuming, stressful and frustrating. Others will respond by withdrawing to echo chambers in which they only talk to those who will not challenge them. Yet on the other hand, many discussions now occur on the internet that would not previously have occurred at all. This is potentially very enriching if one can use it wisely. Between fruitless and over-stressful discussion on the one hand, and echo chambers on the other, I still think there must be a Middle Way somewhere in which manageable amounts of challenge are to be found and progress can be made, with those who are sufficiently but not too sympathetic. But I must also admit that it’s often damned difficult to find that point of balance.


Picture: ‘Face Off’ by Aaron (Creative Commons: Wikimedia) 


Double Vision

When we try to think critically and to open our imaginations at the same time, a kind of double vision results. At one and the same time we develop our awareness of potential alternatives, making our thinking more flexible, but still remain aware of the limitations of our beliefs, and do not allow our imaginativeness to slip into credulity. We develop meaning but also control belief. It seems to me that developing this double vision is one of the hardest parts of the practice of the Middle Way: but if we are to avoid absolutizing our beliefs we need to develop both meaning and belief. Those of an artistic disposition will find it easier to imagine, and those of a scientific disposition to limit their beliefs to those that can be justified by evidence: but to hold both together? That’s the challenge.

I’ve been reflecting more on the metaphor of double vision, since I heard it used recently in a talk by Jeremy Naydler in the context of the Jung Lectures in Bristol. Naydler used this metaphor in a talk called ‘The Inner Beloved’, which was about the way in which visionary men of the past have maintained images of beloved women that were actually projections of their own psyches (what Jung would call the anima). He spoke of Dante’s vision of Beatrice in the Divine Comedy and Boethius’s figure of Philosophia in The Consolation of Philosophy. These were not ‘real’ women, or had only the slightest relationship to real women, but rather became powerful archetypal symbols of the part of themselves that remained unintegrated. They were the focus of yearning, but also the path of sublimated wisdom – never possessed but always beckoning and challenging.

The capacity for double vision is central if one is to cultivate such a figure: for if a man were to project it onto a real woman (or vice-versa) the results could be (and often are) disastrous. “Being put on a pedestal” probably creates conflict when the real person starts behaving differently from the idealisation – for example, needing time of her own away from a relationship. It is only by maintaining a critical sense of how the mixed up, complex people and things in our experience are not perfect and do not actually embody our idealised projections that we can also give ourselves an imaginative space to engage with the archetype itself. Recognising that the archetype puts us in touch with meaningful potentials, showing us how we could be ourselves, and how we could relate to the world, can provide a source of rich inspiration that I see as lying at the heart of what religions and artistic traditions can positively offer us without absolute belief.

The annunciation, a Christian artistic motif that I’ve previously written about on this site, for me offers an example of the archetypal in its own terms. For most of us, it is much easier to look for the archetypes in art, and separate this mentally from trying to develop balanced justified beliefs with the real people we meet every day, rather than prematurely over-stretching our capacity to separate them by risking archetypal relationships with real people. That’s why lasting romantic relationships need to be based on realistic appraisal rather than seeing the eternal feminine or masculine in your partner, and also why venerating living religious teachers like gods may be asking for trouble.

Personally, I do have some sense of that double vision in my life. My imaginative sense and relationship to the archetypes have developed from my relationship to two different religious traditions (Buddhism and Christianity) as well as from the arts and an appreciation of Jungian approaches. On the other hand, my love of philosophy and psychology provides a constant critical perspective, which also provides me with a respect for evidence and a sense of the importance of the limitations we must apply to practical judgement. Sometimes I find myself veering a little too far in one direction or the other, slipping towards single vision rather than double vision, and then I need to correct my course. Too much concentration on cognitive matters can make my experience too dry and intellectual. Underlying emotions and bodily states can then come as an unpleasant surprise. On the other hand too much imagination without critical awareness can reduce my practical resources in other ways, as my beliefs become less adequate to the circumstances.

Our educational system overwhelmingly supports only a single vision, with the separation of the STEM subjects on the one hand from arts and humanities on the other. But a single vision seems to me an impoverished one, even within the terms of that vision. Those with a single vision based on scientific training and values tend to have some understanding of critical thinking, but to think critically with more thoroughness it’s essential to be aware of your own assumptions and be willing to question them – which requires the ability to imagine alternatives. There are also those with a single vision who are willing to imagine, but tend to take the symbolic realm as in some sense a key to ‘knowledge’ of ‘reality’, and thus uncritically adopt beliefs that they can link with their imaginative values. For example, those who, like Jung, find astrology a fascinating study of meaning, often seem to fail to draw a critical line when it comes to believing the predictions of astrology – for which there is no justification.

If it is not simply a product of limited education or experience, a single vision is likely to be associated with absolutisation; because absolutisation, being the state of holding a belief as the only alternative to its negation, excludes alternatives. We avoid allowing ourselves to enter the world of the other kind of vision, then, by regarding ours as the only source of truth, and by disparaging and dismissing the other as ‘woo’ (from the scientific side) or as soulless nerds (from the imaginative). Rather than accepting that we need to develop the other kind of vision, we often just construct a world where only our kind of vision is required. Then we share it with others on social media and produce another type of echo chamber – alongside those created by class, region, educational level, or political belief.

Developing a double vision, then, is an important part of cultivating the Middle Way, and thus also a vital way beyond actual or potential conflicts. A failure to recognise your projection onto someone, for example, creates one kind of conflict, but a failure to imagine may take all the energy out of it and lead to another type of division between you. We may not be able to develop double vision all at once, and it’s best not to over-stretch our capacity for it, but the counter-balancing path is open to you right now from here. Here are some follow-on suggestions on this site: if you’re a soulless nerd, go to my blogs about Jung’s Red Book. If you’re more of a credulous “woo” person, try my critical thinking blogs.

Pictures (both public domain): double vision from the US air force and Simone Martini’s ‘Annunciation’.

Ride the elephant (you’ve got no choice), but do you have to eat him?

It was not a sudden, blinding revelation, but more of a gradual realisation: the decision I’d made to eat a vegan diet was not a rational one.

Now, I made that decision back in September 2015 and—rational or not—over the past 20 months I’ve believed that animal products are not for my consumption, with no regrets. Please understand that for me it isn’t all about purity: I’m not a vegan fundamentalist. I’ve occasionally submitted to the social pressure to eat egg-containing (and delicious) home-made cake, and in an effort to prevent food going to waste I’ve ingested some cheese made with cow’s milk. I can’t say that either of those deviations from the vegan norm has particularly troubled me, although I was hyper-aware of what I was doing as I did it.

Over the past few months I’ve come to realise that it would be a mistake to argue that my dietary decision lacks justification simply because it was irrational. Not because the rational doesn’t or shouldn’t play a role in moral decision-making, but because our secular Western culture seems to believe that true moral decision-making flows purely from rational, logical thought. And a quick review of our actual experience shows that this default theory of secular morality is mistaken. Let’s dig deeper…

Moral psychology and an elephanty metaphor
The origins of morality have presumably been debated for as long as debating has been around. In the mid-twentieth century academic psychology got involved and researchers tested hypotheses such as ‘morality is innate’ or ‘morality is learned’. More recently the field has reached a level of sophistication where some psychologists are able to conclude that, as seems to be the case generally with nature/nurture dichotomies, it’s a bit of both.

One of the most prominent researchers in this area is Jonathan Haidt, and in his 2012 book The Righteous Mind he sums it up like this:

[M]orality can be innate (as a set of evolved intuitions) and learned (as children learn to apply those intuitions within a particular culture). We’re born to be righteous, but we have to learn what, exactly, people like us should be righteous about.

Now, I’ve been getting to grips with the idea that people’s moral arguments (including my own) can’t really be taken at face value because generally they’re post-hoc cognitive constructions developed for social reasons once the automatic and instantaneous moral intuition has done its work making the initial moral choice. I knew this was one of Haidt’s principles of moral psychology, from reading other books, Robert’s review of The Righteous Mind and online videos. Now that I’ve actually borrowed the book from the library and read it, I’m better informed about his model, and the research that it is entwined with.

The first part of this book is the most relevant to what I’m talking about here. In it he states the first principle of his moral psychology as being: Intuitions come first, strategic reasoning second. It is this principle that seems to be at work in me with regards to my dietary ethics. In both his books he uses an interesting metaphor when discussing this first principle, namely that

…the mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning—the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes—the ones that occur outside of awareness but that actually govern most of our behaviour. … [T]he rider and elephant work together, sometimes poorly, as we stumble through life in search of meaning and connection.

Riding my metaphorical elephant
In the subtitle above I imply that I possess the metaphorical elephant, whereas – in the light of Haidt’s moral psychology model – it might be more accurate to say that the elephant both is me (as the part of my mind that works before and at a more fundamental level than my conscious cognition) and has me (in the sense that the conscious cognitive “I” serves it and has little control over where the elephant is going).

Going back to September 2015, there was a definite moment in time when I resolved not to eat animal products any more… I think I had been off work through illness for a couple of days and I’d been watching films on Netflix – including an American documentary called Vegucated which explores the challenges of adopting a vegan diet. Immediately afterwards I went upstairs and found my wife (I think ostensibly to discuss what we might have for dinner) and blurted out something along the lines of “I can’t eat eggs any more. I just can’t.” And then stood there wobbling a bit thinking about the half-finished box of eggs downstairs in the kitchen.

Looking back on it, perhaps there was a bit of embarrassment that I didn’t have a rational reason for not wanting to eat eggs any more; after all, people usually want to know why you’ve changed your position on an issue, and it seems culturally appropriate to have a ‘proper’ rational reason. My wife said something like “If that’s your decision then I stand by you making it.” and I went off to make something eggless for our evening meal. The elephant had decided: “Boom! No more eggs for you!” and then my cognitive mind was left floundering around trying to do some strategic reasoning that would no doubt be required in socially justifying my new food ethos.

Rational questions remained: would I have to stop eating cheese and mackerel too, in order to be consistent? Might it be easier to just stop with all the dairy products, since the dairy industry seems to involve the same level of cruel practices as the egg industry? Would my friends think I was being precious when I started turning down offers of milk in tea, or declining to join in a shared non-vegan lunch? These and many more. But, most importantly, the elephant had made up its mind and I was going to have to follow through with my role as its servant.

Feeding the elephant
Clearly the elephant had been heading in this direction for some time – I’d been a sympathiser for most of my life, not a persecutor of vegans and their weird ways. This wasn’t a ‘road to Damascus’ style conversion from omnivorism to veganism. But in order to arrive at the point where I completely cut out animal products from my diet the elephant had taken a long sequence of small turns in this direction, and at each stage my intellect had dutifully generated post-hoc rationalisations for these.

It goes back, of course, to childhood. I was a fantastically fussy eater, ‘difficult’ about food, and I cannot remember ever not being this way. My brother, two years younger, was quite different: a human dustbin, he’d happily eat what I would not, and probably this was why so many adults asked if we were twins; despite him being two years younger he was always the same size as me (as you can see in the photo on the right, taken perhaps when I was 9 and he was 7).

In early teenage years my moral intuition told me that I should stop eating meat, so that’s what I did. The logical reasons were well rehearsed—I’d witnessed many mock arguments between peers about whether it was right to eat meat or not—but it really wasn’t these arguments that persuaded me. It just felt like the right thing to do, even if it was considered rather effeminate for me to give up meat (this was the 1990s… but is it any different now?).

Aged 18 I very abruptly got over my life-long fussiness with food—I’m not at all sure how this came about, let alone so quickly—but this was the age where it also felt intuitively OK for me to eat meat again. So I did, and shortly after (when I went off to university) I didn’t introduce myself to anyone as being a vegetarian. Whilst at university I met my now-wife, and we’ve been together for 22 years. She became a vegetarian at university because it seemed like the right thing to do, although one who wasn’t particularly interested in cooking. So for those first 20 years together—with me as the domestic chef—I was a quasi-vegetarian, making and eating vegetarian food at home, but usually eating meat if I ate out anywhere. I only went through a meat-at-home phase during the three years that we lived apart in order to do our doctorates at different universities.

As an adult, then, I was definitely over my childhood fussiness and managed to get a reputation as a human dustbin. The elephant kept directing me back for seconds, thirds and so on, and I subsequently concocted rationalising stories about ‘not letting food go to waste’. Another set of questions came along with the birth of our son, just over 8 years ago: when it came time for him to be weaned, here was a tiny human being whose menu was (initially) completely dictated by his parents’ choices.

What was the ‘right’ choice for us to make? To what extent would he decide what he ate and why? Either the elephant, or pragmatism, made the decision: we weren’t going to specially buy in meat for him to eat at home. He grew up healthy and un-deformed, despite some frowning from meat-eating adults, and he eventually settled into something like a pescatarian diet—vegetarian, but with occasional fish dishes—and, as far as I know, he’s never eaten the conventional chicken, beef, turkey, lamb and pork foods that most of his friends eat.

So, here are the last few elephant moves and subsequent rationalisations that brought me to the point of veganism:

  1. A lactose-intolerant wife meant soya milk in the fridge. I made the (initially gross) switch to soya milk in tea. Rationalisation: Efficiency. There’ll be more room in the fridge if I stop buying cow milk. The cow milk always goes off anyway.
  2. A work trip to California for 10 days in the spring of 2015, started off eating huge beefburgers (“when in Rome…”) but within a few days swung back to an entirely vegetarian diet. Rationalisation: Burgers are gross, my digestive system isn’t used to this kind of treatment, America has an obesity problem, I want to avoid going home ‘smelling like meat’ like last time I had a work trip to the USA.
  3. A later work trip in the summer to the south of France, with Jonathan Safran Foer’s book Eating Animals as reading material at the airport, the meatiest thing I ate was fish. Rationalisation: Eating meat supports horrific treatment of sentient beings – as explored in the book. I described myself to one of the students as ‘a vegetarian who sometimes eats meat’; he said I was hypocritical, and being a hypocrite is bad. There’s lots of cheese in France, cheese is OK, isn’t it?
  4. Meeting an exceptionally bright ex-student who was back from university for the summer, and had enthusiastically embraced veganism whilst at university. I felt sympathetic towards his position. Rationalisation: He is a really clever person, and a person of great integrity, therefore I should emulate him.

In conclusion
In writing all this I’ve taken a leaf out of Jonathan Haidt’s book – rather than battering you with rational argument after rational argument as to why I think it is right to eat a vegan diet, I’ve instead taken you on a personal journey. I’ve let you see that I’m a real person with vulnerabilities and uncertainties, rather than an android with a socially-awkward diet. Similarly, I’ve not set about criticising your own personal food choices and ethics using (what I would see as) pure logic and cold hard facts. Your intuition would have steered you away pretty quickly, entrenching any differences between us and leaving us both feeling like we were always right and the other was just plain wrong. In fact you would have been very unlikely to have got this far unless you were planning some kind of logical refutation of everything I’d said in order to prove just how wrong I was.

If you have the slightest interest in becoming more vegan than you already are, then I refer you to the recommended listening, reading and viewing that I’ve listed at the end of this post; they’re full of partisan rational arguments, many of them biased, but that’s the way things are and I’m sure you’re mature enough to cope with that.

Now, given that this is a Middle Way Society blog post perhaps I ought to conclude with a few Middle Way pointers:

  • The Middle Way is not a soggy compromise between two contrasting views: one can arrive at a seemingly ‘extreme’ viewpoint (that it is better to eat a totally vegan diet) whilst holding that belief provisionally, without becoming dogmatic about it. It’s not easy, but knowing that it’s not easy is the first step to making it work. Unlike the decision to donate a kidney, the decision to go vegan is instantly undo-able, and yes—of course I’d rather eat meat than starve to death.
  • The Middle Way stresses the incrementality of our beliefs – we do not need to succumb to the ‘all or nothing’ thinking of the nirvana fallacy. It is not a case of either you are or you aren’t (a vegan). You can eat less meat, you can use fewer dairy products. You can give up flesh for Lent. You could just eat less. There are many dietary configurations in the human population, there has never been such a wide variety of food options on offer in supermarkets and restaurants, and—to be quite frank—if you’re reading this you’re probably not starving for lack of funds or opportunity.
  • The Middle Way suggests that we should be sceptical in an even-handed way. If you want to be exposed to extremely polarised views on veganism, I refer you to the thing known as ‘social media’. If your elephant has chosen a particular path for you, based on intuition, make sure that any post-hoc rationalisation that you do isn’t based on the sort of absolutisations that you will quickly encounter on the interweb. If you can’t understand the way that your opponent justifies their position, or their objections to your arguments, then do you really understand the rational justification for your own beliefs?

So, as I said at the start of this post, the decision I’d made to eat a vegan diet was not a rational one. But I hope you appreciate now that choices are only rational to a degree, and that an ‘irrational’ impetus can still lead to a beneficial outcome!


Suggested listening

Suggested reading

Suggested viewing

Photo credits
Photos of elephants and eggs courtesy of pixabay.com [License: CC0 Public Domain]
Photo of brothers from the 1980s was scanned from the family archive [License: CC BY-SA 2.0]
Photo of my son is all my own work. [License: CC BY-SA 2.0]

Order, disorder, reorder – part 3 of 3

It’s almost a simplistic metaphor, but … picture three boxes: order, disorder, reorder. … [I]f you read the great myths of the world and the great religions, that’s the normal path of transformation.

–Richard Rohr

This blog post is the final part of a three-part series inspired by the above quote by Richard Rohr (shown in the photograph on the right). If you’ve not read parts one and two I recommend doing so now so that you appreciate the context of Rohr’s words and how they might apply to the great myths of the world, and to political maturation.

Here I will attempt to frame my own spiritual development in terms of Rohr’s model, although I have some reservations about using the term ‘spiritual’ about myself. I will also acknowledge the limitations of such a simplistic metaphor, with reference to my personal history. I will conclude by taking stock of where I am right now, aged 40 – which is viewed conventionally as the mid-point of life – and where I may perhaps navigate to in future with the aid of the Middle Way.

Born into disorder? Not me.

What’s difficult is so many people formed in the last 30 years were born into the second box of disorder. [They] don’t have that order to begin with, to reject and improve on.

Picture: Me aged about 7 or 8. That's not my school uniform, that's what I had to wear to go to church!

It’s been 40 years since I was born, and I reckon I do not fall within Rohr’s grouping of people “formed in the last 30 years”. Not due to the mathematical exactness of his figures (in context he did not literally mean a cut-off at 30 years old!), but due to the fact that I was brought up in a very rigid – and ordered – container. My family was of the more evangelical protestant Christian variety and our acts of worship were not confined to Sundays (although there was a service every Sunday, sometimes two), but spread to other activities throughout the week and a general feeling of being watched at all times by an omniscient God who was, by turns, strict and loving. This religious context defined the pattern of my weeks and years, much more so than any other aspect of my life such as school or neighbourhood friendships. To put it into Rohr’s terms, I was, quite definitely, born into a box of order.

Due to the specific strictures of our denomination (which was a part of the so-called Holiness movement) I was brought up with very rigid views on the moral validity of abstinence from alcohol, cigarettes, drugs, gambling and pre-marital sex, as well as the more usual protestant insistence on truthfulness before God, honesty in my human interactions, an awareness of my innate sinfulness, observance of the Ten Commandments, belief without evidence (which was termed ‘faith’), worship of the one true God, Bible-reading and prayer. There was also a very strong devotion to charity work, putting others first, observing Sundays as a holy day (so no shopping or engaging in other worldly pursuits on a Sunday) and a lot of encouragement to proselytise to my unconverted peers, through deliberate use of words of personal testimony and through the example (or ‘witness’) of my actions.

This ordered container – which, of course, seemed normal, reasonable and inevitable to me as I grew up, as I didn’t know any different – started to develop holes as I moved into adolescence. It became obvious that other people did not have the same beliefs, and not just those people in the wider ‘sinful’ world but even people who I respected and looked up to within our own congregation. I’m not talking about anything illegal or conventionally controversial like child sex-abuse or misappropriation of funds, but it seemed eye-poppingly amazing to me that other church-goers might be quietly making money on a Sunday or perhaps discreetly but casually conducting a sexual relationship with someone that they weren’t married to. When the holes got big enough, I could get a better view of what might be in store for me in the disorder box. It was a confusing, topsy-turvy place…

[Photo: me, aged 18. And yes, that’s a bible under my arm.]

For example, I was indebted to the Jesus Christ who – I was told – had died for my sins, but didn’t believe that anyone was listening when I prayed. I respected my parents, who became ministers in our denomination when I was 16, but had to find a way of hiding the fact that I was trying out drinking, smoking and other recreational drugs with my peers. I had been instilled with the ideal that sex was expressly for marriage and marriage was for life, but everyone else seemed to be doing it, and when I eventually started doing it too it didn’t seem at all immoral or wrong – nothing had ever seemed so natural, normal and right. In other areas, in my school education for example, I was realising that there were well-justified reasons for believing that the universe was not centred around the human race, and this contradicted the interpretation of the holy book that I’d been brought up to revere.

Breaking away
Anyway, the upshot was that by the time I was an adult the cognitive dissonance had become too great, and in a quiet crisis I abruptly dropped the public pretence of being ‘a good Christian’ in my denomination, to the quiet disappointment and confusion of the older generations in my family. In time my siblings also rejected the same container that they’d been brought up in, but I was the first, and so had to lead the way. In fact my younger brother rebelled in a more roundabout way when he was 18, by moving to a different continent and becoming even more enthusiastically evangelical… a phase during which we communicated little (he did once urge me, by email, to “repent and get saved”) and which only lasted a couple of years before ending with a rather shameful implosion. He returned and recovered, I’m pleased to say.

The thing about this transition is that I didn’t then enter the metaphorical reorder box; I just cobbled together a different order box: I was an atheist, a materialist, a natural realist, a scientist, a rationalist (and so on – follow this link to a piece I wrote in 1997 about the firewalk we did with Wessex Skeptics). This was easy enough for me to do considering that I was at university studying for a physics degree, with only superficial contact with my family back home. I even adopted a new name through this conversion: James became Jim. Conditioned humility kept me from openly trumpeting my new order to all and sundry, along with some guilt that I’d rejected the certainties of the older generations in my family, who as far as I could tell were good people with the best of intentions in the way that they’d brought me up.

I adapted quickly to my new sense of order, and very little occurred to challenge it – at first. I was living in a secular society, and my friends and colleagues during my degree, PhD and teaching career were pretty much all of an atheistic persuasion; those who did have religious beliefs similar to those of my childhood were discreet about them. I knew where I stood, along with the secular majority – viewing organised religion as a childish fantasy based on a human need for consolation – and it hardly had any influence on my life. I winced when I read the polemical work of the more vociferous ‘new atheists’ like Richard Dawkins, who seemed to over-simplify a rather complex situation by attacking crude stereotypes, and probably succeeded in pushing moderate Christians away via the ‘backfire effect’.

Going through disorder

And yet, what I always tell the folks is there’s no nonstop flight from order to reorder. You’ve got to go through the disorder.

So, I think it is more accurate to say that it is only in the past few years – after the certainties of the academic world, after working way too hard as a school teacher for ten years and allowing that ordered container to define my existence – that I have moved into the disorder box. I have been brought to disorder; I have not chosen it. In fact it took a while for me even to realise that I was there, but eventually, gradually, it dawned upon me.

It seems a bit too soon to speak as frankly about this period of my life as I have about the earlier, ordered period, so I’ll just say that in my renewed search for meaning I encountered the Middle Way Society – and I’ve found it to have been immensely helpful in my navigation of the ‘messy middle’ between absolute metaphysical certainties. So, in Richard Rohr’s scheme, I’m right on schedule – as I enter mid-life I’m bumbling around in the disorder box, but I think there’s hope that I can bumble less and eventually crawl through into the metaphorical reorder box.

There is, as always, a danger of absolutising this model and treating it as a single linear progression through three distinct stages, with a definite ‘destination’. Rohr uses it as an over-arching framing of spiritual development across a person’s life, from naive, exclusive ‘early stage’ religion through to a more mature, inclusive, flexible religion that unites rather than divides. In the shorter term, our integration is likely to proceed in a series of cycles rather than in a single pass through the sequence order–disorder–reorder. Our integration may also proceed asymmetrically, which is not wholly a bad thing, as explained in this video from the Middle Way philosophy series.

I’m still uncomfortable with the specific term ‘spiritual’, as I (rather clumsily) tried to explain in the podcast interview with Barry last year: to me the term is inextricably associated with New Age ‘woo’, eternal souls and Cartesian dualism, the Pentecostalist understanding of the ‘Holy Spirit’, and other metaphysical absolutes which cannot be justified by experience. Richard Rohr, as you might expect, seems to be quite comfortable using the term, but I’m encouraged by the fact that he’s more inclined to talk about spiritual development as the increasingly ethical use of your intellect, heart and body, which seems a long way from metaphysical woo.

A term that’s more agreeable to me than ‘spiritual development’ is ‘integration’, as used here in the Middle Way Society. What others may look upon as my spiritual development, I would like to name as my progress with integrating desire, meaning and belief – and in the process becoming more ethical, a person of greater integrity. As a child I could see a disconnect between the stated beliefs of the adults around me and their ethical actions. The archaic collection of metaphysical claims which formed their creed was ostensibly used to justify their actions, but I thought that their actions would probably have been just as ethical in the absence of these words. Perhaps a more succinct way of putting it is that there was great emphasis on orthodoxy, and an equally strong emphasis on orthopraxy, but the connection between the two was not necessarily what it was claimed to be.

An ongoing process of transformation
I’m not claiming to have achieved perfect wisdom; in fact I don’t really believe that such a thing is anything other than an archetypal aspiration anyway. To be more objective in the justification of my beliefs, and to hold them provisionally and adapt them incrementally, is a more realistic and ethical proposition. In the past few years I think I’ve had a few tastes of what might lie ahead in the second half of my life, beyond the current disorder.

For example, although I’ve felt guilty about leaving the religion that I was brought up in, I can now appreciate that I rejected the ideology and the beliefs of my parents and grandparents without rejecting the parents and grandparents themselves. In Christianity this sentiment was expressed by Saint Augustine as “hate the sin, but love the sinner”, which – as Gandhi pointed out – is easy enough to understand but rarely practised.

I can also see that the Zeus-like Christian God that I was brought up with is a rather childish (but widespread) interpretation of Christian theology, and that my subsequent rejection of all understandings of an Abrahamic God was also rather extreme – more subtle and nuanced agnostic understandings of the concept of “God” exist, and the meaning associated with the God archetype does not have to be thrown out with the metaphysical bathwater. For example, what I came to see as the preposterous proposition of Jesus’s resurrection at Easter can in fact be a source of meaning and inspiration, as discussed in this superb article by Robert M. Ellis.

Thirdly, with regard to the way that I choose to live my life, I can still abstain from smoking, from getting into debt and from lying… but it is now largely my choice, what currently seems most appropriate within the wider conditions of my life, and not a set of imperatives dictated to me by the absolute metaphysical dogma of a particular religious tradition. My upbringing could be (somewhat uncharitably) viewed as indoctrination into a specific moral code, but in rejecting the supposed authority behind this code I do not have to embrace a nihilistic relativism instead. To paraphrase from Robert’s books on Middle Way philosophy:

The absolutist’s mistake is to understand the right choice in terms of overall principles regardless of the specifics of the situation. The relativist’s mistake is to believe that there is no right choice.

In conclusion…
Turning 40 is something I’ve mentioned to my friends and associates, not because it has great significance for me, but because it seems to have significance for them. In opposition to our youth-obsessed culture’s conventional position on ageing, I approve of getting older: looking back I can see that I’m not the same fool I was at 30 (or 20, or 10, or even at 39 years and 51 weeks). Here’s to maturation in general, not just the spiritual kind, and here’s (hopefully) to the next 40 years!


Featured image is an engraving by William Blake, from The Pilgrim’s Progress, via Wikimedia Commons
Photograph of Richard Rohr by Svobodat [License: CC BY-SA 3.0], via Wikimedia Commons
Image of ‘God on his throne’ is actually an engraving of Zeus from John Flaxman’s Iliad, via Wikimedia Commons
The three photos of me (aged roughly 7, 18 and 23) were scanned in from the original prints. Retro.