Category Archives: Critical Thinking

‘How Minds Change’ by David McRaney

How Minds Change: The New Science of Belief, Opinion and Persuasion by David McRaney (Oneworld, 2022)

Review by Robert M. Ellis

‘How minds change’ is rather a big topic: effectively the topic of judgement. How do we end up making a judgement about what to believe that is different from what we judged before? This 2022 book by David McRaney (previously the author of two popular critical thinking books) actually focuses mainly on how minds change for the better, rather than how they slip into dogma. That’s because it correctly detects that improving mind changing depends on having more options, the expansion of the mind rather than the maintenance of a narrow set of assumptions. We do not ‘change our minds’ in this sense by just switching from one set of assumptions to another, much as the narrow-minded may prefer to portray things in that way so as to close down the options.

So, the topic of ‘changing your mind’ has a large overlap with that of the Middle Way, particularly with the principle of provisionality as a key aspect of the Middle Way. However, the Middle Way has many interdependent elements, and includes the longer-term development of conditions that will enable us to ‘change our minds’. We can change our minds, in my view, because of access to options that avoid the limitations of absolutization. In absolutization there are only two options: our current absolute beliefs and their unacceptable negation. To ‘change our minds’ avoiding absolutization, wider options can be enabled by a variety of kinds of contextualization: embodied, meaning-based, or belief-based. This book, however, focuses only on the immediate conditions for changes of belief in the context of dialogue – not at all on the effects of any embodied practice, or of the use of the imagination, or even on the role of individual reflection in changing belief. However, within the limitations of that focus, it tackles the changing of minds rather well and very readably, drawing in the process on some engaging examples (flat-earthers, anti-abortionists, anti-gay Baptists, and so on), and on some helpful psychology, neuroscience and sociology. Its subtitle, ‘The new science of belief, opinion and persuasion’ accurately conveys that focus.

The most central point of this book is about the compassionate focus needed in effective persuasion. You can only succeed in persuading someone of anything through argument (that is, by giving them reasons to believe otherwise) if they are capable of contextualizing those reasons within the terms of their own feelings and commitments, what they find most meaningful. Argument may thus work (often only within a particular sphere) with those trained to value the usefulness of argument itself in justifying a position: usually by academic or professional training of some kind. With most people, however, persuasion can only occur by changing the context in which they think of the issue, which one may be able to do by asking friendly questions that widen and personalize the scope of the discussion. McRaney discusses ‘Deep Canvassing’ and ‘Street Epistemology’ as approaches that have successfully done this. For instance, in one memorable interview, an elderly lady was gradually enabled to reconsider her conservative view on abortion by recalling a friend from earlier life who had needed an abortion and had one. Here a question like ‘Do you know anyone who has been personally affected by this issue?’ can evidently be a game-changer, because it induces people to switch from the absolutized abstraction through which they have been approaching the topic to a more contextualized view. That more contextualized view may be offered by the recollection of past experiences, of sympathy with other individuals, or of wider moral commitments.

To be able to do this, of course, one needs to be able to bypass any kind of stress response, which can be triggered by any expression of disagreement without a reassuring context. In the interviewing techniques investigated by McRaney, this was done just by a friendly approach, reassuring the interlocutor that the interviewer was not there to make them look silly or to pressurize them, and by asking questions of a kind that people often like to be asked – that is, about the experiences that have led them into their views. What McRaney does not note here are the wider possible ways of avoiding stress responses and opening up wider options that are already widely used in other contexts, such as the use of mindfulness, of imaginative re-creation and/or embodiment, of mediation techniques, of psychotherapeutic techniques, of familiarization with the background and different arguments through education, or of a deliberate programme of reflection for an individual (as in journalling). Many of these could probably not be used in the kinds of specific situations discussed by McRaney (such as interviewing a stranger on the street), but they are central to his wider topic of how minds change.

Another way of avoiding perhaps habitual absolutized responses is the use of incrementality, which is an aspect of many of the approaches to helpful canvassing and interviewing discussed by McRaney. Typically this involves asking an interlocutor to place their opinion on a scale of 1 to 10, or 1 to 100. This can often force people to consider whether their opinion really is as absolute as their initial expression of it may have suggested. For instance, are they 100% anti-abortion, or only 90%? If they say only 90%, it’s then fairly unthreatening to ask what the basis of the 10% of openness to the other approach might be. After a friendly conversation that opens up new options, they may be asked again if their point on the scale has moved at all – and quite often it has. This is not a dramatic conversion threatening the place of the person in the group they may identify with at one end of the debate, but a humanizing and individualizing process that helps people engage with more of the complexity of what they’re thinking and feeling.

Social conversions are in any case probably best avoided, not only because in some circumstances they may threaten our social (and thus possibly our economic or political) position, but because they often mark flips in which one absolute position is merely replaced by its opposite. The techniques discussed by McRaney don’t seek to convert anyone, but rather to help them autonomously consider more options and thus develop a greater understanding of uncertainty. When applied to judgement, this kind of understanding is much more likely to result in decisions that address the complexity of conditions. However, there are obviously some situations where we do have to come down on one side or the other – as when voting, or when deciding on any other kind of political commitment, such as joining a party. Here, McRaney rightly observes that what proves important to people in making those choices wisely is whether or not they have passed a particular tipping point in which the need to address wider feelings or conditions trumps the ‘conformity threshold’ (p.277).

McRaney also recognizes the negative effects of social media or other online interaction on raising the conformity threshold (which I would see as the point where other options become available beyond the absolutes maintained by a group that is influential on an individual). Although the internet can make us more widely aware of new options, it often has the reverse effect, because it reinforces social conformity without the relatively emollient effects of face-to-face contact. The stakes seem higher online, because we often have to explicitly agree with the group’s line to maintain our membership of the group (unless it is an unusually liberal or academic group that prioritizes provisionality in certain respects as part of its culture). When we are face-to-face, we instead have all sorts of other reassuring unconscious links with other members of the group, and become less solely dependent on taking a conforming position on hot-button issues to maintain bonding.

McRaney also helpfully recognizes the relationship between absolutizing a belief and absolutizing its source. If we have justified a given belief (say, that the earth was created in seven days), from a particular source whose authority is then taken to be absolute and unambiguous (say, the Bible), then, of course, questioning the belief means that we then become more open to questioning the source, and also questioning other beliefs that we have justified from that source (p.149). This could, of course, lead us on into further observations about the cultural role of metaphysical belief systems claiming authority and their interdependence with absolute positions taken by individuals – but this is not an area McRaney explores. In the larger perspective, though, I don’t think we can understand how minds change without fully acknowledging the cultural entrenchment of the forces that prevent them from changing.

The same goes, in more positive terms, for our understanding of confidence: how we can maintain the embodied and experiential basis for full-heartedly but provisionally justifying our beliefs. McRaney observes that “subjects who got a chance to affirm they were good people were much more likely to compromise… than people who felt their reputations were at stake” (p.176). The recognition of uncertainty in our beliefs does not mean we need to be apologetic about them, or to undermine our basic sense of security that we are ‘good people’. McRaney emphasises the social aspects of that – that we don’t help by being confrontational – but I felt he could also say much more about the prior psychological conditions for that sense of security. In individual experience, this may go back to secure attachment in childhood, but it also owes a good deal to our general physical state, level of mental awareness, and ability to draw on a rich base of cultural support that maintains our sense of meaning and offers sources of inspiration.

Overall, then, I felt that McRaney’s book was a very useful presentation of the effects and implications of some recent research on interviewing techniques and their effectiveness, along with some interesting examples of encounters between more and less dogmatic groups (Westboro Baptist Church and LGBTQ campaigners, flat earthers and the rest, 9/11 ‘truthers’ and their detractors). It lives up to its subtitle, but is, however, a rather narrow interpretation of its title. There is a lot about how minds change that is not even mentioned or remotely recognized, because of the intense focus on a certain area of research and application (plus an apparent ignorance of areas like mindfulness and the arts, which can have a big input to this topic).

The narrowness of the focus was also reflected in certain assumptions and striking omissions even within the field of science. There is no mention at all of brain lateralization, despite its well-evidenced relevance to precisely the processes of ‘assimilation’ (reinforcing feedback loops maintaining a fixed belief) and ‘accommodation’ (balancing feedback loops adapting to new information) that he takes from Piaget. These two processes are overwhelmingly the business of the left and right hemispheres respectively – as detailed not only in the work of Iain McGilchrist and all the evidence from medical and neuroscientific research that he draws on, but also in studies of animals going as far back down the evolutionary tree as early fish. Much further light is shed on our difficulties in changing our minds by the over-dominance of left hemisphere processes, imposing abstract, conceptually defined beliefs associated with goals on our more open right hemisphere awareness of the products of the senses and imagination.

There is also a lot of dependence on evolutionary psychology in McRaney’s account of human development (e.g. p.179). This tends to see human learning quite narrowly as motivated only by survival and reproduction, which have then shaped our genetic heritage, whereas the wider picture is that much of our psychological and neural heritage is epigenetic, and that our motives are also shaped by more subtle goals higher up the hierarchy of needs: social connection, self-expression, and even what Maslow calls ‘self-actualization’ have emerged as motivators for human development that may at times contradict the evolutionary dictates of survival and reproduction. McRaney also (presumably following his sources) tends to use quite mechanistic language about human learning, referring to us as ‘learning machines’ (p.179), as though learning were linear rather than a matter of the interdependence of complex systems. These kinds of assumptions are scientistic rather than scientific, and may reflect the narrow philosophical assumptions of academics in certain fields.

This topic also requires the use of a lot of philosophical language that is often used equivocally, so in my opinion it’s hard to write about well without a good deal of rigorous consideration. Prime amongst this language is the use of the term ‘truth’, where McRaney continues the common equivocal use, as illustrated by the titles of two of his chapters: ‘post-truth’ and ‘the truth is tribal’. ‘Post-truth’ means that there is no longer an absolute truth, but if truth is tribal, it is merely the set of beliefs that people take to be ‘true’ in their tribe. Constant equivocal switching between these two senses of ‘truth’ (absolute and relative) has already caused mass confusion, and it’s disappointing that McRaney does not note or query it. In my view we need to be extremely rigorous about this if we are to make any impression on the confusion: the truth is not tribal. Beliefs are tribal; basic assumptions are tribal; delusions are tribal. The truth, on the other hand, is something we simply don’t have access to, so we can use the concept as a source of inspiration, but should never claim to have it. Similar points apply to the widespread equivocal use of terms like ‘knowledge’ and ‘meaning’.

To develop a broader vision of how human minds change, then, I would very much recommend reading McRaney’s book, but not taking it as a complete account of the subject, even in outline. Those involved in political campaigning may particularly find his information about canvassing and interviewing techniques helpful, and his overall message of the need for compassion in communication can be an inspiring one. However, this account needs to take its place in a wider view: one that begins with an appreciation of basic uncertainty, is prepared to question all dogmatic assumptions (even those of social and neuro scientists), and offers a full appreciation of the ways minds need to change through individual practice that cultivates awareness and imagination – not merely through socio-political discourse, however important that may be in certain contexts.   

Network Stimulus 7: Incrementality

The next main meeting of the Middle Way Network will be on Sun 16th August at 7pm UK time on Zoom. This is the third of the series looking successively at five principles of the Middle Way (scepticism, provisionality, incrementality, agnosticism and integration), followed by three levels of practice (desire, meaning and belief).

There’ll be a short talk on incrementality, followed by questions and discussion in regionalised breakout groups. Some other regionalised groups will meet at other times. If you’re interested in joining us but are not already part of the Network, please see the general Network page to sign up. To catch up on the previous session, on scepticism, please see this post.

There is already a short introductory video (8 minutes) on incrementality as part of Middle Way Philosophy, which is embedded below. You might like to watch this for an initial orientation before the session.

Here’s the video of the actual stimulus talk and Q&A:

Incrementality

Incrementality is seeing things as a matter of degree rather than as an on/off switch. It is an important aspect of Middle Way practice, because it is one of the ways we can challenge absolute assumptions on either side. Absolute assumptions are framed as discontinuous alternatives between one thing and another, seen as necessarily the only way we can understand the situation. However, in human practical experience there is always another way of framing these absolute binary choices, which are imposed by our conceptual assumptions. We do not have to abandon conceptual assumptions themselves (or the logic we use to relate them to each other) to do this, but merely use them more carefully, reflecting on the meaning of what we are talking about in experience rather than in terms of the concepts traditionally imposed on it.

Some of the most damaging and immediate examples of the negative impact of binary distinctions can be seen in arguments about race, nationality, or any other human group assumed to have a fixed boundary. Not only these, but even some of the most seemingly intractable binary assumptions that have become entrenched into our language and thinking can be reframed. God or his absence is one widespread example of this. Freewill and determinism, and mind and body are others.

The tendency to think in terms of necessary and absolute binaries is also often described as dualism or as false dichotomy. We also have many phrases in everyday thinking that show ways of avoiding them. We often talk about ‘black and white’ thinking versus ‘shades of gray’, or of things as being ‘a matter of degree’. ‘Incrementality’ can also be thought of as ‘continuity’, or ‘gradualism’. It also has much in common with ‘non-dualism’ if this is interpreted practically rather than metaphysically.

Some suggested reflection questions:

  1. Think of an example of an opposed pair of terms that you frequently absolutise. Can you work out how they could be incrementalised?
  2. How do you think incrementalisation might help you in a practical situation: for example, resolving a dispute?
  3. Do you still find yourself assuming there are some opposed terms that can’t be incrementalised? (This may require further philosophical exploration and discussion to be resolved)

Suggested further reading:

Middle Way Philosophy 1:1.d

Middle Way Philosophy 4: Section 4 discusses a whole set of different pairs of opposed metaphysical beliefs and how they may be integrated (see pdf of Omnibus edition on Researchgate).

The Buddha’s Middle Way 3.e: ‘Incrementality: The Ocean’ has more about the concept of incrementality in the Pali Canon and in Buddhism

Critical Thinking 22: The Slippery Slope Fallacy

I’m moved to return to this blog series on Critical Thinking by the appearance of a particular fallacious argument in current political discourse in the UK (in the form of the “best of three” argument about a possible second referendum on Brexit). This is an example of the slippery slope fallacy, which I’ve not yet covered in this series. This fallacy doesn’t seem to be as widely understood as it should be. I regularly see people online using “slippery slope” as though it were a justification rather than a fallacy, and even highly-educated BBC journalists seem either unaware of it, or otherwise unwilling to challenge politicians who use it.

The slippery slope fallacy, like any other bias or fallacy, involves an absolutized assumption that is usually unrecognised. In a Middle Way analysis there is always a negative counterpart to an absolutized assumption (assuming the opposite) and that’s also the case here. In the case of a slippery slope fallacy, it involves an assumption that if one acts in a particular way showing a tendency in a particular direction, this will necessarily result in negative effects that include further movement in the same direction, with further negative effects. The absolutisation here lies in the “necessarily”. Those who think in this way do not consult evidence about what is actually likely to happen following that course of action, or justify their position on the basis of such evidence. Rather, they just apply a general abstract principle about what they think must always happen in such cases. Such general abstract principles are usually motivated by dogmatic ideology of some kind.

Some classic examples of the slippery slope fallacy involve arguments against voluntary euthanasia or the legalisation of recreational cannabis. The argument against legalising voluntary euthanasia goes along the lines of “If you allow voluntary euthanasia, then there’s bound to be a creeping moral acceptance of killing. Respect for human life will be undermined. Before you know it we’ll be exterminating the disabled like the Nazis did.” The argument against legalising recreational cannabis would follow the lines of “If you let people smoke cannabis, they’ll soon be on to harder stuff. It’s a gateway drug. We’ll soon have the streets full of heroin addicts.” In both of these arguments, there is no particular interest in whether there is any evidence that the lesser effect would in fact lead onto the greater one, just the imposition of a dogmatically-held principle that proclaims what would always happen. The absurdity of assuming that this is what would always happen becomes clearer if you think about how easily we could use these slippery slope arguments against currently accepted practices: “If you allow euthanasia for dogs, you undermine respect for life and before you know it, it will be applied to humans.” or “If you allow people to smoke tobacco, they’ll soon be smoking heroin. It’s a gateway drug.” In practice, we draw boundaries all the time, and in law we enforce them. There is no particular obvious reason why new boundaries should be harder to enforce than previously accepted ones.

So now we come to the current use of the slippery slope fallacy in UK political discourse. This is by Brexiteers opposed to the idea of a second referendum – which, at the time of writing, is looking increasingly like the only viable option to release the UK parliament from deadlock over Brexit. Their argument goes along the lines of “If we have a second referendum, what’s to stop us having a third one or a fourth one? We’ll never resolve the issue.” Here’s one example of many uses of this argument in the media. As in the euthanasia and drug legalisation arguments, the objection appears to simply involve the dogmatic application of an implicit principle, in this case, that “politicians can call as many referendums as they like until they get the result they desire”. As in those arguments, too, there is no positive evidence that this would actually be the effect, nor that this is actually part of anyone’s motives. In practice, it seems much more likely that the amount of public resistance would grow the more referendums were called. In its imposition of an abstract dogmatic principle on the situation, this argument completely misses the point that the call for a second referendum is a pragmatic response to a particular situation of deadlock, not an invocation of a general political principle.

As with other biases and fallacies, there is also a negative counterpart to the positive slippery slope fallacy. This is the failure to acknowledge actual evidence that a “slippery slope” might happen, due to an absolute reaction against the slippery slope fallacy. There are some instances where there is positive evidence that a particular course of action can initiate a gradual deterioration – for instance, being unemployed is often correlated with poverty and depression. Not that everyone who is unemployed will necessarily suffer in these ways, but that your chances of becoming poor and depressed demonstrably increase once you are unemployed. The danger of further negative effects from unemployment is probably something you should take into account before you resign from your job, if you have no alternative available: but taking it into account does not necessarily mean that it should determine your response.

So, the slippery slope fallacy is just another common instance of dogmatic assumptions applied in unconscious everyday thinking. It doesn’t imply that there are no “slippery slopes”, only that you need to look carefully at the slopes before you set off down them to see how slippery they really are. You might well be able to keep your footing better than you expect.

Link to index of other blogs in the Critical Thinking series

Picture: ‘Slippery Slope’ by S. Rae (Wikimedia Commons) CCSA 2.0

Believing in Santa Claus

If we are told about Santa Claus, will we “automatically” believe in Santa Claus?

I’ve recently been reading a big tome – ‘Belief’ by professor of psychology James E. Alcock. In many ways this book can be recommended as a helpful and readable summary of a great deal of varied psychological evidence about belief, including all the ways that beliefs based on perception and memory are unreliable, and all the biases that can interfere with the justifiability of our beliefs. However, I’m also finding it a bit scientistic, particularly in its reliance on crude dichotomies between ‘natural’ and ‘supernatural’ beliefs, for instance. It seems like a good indicator of the mainstream of academic psychological opinion, with both its strengths and its limitations. (I haven’t got to the end of the book yet, so all of those judgements will have to remain fairly provisional.)

One particular point has interested me, that for some reason I had not come across before. This is Alcock’s claim that accepting what we are told as ‘truth’ is “the brain’s default bias”.

There is abundant and converging evidence from different research domains that we automatically believe new information before we assess it in terms of its credibility or assess its consistency with beliefs we already hold. Acceptance is the brain’s default bias, an immediate and automatic reaction that occurs before we have any time to think about it. Only at the second stage is truth evaluated, resulting in confirmation or rejection. (p.152)

One of Alcock’s examples of this (seasonally enough) is the child’s belief in Santa Claus. If people tell the child that Santa Claus exists, he or she will ‘automatically’ believe exactly that. Now, it’s one thing to claim that this is quite likely, but quite another to claim that it is ‘automatic’.

If this is correct, it seems to be a significant challenge to the things I have been writing and saying in the last few years in the context of Middle Way Philosophy. If we automatically believe what we are told, then it seems that there is no scope for provisionality in the way we initially believe it, and we are left only with ‘reason’ – i.e. a second-phase reflection on what we’ve come to believe – to rescue us from delusion. The distinction that I like to stress between meaning and belief would also be under threat, because we could not merely encounter meaningful ideas about possible situations without immediately believing them. So, I was sceptical when I encountered this information. But, since it came from a professor of psychology, I certainly needed to look into it and check my own confirmation biases before rejecting it. Was this claim actually well evidenced, or had dubious assumptions been made in the interpretation of that evidence?

Alcock references a 2007 review paper that he wrote in collaboration with Ricki Ladowsky-Brooks: “Semantic-episodic interactions in the neuropsychology of disbelief”. This paper does summarise a wide range of evidence from different sources, but reading it made it readily apparent that this evidence has also been interpreted in terms of assumptions that are highly questionable. The most important dubious assumption involves the imposition of a false dichotomy: namely that the only options in our initial ‘acceptance’ of a meaningful idea about how things might be are acceptance of it as ‘truth’ or rejection of it as ‘falsehood’. If one instead approaches this whole issue with an attempt to think incrementally, then we can understand our potential responses in terms of a spectrum of degrees of acceptance – running from certainty of ‘truth’ or ‘falsehood’ at each extreme, via provisional beliefs tending either way, to an agnostic suspension of judgement in the middle. The introduction to Alcock and Ladowsky-Brooks’ paper makes it clear that this dichotomy is being imposed when it says that

The term “belief” will refer to information that has been accepted as “true”, regardless of its external validity or the level of conviction with which it is endorsed.

If we start off by assuming that all degrees of conviction are to be categorised as an acceptance of “truth”, then we will doubtless discover exactly what our categorisations have dictated – that we accept things as ‘true’ as a default. This will be done in a way that rules out the very possibility of separating meaning from belief from the start. But since the separation of meaning from belief enables us to approach issues like religion and the status of artistic symbols in a far more helpful way, surely we need to at least try out other kinds of assumptions when we judge these issues? Alcock’s use of “true” as a supposed default in the “truth effect” he claims to identify is so broad that it effectively includes merely finding a claim meaningful, or merely considering it. This seems to involve an unnecessary privileging of the left hemisphere’s dichotomising operations over the more open contributions of the right, when both are involved in virtually every mental action.

The alleged two-stage process that then allows us to reconsider our initial assumption that a presented belief is ‘true’, and decide instead that it is ‘false’, also turns out not necessarily to consist of two distinct stages. On some occasions, we do immediately assume that a statement is false, because it conflicts so much with our other beliefs. However, Alcock identifies “additional encoding” in the brain when this is occurring, implying that both stages are taking place simultaneously. Yet if both stages can take place simultaneously, with the second nullifying the effects of the first, how can the first stage be judged “automatic”?

So, in some ways Alcock obviously has a good point to make. Very often we do jump to conclusions by immediately turning the information presented to us into a ‘truth’, and very often it then requires further effortful thinking to reconsider that ‘default’ truth setting. But the assumptions with which he has interpreted his research have also unnecessarily cut off the possibility of change, not just through ‘reason’, but through the habitual ways in which we interpret our experience. There is no discussion of the possibility of weakening this ‘truth effect’ – yet it is fairly obvious that it is much stronger in some people at some times than others at other times. He seems not even to have considered the possibility that sometimes, perhaps with the help of training, our responses may be agnostic or provisional, whether this is achieved through the actual transformation of our initial assumptions, or through the development of wider awareness made so habitual that the two phases he identifies are no longer distinct.

This issue might not be of so much concern if it did not seem to be so often linked to negative absolutes being imposed on rich archetypal symbols that we need to appreciate in their own right. If I consult my own childhood memories of Santa Claus talk, I really can’t identify a time when I “believed” in Santa Claus. However, that may be due to defective memory, and it may well be the case that many young children do “believe” in Santa Claus, as opposed to merely appreciating the meaning of Santa Claus as a symbol of jollity and generosity. At any rate, though, surely we need to acknowledge our own culpability if we influence children to be obsessed with what they “believe”, and accept that it might be possible to help them be agnostic about the “existence” of Santa Claus? To do this, of course, we need to start by rethinking the whole way in which we approach the issue. “Belief” is simply not relevant to the appreciation of Santa Claus. It’s quite possible, for instance, for children to recognise that gifts come from their parents while also appreciating Santa Claus as a potent symbol for the spirit in which those gifts are given. We don’t have to impose that dichotomy by going straight from Santa Claus being “true” to him being “false”, when children may not have even conceived things in that way before we started applying this frame. If we get into more helpful habits as children, perhaps it may become less of a big deal to treat God or other major religious symbols in the same way.

Apart from finding that even professors of psychology can make highly dubious assumptions, though, I also found some interesting evidence in Alcock’s paper for that positive possibility of separating meaning from belief. Alcock rightly stresses the importance of memory for the formation of our beliefs: everything we judge is basically dependent on our memory of the past, even if it is only the very recent past. However, memory is of two kinds that can generally be distinguished: semantic and episodic. Those with brain damage may have one kind of memory affected but not the other, for instance forgetting their identity and past experience but still being able to speak. Semantic memory, broadly speaking, is memory of meaning, whilst episodic memory is memory of events.

Part of what looks like a big problem in the assumptions that both philosophers and psychologists have often made is that they talk about “truth” judgements in relation to both these types of memory. Some of the studies drawn on by Alcock involve assertions of “truth” that are entirely semantic – i.e. concerned with the a priori definition of a word, such as “a monishna is a star”. This is all associated with the long rationalist tradition in philosophy, in which it is assumed that there can be such things as ‘truths’ by definition. However, this whole tradition seems to have a mistaken view of how language is meaningful to us (it depends on associations with our bodily experience and metaphorical extensions of those associations), and to be especially confused in the way it attributes ‘truth’ to conventions or stipulations of meaning used in communication. No, our judgements of ‘truth’, even if agnostic or provisional, cannot be semantic, but need to rely on our episodic memory, and thus be related to events in some way. If we make this distinction clearly and decisively enough (and it goes back to Hume) it can save us all sorts of trouble, as well as helping us make much better sense of religion. Meaning can be semantic and conventional, whilst belief needs to be justified through episodic memory.

Of course, this line of enquiry is by no means over. Yes, I do dare to question the conclusions of a professor of psychology when his thinking seems to depend on questionable philosophical assumptions. But I can only do so on the basis of a provisional grasp of the evidence he presents. I’d be very interested if anyone can point me to any further evidence that might make a difference to the question of the “truth effect”. For the moment, though, I remain highly dubious about it. We may often jump to conclusions, but there is nothing “automatic” about our doing so. Meanwhile, Santa Claus can still fly his sleigh to and from the North Pole, archetypally bestowing endless presents on improbable numbers of children, regardless.

Santa pictures from Wikimedia Commons, by Shawn Lea and Jacob Windham respectively (both CC BY 2.0)

Announcing our new webinar programme

We’ve got a new monthly webinar programme now open for booking, running for 13 months from Dec 2018 to Dec 2019. There will be a variety of topics, all of which involve the relationship between an area of practice or interest and the Middle Way – for example, the Middle Way and Meditation, the Middle Way and Science, the Middle Way and Judaism. This is your opportunity to find out more about a Middle Way perspective in relation to a topic that already interests you, interacting with members of the society in real time online.

For more information, including the full programme and how to book, please see this page.