
Inside the third person

My own habit when I write even the more academic of my books is to freely use the first person: “I want to argue…”. Of course I’m still trying to put forward a case that has wider significance than just for me, but the use of the first person seems a vital aspect of honesty in argument – to show that it’s me arguing from my perspective, and I’m not pretending to be God. The ‘I’ is a provisionality marker. So it sometimes comes as a shock when I realise just how much insistence on the use of the third person there is in many corners of schools, colleges and universities – particularly in the sciences, both natural and social, and for some reason also in history. Sometimes that just means lots of impersonal constructions like “it is argued that…” or “this evidence shows that…”, but when helping someone with the proof-reading of their dissertation recently I found that they referred to themselves throughout as “the researcher”. This degree of third person pretence seems very jarring to me, and the reasons I reject it have a lot to do with the Middle Way view of objectivity I want to promote.

The reasons that many teachers and academics drill their students to write in the third person are all to do with “objectivity”. The idea is that when you write in the third person, you leave yourself out of it. You’re no longer dealing with the “subjective” experiences of your own life, but with general facts that can be supported with evidence. Now, as an experienced teacher, I’d agree with the intention behind this – students do need to learn how to justify their beliefs with reference to evidence or other reasons, and learning to do this is one of the benefits of education. But I’m also convinced that this is the wrong way of going about it. Whether or not you use the third person doesn’t make the slightest difference to whether or not you use evidence to support your claims and argue your case critically – but it does reinforce the apparently almost universal human obsession with the idea that you have ‘the facts’, or ‘the truth’ – an implicitly absolute status for your claims. If you really believe that you have ‘the facts’, then the evidence is just a convenient way of getting others to accept the same ‘facts’ that you believe in, not a source of any possible change of view. The ontological obsession hasn’t just emerged from nowhere, but is fuelled by centuries of post-enlightenment linguistic tradition.

Far better, I would argue, to use the first person to own what we say, in the sense of admitting that it’s us, these fallible flesh-and-blood creatures, who are saying it. Then what we say is objective because we have argued it as objectively as we can, not because we are implicitly pretending to view it from a God’s eye view. If we really recognise that objectivity is a matter of degree and depends on us and our judgements, then it is not enough to merely protest that we don’t really mean it when we use ‘factual’ language that habitually bears an absolute interpretation. If we are to bear in mind the limitations of our perspective in practice, we need to constantly remind ourselves of those limitations. The use of the first person offers such a reminder.

Objectivity depends not on ruling ourselves out of our thinking so as to arrive at pure ‘facts’, but rather on acknowledging our role in reaching our beliefs. Recognition of evidence of the conditions around us needs to be combined with a balancing recognition of the limitations with which we are able to interpret such evidence. Neither idealism nor realism, neither mind nor body, neither naturalism nor supernaturalism: but a recognition that none of these categories are ‘factual’ – rather they are absolutizing limitations on our thinking. If we are to take the Middle Way as the basis of objectivity, we need to stop falsely trying to rule ourselves out of the language with which we justify our beliefs.

I’ve spent enough time in schools and universities to know that academic habits are not easily reformed, and that we will probably be stuck with these third person insistences and their cultural effects for some time to come. No teacher will want to disadvantage their students in an exam by teaching them to use the first person if they know that the students will lose marks if they do so. But please let’s not use or spread this unhelpful custom needlessly, and let’s take every opportunity to challenge it. To use the first person to refer to our beliefs is to connect them to our bodies and their meanings and perspectives – which is one of the prime things we need to be doing to challenge the deluded, absolutised and disembodied interpretations of the world that are still far too common.

 

A distillation in four points

I’m always looking for new ways to get across the key points of Middle Way Philosophy in a compact list that can be readily referred to. The Five Principles on which our summer retreat this year will be based (scepticism, provisionality, incrementality, agnosticism, integration) are one way of doing this, but these five principles focus on qualities to cultivate or use in judgement, rather than on the distinctive world-view they emerge from. So here’s a new attempt to distil that world-view into four very brief slogans:

  1. Meaning is body-memory
  2. Belief is assumption
  3. Justification needs provisionality
  4. Truth is archetypal

The explanation of each that follows will no doubt be rather compressed. However, the main idea of this blog is to encourage you to see these points as interdependent (each building on the previous ones), and to at least glimpse how they challenge much conventional thinking and offer new ways forward for humans stuck in that thinking. For more details on this whole way of thinking, please see first the introductory videos, then the Middle Way Philosophy books.

1. Meaning is body-memory

The embodied view of meaning tells us that meaning is an accretion of memory. But by ‘memory’ here I don’t mean anything on the analogy of data-storage which people too often use to try to understand memory. Rather, whenever we encounter a new experience, we create new synaptic links connected to our whole body’s active engagement in that experience. That experience may involve associating words or symbols with it, and when we are prompted by similar words, symbols, or other associated experiences in future, we mildly re-run the synaptic connections associated with it. We thus lay down layer after unconscious layer of memories that then provide the basis of meaning-association in future, and even quite complex or abstract language draws on this embodied experience to be meaningful, via the medium of metaphorical constructions. Think about the most abstract language – a scientific paper, say, or a company board meeting. The meaning of all this language, however abstract, still depends on your body. When you have no body memories to connect with it, you cease to understand what is being said.

2. Belief is assumption

The dominant tradition in philosophy and science, which then influences the way people usually talk about their beliefs, is to think of them as explicit, but explicit beliefs are the tip of a very large unconscious iceberg. Most of our beliefs are a matter of what we assume, rather than what we have explicitly said. If you said you were hungry and then started looking at the sandwiches in a café, it would not be unfair to conclude that you believed that a sandwich might address your hunger, even though you didn’t explicitly say such a thing. Yet, strangely enough, most of the established thinking about how to live our lives just offers explicit reasons for believing one thing rather than another, rather than trying to work with what we actually assume. It is not reasoning (which always proceeds from assumptions) that will help us make our beliefs more adequate to the situation, but rather greater awareness of the assumptions with which we start to reason.

But we can only believe what we first find meaningful in our bodies, so the second point depends on the first.

3. Justification needs provisionality

How do we tell how well a belief is justified? That’s a question at the core of all the judgements we make in everyday life, in ethics, in science, in politics or elsewhere. The traditional answers all involve explicit reasons: for example, that a certain action is wrong because it says so in the Bible, or a certain scientific theory is correct because it can be supported by evidence. But we are constantly subject to confirmation bias, all of us living in our own little echo-chambers in which we seek out what we want to hear. The old ways of justifying our beliefs are not enough by themselves. We need to take into account the mental state in which the judgement is made too, to incorporate psychology as a basic condition in our reasons for adopting one belief rather than another. If we can hold a belief provisionally, so that we can consider possible alternatives, we are better justified than if we do not.

The mental state in which a belief is held is inextricable from the set of assumptions that support that belief. We can hold a belief provisionally if we find alternatives sufficiently meaningful (using our imagination). In the traditional ways of thinking dominant in philosophy and science, this way of justifying our beliefs cannot be taken seriously, because meaning is assumed to depend on belief and belief to depend on justification. In that way of thinking, reasoning comes first rather than the mental states in which the reasoning takes place, but this mistakes the tip for the whole iceberg. The third point thus depends on the first two.

4. Truth is archetypal

People are typically obsessed with ‘truth’, ‘the facts’, God, nature, ontology, ultimate explanations. Surely these things are important? Well, only in the sense that they are meaningful to us, not in the sense that we need to build up justifications of our beliefs by depending on them. If we think of ‘Paris is the capital of France’ as true and ‘Paris is the capital of Mongolia’ as false, that is usually a kind of shortcut for the thought that the first is much better justified than the second, and that we assume it in practice. But, according to the third point above (justification needs provisionality), to be justified in believing that ‘Paris is the capital of France’ I need to believe it provisionally, that is to be able to consider alternatives. Whether I actually do this or not, claiming that it is true or false adds nothing to that justification apart from cutting off the provisionality, making it the final story and closing off any further thought or discussion on the subject. Claiming that it is true or false thus actually seems to undermine one’s justification.

Nevertheless, we can respect the motive of those who seek to establish the truth (which they will do best by considering the justification of a belief against alternatives – by doubting the truth of their claims rather than asserting it). Truth can thus still be a kind of symbolic inspiration or archetype (see this blog post for examples), and not claiming to possess archetypal truth is a mark of fully respecting it. Just as we need to avoid projecting an archetype on someone else by thinking that they are God, or the perfect woman, or whatever (even though we may also appreciate ideal artistic depictions of God or of the feminine), we need to recognise truth as a symbol that we find meaningful in relation to our body-memory, without projecting it onto a particular set of words that we take to be ‘true’. Instead, whenever there is a discussion about whether we should hold one belief rather than another (in science, politics, ethics etc.) we can focus on justification.

We could not make sense of truth being archetypal if we did not separate meaning (point 1) from belief (point 2), recognising that meaning precedes belief rather than the other way round, and that we can find truth meaningful without believing that we have it. It’s also precisely because of the need to maintain provisionality about our beliefs (point 3) that we cannot justify claims of truth.

This view of truth can potentially transform our view of science, ethics and religion: whether we are talking about scientific facts, the good, or God, we can respect the motivations of those who value these things without accepting that any of them are actually possessed in a particular verbal formula.

The four points and the Middle Way

The Middle Way means a practice of seeking justification for our beliefs in provisionality rather than in consistency or evidence alone. To stay in this provisional zone, we avoid the absolutes of claiming truth on the one hand or falsity on the other. To do this in practice requires our mental states to be provisional, which is just as much a matter of our emotions and body as of our reasoning. It’s not a question of aiming to be in some wonderfully enlightened mental state, but simply of judging better every time by being less confined by our personal echo-chambers than we might otherwise be.

In connection with the founding story of the Buddha from which the term ‘Middle Way’ derives, we need to focus not on the final state that the Buddha supposedly achieved by using the Middle Way, but on how his judgements at each stage reflected provisionality and enabled him to move beyond the rigid assumptions of those around him. First he needed to leave the palace with its rigid ‘truths’, then also move beyond the religious world of spiritual teachers and ascetics with their ‘truths’ (which also declared the world ‘false’). If we unpack what is required for the Buddha to go through this process at each stage, it involves maintaining a sense of the meaning of alternatives (point 1), developing a greater awareness of the limited assumptions of those around him (not just their explicit views – point 2), and recognising their lack of justification (point 3). If the Buddha had at any point discovered the ‘truth’, this would have halted his progress by ending the story, but instead the story continues – indefinitely.

Beyond ‘post-truth’ and ‘alternative facts’

After the recent inauguration of Donald Trump, a row broke out about the number of people who actually attended the inauguration, in which Trump’s spokesperson was derided for using the phrase ‘alternative facts’. The evidence, as illustrated by the photos, seems pretty clearly not in support of the ‘alternative facts’ offered by the Trump administration, but I’m not interested in discussing the details of that (which have been much debated) here – rather I’m interested in the outrage attached to the idea of ‘alternative facts’. After all, the thinking seems to go, we have the facts here, so how can there possibly be alternative facts? A similar line of thinking seems to be behind the burgeoning phrase ‘post-truth’ – as if we used to know the truth, but now people aren’t accepting that truth any more. My view is that people who pay attention to evidence and try to develop personal objectivity are not doing their cause any favours by talking in terms of ‘post-truth’, but rather reinforcing the kind of confused thinking out of which the Trump phenomenon has been born.

Let’s first reflect on some basic features of the human condition that we can all probably find in our own experience. We all have beliefs about the world, but those beliefs have also often turned out to be wrong, even if we held them along with lots of other people. This applies not just to ‘values’ such as the past belief that slavery is OK, but also to past ‘facts’ such as that the world is flat. As an individual, I can confess that I used to believe a number of ‘facts’ that I now no longer accept, such as that sisters are always older than you, that I would never be able to find a girlfriend, or that it’s safe to drive at 60mph over a moor with loose livestock on it. There are also the limitations of our senses, perspective, and most of all our language, which depends for its meaning on our bodies and metaphorical constructions, not some kind of correspondence with potential truth (for more on these sceptical themes, see this article). On the whole then, we must admit that we have no access to facts. It’s not just Trump who is deluded if he claims to know ‘the facts’ or ‘the truth’, but you and me too. Post-truth politics, along with post-truth philosophy and post-truth everything else, thus seems to have begun with homo sapiens as a whole, and the emergence of our capacity to hold and express any kind of belief that we might assume to be true. The first post-truth politician was the Serpent tempting Eve.

If we finger populist politicians for a fault that we can all easily be seen to share, then it is hardly surprising if we’re seen as hypocritical. Though I don’t see a lot of the often unpleasant social media output by Trump supporters, one thing that has struck me about a lot of what I have seen is the predominance of accusations of ‘liberal hypocrisy’. If liberals claim to have ‘the facts’ and respond to Trump’s excesses with blustering counter-assertions in a similar style, that accusation doesn’t seem to me wholly unjust. I think there are far more effective ways of responding that identify much more clearly what Trump and his henchmen are doing wrong and which avoid this ‘post-truth’ shortcut. It’s not whether they have or don’t have the facts, or don’t share your view of them, that matters here: but that they don’t offer those facts with provisionality, are not open to correction, are not interested in examining evidence or justification, are not even concerned about gross inconsistencies in their beliefs, and are not at all interested in improving on their beliefs or making them more adequate. In terms of values, they do not respect the truth or the facts. In relation to all these points, I agree very much with Trump’s critics that the new administration’s attitudes are extremely alarming.

If you genuinely respect the truth, you don’t claim to have it, just as if you respect the power of electricity, you don’t stick your finger in a live socket. Truth is a meaningful concept to us, because we use it in all sorts of ways in all sorts of practical contexts – for example, in science, in law, and in everyday conversation. We have set up an abstract model in which ‘truth’ stands for an idealisation – a fabled position in which we could really know what is going on, not just more than we did before, but totally, as in a God’s eye view. But nevertheless that abstract idealisation is just an extrapolation of our much more limited experience: the experience of recognising things we didn’t ‘know’ before, and of confidently interacting with the world around us in a consistent way. Most of the time, our confidence about how the world works turns out to be justified, so we apply this idealised concept of ‘truth’ to that experience, in the process completely failing to take into account the limitations of our view. So it’s very important that ‘truth’ is recognised as meaningful, and not relativised in its meaning, but at the same time recognised as beyond our reach.

Let’s take an example. A child brought up in a household with a friendly dog, confidently playing with it and interacting with it, thinks it’s ‘true’ that dogs are friendly. When he encounters the first unfriendly dog, though, his whole model of the world is briefly shaken: the ‘truth’ is shattered. There may be denial – the unfriendly dog doesn’t really count as a dog. There will certainly be initial stress, followed by suspicion and unease in a new uncertain relationship with dogs. But we don’t all have to be as fragile as that child, if we stop thinking in terms of ‘truth’ but rather cultivate the awareness that our beliefs are only justified for the moment on the basis of the evidence so far. If we can be aware of other possibilities to begin with, we will be less overwhelmed by the nasty surprise when it happens.

In the realm of politics, that means concentrating on the qualities that actually matter in making our judgements adequate, the ones that involve a larger respect for truth, rather than railing about others offering ‘alternative facts’: namely the personal qualities of provisionality, awareness, imagination and observation. It means setting the example to those who don’t understand this, and teaching children not to ‘tell the truth’, but rather to reflect upon and justify what they say. It means holding Trump to account, not for ‘lying’, but for being grossly inconsistent and failing to offer evidence or respond to criticism.

It may well be that most of those who complain about lack of truth, when pressed, would actually agree that what they value in practice are these ways of justifying and adjusting our beliefs. But while they continue to use ‘truth’ and ‘post-truth’ as a lazy shortcut, I think they will play straight into the hands of people like Trump. All that such populists then have to do when challenged is to turn back to their unsophisticated supporters, deny the criticism, and offer their own ‘alternative facts’. The discussion will then stay at the completely unfruitful level of mere claim and counter-claim. As our beliefs about the world are wrapped up with our goals, our ontological obsession with what is or is not ‘really’ the case is our biggest weakness, the absolutizing vulnerable spot in our cognitive abilities. It’s only by putting ourselves in the messy middle zone of neither accepting nor denying these ‘realities’ that we can make progress, whether in politics or any other area of human dispute.

 

Picture: Comparison between inauguration crowd sizes for Donald Trump in 2017 (left) and Barack Obama in 2009 (right). Images copyright to 58th Presidential Inaugural Committee (left) and Jewel Samad/AFP/Getty Images (right), and are used at low resolution under fair use criteria.

Alexander von Humboldt, synthesist

I’ve recently been reading a very interesting book, ‘The Invention of Nature: The Adventures of Alexander von Humboldt, The Lost Hero of Science’ by Andrea Wulf, which has made me much more aware of the profound scientific legacy of Humboldt (a figure who seems to be largely forgotten in Anglophone countries). Humboldt (1769-1859) was a towering figure of science, not because he created a massive new theory like Newton or Einstein, but because of the way he linked different spheres of discussion together to recognise new conditions. He can be a new source of inspiration today precisely because science, and indeed the academic world in general, suffers so much from over-specialisation and the narrowing of assumptions that this brings with it.

Humboldt was born into an aristocratic family in Prussia at the time of Prussia’s increasing ascendancy in Germany, but before its unification. Influenced by Goethe and Kant, he treated human understanding as an interconnected whole, developing a concept of nature that recognised all these interconnections at a time when few of them were understood. He was the first to recognise the relationship between animals, plants, geology and climate across the world, and the first to warn of the destructive effects of human activity on the environment, including climate change. He spoke and read four languages fluently and was equally at home in Paris, London, Washington, or Bogota as he was in Berlin. He travelled to South America and Russia, combining meticulous observation, intrepid exploration, and broad awareness of the relationships between all the phenomena he observed. Back in Paris and Berlin, he wrote books that interwove geology, astronomy, botany, zoology, human geography and politics, describing his experiences with sensitivity and power and illustrating them visually, as well as providing all the data. He spoke to the public and became massively popular, as well as being an inspiration for such varied figures as Thomas Jefferson, Simon Bolivar, Charles Darwin, Henry David Thoreau, George Perkins Marsh, Ernst Haeckel and John Muir. The great strength of Wulf’s book is that she unites an engrossing account of Humboldt’s own life and achievements with one of his influence on all of these figures.

Humboldt’s way of doing science united analysis with synthesis in a way that seems to be largely lost today. His general conclusions could be backed up with detailed evidence from observation that was often first-hand, but at the same time he could pan out and make links between diverse fields of study. For example, he noted the effects of the Spanish colonial system on the environment of South America, and the impoverishment of plant and animal life it was already creating in some areas, even whilst abundant life thrived in those less affected by human interference. He also linked human-created deforestation to a feedback loop of climate change, as the lack of trees desiccated a local environment. To make links in this way, across the boundaries of politics, botany, meteorology and geography, is to synthesise, creating new understanding, rather than just to analyse causes and justifications within an accepted field of discourse.

Some of the thinkers I most admire today are synthesisers who have likewise linked together fields that are often falsely separated: Carl Jung, Iain McGilchrist, or Nassim Nicholas Taleb, for instance. But synthetic thinkers today have the odds massively stacked against them, and are typically forced by the academic system to plough a narrow furrow for many years before they can be allowed to synthesise and be taken seriously. ‘The academic system’ here means peer-reviewed journals that take the limited assumptions of a particular specialisation as their sole basis, and expect highly-evidenced work covering a small area that can be fitted into an existing accepted framework of knowledge. Anything in the least synthetic is almost automatically rejected by such journals, and even if they are supposedly inter-disciplinary, they are often highly limited in the starting assumptions they will accept. No academic career is possible without the support of these journals, and thus the triumph of analysis over synthesis maintains a stranglehold over the academic world.

But if you read about Humboldt’s scientific world of the early nineteenth century, you find quite a different world. Here a scientist was still largely thought of as an individual thinker and observer rather than someone who had to conform to a massive socially-regulated project. Here synthetic abilities could still be recognised, appreciated and cultivated alongside those of analysis and observation. The scientists were much fewer in number and had much more limited facilities at their disposal, but they still made great breakthroughs, because they were free to reflect on their experience from a variety of perspectives and thus come up with new theories. Humboldt’s recognition of what we now call ecological relationships was a discovery that could hardly be of greater practical importance to us today – probably much greater importance than the relative breakthroughs made today by specialised teams who persist in ploughing a well-ploughed furrow a little bit further.

Of course, it would be easy to idealise that earlier scientific world, and the current one has many other advantages. What seems important to me is not to in any way belittle the efforts of scientists in the current specialised system, but to encourage awareness of the overall limitations of that system and urge it to incorporate more synthesis. It is like a tree that has grown strongly in one direction when the light was available there, but now lacks the flexibility to grow in a new direction when the source of light moves. The scientific system depends on the education system, which gives far too little grounding in the philosophical, psychological and emotional awareness that would help people more readily see the limitations of a specialised position. In turn philosophy itself needs root and branch reform, because it has been warped in mistaken imitation of over-specialised science, rather than fulfilling its practical function of providing a general critical consideration of our widest beliefs and assumptions. Without a recognition of the perspective from which synthesis is so important, we are unlikely to be motivated to change our institutions to encourage it. Looking back at the strengths of what was done in the past can at least provide a vein of inspiration for that, even if it doesn’t tell us exactly how to act today.

 

Picture: Humboldt and Bonpland by Mount Chimborazo by Friedrich Georg Weitsch (public domain)

Arguing in the stoa

Several times recently I’ve come across friends mentioning neo-Stoicism as an increasingly popular movement. This is perhaps an aspect of a wider revival of interest in the Hellenistic philosophies of the later Greek and Roman times (Stoicism, Scepticism and Epicureanism) as practical ways of life, perhaps developing out of frustration with the dogmatic limitations of analytic philosophy on the one hand and established religions on the other. Coincidentally, too, I’ve recently been teaching about these Hellenistic philosophies in an adult education class, and finding they raise a lot of interest in the students. This revival of interest may well have a lot to do with a search for the Middle Way, integrating experience and avoiding both positive and negative dogmas. But there are also limitations in the traditions of the Hellenistic philosophies themselves that carry the danger of them becoming new dogmas for the people that adopt them.

Perhaps I’ll write some other blogs in the future about Scepticism/ Pyrrhonism and Epicureanism, but here I want to focus on Stoicism, which seems to be the most popular of the three at present. Stoicism is a long and influential tradition, beginning with Zeno of Citium (c. 334-262 BCE), popular amongst educated Romans, and marked by such famous figures as Chrysippus, Cicero, Seneca, Epictetus and the Emperor Marcus Aurelius. The term ‘Stoicism’ came from the stoa (porch or colonnade) where the earliest Stoics used to hold their discussions. It also had a major influence on the development of early Christianity.

What might be attracting people to Stoicism today? I suspect that the integration of philosophical theory with moral and spiritual practice is a key element. Established modern thinking has suffered so much from the unnecessary disjunction of facts and values, and the accompanying impoverishment of ethics, that people would have good reasons for yearning for a philosophical era before the breach occurred. But meaningful ethics is also an activity needing practical support rather than just instruction.

Writers such as Pierre Hadot and Martha Nussbaum have done a great deal to raise awareness of the spiritual and therapeutic practices in Stoicism, which have a great deal in common with Buddhist practice. For someone with a background in Buddhist meditation, the Stoic practice of prosoche sounds very much like mindfulness, and oikeiosis very much like loving-kindness meditation. There is also an attractive meditation exercise called the ‘flight of the soul’ or ‘view from above’, in which you put your life into perspective by imagining a flight into the sky and looking down on the circumstances of your current life. For more on Stoic practices, see this excellent booklet by a number of collaborating academics.

It would be quite possible to make use of such practices without necessarily accepting Stoic philosophy, and indeed, I would argue that with Stoicism (as with Christianity, Islam or any other tradition) one is responsible for one’s own interpretation of it, and can always make use of the resources that it offers whilst taking care to avoid its absolutisations. However, I think it is important to be aware that Stoicism is also a dogmatic philosophy. There is always a danger when people adopt such material from another context that they will gloss over the dogmatic elements, which may seem to have a much more limited practical impact than the more obvious dogmas today coming from evangelical pulpits or the propaganda of groups like Daesh/ Isis. Even if we take such dogmas on board only because they seem to be part of the deal in a practically useful package, there is still a danger that they can be used to support unhelpful absolute judgements further down the line after the approach has become more established and enculturated.

The central dogma of Stoic philosophy is the metaphysical belief in the logos or rational ordering of nature. The universe is believed to have a purpose, and human beings to be too easily distracted from that purpose. Nevertheless, Stoic practice is believed to help us develop the orthos logos, or natural order within each of us as individuals, which then fits into the cosmic order. To do this we can use integrative practices and become aware of our biases. There seems to me an obvious dogmatic leap here: just because we overcome our biases and become more objective in our judgement, it does not follow that we participate in a natural order. Given that the appeal to a natural order is so frequently the basis of biased assumptions and fallacious reasoning, a dangerous contradiction thus lies at the heart of Stoic thinking.

The links between Stoicism and early Christianity should also be evident here. Christians have often taken the Stoic logos and merely installed God as the overseer of this natural order. But whether or not there is a personality at the head of it, belief in an absolute order of nature raises the same problems, foremost of which is the problem of evil. If the order of the universe is ultimately good, why do we encounter so much evil in it? The same theological arguments found over evil in Christianity are also found in the Stoic tradition, and they seem to me to arise not because there aren’t hidden benefits to what we take to be ‘evil’ that we would do well to recognise, but because the goodness of nature (with or without God) is absolutised. Whatever explanations for evil and suffering we come up with, they are never likely to fully vindicate the extent of it that we encounter. But we have no need to adopt this belief in absolute cosmic good in the first place when it tends to lead us into defending and vindicating evil.

Together with the metaphysics of logos in Stoicism, there is also an epistemological dogma: the phantasia kataleptike. This is the belief that, despite sceptical arguments to the contrary, it is possible to gain certainty in our beliefs about the cosmos, because our language is capable of representing the truth as long as it is fully formed into propositions, justified by experience in normal reliable circumstances and known by a wise man. This is an approach that closely parallels that of scientific naturalists today, who tend to dismiss sceptical arguments that cast doubt on claims to knowledge by assuming the reliability of normal observation and demanding positive reasons to justify doubt. The trouble is, of course, that we have no way of knowing whether or not our observations take place in ‘normal’ circumstances, and all the evidence about the way we process the meaning of language suggests that it does not simply form truth-correspondent propositions that can be reliably verified. Without a wider sceptical perspective we are liable to get stuck in the most basic cognitive bias of them all – confirmation bias. The Stoics may well believe the universe is ordered because they interpret the world they observe in those terms, which then reinforces their belief that the universe is ordered.

I find that when I raise issues like this about any tradition of thinking, they are readily dismissed as philosophers’ quibbles. But, particularly when a tradition has been revived or reinterpreted relatively recently, it seems a great shame if people nevertheless adopt dogmas from the past rather than taking the opportunity to correct past mistakes. To do so doesn’t necessarily mean abandoning a tradition one has found fruitful, together with its potentially helpful cultural, practical and social elements, but it may mean going through a rigorous critical process to distinguish what caused things to go wrong in the past and may do so again. Most basically, I would warn that any absolutisation can be used as a shortcut to justifying the use of power. In Stoicism, for example, one can readily imagine someone claiming to be a ‘wise man’ with claimed true representations of the cosmic logos (functionally indistinguishable from religious revelations) starting a neo-Stoic cult. The best way to stop that ever happening is to ensure that absolute beliefs about the natural order are no longer part of Stoicism.

But in the meantime, I wouldn’t want to discourage anyone from engaging with the rich resources of Stoic practice if they find it helpful to do so, provided they do so also with critical discrimination. Indeed, the Hellenistic philosophies in general offer a great field of cultural and philosophical resources that until recently was largely forgotten and misunderstood by Western philosophers. I’d particularly recommend Pierre Hadot’s Philosophy as a Way of Life and Martha Nussbaum’s Therapy of Desire to anyone wanting to engage with the Hellenistic philosophies as practice.