Category Archives: Linguistics

Not what it really means

The idea that there is something that a person, an observation, a text or a word ‘really means’ seems to me one of the assumptions most undermining of our understanding of the conditions around us. It is based on a widespread misunderstanding of meaning itself: that meaning somehow stands beyond our experience, and that we only have to tap into the ‘true’ meaning. To avoid beliefs about ‘true meaning’ is not to give up confidence in meaning, or to believe that any particular thing (let alone the world as a whole) is ‘meaningless’: rather, it is to recognise that it is we who experience meaning, in our bodies and their activity.

Here are some examples of the kinds of assumptions people often seem to make about ‘what things really mean’:

A person:

  • “What I really meant when I said that was that you look better than you did before. That was a compliment. There’s no need to take offence.”
  • “What David Cameron really means when he talks about a ‘big society’ is one where the state is so starved of resources that the poor depend on random acts of charity.”

An observation:

  • “What the low level of UK productivity really means is that there can be no long-term or secure economic recovery.”

A text:

  • “What the gospels tell us about eternal life really means an experience that goes beyond the ego.”

A word or term:

  • “What the Middle Way really means is the Buddha’s teaching of conditionality as an alternative to belief in the eternal self (sassatavada) or extinction of the self at death (ucchedavada).”

In none of these examples, of course, can we claim that these things are not part of what is meant. Perhaps they are even an important part. Very often, in practice, by appealing to ‘what is really meant’ people just want to offer an alternative to what someone else has assumed. However, the language of ‘really’ is very likely to involve an implicit absolutisation. Against one set of limiting assumptions, we offer the opposite, which tends to entrench us in further limiting assumptions.

At one extreme, this may amount to a seriously misleading straw man, where we give an account of someone else’s view that they would be unlikely to recognise themselves (e.g. the second example above, about David Cameron). At the other, an illuminating new interpretation may be offered that greatly adds to our useful understanding, and may help get beyond previous absolute assumptions that cause conflict (as in the first and fourth examples above), but this is still undermined by the new interpretation itself being absolutised. The third example (about UK productivity) and the final one (about the Middle Way) are somewhere in between: they offer interpretations that may be relevant and helpful in some circumstances, but may become limiting and unhelpful in others.

As an alternative, I want to suggest that we not only need to recognise the limitations of our interpretations, but also take responsibility for them. When we assume that our interpretation is the only possible one, we tend to see it as inevitable that we should think in this way: either because it allows us to make claims that are ‘true’ or ‘false’, or because we assume that ‘nature’ dictates how we should think. However, as long as we experience alternatives, we can also experience choice in our interpretations. If you choose to always interpret a particular politician’s statements in the worst possible light because it fits your ideological commitments to do so, then you are increasingly responsible for such a choice the more alternatives you become aware of. If you choose to only interpret the Middle Way in traditional Buddhist terms, you are responsible for deciding to do that to the extent that you have encountered alternatives. You cannot simply avoid that responsibility by appealing to Buddhist tradition as possessing the ‘true’ interpretation.

In my experience it is often easier to recognise this point in relation to another person than in relation to ourselves. We commonly experience being problematically misinterpreted by others, and then have to painfully clear it up in order to maintain our relationships with them: that’s the normal grist of social life. Recognising that there was not something that we ourselves ‘really meant’ is much harder, though. We can be taken by surprise by someone else’s reaction because the interpretation they made was not the one at the forefront of our minds, but that doesn’t prove that it wasn’t in the background somewhere. So often “I didn’t really mean it” is a shortcut for “My dominant feelings are friendly, even though there’s always some ambiguity in these things.” The value of giving expression to those ambiguities in humour depends, on the contrary, on there not being something we ‘really meant’ – rather a set of meanings within us that we can play with.

When it comes to texts and words, feelings can run even higher. For some reason, when it’s written down, it becomes far harder to recognise that the meaning we get from a text lies in us rather than in those apparently permanent words. That’s particularly the case with religious texts, which are deeply ambiguous. Yet relating positively to religious texts as sources of inspiration seems to me to depend very much on acknowledging our responsibility for interpretation, and on recognising that interpretation is part of the practical path of our lives rather than a prior condition for it. For example, interpreting the sayings and attitudes of Jesus in the gospels in terms that can be helpful rather than absolutising is for me a way of engaging with Christianity positively. If, on the contrary, I assumed that a certain interpretation was a prior condition of my living my life helpfully, I would be obliged to fix that interpretation from the beginning, and thus – however much the traditional view of the text may seem in theory to support responsibility – I would be undermining my responsibility for my life.

But if the interpretation of religious scriptures causes debate, it is as nothing to the outrage that I find can be generated when one attempts to expand the meaning of a word or a term and deliberately use it in non-standard ways. For many, the dictionary appears to be a much more sacred text than any other. But the right to stipulate – that is, to decide for oneself on the meaning of a word one is using – seems to me to be at the heart of human freedom. Other kinds of freedom may turn out not to make a lot of difference if the way we think about how to use our freedom is constantly limited by conformity to the tram-tracks of accustomed ways of using words. More than anything, I think it is the dualisms or false dilemmas implicit in the ways philosophers and others habitually use certain abstract words that require challenging: self and other, mind and body, theism and atheism, freewill and determinism, objective and subjective. To use words in new ways, whilst trying to make one’s usage as clear as possible, seems to me the only way to break such chains. Stipulation is never arbitrary, but always builds on or stretches existing usage in some way. It does not threaten meaning, even if at times it can cause misunderstanding: on the contrary, in the long term it aims to make our terms more meaningful by keeping them adequate to our experience.

Provisionality markers: keeping the lines of discussion open

Provisionality markers are the words we use when communicating that signal we are being provisional rather than absolute. Using them well is a crucial part of the practice of provisionality. On the recent Objectivity Training course, I was planning to include some discussion of provisionality markers in one session, and then failed to do so in much detail because I ran out of time. One of the participants remarked that it would have been useful to give it higher priority, given the great practical importance of provisionality markers. Reflecting further, I agree, so I’ve decided to write a blog about it to help make up for my omission.

You will probably need to know a bit about provisionality, and how it differs from absolutisation, to follow this. If you haven’t come across the term before, I’d suggest watching this video.

Provisionality markers consist in words or phrases that try to directly communicate that a statement being made is provisional, usually by linking it to appearance rather than reality, opinion rather than claimed fact, or probability rather than certainty. Here are some examples:

Appearance rather than reality

  • It appears…
  • It seems…
  • Apparently
  • Evidently

Opinion rather than claimed fact

  • In my opinion…
  • I’d suggest…
  • On the whole I think…
  • In my view…
  • I think…
  • One might conclude that…
  • I believe…
  • Arguably…

Probability rather than certainty

  • …may…
  • …might…
  • Probably
  • It’s likely that…

The use of these terms is, of course, no guarantee that a person is being provisional. What makes them provisional is their state of mind when they are making a judgement, and whether they are able to consider alternatives, not just whether they use these words. You’ll always need to judge provisionality from the context rather than just from the use of these words: they may be used formalistically, merely to try to avoid the possibility of offence, or because they are believed to be expected in the group. That judgement is obviously trickier on the internet, where the words are separated from information about tone and context.

In some cases the use of these terms may also indicate hesitancy or indecisiveness rather than provisionality. Indecisiveness should not be confused with provisionality: the difference is that provisionality involves taking as much into account as the conditions allow and then making a balanced judgement, whilst indecisiveness means failing to make a judgement when one is required. Indecisiveness may often be accompanied by a negative absolute belief, such as that I can’t be justified in making a decision, or that decisions can’t ever be justified.
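As an aside, the superficiality of the markers themselves can be made vivid with a toy illustration. The sketch below (in Python; the phrase list and the find_markers function are inventions for this example, not an established tool) merely scans a piece of text for the phrases listed above. It can flag candidate markers, but for exactly the reasons just given it cannot detect provisionality itself, which remains a judgement about a state of mind in context.

    # Toy sketch: flag candidate provisionality markers in a text.
    # The phrase list and function name are invented for this example.
    PROVISIONALITY_MARKERS = {
        "appearance": ["it appears", "it seems", "apparently", "evidently"],
        "opinion": ["in my opinion", "i'd suggest", "on the whole i think",
                    "in my view", "i think", "one might conclude that",
                    "i believe", "arguably"],
        "probability": ["may", "might", "probably", "it's likely that"],
    }

    def find_markers(text):
        """Return candidate markers found in text, grouped by category.

        Finding the words is not finding provisionality: this naive
        substring scan would even flag the 'may' in 'dismay', and it can
        say nothing about the speaker's state of mind.
        """
        lowered = text.lower()
        return {category: [m for m in markers if m in lowered]
                for category, markers in PROVISIONALITY_MARKERS.items()}

    print(find_markers("In my view, this may probably help."))
    # {'appearance': [], 'opinion': ['in my view'], 'probability': ['may', 'probably']}

Even this crude example shows how easily the surface features of provisionality can be separated from the real thing, which is why context has to do the deciding.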

Nevertheless, I think the use of these terms (and others like them) can be very helpful. There are, obviously, two aspects to the practice of using provisionality markers: using them ourselves and noting when others use them.

Using provisionality markers

Using them ourselves adds to the probability that others will recognise that we are trying to be provisional rather than absolute. That makes it more likely that a helpful discussion will ensue: if we disagree with each other, we are then much more likely to consider and try to understand and assess each other’s positions rather than becoming defensive and ending up in conflict. Provisionality markers should set up whatever I am claiming for a discussion in which we are both concerned with finding the most helpful outcome, rather than for an adversarial argument in which ‘winning’ or ‘losing’ is our main concern.

However, if provisionality markers are to have this effect, other people have to notice that we are using them. Thus it may help to give them extra emphasis in one way or another, particularly when communicating with someone who you think might not notice or might not have noticed the markers. We can do this through tone of voice, or in text by using asterisks or other emphasising features.

Which markers we choose to use may also have an effect on whether they are noticed. Longer phrases like “One might conclude that…” may seem clumsy and over-formal, but have the advantage of going to great lengths to draw attention to provisionality. “I believe…”, on the other hand, may sometimes be intended as provisional, but (particularly in a religious context) is easily interpreted as dogmatic nevertheless.

The provisionality of the marker also needs to be consistent with the rest of what you are saying: if the rest of your statement is absolute in form, no quantity of provisionality markers will rescue it. For example, it’s contradictory to say “It may be inevitable that you’ll find the right partner”: if it may be the case (rather than being a certainty), it can’t be inevitable. If you say “In my view conservatives are always greedy”, the over-certainty of your sweeping generalisation about conservatives undermines the apparent provisionality of “in my view” to such an extent that your provisionality marker is likely to be deservedly ignored. It’s thus impossible to separate the use of provisionality markers from the wider issue of avoiding absolutes, which in turn requires some understanding of the wide variety of forms that absolutes can take so that you can avoid them.

Noting others’ use of provisionality markers

Just as important, however, as using provisionality markers oneself, is noticing when other people use them, and giving due weight to their intention to be provisional. This is related to the problem of people taking offence – something that they are responsible for as well as you. Even if you’re not inclined to enter into discussion of a claim that someone has made, if you recognise that they’re trying to do so provisionally, you can at least pass on without taking offence, recognising that a statement you disagreed with was probably part of someone else’s progression from a relatively ignorant position to a wiser one.

The crucial part of noting others’ provisionality markers is usually interpretation. Was the provisionality marker intended? Was the whole statement intended as provisional, even despite the lack of provisionality markers? This is where I think the principle of charity is really helpful: the presumption, when in doubt, that a person had a more helpful rather than a less helpful intention. If you presume wrongly, after all, the outcome is much more likely to be helpful if you do so in a positive direction, and it’s likely to become clearer in subsequent discussion whether or not the person was being genuinely provisional. The video below shows a variety of ambiguous situations where the principle of charity needs to be exercised!

One of my own bugbears online is having provisionality markers ignored, often by people who feel strongly about a particular view and are eager to pigeonhole me in either the ‘for’ or ‘against’ camp in relation to that view, even when I want to suggest a third alternative. It’s easy to walk away from such heated debates (for example, about the EU in the current UK political climate) on the grounds that joining in may just lead to people taking offence and to unfruitful polarised discussion. At the same time, though, it is exactly such debates where provisionality and a critical perspective on over-simplifications are most needed. Making strong use of provisionality markers, recognising those of others, and also pointing them out robustly when others ignore them, may be important steps in making such discussions more productive.

Overall, then, the importance of provisionality markers, and of using them carefully, cannot be overestimated. In the end, however, their use cannot be separated from other Middle Way practices: provisionality, agnosticism, incrementality and integration.

Jung’s Red Book 4: Embodied symbol

There are several points in the Red Book where Jung discusses meaning and symbol, all of them suggesting to me a radical understanding of meaning in which Jung was implicitly ahead of his time. Jung recognises that meaning is experienced in our bodies, that it emerges over time, and that meaning needs to be separated from belief. These are insights that can now be more strongly supported using the findings of neuroscience (particularly the differing roles of the brain hemispheres) and the embodied meaning theory of George Lakoff and Mark Johnson: but Jung was a pioneer who (at least as I interpret his text) implicitly understood the basis of these developments.

Perhaps the most interesting episode in the Red Book where Jung engages with issues of meaning is his conversations with Ammonius, the Anchorite. Jung encounters Ammonius as a solitary in the desert, devoting himself to endlessly reading the scriptures and finding ever new meanings in them. But typically, Jung’s relationship with Ammonius changes in the course of his two main encounters with him. Jung starts off as a disciple respectfully approaching the master, but ends up being thought of as Satan and lunged at by Ammonius. Jung, like the rest of us, can learn from his inner figures, but can also challenge them and teach them, even suffer reactions from them, as he recognises their limitations.

As Jung writes of Ammonius:

He wanted to find what he needed in the outer. But you find manifold meaning only in yourself, not in things, since the manifoldness of meaning is not something that is given at the same time, but is a succession of meanings. The meanings that follow one another do not lie in things, but lie in you, who are subject to many changes, insofar as you take part in life. (p.262)

I take this to mean that although Ammonius sought multiple meanings in scripture, he still assumed that the meanings lay in the scripture. His solitary studies in the desert reinforced that, as he became more abstracted and ceased to relate to others. But the meanings he was looking for lay in himself. Jung especially stresses the temporal aspect of this recognition. “Meaning is not something that is given at the same time”, as it would be, not only to scripturally-obsessed believers, but also to analytic philosophers and naturalistic scientists, who take meaning to consist in a relationship between words and an actual or hypothetical reality, processed in a way that takes no account of the temporal aspects of our experience of meaning. But meaning takes time, and depth of understanding comes from the linking of experiences over time to ever-richer symbols. Meaning is experienced physically, in a gestalt way, through our right brain hemispheres, depending on gradually accrued experience coming together, not just abstractly and hypothetically through the left. The very metaphors we use to describe the process take time: things “sink in”, or we “get our heads around” something. By “taking part in life” we can enrich that process, as meaning depends on experience rather than only on abstraction.

At the final moment when the previously respectful Jung gets lunged at and called Satan, Ammonius switches from right hemisphere receptivity to left hemisphere suspicion. At one moment he is open to Jung’s suggestion that he might find more of the meaning he seeks by returning to human society, but the next he shuts down. Becoming confused, he blames his confusion on Jung. This can stand for any occasion when a self-sufficient absolute belief is challenged, and the problem created by the challenge is projected onto the messenger by a person who feels threatened and defensive.

Elsewhere, Jung discusses the richness of experienced meaning in relation to symbols. For him, the distinction between a sign and a symbol is important. The sign merely represents, but the symbol connects with the wider gestalt experience of meaning.

The symbol is the word that goes out of the mouth, that one does not simply speak, but that rises out of the depths of the self as a word of power and great need and places itself unexpectedly on the tongue. It is an astonishing and perhaps seemingly irrational word, but one recognises it as a symbol since it is alien to the conscious mind. If one accepts a symbol, it is as if a door opens leading into a new room whose existence one previously did not know. (p.392)

Perhaps the most startling symbols are those of the kind Jung encountered in his visions (such as Ammonius himself, or the Tree of Life discussed in the previous blog), or that we otherwise encounter unexpectedly in dreams. What Jung particularly conveys throughout the Red Book is the importance of exploring the meaning of such symbols in a provisional space of meaning, held apart from any concerns about what we believe.

However, it seems that any word (or visual image, or sound) can be a symbol that evokes a range of associations, connecting more deeply to our embodied experience through the emotions. Signs, by contrast, are supposed to merely denote something within a certain model of belief: think of numbers, for example. But of course, signs are merely dried-out symbols or dead metaphors that have been over-handled by the left hemisphere. They still depend on their connection to a set of embodied associations to mean anything at all for us, however clipped and controlled they may have become.

Obviously we need words, and we need both signs and symbols. The challenge is to use words for their limited contextual purposes and then (like the Buddha’s raft once it has crossed the river) let go of them. In this final passage that I will quote, Jung directly links the balanced use of words to the Middle Way, and recognises the practical reasons for turning words into beliefs, as long as those beliefs are also provisional.

The word is the guide, the middle way which easily oscillates like the needle on the scales. The word is the God that rises out of the waters each morning and proclaims the guiding law to the people. Outer laws and outer wisdom are eternally insufficient, since there is only one law and one wisdom, namely my daily law, my daily wisdom. The God renews himself each night. (p.393)


Previous blogs in this series:

Jung’s Red Book 1: The Jungian Middle Way

Jung’s Red Book 2: The God of experience

Jung’s Red Book 3: The Tree of Life


Picture: Mandala from Jung’s Red Book by Joanna Penn (CCA 2.0)


The Tower of Babel

It’s about time we had some more visual art on the site. Norma Smith did a series of blogs on paintings during the first year of the site’s existence, but since she stopped doing those we’ve had very little art. I’m going to try to post occasional art blogs about paintings I find meaningful in relation to the Middle Way, but others are welcome to contribute likewise when they feel inspired.

The picture I’m going to look at is Bruegel’s Tower of Babel.

Picture: Bruegel, The Tower of Babel

The painting depicts the building of the Tower of Babel, as described in Genesis 11. The story (for the benefit of anyone unfamiliar with it) is that in ancient times people all spoke one single language. They gathered in one place (Babel, i.e. Babylon), developed brick-making, and built a city. They set out to build a tower into the heavens. God saw them doing this, and complained “now they have started to do this, nothing will be beyond their reach.” So to stop them, he confused their language and dispersed them, so that they would not be able to work together in building the tower.

Bruegel imagines the tower under construction, under the command of a king, and using technology very much of his own time rather than of ancient Babylon. But for us the anachronism can be a good prompt to understand the painting symbolically, not as a depiction of a historical event. The Tower of Babel has often been interpreted as symbolic of pride: of humans trying to be like God, but not succeeding, and being punished for their hubris.

But we could go a bit further than this in interpreting the painting. It depicts a massive construction project: think of the Three Gorges Dam. The planners think they’ve got it all worked out, but fail to take into account the unknown unknowns. What are the conditions that really operate when you build that high? The painting points to a limitation in utilitarian-type thinking, which neglects the degree of human ignorance.

But the story also closely links the planning and the over-ambitious goal with language, and in doing that it can represent the close relationship between representational language and goal-orientation in the left hemisphere of the brain. The tendency of the left hemisphere, when it gets over-dominant and neglects the Middle Way, is to think its beliefs are completely accurate, and that its words correspond with reality. The ‘dispersal’ of the builders and the loss of a single language could be related to the recognition that we don’t communicate in that way: our language has no absolute meaning, but rather its meaning depends on what is experienced by each person. The linguistic assumptions in our big plans are thus dangerous and precarious ones. We think the words in our plans must correspond to things in the world, but they may not do so at all.

Bruegel represents the pomp and power of the organising king with his big plan in the bottom left-hand corner, with his servants prostrating themselves before him. But given what virtually everyone viewing the painting will know about the subsequent fate of his construction, this power seems empty. Like Donald Rumsfeld before the Iraq War, he probably throws all warnings about his tower into the waste paper basket, but things turn out rather differently from his obsessive projections.

Empowering words

The limits of our thinking are often the limits of our conventional language, but it does not have to be so. Words are ours to command, and the meanings of words are ours to change as the need arises. If we want our language to be empowering rather than habitual and limiting, we need to exercise our creativity with regard to the language we use.

When I first started researching and writing philosophy for my Ph.D., I remember the recognition of this point as one of the most liberating moments. “You’re always entitled to a stipulation,” my supervisor said, and his saying that probably marked the point when I started to realise just how creative philosophy could potentially be (even though much of this creativity is not often used by those trying to climb the academic career ladder). What that means is that when the need arises, we can make up and modify words and phrases – provided, of course, that we make it clear what we mean. This is exactly what great thinkers of the past have done: think of the Buddha’s Middle Way, Plato’s eide (‘Forms’), Jung’s archetypes, Heidegger’s Dasein and Sartre’s Existentialism. All of these terms, which have shaped people’s capacity to have new kinds of thoughts, are coinages, or at least radical modifications of previous terms. Rather than revering and petrifying these past coinages, we need to emulate these thinkers’ creativity. By having a wider variety of word meanings, we then have the tools to potentially develop new and more adequate beliefs.

But after my initial period of studying philosophy, I began to realise how relatively unusual this perspective was, and how conservative most people are when it comes to words and their usages. This unnecessary conservatism can take a variety of forms. Perhaps the most basic one is the conviction that a word “really means” what we have been used to it meaning, so that someone using it in a different way becomes offensive in some way. For example, I have been told that religion “really means” supernatural belief, and that Christianity “really means” the belief that Jesus is the Son of God. These are, indeed, meanings that can be adopted for these terms, but they are far from the only ones in a complex field of traditional usages. Those who insist that a word “really means” this or that seem to be avoiding taking responsibility for the fact that they are themselves choosing to interpret it in one way or another. This is a form of repression – of the failure to recognise alternatives as options.

The appeal to a dictionary is another form that this insistence on what a word “really means” can take. Now dictionaries are extremely useful things, but what they tell us is the established conventions of word meaning and usage, not the limits of how we, in our practical situations, may choose to use words. But all too often people use dictionary meanings as prescriptive devices to curtail thought heading in new directions. Anybody who uses a word differently from what it says in the dictionary is assumed to be just wrong.

Two further fallacies that may attend the appeal to dictionaries are the etymological fallacy and the original language fallacy. In the etymological fallacy, it is assumed that what a word “really means” is determined by its origins: so, for example, rationality must mean proportionality because it comes from the Latin “ratio”, which involves the idea of proportion. Of course, etymologies can help us appreciate some of the past associations of a word, but not much more than that. In the original language fallacy (beloved of Religious Studies scholars) it is assumed that the true or correct meaning of a term originally derived from another language must be what it meant in that language – and indeed that we must be able to find the truths of a particular religion more directly if we study them in the original language. This takes an extreme form in Islam, where nearly every Muslim boy learns to recite the Qur’an in Arabic by rote, and translations of the Qur’an are not even recognised as ‘true’ Qur’ans.

But if we ignore the constraining influences of these kinds of traditional attitudes, as I urge, and dare to use our linguistic creativity, there are also, of course, certain responsibilities that come with that freedom. Though what we do with language is our own business, when we are using it to communicate with others it obviously needs to be transparent to them. Creative use of language, as in Shakespeare, demands a little more of the reader or auditor, and may require glosses or explanations – but with the possibility of greater rewards in return for that effort at comprehension.

One key responsibility seems to me to be that only a helpful purpose should motivate coinages: which means, for example, avoiding merely exclusive language. In my own work I have heard various complaints about ‘jargon’, and ‘jargon’ is normally a term for unhelpful language used by a group to mark ‘in’ status and exclude outsiders. If terms like incrementality, justification, objectivity or archetype are not familiar to you in the sense I use them, then I can only assure you that the normal reason I use new or modified meanings is to try to capture helpful senses and get away from less helpful ones, not to exclude people by confusing them (even if that is sometimes an unfortunate side-effect). For example, the use of ‘objectivity’ to mean ‘God’s eye view’ seems to me comparatively unhelpful because none of us has, or could possibly have, any experience of a God’s eye view – and the use of the same word to mean the gaining of a wider and more adequate perspective, already also in use, is much more helpful. So I use the word in the latter sense and avoid the former, particularly as I see the former sense leading people in some very unhelpful habitual directions. Challenging this use is one way of challenging the basis of the assumption that God’s eye views are possible.

Another responsibility that seems to come with stipulative creativity is that of continuity. People need some sort of hook to hang their meanings on, and generally there is some connection, an association of some sort, between past meanings and new ones. That’s why gradually modifying meanings seems preferable to making up new words from scratch. Continuity can give people the opportunity to start relating to old words in new ways, but it also carries the danger that they will just relapse into the old ways – after all, these are reinforced by the context in which they have become usual, and the way that everyone else uses them. Thus a balance needs to be struck – on the one hand insisting on a new sense in order to open up new veins of thought, but on the other maintaining some continuity. That’s what I’ve generally tried to do in Middle Way Philosophy, but that doesn’t prevent the continuing danger of both types of reaction: either bafflement or complacency.

New uses of language often take the form either of distinctions (e.g. ‘joy’ distinguished from ‘happiness’ in Carl H’s recent comment on my previous blog) or of syntheses (e.g. objectivity has the same reference as integration). Either of these can be helpful as long as they’re not presented as the “real meaning” of the term, but rather just as a way of enriching the meanings available to us. The vice of analytic philosophers seems to be to make constant distinctions, with the accompanying assumption that these distinctions tell us final truths that we were previously missing (the ultimate sin for an analytic philosopher is ‘conflation’). But in my experience, people are more likely to use words in different contexts without realising their relationship – for example, scientists often seem to assume that their ‘objectivity’ is completely different from that of artists. Syntheses generally need a lot more attention, but of course they are not final either. There are also differences between the way scientists can be objective and the way artists can be, and the context will determine whether appreciation of these differences needs more attention than the similarities.

But I don’t just want to defend my own freedom to use language creatively. I would like to see other people doing it much more than I generally experience them as doing, and scuttling to their dictionaries and scholarly certainties much less. Perhaps the place where most verbal freedom is actually exercised is poetry (though even here there can be resistance to innovation). If you want a (relatively) safe place to experiment, I can highly recommend that you play with language in the context of poetry. That’s indeed where I started – I wanted to be a poet, in my early twenties, long before I even got interested in philosophy. A blank page can be daunting, but also liberating. You can put anything you like on that page, and it can mean whatever you want it to mean. To quote one great poet of the past: “Oh brave new world, that hath such people in it!”


Picture: Compass Rose from the Cantino Planisphere, replica by Alvesgaspar CCSA3.0