This is the script to a video on my YouTube channel. If you would prefer to watch it, you can do so here:
“The best lack all conviction, while the worst
Are full of passionate intensity.” - WB Yeats
How sure are you about your beliefs? By the end of this video, I hope it is a little less than at the beginning.
The French philosopher Rene Descartes once sat down and aimed to question every one of his beliefs. He wanted to see what he could be certain of, and where his knowledge was open to serious doubt. He recommended that each person should do this at least once, as a way to check their own reasoning, and realise how little they truly knew.
And today we will follow in his footsteps. We will go on a journey to explore the ways our thinking can go horribly wrong. So that we can fight against our own intellectual arrogance, and become a bit more like Socrates, proclaimed the wisest man in Greece, because he knew how little he knew.
So let’s start at the beginning, with some of the most irritating yet fascinating thinkers of the ancient world.
I am Joe Folley, this is Unsolicited Advice.
Philosophical Scepticism
It could be argued that the history of scepticism began when the first toddler asked “why?” until their exhausted parents had to reply “just because”. But as a philosophical tradition, it emerged in ancient Greece, with the philosopher Pyrrho of Elis and some others. These thinkers aimed to show that there are foundational problems with how we form beliefs, which mean we cannot really be certain of anything, or, in some cases, cannot know anything at all. However, most of the surviving work in ancient scepticism comes from Sextus Empiricus and his Outlines of Pyrrhonism.
The ancient sceptics were quite odd by modern standards. They were not particularly interested in coming to conclusions, but in suspending judgement about as many things as possible. Rather than find “truth”, their main aim was to sow doubt, which makes them a good historical example for our current exercise.
Empiricus records various ways sceptics questioned the “knowledge” of those around them, but I want to focus on his “five modes”, which he says he gathered from more recent sceptical philosophers, and which probably come from Agrippa, an earlier sceptic. While Empiricus wanted to use these to undermine all knowledge, for today we can just think of them as ways to question our own beliefs. We’ll just go over three of them now.
Empiricus’ first mode is the mode of dispute. This sort of does what it says on the tin. It involves challenging your own position by reflecting on the strongest arguments that could be put forward against it, and on all the people who disagree with this particular conclusion. For example, I do not believe in God, but a good way to start challenging this position is to reflect on the number of people I respect who do believe in God, sometimes very strongly. This does not directly argue against my position, as that would be a mere appeal to authority, but it encourages me to at least consider that I might have missed something. Alternatively, I can ask what arguments seem to have convinced these people that God exists, and then steelman them as much as possible. I quite often do this on the channel, where I try to explain why these arguments can be so convincing, but also why I am not convinced by them. Not yet, at least. This has had a real impact on my beliefs. It made me realise where I might have misunderstood some key arguments for God, like Aquinas’ Five Ways, and where some of my own responses to these arguments were weaker than I had initially thought.
He then moves on to the mode of infinite regress. This is where we continually ask what grounds our beliefs, until we find ones that we cannot justify, or get to our foundational assumptions. Empiricus thinks the only other option is that our justifications form an infinite regress, and he finds this unacceptable. A good modern example of foundational assumptions comes from Ludwig Wittgenstein, and his theory of hinge propositions. A hinge proposition is a belief that forms a base-level precondition for all our other beliefs. It cannot really be justified, since for one belief to justify another, the justifying belief must be stronger than the justified one. But it also cannot be refuted, since for one belief to refute another, the refuting belief also has to be stronger than the refuted one. One example of a hinge proposition might be the law of non-contradiction. That is, that P and not-P cannot both be true at the same time. Funnily enough, Aristotle gave something close to a hinge-like justification for this law, saying that it is impossible to have coherent thoughts without it, rather than demonstrating that it is true. For our sceptical quest today, we don’t necessarily need to go this far. Rather, we can borrow the structure of Empiricus’ mode. If we start with our conclusion and work backwards, where do we find a premise we are not sure of? Where do we find our justifications wanting, and what are the assumptions we simply have to make for our belief systems to function? What would happen if we changed those assumptions?
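For readers who like their logic in symbols, the law of non-contradiction mentioned above is usually written in standard propositional notation as follows; this is just the textbook formulation, nothing specific to Empiricus or Wittgenstein.

```latex
% The law of non-contradiction: it is never the case that both P and not-P hold.
\neg (P \land \neg P)
```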
Thirdly, there is the mode of relativity, which is questioning how the same object or claim can be perceived differently in different circumstances according to different people. Empiricus outlines ten ways of doing this, and we won’t go through them all. But a good everyday example of an argument like this is “there’s no accounting for taste”. This phrase is often used to mean “if people have different tastes, this cannot be resolved because there is no one truth about the situation. It genuinely is a matter of opinion”. One of my favourite snacks is ketchup on toast. Most people find that disgusting, and why wouldn’t they? It sounds awful. But there is no absolute or even objective standard for something being tasty, so when we say “this food is good”, we are often using it as shorthand for “I really like this food”.
These are all ways to throw doubt onto our pre-existing beliefs, by questioning the process by which they are justified. The modes of dispute and relativity ask whether there are other, equally justified ways of viewing a situation, while the mode of infinite regress asks us to identify our own base assumptions when we make an argument.
This is not the only kind of philosophical scepticism though. Famously Rene Descartes conceived the Evil Demon hypothesis. This aimed to undermine certainty in almost all of our beliefs, by imagining that an evil demon had control of our perceptions and could deceive our thought processes. Descartes eventually thought he overcame this with his own complex philosophical system, which started from the idea that at the very least I cannot doubt that I exist, since in order to doubt anything, there must be a doubter, and I call that doubter “I”. Of course, most people are not troubled by this kind of scepticism, since we don’t tend to aim for complete certainty when we form our beliefs. But again I think the exercise is illustrative. For example, when I am doing research for these videos, I have a few academic publishers I regularly go to, like Routledge, Cambridge University Press, Oxford University Press, and so on. I also have a number of journals I will search through, to find articles on whatever subject I am writing on. I am relying on them in much the same way Descartes relies on his senses to make claims about the external world, and there is much more reason to criticise the state of academic publishing than there is to doubt all of our senses. And this holds for almost all secondary sources of information. Very few of us come to our opinions just using primary data from our senses and research we have personally conducted. We rely on information ecosystems that are fallible. Again, I am not suggesting that this means we should throw out all of our beliefs. Far from it! It is just to encourage us to reflect on where the weak points in our belief-formation might be, and perhaps add a bit of sceptical spirit into our worldview.
But of course, this is not the only reason to doubt our beliefs. We can also draw from modern psychology to talk about the human tendency to employ flawed reasoning chains and unreliable heuristics to come to our conclusions.
Thinking is Bloody Hard
When Daniel Kahneman and Amos Tversky conducted their landmark series of experiments into the flaws of human reasoning, one of the conclusions they reached is well appreciated by almost every first-year logic student in the world: thinking strictly rationally is difficult, time-consuming, and often inefficient. It can feel like a Sisyphean punishment, going through each of your premises to check whether they are supported, and then checking each inference chain to ensure that you haven’t made a mistake, or used a rule incorrectly. It gets easier over time, sure, but it remains a cognitively arduous process. If we had to go through this every time we came to a conclusion about something, then nascent Homo sapiens would have sat around debating probabilistic inferences regarding big cats before being quickly eaten by the tiger hiding in the bushes. Sometimes it is better for our survival to have a quick, rather fallible process than a slow, more accurate one. Often making a decision, any decision, is better for survival than cracking out the natural deduction systems. This makes me sad, but unfortunately, I’ve not managed to make the universe comply with my preferences yet.
As a result, many of our beliefs come not from what we might call “rational” thinking, but from these less reliable methods. The heuristics mentioned by Kahneman and Tversky are numerous, but I’ll just go through a few of them now.
Psychologists have found that people judge certain statements as more plausible or insightful when they are pleasant to listen to. For example, in a study by McGlone and Tofighbakhsh, the phrase “woes unite foes” was judged as more accurate than the phrase “woes unite enemies”, despite the two having exactly the same meaning. This is known as the “Keats heuristic”, and it suggests that when we judge a proposition we also take into account its beauty, rather than just evaluating its content. This is not always a bad thing. After all, beautifully written novels have probably used this bias to challenge dominant ideas that might otherwise have gone unquestioned. But it does imply that our opinions are not always formed with due deference to the facts; other qualities play a role as well.
And this is just the tip of the iceberg. One of the most notorious examples of a cognitive bias is the mere exposure effect. This is when the same stimulus is viewed more positively when it is experienced repeatedly, and becomes more familiar. This is linked to the illusory truth effect, where repeated exposure to an idea or proposition makes it seem more plausible or more truthful. One hypothesis explains this by appealing to cognitive fluency. That is, if we hear an idea more frequently, it is easier to process, and we associate an idea being cognitively fluent with it being true. So if we hear an idea repeated, we are more likely to believe in its truth, though this effect is strongest for initial repetitions. Again this is not always bad. Sometimes hearing a claim repeated by multiple, independent sources actually does mean it is more likely to be true. But again, it gives us cause for doubt. Did we really hear it repeated by independent sources, or did we just hear it repeated?
Then there are various biases grouped under the heading “confirmation bias”. This refers both to our tendency to seek out information that confirms our pre-existing views rather than challenges them, and to our tendency to verify given hypotheses rather than falsify them. Thus, to use Kahneman’s example, if I ask you “What is the likelihood of a tsunami hitting the west coast of the US in the next 30 years?”, you are likely to overestimate, because you are likely to form the hypothesis “a tsunami will hit the west coast of the US in the next 30 years” and then look for verifying information, rather than look for evidence that would disprove this hypothesis. Kahneman says that the mere image of a tsunami in your mind may lead to overestimation.
We have the halo effect, where if we already like one thing about someone, we are prone to think other good things about them. We might be more likely to conclude someone is kind just because they are attractive, or funny because they are generous. We tend to over-prioritise information that is available to us, and effectively pretend we know everything when we instinctively form opinions or beliefs.
Perhaps my favourite of Kahneman and Tversky’s findings is that when we are asked a very complex question, we tend to substitute in a simpler one, and then answer that instead. Say I asked someone what their opinion was on the claims of the last non-fiction book they read. There is a chance that the person will sit back, devote careful thought and mental energy to evaluating the book’s propositions, and come to a reasoned conclusion about the quality of the argumentation or research, but Kahneman thinks they are more likely to ask themselves “how did I feel when reading the book?”. If they felt understood, or clever, or interested, they are likely to say the book is good, whereas if they were confused, unhappy, or bored, they are likely to say the book was bad, regardless of the truth of the book’s contents. But this also works for more serious matters. If we are asked “what do you think of this political policy?” we are also likely to think “how did that policy make me feel?” and let that dictate our answer. That feeling, in turn, will be influenced by a whole host of factors, like whether we like the speaker, whether it sounds like something “our team” would support, and so on.
Regarding statistical claims, we rarely pay enough attention to sample size, we tend to overestimate how generalisable our personal experience is, and we neglect the base rate of events. All of this means we are rubbish intuitive statisticians, which allows us to be fooled by statistics that may have been collected, processed, or presented poorly.
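To make base-rate neglect concrete, here is a minimal sketch in Python with entirely made-up numbers (a 1% prevalence, a 90% hit rate, a 9% false-positive rate): it asks how likely a positive result from a fairly accurate test is to indicate the real condition, and the answer is far lower than most of us intuit, precisely because the base rate is so low.

```python
# Hypothetical figures, chosen only to illustrate base-rate neglect.
prevalence = 0.01           # base rate: 1% of people actually have the condition
sensitivity = 0.90          # P(test positive | condition)
false_positive_rate = 0.09  # P(test positive | no condition)

# Bayes' theorem: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# Prints roughly 9% -- intuition, ignoring the 1% base rate, tends to say "about 90%".
```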
Funnily enough, many of these were anticipated in an essay by the German philosopher Arthur Schopenhauer called “The Art of Being Right”, which aimed to satirise the way orators and other communicators can play on someone’s baser instincts to convince them of whatever they want, and to win performative “debates”. These tricks include appealing to authority, attacking your opponent’s character, strawmanning them, using metaphors to frame the debate in your favour, and a whole lot more. It is actually quite a fun exercise to go through Schopenhauer’s essay and match almost every one of his points to modern work on cognitive biases.
I could easily go on here. There are findings like the Wason selection task, which shows how we misapply logical rules when we are not in the right context, and we could point to the errors in reasoning in some of the most persuasive and influential speeches of all time. But I think this section has done enough to throw sufficient doubt onto many of our beliefs.
Kahneman and Tversky are not outright pessimists. They do not think that humanity is unable to come to well-reasoned or true conclusions on anything. They model the mind as having two systems. System 1 uses intuitive judgements to come to relatively swift conclusions, and falls prey to many of the reasoning errors we have discussed in this section. However, system 2 can potentially be much more rigorous in its reasoning, and they are much more optimistic about its ability to reliably come to robust, defensible positions. This is one reason why I think learning about these biases and heuristics is so important. We cannot eliminate them, and nor would we want to, since this would just mean a huge increase in our cognitive load. But we can slowly learn to recognise when they are leading us astray.
So this is another tool with which to question our beliefs! If we think of the way we have come to a particular conclusion, did it simply seem to come to us intuitively, as the obvious answer? Did we use heuristics to get to it? Or did we take the time to carefully check the reasoning process to see if it holds water? When I do this, I am always forced to swallow my pride and admit that there are a whole host of views where I either cannot remember how exactly I came to conclude that, or where I think if I am being honest with myself, I need to go back over my reasoning with a fine-toothed comb and check whether one or more of these biases is at play.
But next, let’s move on to one of the hottest topics in both philosophy and public discourse: the way in which our current information landscape may work against our aims of gaining knowledge and insight, and instead mislead or confuse us.
Epistemic Systems and Their Faults
As we discussed earlier in the video, none of us form our beliefs in a vacuum. We are well past the point where most of our opinions are based on personal observation. And it is a good thing too. If each person had to discover everything for themselves from scratch, then the pace of human advancement would slow to a crawl. But with inter-reliance, more faults begin to creep in, and this is another route of sceptical questioning we can apply to our beliefs.
At one level, we can point to how even our best sources of knowledge regularly produce false beliefs. The mature scientific method is one of the greatest discoveries of mankind. It has been carefully crafted and refined over thousands of years until it was articulated in an incredibly sophisticated form by Karl Popper. However, it also works by falsification. That is, by coming up with theories, and then working to challenge and disprove them. This naturally means that the vast majority of scientific theories have been, strictly speaking, false. They just get better and better at predicting observations, which I would argue is basically what we are after anyway, though then again I am a filthy pragmatist and my position is very controversial. But then, when we say we “know” the conclusions of our best scientific theories, that seems a little premature. We do not really know they are true, but rather that they are the best current means we have to predict phenomena. If what we are after is a justified true belief, then this suggests even our best ways of forming knowledge do not give us that. If you want to look more into this, it is called the pessimistic meta-induction, and it is a whole subfield of philosophy of science, so do check that out. Here our scepticism is brought on by something slightly different. It suggests that maybe truth is not what we are after in inquiry most of the time at all, but rather more effective ways of approaching the world and predicting events. This does not challenge the scientific process, but rather throws doubt upon the way we think about knowledge itself.
But this is only when the scientific method is being followed well and honestly. There are numerous examples of scientific malpractice or fraud which can undermine our confidence in reported conclusions. For example, Jan Hendrik Schön was a German physicist who claimed to have made leaps and bounds in the study of organic materials used to make semiconductors. This earned him a huge amount of acclaim, with his findings published in some of the most prestigious journals in the world, like Nature. However, it was later revealed that he had falsified much of his data, and a university spokesman called the scandal the “biggest fraud in physics in the last 50 years”. To take another example, the Dutch social psychologist Diederik Stapel had 58 papers retracted when it was found he had misrepresented or outright made up his data. By its very nature, we can only know about the malpractice in science that comes to light. It is vital not to overstate things here. A meta-analysis by Daniele Fanelli estimated that only 2% of scientists have, even once, knowingly fabricated or modified data, and I have spoken on the channel before about how the peer review process works as a check to maximise the reliability of scientific findings. But if our current quest is to doubt everything we can, then this is another question we can ask: how well do we really know the scientific findings we cite? Are they replicated by multiple different sources, or are we overly weighting one study that supports our conclusion? This does not invalidate scientific findings by any means; it is just another possible way our belief-formation process can go wrong.
And, of course, these are just the ways our best belief-forming processes can go wrong. Most of our opinions are not formed by careful research of peer-reviewed publications; we form them in much less rigorous communities. We inherit beliefs from friends, or family, or just the environment we are enmeshed in. This becomes a self-reinforcing cycle. On the one hand, we gravitate towards people who are similar to us, and so are likely to share our opinions; on the other, a group of people who agree on something is likely to reinforce those agreed-upon views. If we are not careful, we can fall into what Noretta Koertge calls “communities of belief buddies”, where we insulate ourselves from critical opinions, and instead simply parrot our opinions back to one another. Considering the findings of psychologists like Solomon Asch that our views are heavily influenced by those around us, this can be a real hindrance to our thought processes. Another sceptical question is added: how often do we talk to people or communities who disagree with our views, to hear what they have to say? Have we converted our own world into one big echo chamber?
Even true and reliable findings can be presented in ways that mislead. I’ve used this example before, but the UK newspaper The Guardian once published an article titled “Global shark bite deaths doubled in 2023 with 40% occurring in Australia”. This headline consists entirely of true propositions, but it leaves a misleading impression. Rhetorically, it seems like shark attacks are a major problem. But then it turns out the deaths rose from five to ten, and all of a sudden the headline seems like overkill. Especially in popular reports of scientific or statistical findings, all manner of misrepresentations can creep in, even when not intended. If you would like two popular books on this phenomenon, I recommend Bad Science by Ben Goldacre and How to Lie with Statistics by Darrell Huff. This can become a real issue, because most of us are not trawling academic journals to come to our opinions, and I also think this is an unreasonable burden to expect busy people to take on. This is made even more difficult because news reporting has an economic incentive to grab our attention as much as possible, and as a result has to balance headlines being accurate with being “sexy”. I have to do this as well, trying to jazz up video titles while trying not to veer off course into the realm of clickbait.
But information outside of these economic structures is still subject to its own pitfalls. The democratisation of information online, where almost anyone can say almost anything and have it read by relatively large numbers of people, provided they make it attention-grabbing enough, has led to a dizzying amount of falsehoods passed around as if they were fact. A study by Michela Del Vicario and colleagues found that information often spreads online based on its ability to tap into a pre-existing narrative, rather than its reliability. This aligns with other findings suggesting people are likely to evaluate online claims based on whether they are consistent with pre-existing beliefs and attitudes. Unknowingly, we may have formed our beliefs based on sources we took to be far more reliable than they actually are, and dismissed sources that we should have taken more seriously.
And this is only the start. We could talk about corporate or state interests distorting how information is collected or reported. We could discuss how other improper incentive chains lead to false propositions gaining significant traction. We could expand upon how social dynamics play on the cognitive biases we mentioned in the last section, but I think that is enough doubt-sowing for now.
But then we have a problem: while we want to encourage critical thinking, it is easy to fall into a sort of epistemic nihilism, where we say we cannot know anything, and so we may as well just give up. This was part of ancient scepticism’s original goal, and the sceptics hoped this permanent suspension of judgement would grant them peace. But this suspension of judgement would make it very difficult to take action on almost anything. So instead, I think we can find a sensible middle ground, by taking a more nuanced view of how we define our beliefs.
Beliefs, Degrees, and Fuzziness
We often think of belief and knowledge as binary concepts. That is, you either know something or you don’t. You either believe something, or you don’t. This is partly what can make sceptical challenges so daunting. They threaten to plunge us into total disorientation, where all of our beliefs and knowledge dissolve in front of our eyes and are blown away by the wind. But I think this is far too hasty. We do not need to view beliefs or knowledge this way. We can adapt our concepts to incorporate our doubts! This is the wonderful world of partial beliefs, and I think it can help us steer our ship between the Scylla of despair and the Charybdis of dogmatism. Bear in mind this is just a brief introduction to the study of partial beliefs. It can get very intense and rather mathematical, but in its broad strokes it is incredibly intuitive.
A partial belief is just a belief that we are not certain of. In this sense, most of our beliefs should probably be partial beliefs, except for maybe incredibly simple ones like “all bachelors are unmarried” or “all triangles have three sides”. But, to again draw from Kahneman and Tversky, we do not often appreciate how uncertain most of our beliefs are. System 1, broadly speaking, prioritises decisiveness over reflection, and as a result struggles to recognise uncertainty. As Kahneman puts it:
“Conscious doubt is not in the repertoire of System 1; it requires maintaining incompatible interpretations in mind at the same time, which demands mental effort.”
Think about it. When we are in most casual situations, we do not tend to ask “how strongly should I hold this belief?” Even if we reflect on whether our opinions are justified, we often treat them as if they are either totally unjustified or totally justified, without making room for this middle ground of uncertainty. The framework of partial beliefs can come in real handy here.
To use the broadest possible mathematical framework, partial beliefs ask us to estimate a number between 0 and 1 which represents the “strength” with which we hold the belief. This idea of strength can be interpreted in various ways. One popular approach is to reduce it to how much “value” you would be willing to bet on the belief. I know this is a far-fetched example, but imagine I had you hooked up to a machine that would make you feel good if your bet was correct, and bad if it was not. If you were ⅔ confident in your belief, it would be rational for you to accept a bet in which you stake two units to win one: your two-in-three chance of being right exactly balances the price you are being asked to pay. Of course, you would also take any bet with better odds than this, but it would be irrational to take one on worse terms. If we like, we can expand on this with a whole host of mathematical rules borrowed from a mixture of fuzzy logic and probability theory, such as requiring that our confidence in P and in not-P sum to 1, or incorporating Bayes’ theorem and conditional probability.
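For those who want to see the arithmetic, here is a minimal sketch in Python of this betting picture plus a simple Bayesian update. The function names and every number in it are my own illustrative choices, not anything from a particular textbook.

```python
# A toy model of partial beliefs -- all numbers are illustrative only.

def break_even_stake(confidence: float, winnings: float = 1.0) -> float:
    """Largest stake worth risking to win `winnings` if the belief turns out true.

    At the break-even point: confidence * winnings == (1 - confidence) * stake.
    """
    return confidence * winnings / (1 - confidence)

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior confidence in a belief after observing a piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

confidence = 2 / 3
print(break_even_stake(confidence))        # 2.0 -- risk two units to win one
print(bayes_update(confidence, 0.8, 0.3))  # ~0.84 -- supporting evidence raises our confidence
```

The point is not the precision but the structure: your confidence sets a price you should be willing to pay, and new evidence should move that price.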
Of course, in reality, we are unlikely to assign precise numbers to our beliefs, but the general framework is helpful. It allows us to view these sceptical questions not as eliminating our knowledge entirely, but questioning how confident we ought to be in our beliefs. We can go through our belief-formation and our justification processes, see where the weak points are, and then if we cannot fix them, we can mentally adjust our confidence level downwards.
The thing is, in some ways, most people already know that they are not certain of their beliefs. If questioned, we don’t tend to say we are certain about almost anything. We admit that our political views, our opinions on history, or our judgements of one another are all fallible and open to revision. We just forget it most of the time when we are in the thick of our lives, and when we are having discussions. I have found that a good way of getting into this “partial beliefs” mindset is to ask how much you would bet that your opinion is true. Would you bet your wallet, your house, a kidney, your life, your family? This sort of question can immediately alert us to the uncertainty that pervades our beliefs by raising the stakes of the situation until we seem reluctant to carry on. The sceptical philosopher Peter Unger has suggested asking “but do I REALLY know that?” until we are made aware of our own uncertainties, though his argument goes much further than this.
But what is the point of this sceptical exercise? Why do we want to become less certain of some of our beliefs? Well, I think the benefit can be twofold. The first is that our beliefs will be more epistemically responsible. The Scottish philosopher David Hume once said that “a wise man proportions his belief to the evidence”, and recognising uncertainty allows us to allocate this proportion with much greater care. But secondly, it can be a much-needed antidote to our pride and our intellectual arrogance. It is well-known that we tend to cling to our beliefs even in the face of contradicting evidence, because we find being wrong about things embarrassing. We see it as evidence that we are unintelligent and rebel against this conclusion. Paradoxically, this cuts us off from learning new things, and so from becoming more knowledgeable. But through a touch of scepticism, we already recognise that our beliefs may be incorrect, and by thinking of our beliefs as partial, we become less attached to their conclusions. Thus, I hope this attitude can make us more curious, more likely to spot the flaws in our beliefs, and ultimately, a little bit wiser.
I hope you enjoyed the video (script), and have a wonderful day.