Philosophers are sometimes seen as superior minds with deep insights that transcend those of ordinary mortals like you and me. Chalmers’ “Hard Problem” of consciousness highlights how silly this idea is. Here I show how the so-called hard problem is based on a simple misapprehension. To understand feelings of ‘self’, you need to employ the concept of ‘self-reference’. Chalmers doesn’t, and those philosophers who have seem either to have been a bit scatterbrained, or to have got caught up in a welter of detail. The attention his ‘problem’ has commanded suggests that many contemporary philosophers have rather lost the plot. Here’s the introduction to David Chalmers’ paper Facing Up to the Problem of Consciousness (J. Consc. Studies 1995; 2(3):200–219):
“Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain. All sorts of mental phenomena have yielded to scientific investigation in recent years, but consciousness has stubbornly resisted.”
Why does David have such trouble? And what does his ‘explanation’ — “a nonreductive theory based on principles of structural coherence and organizational invariance, and a double-aspect theory of information” — actually mean?
I will now explain how Chalmers is confused and why his ‘problem’ is nonsensical, in the context of a robust definition of science.
Chalmers explains “the hard problem of consciousness” eloquently as follows:
“The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.”
To summarise, he is mystified by the observation that “there is something it is like to be ‘in’ bodily sensations”. He distinguishes the ability to sense, store, access and report perceptions from the person’s own perception of, say, blueness, or ‘emotion’, or indeed ‘a stream of conscious thought’. He asks:
“Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
Okay, Dave, let’s give this a go. I’ll try to avoid non-technical language, but first I need to explain my understanding of Science, which is the tool I will use to dissect his ‘problem’.
What is Science?
In my model, Science works as follows:
I. Find a problem and create a model in an attempt to explain “how the problem works”;
II. Vigorously attack the model, trying to destroy it by testing it against the real world;
III. If the model survives, accept it as ‘provisionally true’. We can never be absolutely certain.
It should be obvious that this model of Science is itself open to criticism. If you have a better model, try to replace mine with yours — after ensuring it’s not a dead end. Before you do, you should be aware of several prior models that have not panned out. Without attempting a comprehensive exploration, these include:
A. The Logical Positivist (Logical Empiricist) Model: Assume that there are grounded truths on which you can base everything. Build Science from these truths. This non-viable approach had died out by the mid-twentieth century.
B. The Platonic Model: Assert that behind the scenes there is a knowable, perfect set of archetypes of, for example, what it “really means” to be a man, a dog, a table, or whatever. Although similar to the Logical Positivist model, this has never really died out. It is plastic enough to take on many forms, but none is satisfying. How do we ‘know’ the ‘true nature’ of archetypes? Why should they exist? Where is this ‘platonic space’ where they dwell? We always come back to the realisation that archetypes exist only in the mind of Plato, or Aristotle, or whoever. ET Jaynes wrote at some length about the “mind projection fallacy” that results.
C. What we might call the ‘Asymptotic Model’: that, although the Platonic Model can never be achieved, we can at least asymptotically approach “The Truth”. This might work fine if consequences always proceeded linearly from causes. Unfortunately, a tiny change in a model may have wildly divergent results, vastly different models can produce very similar results and (see logical empiricism) we can never be sure of our premises, anyway.
D. The ‘Sociological’ model: that we create our own reality — almost like a consensual, shared hallucination. The view is seen to change from time to time, as exemplified by Kuhn’s “paradigm shifts”. Hard reality unfortunately shows up this simplistic approach again and again.
E. Variants of the ‘Sociological’ model, which are legion. Some go so far as to believe that all of reality is a consensual hallucination. The problem with this — as exemplified by the COVID-19 pandemic — is that reality has no respect for this philosophy. It bites back. That’s why it’s good to disbelieve your most enticing theories, test them hard, and never truly believe them.
F. If you look, you’ll find many more failed theories.
None of the above has worked out in practice, because the real world has an inconvenient habit of intruding and destroying our most cherished (but wrong) models, and because even the most convincing models repeatedly turn out to rest on further assumptions. These observations explain why “truths” must be provisional — always provisional. Claims of absolute truth are simply unreasonable.
A Scientific Model of Consciousness
Let’s put together a model of how consciousness works. It should be explanatory, testable and even predictive, in order to be useful.
1. Take a brain: an assemblage of neurones.
2. The brain has perceptions. Incoming stimuli are converted (transduced) into nerve impulses. It’s important to emphasise that once this transduction has happened, there is no residual “blueness” or “spikiness” or “sulphur-like smell” or “vibration”. These have been turned into patterns of impulses in neurones.
3. The perceptions are manipulated in complex ways, resulting not only in external responses, but also in internal storage of information related to those perceptions.
4. One of these internal phenomena is memory, so prior patterns of responses can be retrieved as ‘information’.
5. This information can be correlated with other information, resulting in both behavioural changes and further modification of internal information.
Remarkably, you don’t need the complexity of even a fruit fly (with its 130,000 or so neurones) to do all of the above. The nematode worm C. elegans manages it with just over 300 neurones in its entire nervous system. But where does consciousness then come in? Here’s the addition that I claim to be necessary and sufficient:
6. The organism has, within its brain, an internal model that represents itself.
That’s it! You likely have a few questions at this point, so let’s explore the implications of this addition. Your first question might be along the lines of: “But that doesn’t explain my rich internal environment.” You might also introduce the concept of ‘qualia’ at this point — a ‘quale’ being the internal perception of, say, ‘blueness’. Well, let’s see. Although Chalmers is chary about using this precise term, we’ll accept the addition of ‘qualia’, and explain them as follows:
7. ‘Qualia’ are accessible (reportable) aspects of this internal model.
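The seven points above can be sketched as a toy program. This is a deliberately naive sketch (every class, name and data structure here is illustrative, and no claim is made about real neurobiology), but it shows how ‘qualia’ can fall out as reportable aspects of a self-model:

```python
# Toy sketch of points 1-7. The 'brain' transduces stimuli into internal
# patterns, stores them, and keeps a self-model whose reportable outputs
# stand in for 'qualia'. Purely illustrative, not real neurobiology.

class Agent:
    def __init__(self):
        self.memory = []          # point 4: retrievable prior patterns
        self.self_model = {}      # point 6: internal representation of self

    def transduce(self, stimulus):
        # point 2: after transduction there is no residual 'blueness',
        # only a pattern (here, crudely, a number derived from the stimulus)
        return hash(stimulus) % 1000

    def perceive(self, stimulus):
        pattern = self.transduce(stimulus)
        self.memory.append(pattern)        # points 3-5: store and correlate
        # point 6: the self-model records 'what happened to me'
        self.self_model[pattern] = f"my response to {stimulus!r}"
        return pattern

    def report_quale(self, stimulus):
        # point 7: a 'quale' is a reportable aspect of the self-model
        pattern = self.transduce(stimulus)
        return self.self_model.get(pattern, "nothing it is like (yet)")

agent = Agent()
agent.perceive("blue")
print(agent.report_quale("blue"))   # the model's reportable 'blue quale'
```

The point of the sketch is only that nothing qualitatively new is needed for point 7: the ‘quale’ is just the self-model made accessible.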
Let’s consider the ‘blueness’ of the wing of a Morpho butterfly. Without any self-reference, it seems possible to associate the word blue with my perception of the deep ocean on a bright day, the Blue Grotto, a friend’s eyes, and that Morpho. I have just reported these. But — as Chalmers will be quick to point out — this reported association is different from my own internal perception of blueness, the ‘blue quale’. Let’s try to represent the ‘blue quale’, in terms of my model.
This is not difficult. My reflection about my inner perception necessarily calls up my internal model, and starts playing with it. When I talk of “my inner perception”, I’m playing with this model. When I contemplate “How I feel” or answer the question “How would I respond to seeing a Morpho butterfly flitting past?” this conjures up a host of explicit internal references (qualia) that are subject to further manipulation. They may well feed into the affective (emotional) circuitry in my brain that is intimately tied up with memory.
It’s important at this point to note that my ‘inner model’ isn’t necessarily consciously accessible, in the sense of the ability to dictate what components are accessed, and how they are acted on. I might have a very vivid past experience of fear or joy associated with a blue butterfly, and this may overwhelm my ‘rational’ processing of such a stimulus, the memory of such a stimulus, or even my imagining such a stimulus based on either an internal or external prompt of some sort. There may also be all sorts of “things going on” that have no conscious, reportable equivalents. This is how nervous systems work — we’d drown in a profusion of riches if we were aware of every individual impulse flowing from the back of the eye to the back of the brain, as just one example.
It’s equally clear that my simple additions to the model — self-reference and the acknowledgement of internal parameters — do not demand detailed neurobiology. Answers to questions like “Does this involve the cingulate gyrus?” are interesting but peripheral. The model is simple and general.
At this point, you will see that, even in its naïve simplicity, my model is quite capable of answering all of Chalmers’ questions, apparently bar one. Visual and auditory ‘feelings’ in my model simply emerge from its self-referential nature. A problematic question may however be “Why should physical processing give rise to a rich inner life at all?”
In terms of my model, there is a clear answer. Once we have an internal model that is capable of self-examination, “inner life” is simply a synonym for self-reference. The fact that it is rich (or not) depends on the complexity of the nervous system. With about 86 thousand million neurones in the human brain, some 16 thousand million of them in the cortex, it would be surprising if an inner model were not rich, especially when we realise that a single cortical neurone can have several thousand connections.
The above explanation (model) is so simple, so obvious, and requires so few ancillary hypotheses, that I find it astounding that Chalmers’ ideas have gained any traction with modern philosophers. But — in the spirit of good Science — let’s immediately try to find flaws.
It’s clear that points 1–5 are unexceptional: they represent basic neurobiology, well-tested and widely accepted. It should be equally evident that the “internal model of self” is not conceptually challenging, and doesn’t need to be especially complex. All that is required is self-reference: internal circuitry that in some way, however primitive, contains some sort of representation of self.
We can refine this representation. It makes sense to have the self-reference refer either to the body (soma) of the organism, or the ‘self’ in a more abstract way. This second abstraction may seem very mysterious, but it shouldn’t. Any representation of “my hand” or “the hand of the organism that is me” is a combination of a patterned set of impulses that make up sensory input — vision of my hand, proprioceptive impulses, and so forth, and the internal storage and processing of related information. The key idea is that neurones are associative: they associate an internal model that is conceptually self-referential with certain incoming stimuli. Once you have an internal representation of self, the transition from the ‘more concrete’ representation of a hand (or foot or flipper) and its properties to a more abstract representation of the “whole of me” or even “my mind” may seem like a large step, but there is no qualitative mystery here. Self-reference is sufficient.
This assertion will doubtless cause some distress, so let’s delve a bit deeper. We know from basic neurology that the parietal lobe contains within it a clear sense of ‘ownership’ of the opposite half of the body. For example, if my right parietal lobe is damaged, I may develop neglect of the left half of my body. In some individuals, this will also be reported as a perception that the left hand or leg “doesn’t belong to me”. The model has been damaged, and that sense of wholeness or possession — of being part of ‘me’ — has been lost. We might also predict that feelings of ‘depersonalisation’ — that one has lost a feeling of self — are explainable on a neurological basis. An older study by Simeon and colleagues (Am J Psychiatry 2000; 157:1782–1788) supports this idea, with a more recent review suggesting that these feelings arise from several sites in the brain (J Psychiatr Res 2020 Sep;128:5–15).
My model invites an even deeper explanation for Chalmers’ consternation. His rich inner life can be described as merely an ‘epiphenomenon’: the internal model produces a signal that the brain interprets as “this is me”. This too is not mysterious — in fact, the converse would be quite uncanny: the idea that a self-referential model would not produce outgoing impulses that could in turn result in actions. A computer analogy would be memory that is only ever written to and never accessed, or more accurately, a microprocessor running a complex program full of feedback loops that never, ever produces any form of intelligible output. A strange thing indeed!
Once we realise that “outputs are occurring” from the internal model of self, it is a trivial step to do two things. The first is to transduce those outputs into motor acts, acts like reporting “This is how I feel about the colour of a Morpho’s wing”; the second is to further feed these outputs back into the model yet again. These are natural consequences of the model.
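These two consequences can be shown in a few lines of code. Again, this is purely illustrative (the ‘model’ is nothing but a list of states, and all names are invented for the example), but it captures both the outward report and the feedback of that same output into the model:

```python
# Illustrative only: 'model_state' stands in for the internal self-model.
def step(model_state, stimulus):
    """One cycle: produce a reportable output, then feed it back in."""
    output = f"This is how I feel about {stimulus}"  # (1) the motor act / report
    model_state.append(output)                       # (2) fed back into the model
    return output

state = []
report = step(state, "the colour of a Morpho's wing")
print(report)       # the outward report
print(len(state))   # that same output is now part of the model again
```

Calling `step` again would operate on a state that already contains the previous output, which is exactly the feedback the model requires.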
An obvious attack on this clarification is to say something like “Yes but the observation of a motor response, even one that reports an inner state, is not the same as the person’s own realisation that they have an inner state, or their own experience of that inner state”.
This is easily answered. First is the observation that, in processing the experience of your inner state, you are simply feeding outputs from the model back into the model. Second is that it is difficult to justify a priori the idea that there is a qualitative difference between what you tell yourself and what you tell others — even about your experiences! You may hold back on certain private feelings, but it seems silly to propose entirely different sets of ‘internally reported’ and ‘externally reported’ feelings, and even more daft to assert that these sets are ‘of a different nature’. Finally, we observe that there will naturally also be internal movement of information that does not occur at a perceptual (or reportable) level. Qualia can be seen as no more than the emergence of information from this internal, self-referential model.
There may however be other ways that we can attack my model. Let’s go back to Chalmers’ “problem” and see whether he provides us with further information that shows up my model as too simplistic or just plain wrong. Part 3 of his paper tries to explain the hard problem in more detail.
Chalmers’ “functional explanation”
He says: “By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions.” As this is completely at odds with the functional (self-referential) explanation I’ve just provided, let’s see how he justifies this assertion.
Now Chalmers takes several paragraphs to work through what he correctly calls a ‘trivial’ explanation of most of my points 1–5, that is, basic neurophysiology. I’m even tempted to say that he labours the point a bit. He is then mystified:
“When it comes to conscious experience, this sort of explanation fails. What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience — perceptual discrimination, categorization, internal access, verbal report — there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? A simple explanation of the functions leaves this question open.” [his emphasis]
You can however see that my model already answers this question. What is Chalmers’ problem? I’d here suggest that it’s a problem of misunderstanding how Science works. In his several paragraphs of digression about brain basics, he’s already hinted at this misunderstanding with:
“Throughout the higher-level sciences, reductive explanation works in just this way. To explain the gene, for instance, we needed to specify the mechanism that stores and transmits hereditary information from one generation to the next. It turns out that DNA performs this function; once we explain how the function is performed, we have explained the gene.”
We however have to read further to see precisely where he’s coming from. Here’s his key mistake:
“There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says “I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene”, then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says “I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced”, they are not making a conceptual mistake. This is a nontrivial further question.”
Can you see where he’s gone wrong? He has no concept of self-reference. His concept of science is reductive, and his concept of models is linear, from cause to effect. Let’s explore further. He says:
“Why doesn’t all this information-processing go on “in the dark”, free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery. There is an explanatory gap (a term due to Levine 1983) between the functions and experience, and we need an explanatory bridge to cross it. A mere account of the functions stays on one side of the gap, so the materials for the bridge must be found elsewhere.”
Chalmers seeks a linear association (‘bridge’) but can’t see that the bridge must be one of self-reference. For him, a bridge too far. From my model, there is no central mystery. In fact, it would be mysterious if your inner model did not produce outputs that allow “reporting of what’s happening”. The qualitative ‘explanatory gap’ that Chalmers asserts is simply absent. So where does he obtain this ‘gap’?
The thing is, he doesn’t. He merely states, on the basis of what he’s already said that: “To explain experience, we need a new approach.” That’s it! Perhaps however, we’ll obtain some clues in part 4 of his paper, “Some case studies”.
Here, Chalmers outlines ‘case-studies’ that, he asserts, always devolve to ‘easy’ problems and ignore ‘the hard problem’. He’s correct that Crick and Koch’s model explains something else — but this is clearly because it doesn’t contain a description of a self-referential internal model, so one cannot expect it to explain the reported manifestation of consciousness. The case of Baars’ “global workspace” is slightly more complex because there are hints of greater sophistication here. A central workspace allows for communication between specific sub-processes. But here too, the concept of self-reference is absent from Chalmers’ rendering, disallowing a model of consciousness. And so it goes for the other theories he briefly alludes to.
But Chalmers moves on, and so should we. He characterises various theories as one of (i) explaining something else; (ii) denial; (iii) invoking magical thinking; (iv) assuming the existence of experience and thus accounting for “some facts about its structure”; or (v) trying to isolate the ‘substrate of experience’. Throughout, we have the same absence of self-reference.
But perhaps section 5 of his paper — enticingly titled “The Extra Ingredient” — will address this defect? It turns out that, instead, he merely highlights his cluelessness, for Chalmers is literally looking for an ‘ingredient’, not a different model. He makes it quite clear that not only his own model, but also any model he can conceive, is hierarchical, of the form:
“components added together” ⇒ “consciousness”.
Indeed, all his exploration and rejection of ‘nonalgorithmic processing’, nonlinear and chaotic dynamics (hot stuff in the 1990s), and of course ‘quantum mechanics’ runs along these lines. He even dusts off and hauls in the hoary old Copenhagen interpretation of quantum mechanics, and gives it a whirl. And then Chalmers leaves the rails completely, saying:
“It follows that no mere account of the physical process will tell us why experience arises. The emergence of experience goes beyond what can be derived from physical theory.”
The train leaves the station
Chalmers’ subsequent exploration is ‘off the rails’ in two senses. The first is that it’s complete nonsense; the second is that, although he tries to depart to a non-linear or even non-coherent realm, he does so while still hanging onto the linear coupling of his toy train of thought!
It is however instructive to examine his thinking carefully. Incoherent thought is of little value on its own, but incoherent thought that is widely commented on as if substantial, or even accepted, tells us much more. It often tells us a lot about how we think. First, he says:
“Given any such process, it is conceptually coherent that it could be instantiated in the absence of experience. It follows that no mere account of the physical process will tell us why experience arises. The emergence of experience goes beyond what can be derived from physical theory.”
Words like ‘Gosh!’ and ‘Wow!’ rise to the lips unbidden — although not instantiated in the absence of experience. The mere invocation of self-reference puts paid to this assertion, so where is he coming from? Here’s his exegesis:
“Purely physical explanation is well-suited to the explanation of physical structures, … [but] the structure and dynamics of physical processes yield only more structure and dynamics, so structures and functions are all we can expect these processes to explain.”
and most tellingly:
“The moral of all this is that you can’t explain conscious experience on the cheap. It is a remarkable fact that reductive methods — methods that explain a high-level phenomenon wholly in terms of more basic physical processes — work well in so many domains. In a sense, one can explain most biological and cognitive phenomena on the cheap, in that these phenomena are seen as automatic consequences of more fundamental processes.”
Aagh! Chalmers reveals that his entire understanding of biological and other processes is one of dissection — of reduction to components. He apparently has no concept of the ‘dynamics of physical processes’ that he refers to. Ordinary differential equations anyone?
It’s obvious that you cannot explain anything with a feedback loop in it in terms of a purely linear or indeed hierarchical model. After a puzzled digression on élan vital, Chalmers gets to the gist of what he really feels. This is section 6 of his paper, and it’s a bit disturbing.
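The point about feedback loops can be made concrete with the logistic map, about the simplest system imaginable that contains a feedback loop: each output is fed straight back in as the next input. At parameter r = 4 the map is chaotic, so no linear, cause-to-effect account predicts its long-run behaviour, and two all-but-identical starting states soon bear no resemblance to each other. (This is a standard textbook example, not anything from Chalmers’ paper; a minimal sketch follows.)

```python
# The logistic map x -> r*x*(1-x): each output is fed back as the next input.
# At r = 4 the map is chaotic, so a tiny perturbation of the starting state
# is amplified until the two trajectories diverge completely.

def logistic_traj(x, r=4.0, steps=50):
    """Iterate the map from x, returning the whole trajectory."""
    traj = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic_traj(0.2)            # one starting state...
b = logistic_traj(0.2000001)      # ...and a one-in-ten-million perturbation
divergence = max(abs(p - q) for p, q in zip(a, b))
print(divergence)                 # the trajectories have wildly diverged
```

One line of feedback is enough to defeat any linear or purely hierarchical explanation; a self-referential brain model contains vastly more than one.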
Chalmers ‘solves his problem’ by not solving it! He calls his ‘solution’ a “nonreductive explanation”. Although he’s already beaten up those who assume the existence of experience, he does the same, just worse. This is awesomely asinine — consciousness is, to his mind, a fundamental entity like mass and charge! He even says this in so many words:
“I suggest that a theory of consciousness should take experience as fundamental.”
This is the sort of statement that, even in a nominally serious journal, should be followed by about five exclamation marks. We have moved quite quickly from misunderstanding to madness. It gets worse. A gem:
“In particular, a nonreductive theory of experience will specify basic principles telling us how experience depends on physical features of the world. These psychophysical principles will not interfere with physical laws, as it seems that physical laws already form a closed system. Rather, they will be a supplement to a physical theory.”
Translated as best possible, this bizarre theory seems to say:
- Experience depends on inputs from the physical world;
- BUT ‘psychophysical principles’ do not affect the physical world (interfere with physical laws) in any measurable way.
Can you see that we’re back to the computer memory that is “write only”? But then behaviour emerges — presumably reporting of the experience of qualia, for example — without demanding a physical explanation. It has no measurable effect, yet it does!
This is not simply trash, it is worse than trash, because the way Chalmers’ theory is formulated, it cannot be refuted. Woo is sanctified, woo that says “I am because I well, er, just am”. He ultimately invokes some sort of recursion, but it is the recursion of the World ouroboros eating its own faeces.
Now, it’s conceivable that, at some point in our exploration of the Universe, we’ll encounter some phenomenon that defies explanation, despite the best attempts of minds far greater than ours, deployed over millennia. Is Chalmers’ self-contemplation one of these problems? This seems unlikely, not the least because of the irony burnt into self-contemplation without allowing for self-reference. That’s all he needs! The answer is, in other words, contained within the question.
Bread and Cheese
Now in section 7 of his paper, Chalmers explores the consequences of his error by outlining his “theory of consciousness” at quite some length. Read this if you wish, but as his basis is the above ‘nonreductive’ approach, his attempts are bound to fail. His deliberations vary from the banal (most of the first two parts of section 7) to the peculiarity of his “double-aspect theory of information”, a reification of information theory that would have ET Jaynes spinning in his grave.
To make the simplicity of my model and the incoherence of Chalmers’ ‘explanation’ quite clear, let’s illustrate both approaches. Today, I’ve been stuffing myself with fresh fruit, so I don’t feel like much of an evening meal. Perhaps just bread, cheese and a slice of tomato? I’m also tempted by that hummus spread, but should I use butter instead? To help me with my tiny problem, I conjure up the qualia of the components — the slightly tart, salty Cheddar; the sweet tomatoes; the texture (in my mind) of the sourdough; and not the least, the smoky smell of the hummus and its umami taste. In my mind, a good combination!
Now contrast this process — simple biology, self-reference to experience, and a decision — with Chalmers’ mystical “nonreductive theory of experience”, a fundamental or perhaps even fundamentalist interpretation of the simple act of choosing a meal. Transcendent cheese anyone? Can you see how silly it all is?
The attractive image at the start is of a suitably blue cheese (not Cheddar, the images were more staid) and tomato sandwich from Wikimedia. I was seduced by the bright colours and the appeal to my memory of similar meals. I’ve already explored the information-related aspects of qualia elsewhere; in a forthcoming post — if an angry philosopher doesn’t get me first — I’ll try to answer the obvious follow-up question “Why do Chalmers and many other philosophers have such a linear take on science and such a boxy, hierarchical view of reality?”