To understand the world as usefully as possible, we need to go beyond the first-person perspective — not to achieve the third-person perspective or “view from nowhere,” but to achieve knowledge that is without perspective. The results of science don’t imply that we can attain the godlike “view from nowhere”; they imply something more shocking: that there is no view.
It certainly feels like we have a view of things. But we’re deluded. I call this hard-to-shake delusion perspectivalism.
Scepticism and the problem of knowledge
The victim of Descartes’s evil demon (or hallucinator, brain in a vat, lucid dreamer, stooge in a simulation, hapless Boltzmann brain, etc.) cannot find evidence within their experience to doubt the global integrity of the environment. The evil demon can improvise and delude them at each turn, perhaps simply by erasing memories so that repeated or cumulative reality testing is futile.
Indeed, the coherence and consistency of a world may not be a point in its favour. In a lucid dream, there are answers for everything; it’s too convenient. Should reality therefore have gaps or aporia? (Our conscious experience does, and arguably fundamental physics does too.) Even this is misleading. The point is that if knowledge is something the knower has or gains or collects, then it’s already useless and you’re mired in the insoluble problems of solipsism, Boltzmann brains, and simulation arguments. Knowledge can only be events or happenings, something you do. And it doesn’t matter what an alleged “knower” (like a person) “believes” “about” the world. (Here’s my long companion post about aboutness.)
In science, the best practices achieve knowledge that is distributed, offsite, embedded in technology or cultural conventions that are beyond (but linked with) our brains.
Is this behaviourism or some kind of horrible instrumentalism where we’re replaced by algorithms that think without perspective? Not quite. AI, along the machine-learning path, is certainly heading that way. But the gifts of perspectivalism are actually richer than we realise. Sure, we hallucinate whole metaphysical domains that aren’t real, but what it buys us is time… (That’s meant to sound enigmatic. It’s a plug for my forthcoming book, How to Free Time.)
We should be sceptical of our senses and our conscious experience. But we need to shift that scepticism away from the content of conscious experience, towards the nature of a consciousness that has us believing in something as daft as content.
The observer as homunculus
Some will spy the homunculus fallacy lurking here.[1] Most ways of thinking about consciousness end up in some kind of framing whereby there’s a little homunculus — a weird little guy, normally gendered male — inside our skull looking at a screen or presentation of reality, or looking at whatever comes into it from the outside world, the senses. The problem is that this doesn’t explain consciousness; it simply moves it back one layer: how does the homunculus experience things? Is there a little being inside its mind that reads off reality?
Well, most ideas about consciousness, or mental representation, even in hardcore neuroscience, still have a homunculus. The very idea of a representation, or content, or intentionality relies on there being an interpreter, or reader, or decoder, or noticer of that representation. But surely the buck must stop somewhere. Surely, it’s precisely our ability to represent that marks us off from derivative representational media like paintings, language, codes, and so on that are designed and interpreted by us?[2]
But I think this simply shows that we still think in terms of homunculi; it’s just that we assume there are person-sized homunculi called humans. We can’t say how our brains work, but we know that representation exists: we do it, we know this from the inside. How could this be illusory?
Well, it is, and the reason is hidden in plain sight. It is our faculty of vision that lures us into this mirage of non-locality — a mirage at the heart of all our folk intuitions, which work for us in some domains (primate evolution, social life) and fail us in others (science, philosophy).
Sight: the origin of perspectivalism
The fact that we see things long before we make contact with them inaugurates a basic misapprehension of the world. To be fair, it’s a heuristic that works perfectly well for navigating macroscopic objects moving at non-relativistic speeds reflecting visible light on terrestrial surfaces. If a predator approaches an amoeba, the amoeba cannot sense it until the predator makes contact by bumping into it. Some organisms are more sophisticated and can sense changing chemical gradients and thereby detect predators in basically the same way we smell: molecules drift across the intervening space, hopefully giving advance warning of an approaching predator. But once you get a bit bigger and start moving around (animals), detecting light is massively advantageous, because there is a flood of photons bouncing off every object in view (especially in above-ground daylight) and light moves so much faster than predators. The difference is so great that you can model where the object is, even if there’s a large intervening space — say 50 metres — because the light still gets to you, all but instantly. Now it makes cognitive sense to suppose that seeing the predator is decoupled from coming into contact with it.
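To put rough numbers on that gap, here is a back-of-the-envelope sketch. The 50-metre distance comes from the paragraph above; the predator speed is just an assumed illustrative figure.

```python
# Rough comparison of how long light vs. a predator takes to cover 50 metres.
# The predator speed is an assumption for illustration, not a measurement.
distance_m = 50.0
speed_of_light_m_s = 3.0e8   # approximate speed of light
predator_speed_m_s = 10.0    # assumed: a fast terrestrial predator

light_delay_s = distance_m / speed_of_light_m_s
predator_delay_s = distance_m / predator_speed_m_s

print(f"Light covers 50 m in about {light_delay_s * 1e9:.0f} nanoseconds")   # ~167 ns
print(f"The predator covers 50 m in about {predator_delay_s:.0f} seconds")   # ~5 s
print(f"The light signal is roughly {predator_delay_s / light_delay_s:.0e} times faster to arrive")
```

On those assumed numbers, the visual signal arrives tens of millions of times sooner than the thing it signals, which is why treating sight as knowledge-at-a-distance is such a successful shortcut.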
But this, in a sense, is profoundly misleading. There is no such thing as this kind of nonlocal effect. You can’t know about something over there without making contact with it. It seems like we do, but really we only see things over there provided some photons hit that thing, careen off towards us, and actually make contact with our retinas; without that happening, we cannot see the object. The electromagnetic wave suffusing all space has to ripple and the ripple has to propagate from over there all the way to over here — no gaps or gulfs or spooky action at a distance. There is only sense based on contact; all sense is touch; there are only contiguous physical effects in our world.[3]
Human vision allows us to infer a scene in front of us from the blizzard of photons making contact with our eyes. We have very sophisticated visual processing in our brains. Unlike a lizard, whose brain probably “sees” more like a motion sensor, we use the pattern of photons (including their wavelengths and their angle of incidence) to construct a conscious experience of being in a given environment.
Seeing without viewing
What noun should we use for this thing that is constructed by and for our brains? Some would call it a model, or a simulation, or a picture, or a representation, or a rendering of what is around us (provided it reflects or emits visible light). But all of these nouns grant too much pre-existing metaphysics. My argument is that we only think of things as being like representations, models, pictures, and so on (viewpoint or perspective phenomena) because of the way our visual consciousness works; i.e. our intuitive ideas about representational phenomena and aboutness (reference or signification in continental philosophy; intentionality or content in analytic philosophy) are a by-product of our cognition, especially our visual cognition.[4]
Evolution of perspective
Here’s my little telescoped story of how it happened.[5] Our ancestors had lizard-vision. They were lizards, or near enough. But more advanced visual systems evolved in mammals, partly for night vision, partly for social purposes — recognising predators, conspecifics, kin, mates. At some point, the actual experience of vision developed. Prior to this, animals could “see” but had what we call blindsight. My guess is lizards have blindsight, apes have vision, and who knows about everything in between. Humans (and probably other hominins?) have a special kind of vision whose scene is evidently populated by other beings equipped with this same vision.
We perceive what I call an arena.[6] An arena is not just a space: it has perspectives in it. Along with our vision and developments in social cognition and theory of mind — perhaps language was crucial too — we got an arena. This means we have a first-person view of what’s happening, with the “viewer” behind the eyes, in the centre of vision; a second-person assumption, namely that an interlocutor has an equivalent view of us; and a third-person view that can appreciate the arena as a diorama, seen from without or from a wider vantage.
All three perspectives or points of view are, I think, part of the same illusion and purely artefacts of our evolved cognition.[7]
Obviously there really are other people, with eyes, in our line of sight, and we really do move around a space, avoiding bumping into solid macroscopic objects, thanks to our eyes. And our human friends presumably have a similar experience of what it feels like to live in the arena (although people with aphantasia, blindsight, congenital blindness, etc. probably have it differently). But the perspectival nature of the arena is just a brilliant solution to the problem of how to navigate space as a highly social primate, where other people’s motives and ideas are paramount.
The arena posits that you are a special kind of point in space, a special kind of object, that has a point of view, a vantage, a perspective, a viewpoint on the world. Really, you’re just another part within the world: you can’t really stand apart from it and “look at it”.[8] You’re in it, bouncing around within it, and your brain makes sense of it by pretending there’s a porthole through the eyes through which “you” — a homuncular self sitting behind the eyes — view the world. I think here of Michael Graziano’s experiments suggesting that we model other people’s awareness as a kind of gloopy, immaterial ectoplasm that emanates from their eyes. We’re super attuned to what other people are paying attention to, chiefly by eye-tracking — the eyes as a window to the soul.
From this, I think, all our assumptions and unexamined biases about representational phenomena flow. These days it’s gauche to posit a homunculus inside the brain; instead it can be found in the way people characterise information, or data, or content, or representations, or language.
In reality, there are no perspectives, no views. But try telling physicists that.
The hard problem of consciousness
Illusionism in the philosophy of mind is the view that the phenomenal properties of consciousness are illusory. In other words, consciousness seems like some nonphysical process, but it’s merely that our brains model or represent our awareness as having nonphysical, spiritual, immaterial, phenomenal, subjective, qualitative properties. Basically it’s the view that physical laws describe everything else in the world pretty well in terms of things like energy, particles, chemicals, proteins, cells — so consciousness too should have a physical explanation for why it seems to have nonphysical properties.
It’s meant to be a solution to the “hard problem of consciousness”. Effectively, it says the hard problem is a non-problem because the phenomenal properties of consciousness are hallucinated, so the real problem is how and why we come to believe that consciousness has such properties. To put it gnomically: why does it seem like there is something it is like to be? (This is the meta-problem of consciousness — why we think there is a hard problem.)
The main criticism of illusionism is the one you’re probably thinking of now. Who or what is under the illusion? If you can be subject to illusions, that is the thing that consciousness is, and that’s what needs to be explained. Saying the brain (mis)represents consciousness as having spooky properties doesn’t solve the problem: what about the subject, the first-person perspective, the self that is the consumer of this representation? That’s the conscious subject. How is it that there can be an entity who can be prey to such illusions?
Critics of illusionism (or of similar tactics from allied positions like eliminative materialism or functionalism) typically think this is a knockdown argument, or at least an excuse not to even consider illusionism as a coherent position. They like saying illusionism is the dumbest idea they’ve ever heard, self-evidently untrue, a performative contradiction, unthinkable, or insane.
I think these criticisms are unfortunate, betokening an abject lack of imagination — in some cases deliberate (a kind of performative confusion), in others sincere. In both cases I’m unimpressed, because professional philosophers and scientists (who are paid to think) should be able to at least imagine their interlocutor’s views. Admitting they can’t should be embarrassing; admitting they won’t should be damning.[9]
Still, even if you can imagine illusionism, this objection (that the capacity to be under illusions is precisely what consciousness is) is a good one, although some illusionist philosophers are equally unable to admit it.[10]
I agree with much of the illusionist view but get there from a different direction. And in doing so, I hope to overcome that problem.
Perspectivalism: the last and only illusion
I think this is the final illusion to be dispelled. Not because it is the most important or even the most thoroughgoing, necessarily, but because it is the source of the notion of illusion itself. Without the intuitive preconception that there are perspectives or POVs in the world, there cannot be illusions in the usual way we think of them, i.e. that some subject of experience — a person, or an animal — is prey to an illusion.[11]
And as it happens, I think the illusion of perspective is also the source of (i) the illusion of intentionality or content (aboutness) in thoughts, language, symbols, information, etc.; as well as (ii) the illusion of teleology (purpose or the future causing the past) in human action, evolution, animal behaviour, etc.; and also (iii) the illusion of observation or “direct access” in epistemology (theories of knowledge). Quite a lot.
Although it’s very hard to think in disillusioned terms, it’s not impossible. I’m learning to do it. And I think it obviates so many seeming paradoxes in science and philosophy. Admittedly, it also destroys almost everyone’s worldview, including the very idea of a worldview (“view” being an incorrigibly perspectivalist term). Still. The cost/benefit comes out in favour.
What kind of relation does consciousness bear to the world?
You never see anything “over there”. Your phenomenal visual experience is not “of” the world; it’s a private show. It simply is not how the world looks. It’s a fiction, a metaphor. (Or maybe not a metaphor exactly. See below.) Your eyes get hit by photons. Your visual cortex — in a series of perfectly domino-like chemical reactions, honed by evolution and childhood development — makes inferences that allow your body to navigate the scene in front of you without dying every five minutes.
We so easily call our experience a view, a presentation, an image or map of reality. Slightly less perspectival is language like simulation, VR, hallucination, model. But these are still shot through with perspectivalism.
We need a good analogy.[12] What is this process of perspectival consciousness we experience actually like? Well, here are some features.
- This process involves an internal system using internal dynamics to navigate an external system — not, emphatically, building a model of or using information about that external system. (Falling back on this is a homuncular regress. See the toy sketch after this list.)
- It has made-up properties that, because of evolutionary history, nonetheless work quite well — the properties are not “isomorphic to” reality: that would be aboutness again. Also, reality isn’t anything like conscious experience (see above).
- It is ongoing, processual; it doesn’t work in a snapshot. It’s like getting a motor running, or keeping some plates spinning in the air.
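As a toy illustration of the first feature, here is a minimal, Braitenberg-style sketch of an internal system that navigates its surroundings by direct coupling, without building any model of them. Everything in it (the light source, the sensor wiring, the step sizes) is an invented assumption, purely for illustration.

```python
import math

# A toy agent that climbs a light gradient by wiring its "eyes" straight to its
# motion. It keeps no map and no model of the world: just a sense-act loop in
# which the coupling does all the work. All numbers are arbitrary assumptions.

LIGHT = (5.0, 5.0)  # position of a light source in a 2D world (assumed)

def brightness(x, y):
    """Light intensity falls off with distance from the source."""
    return 1.0 / (1.0 + math.dist((x, y), LIGHT))

def step(x, y, heading, turn=0.3, speed=0.2):
    """One sense-act cycle: compare two offset sensors, turn toward the brighter one, move."""
    left = brightness(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = brightness(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    heading += turn if left > right else -turn
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

x, y, heading = 0.0, 0.0, 0.0
for _ in range(200):
    x, y, heading = step(x, y, heading)
print(f"Position after 200 steps: ({x:.2f}, {y:.2f})")  # should have drifted toward (5, 5)
```

The agent ends up loitering near the light, yet at no point does anything inside it stand for the light, the space, or itself; whatever success it has lives in the loop, not in a representation.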
Hmm. Tricky. In a sense, analogy is the best analogy. Conscious awareness, with its illusion of perspective, seems like it’s a representation or simulation of reality. But those things are themselves artefacts of consciousness (they can’t be features of reality, including the reality of consciousness). So perhaps we could say consciousness is an analogy of reality? (Doug Hofstadter would love this.)
This still isn’t quite right. Reality has heaps of features that consciousness (which is a subset of reality) doesn’t. And it’s not clear that we’re picking out the most salient features of both by saying one is an analogue of the other.
So, is it a metaphor of reality? Kind of. It’s almost like a dead metaphor of reality. It has long since lost its obviously metaphorical nature and is now taken literally. But it’s not as if we say that consciousness is reality or vice versa (although this would work for panpsychists or idealists).
Strictly, we might say consciousness is a synecdoche of reality: a part standing in for the whole.
But, this week, I prefer to say it’s a case of metonymy. We take something associated with reality — its sequential nature (thermodynamic arrow of time, at the macroscopic scale at least) and its apparent saturation with animated POVs (agents) — and call it by that feature. We experience reality with perspectival consciousness, a metonym for all of reality, chosen for relevance to our evolutionary history, which is why it doesn’t include any exotic shit like dark matter, x-rays, or archaea.
The observer in science
Alas, many are trying to put the observer back into science[13] — or to find a way to account for an observer-dependent reality! I understand the rationale. We are bounded by what we can “observe” (although I would use terms whose baggage is non-intentional, non-representational, non-perspectival — terms like infer, detect, sense). So accepting our observer limitations is a good idea. But this leads to all sorts of problems to do with choosing reference frames; quantifying uncertainty; coarse-graining; and anthropocentric ideas of entropy, information, order, complexity, etc.
I’m no physicist, but I’m confident that a lot of paradoxes to do with knowledge, uncertainty, unobservables, computation, and (especially) information will be transcended if we have a saner idea of the place of the “observer” within the universe. Specifically, that observers are just parts of the universe performing certain actions to try to navigate that same universe and thereby perform more actions, ultimately to survive.
Conscious human experience, with its bias towards phenomenal vision, is a brilliant metonym for acting in a social space: an arena, a stage, a domain. But it is misleading when we try to work with the substrate of existence: the unthinking processes that make up our universe, including our minds.
Footnotes
1. To me, the homunculus is still just as salient in science and philosophy, but it’s mainly been banished from the brain. Now it lurks in (i) information and (ii) perspective, especially in physics.
2. Philosophy people: see John Searle’s distinction between derived versus intrinsic intentionality; in The Intentional Stance, Dennett makes a big deal out of this distinction and I think he’s right to, but he doesn’t follow through with as hardcore a conclusion as others, like Alex Rosenberg, Richard Rorty, or the Churchlands; or continental creepies like Meillassoux.
3. Natura non facit saltum? Eesh. Quantum entanglement is arguably an exception, but it doesn’t seem to apply macroscopically — and many physicists argue it doesn’t violate locality anyway. I don’t know, but I haven’t found anyone arguing compellingly that it could have any impact on our senses. As usual, there is an exception: pigeons and other animals might utilise quantum entanglement in their method of navigating by way of magnetic fields; see Life on the Edge pp. 246–64.
4. I’ve always liked Dennett’s notion of the physical, design and intentional stances, even though I think some of his work on the last one is misguided. I like them because stance is a non-perspectival, non-aboutness word. I use approach but stance is synonymous.
5. I suspect it predates language because all grammars are suffused with animacy, agency, subjects and objects, POVs. Here’s a paper I’m still trying to get published.
6. Pure crazy speculation: the arena evolved when two pre-existing capacities fused — in a watershed moment not unlike the origin of the eukaryotic cell. (i) The system we use to track our own valenced experience (affective states: feeling good or bad) was subsumed (functionally, not physically — or maybe physically, what do I know?) inside (ii) our visual system with its animacy detectors (which help us see erratic motion as tagging a living thing, distinguishing it from inanimate objects). With this fusion, one now perceives the world as populated by some animate things, including oneself: the most important agent in the world. And now these agents, self included, have a valenced nature to their animacy. One’s own position in the world is now as an animate thing with qualitative aspects. How to make sense of this weird state of affairs? Maybe the brain’s solution is what a perspective or point of view is, i.e. a way of understanding reality as happening to a POV behind the eyes (wild when you think about it), in a world with a mix of inanimate stuff, some other animate POVs with gloopy subjectivities behind their eyes, and empty space in between. [Edit 16-05-22: I’d change my suggestion for the two capacities fusing. I’d now nominate (i) high-level conceptual thought: the very late-evolving stuff that we use to parse things like language, rich visual scenes, animacy detection, inference, etc., which is a whole bag of things but can actually mostly be done unconsciously, and indeed has to be discovered by plumbing unconscious processes rather than being “visible” to introspection; and (ii) the specious present, or the last few seconds of working memory: the signals that are dominating the global workspace. After going through a big list of different states of awareness and what you can/can’t have in operation (like during a lucid dream vs a certain kind of stroke victim, etc.), I think the absence of either one of these results in loss of the arena. So it’s when the conceptual grammar of high-level thought merges with the running tally of “now” that a sense emerges of certain trackable high-level signals (vision, theory of mind, the body schema, etc.) happening to a being that is going along in time.]
7. Although I assume that in order of coherence and robustness it goes: first, second, third. All can be improved, but most people have a pretty good first-person just through normal social learning. I take it the second isn’t as good, based on how people drive, the ubiquity of the Golden Rule, and couples counselling. The third person is even weaker — based on history and current events.
8. Consider which part of you has this perspective, this distance from the world. Your eyes? They’re just organs on the outside of your body. The rods and cones in your eyes? They’re just in contact with photons that come to them. The circuits and nuclei involved in higher visual processing? They can’t have a perspective on the world; they’re surrounded by other grey matter. What would a clean vantage on the world look like? It can’t happen. Our cognition rounds it off to us having a magical POV inside our heads. We shouldn’t take it literally.
9. An admirable exception is David Chalmers, who’s an unyielding opponent of illusionism but fully comprehends it and has shown he is willing to at least imagine what it would be like if it were true — surely a minimum ask for a philosopher of mind.
10. Honourable exceptions include Francois Kammerer, an excellent professional philosopher.
11. Quick note: I realise anyone who thinks illusionism is insane will also think my idea is literally unthinkable, insane, a performative contradiction, reveals how evil I am, etc. Obviously I don’t care about such objections, but I do care if this turns into some kind of weird argument about neurotypicality. I don’t happen to have aphantasia, autism, ADHD, or blindsight. I have vivid mental imagery, a phonological loop, internal monologue, secure attachment, stable mood, etc. I am pretty fully neurotypical, whatever that means. The important point is that I can imagine, thanks to some excellent writing about such topics and by comparing notes with other humans about our experiences, what it would be like to have a totally different mental life. I have taken this into account. My advocacy of radical notions like the illusion of perspective is not at all based on my own stream of conscious experience. Nor is it based on my experiences with lucid dreaming, psychedelics, meditation, anaesthetics, etc. If I relied on any of that, I would never think of it. I entreat others to think beyond their own experience and beyond their own experience of experience.
12. Analogy, importantly, is a non-intentional, non-aboutness mode of knowledge: a mere detection of similarities. A camera can do analogies, so can basic algorithms and insects. Most of our cognition is, I think, more like analogising than modelling or representing.
13. More troubling still is that a lot of crucial ideas in computer science, mathematics, logic, AI, and more “formal” disciplines have a freaky little homunculus along for the ride. Consider the vaunted Turing machine (which is a mathematical abstraction, an idealisation of a universal computer, but very much the intellectual basis for understanding real, physically implemented computers and their promise and limits). The Turing machine “reads” symbols on a tape and executes a couple of basic operations dependent on rules and its “memory” of earlier states. It can be implemented with dominoes, vacuum tubes, logic gates, clothes pegs, or anything else that can be carefully configured to eliminate the noise in the environment, apply error-correcting processes, and analogise to the moving beads on an abacus. A cynic (me) would say that the computation is happening as much in this set of constraints as in the parts that make up the Turing machine itself. But, heuristically, we think of the Turing machine as just the head, the tape, and the memory. “Where” is the computation happening? Again, to me it’s in the whole set-up, including all the stuff surrounding the system that keeps it isolated, running, error-free, supplied with energy, etc. Yet there is a mental shorthand that imagines that the head, in “reading” the symbol on the tape, somehow receives or takes on the meaning of the symbol. But the head is just a head; it has no “internal states” — even its memory (which some might term an internal state) is a bunch of mechanical settings that change in the same way a gate in a circuit does (this is barely an analogy, of course, because those logic gates are how low-level computation is implemented). A gate can be any physical mechanism that is amenable to the kinds of constraints above: it can be chemical, electrical, kinetic, but not just anything. In fact most substrates won’t work. Almost nothing in the universe is a computer. Weirdly, this leads most people to conclude that information, meaning, and other abstract things are substrate-independent because you can get a bit of computation going in a few different media. I take it to mean that if you expend a lot of energy you can set up different media to do analogous tasks. The fact that you can make a bat out of metal or wood doesn’t make me think there is an abstract bat out there in the Platonic aether. But I am aware that most people’s intuitions do lead them that way. Oh well, this footnote’s quite long and is now developing its own POV, thereby ironising this very post, but the point is we inveterately model all knowledge-building phenomena as having a little micro-knower, a perspective, at their heart. We shouldn’t.
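For what it’s worth, here is a minimal sketch of the kind of machine footnote 13 describes: a head, a tape, and a finite rule table. The particular machine (its states, rules, and tape contents) is invented purely for illustration; the point is that “reading” a symbol is nothing but a mechanical lookup-and-update, with no meaning received anywhere along the way.

```python
# A tiny Turing-style machine: a head, a tape, and a finite rule table.
# "Reading" a symbol is just indexing into the rules; nothing in here receives
# a meaning. This example machine is invented purely for illustration: it
# inverts the bits on the tape and then halts at the first blank cell.

RULES = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", "_"): ("_",  0, "halt"),   # blank cell: stop
}

def run(tape, state="invert", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

print(run("010011"))  # -> 101100
```

Run the same table in any medium that can hold those settings and apply those updates (silicon, dominoes, clothes pegs) and you get the same trajectory of states; what you never get is a part of the machine for which the symbols mean anything.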