I don't believe in consciousness.
2025-06-17
Epistemic status: everything below feels so blindingly obvious that I worry I've misunderstood the prevailing position, and everyone else actually does agree with me (despite their claims to the contrary).
This post isn't an argument, and it won't convince anyone. I'm mostly attempting to state my position, to have it all written down in one place. If you disagree with me, as the vast majority of people seem to, this lack of any actual debate will probably leave you a bit miffed. Sorry.[^1]
I'm sure there's a rich history of philosophers who have described all this better than I could, then proceeded to write it up in books I haven't read. Skimming Wikipedia, I'm pretty sure my position can accurately be labelled Reductive Physicalism or Illusionism (though I couldn't tell you the distinction between the two, nor even whether one is a subset of the other). I didn't study philosophy. I'm a physics and computer science major - two disciplines with a long history of overconfident claims in areas outside their domain, and of not caring overmuch about accuracy - so I'll take that lineage in stride and round my position off to something simpler: I don't believe in consciousness.
Partition A
It's been pretty glaringly obvious, for at least a few thousand years, that we can usefully partition the universe into two categories:
- The stuff that grows, and moves, and self-repairs, seeks out energy gradients, and creates copies of itself, and
- The rest. (Granite, a puddle, a dead pet, the planet Neptune.)
At the turn of the 20th century, if you had asked a leading expert on Category One[^2] what the difference was, they likely would have cited a "vis vitalis," or "élan vital." An animating vital force, present in the members of Category One, responsible for the various macroscopically observable phenomena that made them distinct from Category Two.
Partition B
Likewise, most people today tend to split the universe into two:
- The stuff that reports an internal experience, has opinions about the world, feels pain, looks in a mirror and tries to rearrange its hair a few times before giving up and rushing out to catch the subway with a bit of a lingering unsatisfied feeling, and
- The rest. (Rocks and trees, and stars, and nematodes, and Covid-19, and SQLite databases, and all the rest. Probably mosquitoes, debatably shrimp, likely not the lobster, definitely not the octopus.)
They claim the members of the first category have "consciousness," and that the members of the second category do not. Much effort is spent trying to decipher exactly where the line is.[^3]
Of course, in the case of Partition A, it turns out there was no such vital force, and molecular biology was just really, really complicated. As our microscopes and chemistry improved, the need for a vital force withered away - we exchanged it for a bunch of smaller explanations of individual mechanisms, which we then recombined to understand how the microscopic pieces gave rise to the observed macroscopic phenomena of life.
What's more, if you look at the field today, there isn't really any hard line between biology and the rest of chemistry either. People certainly try to draw one - the best I've heard is NASA's commonly cited working definition for exobiology: "a self-sustaining chemical system capable of Darwinian evolution" - but even that has edge cases (viruses, the RNA world, prions, a collectively autocatalytic set of reactions), and "life" is certainly not an intrinsic property that all the parts of a living cell will have.
I am very confident that the same will be true of consciousness. I believe with >99% certainty that:
- There is nothing metaphysical happening in the brain that couldn't be explained through physics, chemistry, biology, and neuroscience.
- The algorithms that are run on neurons and atoms in the brain could be run on silicon in a computer, or by 10,000 monks with abacuses and a lot of spare time, and there will be no difference in the experience of the resulting human.
- You could remove all quantum effects, and (after re-tuning some diffusion rates and other constants so that local physics works the same) there would be no difference in the experience of the resulting human.
- You could chop a brain up and rearrange all the pieces, and as long as you reconnect everything in the right way, there will be no difference in the experience of the resulting human.
I also believe it's quite likely that:
- Any approach that starts inside a brain and tries to build a model using introspection (as opposed to, say, starting with neuroscience and building up from there) will fail to find much of anything useful. Things that look like "meditate really hard (and/or take psychedelics), and try to detect properties of your mind" will basically be picking up random (albeit repeatable) noise, unrelated to actual features of the algorithms of human cognition.
- Our current language for describing consciousness (e.g. "internal experience" and "qualia") will either need to wither away, or drift in meaning, to be replaced by more accurate and specific descriptions of algorithms running on neurons. These specific descriptions will not map cleanly onto the current language used to describe consciousness (at least, no more cleanly than molecular biology maps onto the vital force).
To be clear, my position isn't "I don't believe in life," it's "I don't believe in the vital force." Unfortunately, we live in the pre-decoupled world: crossing the metaphor back, the terms for both "life" and "vital force" get interchangeably called "consciousness", so the terminology is a bit awkward.
take my money
If you think there's something necessarily metaphysical about consciousness, and if there's any way to formulate that as a bet (even one that wouldn't resolve for a few millennia), I will take your bet. Seriously! Email me.
If, for one reason or another, your version of consciousness isn't something you can formulate a bet against, I don't think it's a belief.[^4]
does this matter?
Yeah, at least a bit.
It seems like a lot of people who believe in consciousness are pretty convinced that it's a binary property: one which all humans have equally, and which various species of animals either do or do not. People almost always tie moral worth to the presence of this property.
I take issue with just about every part of the above.
- I am fine with the "all humans equal" part. Even if some people are clearly a lot less conscious than others (very young children, severe brain damage, etc.), "treat everyone as having the same amount of moral worth" is a very useful property for running a good society that seeks to do good things (or, at least, societies that don't have this ingrained at the deepest level have tended to do a lot of harm).
- However: given the vast similarities in both brain structure and behaviour between humans and - at the very least - some closely related mammals, it would be exceedingly surprising if there were anything even approximating a hard line.[^5] Without any metaphysical on-off switch - assuming there's just a bunch of cognition algorithms of greater or lesser complexity, running on neurons - you're left assuming most animals are like us, only a bit less so.[^6] This scales smoothly[^7] down from chimps, to cats, to mice, to shrimp, flying past nematodes, beyond bacteria and archaea, and into self-catalytic chemical reactions, and rocks and electrons and whatever.
  Henri Lemoine and some of my other fellow E.A. McGill friends have actually given this a shot, trying to estimate the amount you'd need to donate to animal welfare charities to offset eating various sources of meat[^8] (a sketch of the shape of that calculation follows this list). It's a pretty fascinating exercise; you'd be surprised at how tractable it is to get your uncertainty down to an order of magnitude or two. Least important takeaway from that era of discussion: if you're trying to ethically eat meat, don't eat bees (shockingly complex social behaviour, problem-solving capabilities, and general cognitive ability, paired with an abysmally low caloric density).
- If you believe consciousness is some nigh-magical property, bestowed on some forms of life and not others, then it conveniently takes care of a rather tricky question. If you believe that it's just a handful of entirely comprehensible algorithms, then it's not inconceivable that whatever properties we care about those algorithms having could be present in some near-future AI.
  The reason I assign moral worth to humans (and, to a lesser extent, other evolved minds) is not exclusively because they are conscious[^9], but because their value functions correlate strongly with my expectation of a CEV. It seems likely that minds born from a gradual mesa-alignment failure on inclusive genetic fitness tend to cluster in one corner of mind-space - able to love, to look up at the stars and feel wonder and awe[^10] - while I would expect a mind trained through gradient descent to be something utterly alien, with terrifying values, and experiences that look absolutely nothing like ours - whether or not it is able to feel the "redness of red".
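For the curious, here's a minimal sketch of the shape of that offset calculation, in Python. Every number below - the moral weights, the animal-days per calorie, and the charity effectiveness figure - is a placeholder assumption of mine for illustration, not a value from the actual spreadsheet:

```python
# A minimal sketch of the offset calculation described above. Every number
# here is a placeholder assumption, not a value from the actual spreadsheet.

# Hypothetical moral weight relative to a human (the "like us, only a bit
# less so" scaling from the bullet above).
MORAL_WEIGHT = {"beef": 0.1, "chicken": 0.01, "shrimp": 0.001, "bee": 0.005}

# Hypothetical days of farmed-animal life consumed per kcal eaten. Abysmally
# low caloric density (bees) means many more animal-days per calorie.
DAYS_PER_KCAL = {"beef": 0.0005, "chicken": 0.01, "shrimp": 0.05, "bee": 10.0}

# Hypothetical charity effectiveness: welfare-weighted animal-days improved
# per dollar donated.
DAYS_OFFSET_PER_DOLLAR = 10.0

def offset_cost(source: str, kcal: float) -> float:
    """Dollars to donate to offset eating `kcal` worth of `source`."""
    weighted_days = MORAL_WEIGHT[source] * DAYS_PER_KCAL[source] * kcal
    return weighted_days / DAYS_OFFSET_PER_DOLLAR

for source in MORAL_WEIGHT:
    print(f"{source:>8}: ${offset_cost(source, 500):.4f} to offset 500 kcal")
```

Under this toy model the bee line dominates purely because of the caloric-density term, which is the shape of the "don't eat bees" takeaway above; the absolute dollar figures matter much less than the ratios between sources.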
[^1]: Don't get me wrong, I'm sure it'd be great fun to have a proper debate about this one. The main problem is that I honestly don't understand the mainline opinion. Most explanations of the prevailing position I've encountered feel one step up from just saying "the redness of red", which doesn't really do it for me, and I can't really argue against a position I don't understand.
[^2]: Called a "Biologist".
[^3]: This was hard enough a few thousand years ago (What about very young humans? Very brain-damaged humans? Humans who are asleep? Very scary-looking humans from a culture very different to mine who live on the other side of the river? Humans under general anaesthesia? Chimps? Cats? Corvids?). If you could answer all those questions, I'd imagine you'd be a pretty apt question-answerer, and the arrival of characters who live inside stories, and plays, and choose-your-own-adventure fanfic over the last few hundred years shouldn't have made it much harder. GPT-4 might be a little bit tougher to categorise, but at this point most people are getting pretty good at definitively declaring whether or not things are conscious, so it's not too much of an upset to the game.
[^4]: Sorry. I cannot emphasize this one enough: if your belief isn't something you can bet on - even in some super abstract way - then I don't think it's a belief. If you don't agree with me on this, we probably have very different epistemologies. Note: I don't think this necessarily forms the definition of a belief, but - like disallowing backwards time-travel for theories of physics - I think it's a useful heuristic for checking whether a definition of "belief" is sane.
[^5]: You might be able to recover something approximating a hard cutoff if there was a runaway, drastic change in brain algorithms after the invention of language; though I still think the evidence would favour "humans are more conscious, but others are still somewhat conscious" over "non-humans uniformly round down to zero."
[^6]: Somewhat related: "Campaigns to bear-proof all garbage containers in wild areas have been difficult because, as one biologist put it, 'There is a considerable overlap between the intelligence levels of the smartest bears and the dumbest tourists.'" This quote's been floating around the internet for longer than I've been alive. I've been able to trace it back as far as 1995. If you want to search further, godspeed.
[^7]: You can end up in the same moral place if you assume consciousness is a binary, but you have uncertainty over the probability of various species possessing it. I would not advise this, but it works for some.
[^8]: Some screenshots of the main calculation spreadsheet, if anyone's interested in attempting something similar (or just curious). [Screenshots not reproduced here.]
[^9]: Though that is a prerequisite.
[^10]: Pretty sure this is a quote from someone on LessWrong I'm butchering. Apologies for the plagiarism!