In last week’s newsletter I introduced the strange worldview of Donald Hoffman, a cognitive scientist who believes that reality is radically unlike what we perceive it to be (an argument he made in his book The Case Against Reality). This week we offer the second part of my conversation with Hoffman, in which things get, if anything, stranger. We pick up the conversation where we left off last week: Hoffman had argued, on Darwinian grounds, that reality isn’t what it seems, without yet giving us his theory about what reality is.
DONALD HOFFMAN: I do have a theory, and I can discuss it with you, but I should point out that that theory is separate from the evolutionary conclusion.
The evolutionary conclusion is: we don't see reality as it is. The second step is: okay, now, as scientific theorists, what shall we propose as a new theory of that reality? And someone can buy my first proposal—that we don't see reality as it is—and not buy my proposal about the nature of reality...
ROBERT WRIGHT: And the proposal you have, there's an actual mathematical version of it, I think it has maybe seven variables or something like that. And we won't be able to get into that in any depth at all, but one interesting feature of it is I think you claim it's testable.
Before we get into that, I want to get a little more deeply into the question of, okay, if this is not the real world, what is the real world that this is a kind of reflection of?… And here's where things get weirder, as if things weren't weird enough, at least by my reckoning...
The foundation of the theory: conscious experiences are real
As I understand it, the world is kind of co-created by conscious agents... You tell me—you refer in your theory to conscious agents—does "a conscious agent" correspond to what we would think of as a conscious agent? Like, I'm a conscious agent, you're a conscious agent, so right now we are two conscious agents interacting—is that the correct terminology in your theory?
That's the first step, yes. But there's more to it. I'll unpack it just a little bit. So here's the motivation for the direction I've gone.
The idea is it may be the case that all of my beliefs are false. I may know nothing. And that's a serious possibility. As scientists, we have to acknowledge that possibility.
But if there is anything that I believe that's true, it's that I do have conscious experiences. If my belief that I'm feeling pain, or smelling a rose, or tasting chocolate—if my belief that I'm having experiences is wrong, then I'm pretty much wrong about everything, and we might as well just eat, drink, and be merry, because there's really nothing else that we can do as scientists.
This is a little like what Descartes said. The minimalist assumption is that you are having this experience whether or not the experience is true.
That's right. I don't want to therefore go to the "cogito ergo sum" kind of thing; I don't want to perhaps go where he goes in terms of trying to prove my own existence, but just merely that if I'm wrong that there are experiences, then pretty much there's not anything secure that I can go with.
So I decided to say, okay, let's go with that. If space and time and matter, which are just the format of my perceptual system, are not the right predicates to describe reality, and I can't let go of the idea that I have conscious experiences, let's just start with consciousness.
Let's see if we can get a mathematical theory of consciousness and conscious experiences—but a new kind of theory.
There are many cognitive neuroscientists and philosophers who are trying to get a theory of consciousness. But the typical approach is to say, let's start with brain activity, and say that this kind of brain activity or this pattern of brain activity is consciousness or gives rise to consciousness. So consciousness is secondary to some physical description of the brain, or microtubules in the brain, or thalamo-cortical loops in the brain, or something like that. And I'm saying, let's not start with our interface—so, space and time—let's start with the description of consciousness on its own terms.
So, what do we mean by consciousness not as inhabiting space and time—because we don't know that it inhabits space and time—but in terms of its own properties, on its own terms?
And so the idea is that there are conscious experiences, and there are actions that you can take based on those conscious experiences, where the "you" is used in a more generic sense—not you as a person, but, you know, consciousness itself. And then there is some kind of notion of a choice of the action that you're going to take based on your conscious experiences.
And then those actions have an effect on the objective reality, whatever it is.
And that objective reality in turn affects your conscious experiences.
Those are the key ingredients that I've put together into a very, very simple mathematical formalism.
And [...] you kind of have to stick with those premises if you're going to remain within the framework of natural selection, which is the basis of your whole theory anyway. I mean, there have to be these discrete entities, which we call "organisms"—that's what they look like to us, whatever they may be at the deepest level—these discrete things that act on the world based on their experiences.
And so you're sticking with that at an abstract level, those premises of natural selection, but beyond that lies the weirdness. So, go ahead.
Yeah, I absolutely agree with what you're saying. And it is actually in accord with what Dawkins and Dennett have called Universal Darwinism—the key algorithmic principles underlying Darwinian evolution: replication, retention, and selection.
Variation, at some point.
Sorry, yeah, variation. That's right.
So that's what evolutionary game theory is based on. It's essentially a mathematical representation of Universal Darwinism. But what it does allow you to do is to think about biological evolution without some of the physicalist assumptions that are usually made as well: that organisms are real physical objects in 3D space, and DNA is a physical thing that exists in space and time. We don't need to have those assumptions. And in fact, what evolutionary game theory suggested is that those assumptions are actually false. So we have to rethink Darwinian evolution based on evolutionary game theory itself.
But you're right, the theory of conscious agents does still keep the core of Darwinian evolution in terms of the Universal Darwinism aspect of it. Absolutely.
Okay. And there is this idea of discrete conscious entities, or discrete packages of consciousness, that has to be preserved too, right? Consciousness can't just be this undifferentiated universal mass. There have to be discrete "incarnations"—you know what I mean—your theory calls for there to be more than one conscious agent.
Absolutely. In fact, perhaps an infinite lattice of them—from the very, very simple to the very, very complex. And it turns out that the formalism allows for conscious agents.
There's a formal definition of conscious agents. Anything that satisfies that definition is a conscious agent, according to the theory, and they can go all the way from the simplest ones, which have only one bit of possible experiences and one bit of possible actions, all the way up to [an] infinity of possible [experiences and actions].
The first case is strictly hypothetical—it’s not like you think that there is some organism out there that has only two options in terms of consciousness, right?
Well, my theory forces me to entertain that possibility and to imagine consciousnesses that are very, very, in some sense...
Oh, I see. So, consciousness may go beyond the organic world in your theory.
Panpsychism, idealism, and Don’s “conscious realism”
So what we call a proton, which of course is not anything like what we conceive it to be in your theory, but, whatever it is, it could have consciousness. So this is kind of like what's called panpsychism, which is that everything in the universe is [conscious].
It's not panpsychism, and that's a very important distinction.
Panpsychism, for example, with respect to electrons and protons, is the idea that electrons, in addition to their physical properties like position, momentum and spin and so forth, also have consciousness properties. But I'm denying that there is such a thing in objective reality as an electron with a position. I'm saying that the very framework of space and time and matter and spin is the wrong framework, it's the wrong language to describe reality. So I'm, in some sense, more radical than panpsychism.
Panpsychism is almost a dualism, the way I'm formulating it. There are physical objects like electrons, and in addition to their physical properties, they have these consciousness properties. I'm saying, let's go all the way: it's consciousness and only consciousness all the way down.
But it's not a pure idealism like Berkeley or whoever, right?
Right. For Berkeley, to be is to be perceived. But the conscious agents are perceivers, and they could be without being perceived. They might be inert, but their existence is not dependent on being perceived. ... My view has a higher affinity with idealism than with panpsychism. ...
It's often been put out as “realism versus idealism,” and I think it's just the wrong way to frame the thing. I'm a realist about consciousness. Others are realist about physical reality. I'm not a realist about physical reality—I think that it's part of the user interface of consciousness. But I am very realist about conscious experiences and conscious choices and so forth.
So I call it “conscious realism”: the nature of objective reality is consciousness that can be described mathematically.
Now, for me then to solve the mind-body problem, I have to solve it in a different way than everybody else is trying to solve it, right? Most people, 99% of the researchers trying to solve the mind-body problem are starting with the physical reality—in particular, brain states, brain activity—and they have to show how consciousness might arise from that or be identical to that.
I have to go the other direction. I start with the mathematical theory of consciousness, completely independent of any notion of space, time, or matter. And it's my burden then to say that this mathematical model of consciousness can give rise to all of the modern mathematics of, say, quantum mechanics and general relativity.
So, if I cannot derive from this theory of conscious agents, first, what we already know about quantum field theory and gravity, and then, hopefully, a new theory of quantum gravity itself, then I'm wrong. It's just that simple.
So, there can be conscious agents that don't correspond to any of the physical phenomena we currently describe as part of a scientific framework—don't correspond to protons or anything else—is that right?
It isn't just that science hasn't yet discovered the physical phenomena that would correspond to the conscious agents; you're saying that, in this sense, conscious agents just are more fundamental than what we refer to as the physical world.
A rock as a symbol representing “reality that’s completely unlike a rock”
So, first came the conscious agents, and then came the physical world. But see, this is where I get a little caught up in the chicken and egg problem when I start putting this in terms of natural selection.
I mean, for natural selection to happen, as we conventionally conceive of it, you've got these ... agents, and then you've got this world for them to respond to. And it's true that the world consists partly of each other—bacteria respond partly to each other—but, in the first instance, when the first self-replicating form of organic life came into being, ... the physical world, which we normally conceive of as unconscious, was a given, and then self-replicating things came along and started responding to it. So, in the conventional reckoning, you just have this dichotomy.
Now, I recognize that this dichotomy, in a way, rests on the assumption that what we refer to as the inorganic world is not conscious, and you would take issue with that assumption. But still, I can't be alone in having trouble wrapping my mind around this whole thing and vaguely perceiving a kind of chicken and egg problem, right? Even you must have this problem a little, right?
Well, yes, because, like everybody else, naturally I'm a physicalist and a naive realist. Whenever I'm not thinking carefully, I always believe ... that I'm seeing reality as it is—that's just ... the way we've evolved. It's very, very difficult to step outside of that, even when evolution itself forces me to take the contrary idea seriously.
But the idea is that, when I interact with something that to me looks inanimate—like a rock, it doesn't look living to me—I'm not saying that the rock itself is conscious, but that the rock is a symbol that I'm using to represent a reality that is completely unlike a rock, a reality that is not in space and time, and a reality that is in itself a potentially infinitely complex interaction of conscious agents.
Wait, reality is itself an interaction of conscious agents—including you, or not necessarily including you?
Well, I'm certainly interacting with it, so yes.
But, before you started interacting with it... I mean, you would say, I guess, if it exists in itself, then it itself has to be an interaction of conscious agents?
Exactly right. That's the ontology I'm proposing here, that it's conscious agents all the way down. Even if the conscious agent that I call me didn't exist, there would be other conscious agents. So that's what I'm interacting with.
Now, because I'm a finite conscious agent, I have finite representational capabilities. Presumably, though, this is an infinitely complicated universe of conscious agents. That means that almost all of it is beyond my representational capacity, my ability to represent it...
So, when I interact with you, by looking at your expressions and your body language and so forth, I can get some idea about your conscious state, the kinds of experiences you're having, your mood and so forth—I get some insight into you as a consciousness. When I interact with my cat, maybe a little bit less. When I interact with a rat, even less. With an ant, even much less. And when I get to the point of a rock, I've essentially given up, in terms of understanding the nature of the consciousnesses.
But that is a limit of my representational system. And here's the problem that we always have: as humans, we tend to conflate the limits of our representational system with an insight into the nature of reality. We mistake our limits for an insight into the nature of reality. It's like saying that the icons on my desktop only have two-dimensional positions and pixels—therefore, reality can only have two dimensions and pixels and nothing else. Well, no, that's just a limit of my desktop. If I could just think outside of the box, I would realize that most of the categorizations I make in the world—like between animate and inanimate—are artifacts of the limits of my perceptual system. They're artifacts of the fact that I have to give up at some point in what I can represent of the nature of reality.
So when I see something that I call a rock, and then conclude that, of course, nothing conscious is going on here, what I've done is mistaken a deficit of my representational abilities for an insight into the nature of reality. ...
Look, my perceptions are the humble abilities of a particular species, and they're not very sophisticated. And I just need to understand: I have a very, very limited species-specific set of representations that was just designed to get me through the day, not to give me an insight into reality. When I see a rock, all bets are off about the objective reality that I'm really interacting with.
And so my proposal is not that the rock itself is conscious—that's silly—but that I'm interacting with a complex world of conscious agents, and the best that I can come up with—because I'm a poor, weakly endowed conscious agent myself—is this stupid little symbol that I call a rock. But then, because I'm so unimaginative, I assume that the rock in space and time is the final reality, and there's a big distinction between animate and inanimate.
Is our mathematical reasoning more accurate than our perceptions?
I mean, to think about it, it's amazing that, given the fact that our brains were really just designed by natural selection to get us to the point where we can survive in a hunter-gatherer society and navigate the social landscape and everything else—it's just kind of stunning that science has gotten as far as it has gotten. ... However misleading you think its representation of reality is, its predictive capabilities are reasonably impressive.
It's amazing it's gotten as far as it is, and, by the same token, it would be kind of shocking if we could see the whole picture, given what natural selection actually designed the brains to do... Which is the heart of your theory, in a certain sense.
Now, we have to take our cognitive capacities one at a time to see what evolution does with them, right? My simulations—and those of my graduate students; I should mention Justin Mark, Brian Marion, and Chetan Prakash, who've been doing this work with me—our work indicates that the probability that our perceptions have been shaped to reflect reality as it is, is zero. It's, strictly speaking, zero.
But there are other cognitive capacities. For example, my ability to reason mathematically.
It turns out that there are circumstances where I do need to reason mathematically about fitness—not about reality as it is, but about fitness. The fitness consequences of eating … two apples might be roughly twice as good as just one apple.
Or, in a hunter-gatherer society where we're cooperating and having to share, I might need to be able to detect a cheater—you didn't contribute as much as you could have contributed—I'll need to be able to quantify how much you contributed, how much I contributed, and then punish you if you're cheating.
So in these very limited domains, there could be selection pressures not to give us, like, deep insights into mathematics, but just enough mathematics to survive.
But that's the interesting thing. Whereas natural selection uniformly gets rid of veridical perceptions, it does not uniformly get rid of veridical math. It gives you little pockets of potential veridical math where you need it for just little things like checking for cheaters and counting apples because of the fitness consequences.
You're suggesting, then, that ... some of our logical intuitions bear a closer correspondence to a kind of ... real logic out there ... than our perception of the physical world bears to a real physical world.
Exactly right. So I'm not saying that evolution has endowed all of us to be mathematical geniuses that see all the truths of mathematics, not by a long shot. Every once in a while you have a brilliant mathematician who comes along. But most of us have just a modest endowment that was selected because we needed just a modest amount of mathematics and logic to reason about the fitness consequences of our actions.
So that's the interesting thing. I found it beautiful, because, if the evolutionary simulations had shown that natural selection systematically eradicates true logic and math, just like it eradicates true perceptions, then I would be in a position of shooting myself in the foot. That would be a self-refuting use of evolutionary theory. So I found it quite interesting that the theory of evolution does uniformly shoot down true perception, but it allows pockets, in which limited amounts of logic, limited amounts of mathematics can be used for limited species-specific needs.
And here's the beauty of mutation: every once in a while, you get a von Neumann who comes along and has just the right collection of mutations such that all of a sudden the mathematical prowess is spectacular. Again, it's finite, but it's more than most of us. ...
“The truth-seeking organism can never win”
When you say that ... in your simulations, veridical perceptions are completely extinguished through evolution ... it makes me think that I misunderstood a little what your simulations are up to. ...
I mean, when there's a discrepancy between veridical perception and fitness-conducting perception, we know which one will win, but it seems to me you'd have to assume that there's a discrepancy between those two. You don't know for sure that there will be a discrepancy, right?
Well, we're about to publish a theorem. We have the simulations that I've published before, but we've now proven a theorem, and the theorem basically says: for generically chosen worlds—and, if you want, we can get into the mathematics...
No, I think we'll pass, thanks.
Yeah, we'll pass, yeah.
So, for generically chosen worlds and generically chosen fitness functions in those worlds, an organism that sees reality as it is in that world can never be more fit than an equally complex organism that sees none of the reality in that world and is just tuned to the fitness function. And as the complexity of the world increases and as the complexity of the organisms increases, the chance that, even by accident, the truth-seeking organism can ever win in the evolutionary competition goes to zero.
Because the perception of reality is costly...
It's costly and it's irrelevant. …
Strictly speaking, yeah. Although … if it turns out that veridical and utility-maximizing perceptions correspond, then it's not irrelevant.
You're right. You're absolutely right about that. But that's why I was talking about ... generically chosen worlds and generically chosen fitness functions. ... What you said is strictly true, but the probability of that happening is zero.
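[Hoffman's claim—that an organism tuned only to fitness generically out-competes one tuned to the truth—can be illustrated with a toy foraging game. This is a sketch in the spirit of his interface experiments, not his actual simulations: the Gaussian payoff curve, the 50/20 parameters, and the two-patch setup are all illustrative assumptions.]

```python
import math
import random

# Gaussian-style fitness: an intermediate amount of resource (e.g. water)
# is best; too little and you die of thirst, too much and you drown.
# The 50 (peak) and 20 (width) are arbitrary illustrative choices.
def payoff(r: float) -> float:
    return math.exp(-((r - 50.0) ** 2) / (2 * 20.0 ** 2))

def forage(choose, rounds: int = 50_000) -> float:
    """Average payoff when a strategy picks one of two random patches."""
    total = 0.0
    for _ in range(rounds):
        r1, r2 = random.uniform(0, 100), random.uniform(0, 100)
        total += payoff(choose(r1, r2))
    return total / rounds

# "Truth" strategy: perceives the real quantity and takes more of it.
truth = forage(lambda r1, r2: max(r1, r2))
# "Fitness" strategy: perceives only the payoff and takes the fitter patch.
fit = forage(lambda r1, r2: r1 if payoff(r1) > payoff(r2) else r2)
# fit lands well above truth: tracking payoff beats tracking quantity.
```

[The truth-tracker loses here exactly because the fitness function is non-monotonic in the underlying quantity—which, per the exchange above, is the generic case.]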
That seems more like an assumption than a consequence of a simulation, but maybe I'm wrong. ...
I'll give you a concrete idea about why it's more than just an assumption, that it's more of a mathematical property.
Suppose that I have just the real line. Just a line. So I have an order: 0 < 1 < 2 < … < 100, right? Maybe that represents resources. Zero means no resource, no water. A hundred means a lot of water, and so forth.
And now I ask [about] possible fitness functions. Maybe fitness function [is] that more water is more fit. ... So that fitness function is monotonic with the amount of water. That's one kind of fitness function.
Another kind of fitness function might be Gaussian, where an intermediate value of water, like 50, is really fit, but a hundred is too much—you might drown—and zero is too little, you might die of thirst. And so you could have that kind of fitness function.
And … you can look at the collection of all possible fitness functions, and then ask: which, out of all of them ... happen to be monotonic with the truth, with the reality, such that if you were tuned to fitness, you would also happen to be tuned to reality?
This is a clean mathematical question, and it turns out those have probability zero.
So if you're tuned to fitness, yes, there is a probability-zero chance that you will be tuned to reality. But it is probability zero. Probability one, you will not be tuned to reality.
That's the sense in which I say the probability is zero that natural selection will favor veridical perception.
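[The probability-zero point can be sketched numerically. One simplifying assumption here: a "fitness function" is modeled as a random payoff assigned independently to each ordered resource level, and we ask how often it happens to be monotonic—that is, to preserve the true order.]

```python
import random

def fraction_monotonic(n_levels: int, trials: int = 50_000) -> float:
    """Estimate how often a randomly drawn fitness function on n_levels
    ordered resource values is monotonically increasing, i.e. how often
    being tuned to fitness would also mean being tuned to reality."""
    hits = 0
    for _ in range(trials):
        # A "fitness function" is just a random payoff per resource level.
        fitness = [random.random() for _ in range(n_levels)]
        hits += all(a < b for a, b in zip(fitness, fitness[1:]))
    return hits / trials
```

[The exact chance is 1/n!: one half at two levels, under one percent at five, and it collapses toward zero as the world gets richer—the discrete shadow of the measure-zero claim.]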
Okay. I think I'm going to have to review this part of the conversation later and try to grok it, but, in the time remaining, let's do a couple of things. ...
So you have an actual mathematical formula that reflects the relationship between consciousness and the world, I guess. Why don't you see if you can just put it into words—and that will be an imperfect rendering, I'm sure—and then I'd like you to discuss the sense in which you say it's testable.
But for starters, why don't you just try to, without hope of complete success, give us at least the flavor of the mathematical theory, what it is trying to do. ...
So, the mathematical theory.
I'll just say first, it's in the spirit of … a Turing machine for computation. It's a very, very simple formalism, ... five or six little mathematical components that are apparently universal. Anything that can be computed can be computed with a simple Turing machine.
And the idea is to come up with a simple formalism as well for consciousness. The components of this formalism are:
- Each conscious agent has a set of experiences that are possible for it. It could be a small set, or it could be up to countable infinity of discrete conscious experiences it could have.
- Based on those experiences, it can make decisions about how it wants to act. It has a collection of actions that it can take.
- And then, based on the action it chooses, it then can act on the world, where it doesn't know what the world is.
All the agent knows is it can choose an action; it does that action, but it doesn't know what the world is. And then the world feeds back, and changes the perceptions that the agent has.
And the only other part of the structure then is a discrete counter. Every time the agent gets a new percept, a new experience, this counter increments. ...
So, to spell it out, there's a space that I call X of experiences; a space G of actions that the agent can take; a mapping D, which is the decision: given my experience, what action do I want to choose? There's a mapping A that says, given the action, I'm going to now act on the world—so A is the action on the world. And then there's a map P from the world, which is the perception map, where the world then changes my experiences. And then there's a counter T that is just a discrete integer counter that increments every time I have an experience. That's the entire formalism.
There's some mathematical details about the nature of these maps—technically, they're Markovian kernels—and the spaces are measurable spaces.
And having said that, now I actually told you the entire mathematical structure.
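[The perceive-decide-act loop Hoffman describes can be rendered as a toy program. The one-bit spaces, the 0.9 and 0.8 probabilities, and the pass-through world below are illustrative assumptions; the actual theory uses Markovian kernels on measurable spaces, not these deterministic toy functions.]

```python
import random

# A toy "one-bit" conscious agent: the experience space X and action
# space G are both {0, 1}, and the kernels D (decision), A (action),
# and P (perception) are collapsed to simple probabilistic functions.

def D(x: int) -> int:
    """Decision kernel: usually act on the current experience."""
    return x if random.random() < 0.9 else 1 - x

def A(g: int) -> int:
    """Action kernel: the chosen action sets a hidden world state."""
    return g

def P(w: int) -> int:
    """Perception kernel: the world feeds back a noisy experience."""
    return w if random.random() < 0.8 else 1 - w

def run(steps: int = 10) -> list[int]:
    """Perceive-decide-act loop; each iteration ticks the counter T."""
    x, history = 0, []
    for _t in range(steps):
        g = D(x)   # choose an action given the current experience
        w = A(g)   # act on the world (whatever the world is)
        x = P(w)   # the world changes the agent's experience
        history.append(x)
    return history
```

[The agent never sees `w` directly—only the experiences `P` hands back—which is the point of the interface picture.]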
Testability of the theory
Right. And it, kind of, roughly speaking, seems comprehensible and makes sense, with a possible exception of the part where you realize that the world is itself conscious agents... whereas normally we think ... of our consciousness in here, [and] there's the world out there.
There's a kind of recursive quality to the theory, which is not shocking when you think about it. I mean, it's not shocking that the correct theory of consciousness might be incomprehensible ... because the problem is deep.
But it's true that that's ... a stumbling block to ready intuitive comprehension of your theory, right? It's this kind of self-referential feature, that the world is not just this world out there, it is itself conscious agents—although again, conscious agents needn't be organic beings.
No, no. They're anything that satisfies the mathematical definition of a conscious agent. And the idea is to think of objective reality as a big social network.
That's the idea. It's a bunch of interacting conscious agents, and the dynamics is much like the dynamics of social networks; and the evolution of consciousness will be very much like the evolution of social networks in social network theory.
And what the idea is going to be [is] that by looking at the evolution of social interactions among the conscious agents, and then projecting that evolution back into the space-time interface, say, of Homo sapiens, I should get back Darwinian evolution. That's going to be a constraint on my theory of the evolution of consciousness in this big social network. So it's going to be very, very testable.
If I have a theory of conscious evolution, which is going to be a graph-theoretic theory, very much like we use graph-theoretic analyses of interactions on the internet ... to understand why some websites get a lot of hits and others less; who's got more influence, who's got less—all the work that's been done in social networks is going to apply here—it will be an evolution of conscious agents, and that evolution, when projected back into a particular space-time interface of Homo sapiens, had better give us back the Darwinian evolution we know and love, or the theory of conscious evolution is wrong. So that's going to be one way that we're going to have testable predictions. ...
So wait, does the testability lie in the finer articulation of the formalized theory, in a way that succeeds in corresponding to the dynamics of natural selection or something?
That will be one of the many ways that will be tested. But then I definitely will want to show how the theory of evolution of consciousness in this framework of conscious agents gives detailed understanding of how, why we see Darwinian evolution, when we look at it through the lens of space and time.
But that's not the only thing. That's not the only constraint.
I will need to be able to make predictions about the dynamics of space-time, for example, at the Planck scale. If I can't do that—if I can't actually make new predictions about quantum gravity from this, and understand how interactions of conscious agents can create interfaces of space-time, and make new predictions about the dynamics of space-time at the Planck scale—then once again, I'm wrong.
Does a tree make a sound?
And there are other predictions that come out of this that have already been tested, that come out of just the evolutionary game stuff. I mean, if I'm correct, and space and time and physical objects are just the wrong language to describe reality as it is, there's a clean prediction that follows from that. ...
And this prediction, by the way, has caused lots of comments on the Internet. People just don't understand this. So I'll try to explain it.
A clean prediction [is] that a physical object like an electron has no position and no momentum and no spin at all when it's not observed. And that's a clean prediction of what I'm saying. If I'm wrong about that, then everything I'm saying is wrong.
Well, something like that had been suggested by people who all along had responded to one of the mysteries of quantum physics by saying that it is the observation of the system that forces the wave function ... to collapse into definite form, right? So this is a little like that, although maybe not exactly like it.
It's a bit like that, although some quantum physicists have tried to keep a physical reality. For example, Bohm, in his approach, tried to keep a genuine position and velocity for a particle even when it was not observed.
Quantum physicists have not been completely in agreement about how to interpret the quantum formalism [in] this regard, but the evolutionary games forced me to take a strong position. If space and time and physical objects, position, momentum, are the wrong language to describe reality, then I'm forced to interpret the quantum formalism and my own theory to say an electron cannot have a position or a momentum when it's not observed. If it does, then there's no reason to listen to me any further; I'm completely wrong.
And the objection that people have to this is they say, well, how in the world can that be a testable statement? If you're not looking, if you're not observing, then how can you know what's going on?
There's two ways to think about this. First, take the opposite statement that everybody believes, that the electron does have a position when it's not observed. Or that a rock has a position when it's not observed. Everybody thinks, of course, that's a genuine scientific statement, and it's probably true.
But we can't verify it.
Yeah. The question would be: if it's scientific, then it has to be falsifiable, and how would you falsify it? Well, you'd have to show that it's not the case. That the electron doesn't have a position when it's not observed. In other words, my hypothesis needs to be part of that whole package.
Yeah, but if you set up a video camera to record the rock when you're not there and you'd come back, what would you say about that evidence? Would you say that the video camera was itself conscious, or would you say that there's some kind of retrospective effect of consciousness, or what?
Well, you don't know anything about the rock until you yourself view the video.
The assumption that the video exists when you're not looking is part of that whole assumption that you're making there. So it's a far more clever experiment that has to be done to test this hypothesis. Just taking a video and then looking at the video later is not going to do it because the question still arises: Did the camera itself exist when you weren't looking? Every time I look, I see a camera and I see a video, but that doesn't mean that a camera and a video exist when I don't look. Right? That doesn't logically follow. ...
By the way, very, very bright people, like Pauli and others, thought that to ask whether an electron has a position when it's not observed was not a scientific question, that it was like asking how many angels dance on the head of a pin.
But ... John Bell, in 1964, showed that this is not an "angels dancing on the head of a pin" kind of question. It's an empirical question. He came up with very interesting mathematical results, now known as Bell's inequalities, that give you a precise way to test the hypothesis that an electron does not have a dynamical property like spin when it's not observed.
It's one of the deepest successes of the human mind, Bell's theorem. And in the years since Bell published that, there have been over a dozen very, very careful tests, and in every case, the experiments come out compatible with the interpretation that an electron has no dynamical properties when it's not observed.
Now, no experiment ever forces a theoretical interpretation. That's just part of elementary philosophy of science. No data ever force a scientific theory to be true; there's always an infinite number of theories that are compatible with any data. So I'm being very, very careful when I say that [in] all of these experiments, the results are always compatible with the interpretation that an electron does not have a position or a spin when it's not observed.
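The inequalities discussed here are usually tested in the CHSH form, a standard textbook formulation rather than anything specific to this conversation. If E(a, b) denotes the measured correlation between spin outcomes at detector settings a and b, then any theory in which the spins are definite before observation (a local hidden-variable theory) must satisfy:

```latex
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
  \qquad |S| \le 2 .
\]
```

Quantum mechanics predicts correlations reaching \(|S| = 2\sqrt{2} \approx 2.83\), and the experiments mentioned above consistently observe this violation, which is what rules out the local hidden-variable picture.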
And in fact, probably a more common interpretation of some of this stuff is that what forces the electron to assume a definite location ... is interaction with a macroscopic, physical measuring device, not observation by what we think of as a conscious being per se. That's an alternative interpretation, right?
That's been one of the interpretations that quantum physicists have tried to give. In particular, Bohr with the Copenhagen interpretation tried to do that. I'm taking a very, very different tack.
I'm saying that electrons—and any physical objects that have position and momentum and so forth—are just a human species-specific representation of a reality that's utterly different.
So it's no surprise—since electrons are nothing but our symbols—it's no surprise that those symbols don't exist when we don't token them, when we don't observe. They only exist when we observe because they are the symbols that represent our observations.
Okay. I think you've given us plenty to ponder, and I myself, I'm sure, should listen to this conversation several times and ponder it before interrogating you further. But final, final question.
Personal implications of the theory
Has this view of the world affected the way you live in any way?
It has, in a couple of ways.
First, it's very painful. Because I'm, like everybody else, a naive realist at heart.
I mean, my default mode is that 3D space and objects in space-time are the reality. And it's when I put on my thinking cap that I have to recognize otherwise.
But, frankly, when I go outside and look, it looks like the Sun is going around the Earth. Every time I step outside, it looks that way. And I know otherwise. And so, it's painful in the same way that it was painful for us to give up a geocentric universe. I mean, people died and were burned at the stake for that. It's sort of painful psychologically for me to give this up. And slowly my perceptions of the world are changing, but...
It's interesting: evolution shaped us with symbols that we have to take very, very seriously, right? If we don't take them seriously, we'll die. And we were given the symbols that we have, because they're what we need to survive. So we have to take them very, very seriously. And for some reason, we have a penchant for taking them literally. I see a snake. That symbol means there are things I should not do.
Well, it's more efficient to take them literally. Just think of the time you've wasted not taking them literally. From evolution's point of view, you're better off not wasting the time.
Yeah, why waste the time wondering about it? Just get out of the way of the snake, don't step off the cliff, step out of the way of the train.
But, strictly speaking, taking them seriously does not mean we have to take them literally. And that dissociation is emotionally wrenching. So that's affected me that way.
But I guess the other effect is it's made life far more interesting. There's a lot to explore, a lot I don't know. And things that I thought I knew, I've had to give up. And so it makes life far more interesting for me as well.
Illustrations by Nikita Petrov.