First posted 21 January 2005. Questions of consciousness. Questions of the role of subjectivity. Questions of time. Questions of a Platonic reality. All still central to what keeps my curious mind busy... Penrose was the adviser of another of my scientific heroes: Julian Barbour.
Roger Penrose, the Oxford physicist, is not convinced: quantum theory, he believes, is incomplete. In The Road to Reality he argues that a further revolution is required in quantum mechanics, as indicated by its inability to address the reduction process for the wave function (and thereby its inability to 'join up' with classical physics) as well as troubling incompatibilities with general relativity. The time asymmetry associated with the wave function reduction (or collapse) upon measurement of a quantum system contrasts sharply with the symmetry associated with the propagation of the wave function itself. The latter can be made sense of moving either backwards or forwards in time; the former works only moving forward. A more familiar time asymmetry, the one we experience every minute of every day, is grounded in the extraordinary nature of the Big Bang itself: its strikingly low entropy. The Big Bang was so ordered that the ever-decreasing order of the universe is a probabilistic near-certainty. This is what lies behind the second law of thermodynamics and the 'arrow of time'. It points to the peculiar behaviour of gravity at cosmological singularities, not only the Big Bang but (less spectacularly) black holes. The presence of this time asymmetry in both the reduction of the wave function and in the Big Bang suggests that gravity might play an important role in wave function reduction. Discovering this role would amount to a revolution that could well resolve the 'measurement paradox' and render quantum mechanics consistent with general relativity and contiguous with classical physics. According to this idea, it is the gravitational effects of the classical measuring apparatus (and other macroscopic entities in our everyday world), rather than the perceptions of any observer, that bring about the collapse of the wave function. As such, the reduction is an objective rather than a subjective one. This takes the conscious observer out of the limelight of quantum theory.
How does this happen? As the wave function propagates through time, non-uniformities develop in the distribution of energy and matter among its superposed states, and at some point become gravitationally significant. The gravitational interaction with the measuring apparatus (or other macroscopic entity) then brings a collapse into a measurable single state. Although Penrose takes consciousness out of quantum reduction, in The Emperor's New Mind he puts quantum reduction centre stage in consciousness, thereby turning the world (as seen by conventional quantum theory) on its head. These same quantum gravitational effects account for the difference between consciousness and artificial (computer) 'intelligence', and Penrose calls upon them in his rejection of the computational theory of mind. There are things, including non-algorithmic, non-computable ones, that the human mind can comprehend while no computer (Turing machine) possibly could. This is in keeping with Gödel's theorem, which states that no formal mathematical system (or at least none of the richness required to handle even common arithmetic) can be complete. There must always be truths that cannot be expressed without recourse to 'metamathematical' language that is not part of the formal system. Penrose suggests that our access to such truths is due to quantum fluctuations, gravitationally induced, within the brain (perhaps, he suggests, in the microtubules of the neurons' cytoskeletons). Multiple states may exist in superposition in our brains until gravity triggers a collapse to a specific state, with resulting (possibly nonlocal) effects on our neural states. This is something that is not possible (at least for now) with computers. There is a deep connection among the time asymmetry of the wave function reduction, the behaviour of gravity at singularities and the presence of non-algorithmic (non-computable) elements, including consciousness, in the world.
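Penrose attaches a rough timescale to this gravitationally induced collapse: a superposition decays in roughly tau = hbar / E_G, where E_G is the gravitational self-energy of the difference between the superposed mass distributions. A back-of-envelope sketch (the masses and sizes below are illustrative numbers of my choosing, and E_G is crudely approximated as G m^2 / r):

```python
# Rough sketch of Penrose's objective-reduction timescale, tau ~ hbar / E_G.
# E_G is approximated here as the gravitational self-energy G*m^2/r of the
# difference between two superposed mass distributions; m and r are
# illustrative values only.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s

def collapse_time(mass_kg: float, size_m: float) -> float:
    """Estimated collapse time tau = hbar / (G m^2 / r)."""
    e_grav = G * mass_kg**2 / size_m
    return HBAR / e_grav

# A lone proton barely self-gravitates, so its superposition lasts aeons;
# a microgram speck collapses almost instantly on this estimate.
tau_proton = collapse_time(1.67e-27, 1e-15)
tau_speck = collapse_time(1e-9, 1e-6)
print(f"proton: {tau_proton:.3g} s, speck: {tau_speck:.3g} s")
```

The point of the arithmetic is only the enormous span between the two numbers: quantum behaviour survives at small scales, while anything macroscopic (a measuring apparatus, say) forces a collapse essentially immediately.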
This helps to explain the relationship among Penrose's "Three Mysteries":
There is also an "Escher element" to the relationships among the three mysteries. Escher was an artist (and obviously a mathematician) whose works included paradoxical staircases and streams that seemed always to lead in one direction (up or down) yet returned to their own source. In Penrose's three-world / three-mystery model, a small portion of the mental world is all that is needed to capture the mathematical one (since we obviously spend lots of time considering other things). Similarly, a small portion of the mathematical world is applied to the collected (total) formalism of physics, with much else being dedicated to other questions. And finally, only a small portion of the physical world (that part that makes up our cells) is drawn on to explain the mental one. Each part is able to 'swallow' its neighbour in an illogical, unending cycle. Penrose believes that the secret to this mystery of the mysteries is that all these worlds are in fact one. Perhaps in a holographic, holistic, nonlocal sense like that evoked by David Bohm, another of my creative scientific heroes?

First posted 17 Feb 2004. This film covered a lot of ground. It may have lacked rigour. It may have made insinuations that outreached any fact base. But you can't deny it prompted questions and presented interesting material in relatively digestible form. My own views have shifted significantly from those I held at the time of this review. Some are now closer to the film's, some still not.
Having seen 'I Heart Huckabees' on Sunday, I saw a preview screening of 'What the Bleep Do We Know' this evening; quite a philosophical week. 'Bleep' will probably launch properly in London in March or April. Well, Bleep certainly covers a lot of ground. I am on board with the need for a paradigm shift, from one dominated by the residue of our long-standing and recently ended infatuation with major western religions to something that retains the connection with the numinous while using what modern science has to offer. I have to say that the paradigm I envisage differs in a fundamental way from that suggested by the film, but it also shares much ground. Irrespective of whether I or anyone else actually subscribes to the film's specific direction, it is a must-see, simply because it is so thought-provoking. Particular items that caught my attention or stirred a reaction were:
Quantum mechanics (QM)

The film starts with a strong version of the Copenhagen Interpretation of QM, which says that subatomic particles cannot be said to exist independent of observation. Unobserved, 'they' exist only as potentials, the probabilistic evolution of which is well defined by a mathematical construct called a wave function. In this wave form, the 'particle' exists as a weighted superposition of all its possible selves (with different positions and momenta for each potential self). Only upon measurement by an observer does the wave function 'collapse' to a unique particle with definite characteristics (not all of which can be known to arbitrary accuracy at the same time). This interpretation obviously gives a special role to the 'observer' in nature. This is combined with a specific view of the self, one in which Mind stands outside the laws of material nature and in a position of primacy relative to the material world: literally Mind over Matter. I guess you could say the film was espousing an Idealist as opposed to a Realist (read materialist) view of the world: thoughts, ideas, intentions and emotions are the primary building blocks of the world, not atoms, molecules and cells. The film intertwines these two propositions and draws the conclusion that we each create reality every day. Further, by adopting more positive attitudes and engaging in more positive thought patterns, we can impact the material world around us to make our world a better place. As you'll know if you've read my articles on QM (If you think you understand this, then you don't; Quantum Determinacy; Problems with Quantum Orthodoxy; and Revisiting the Quantum: information please) and the Self (Who Am I?; Destiny; Subjective Objects), I disagree fundamentally with each of the two propositions above. I am a causal realist at heart, believing that there is an objective material world that exists independent of us and that subsumes us.
And although I think that the Mind is awe-inspiring, I think that it is wholly resident in and reliant on the body. So without going into any refutations of the film's positions (because I've discussed that in the articles I've mentioned), I'll just say that the film's position on those dimensions does not resonate with me. I don't view either of them as absurd in their own right. However, I do think that the leap to the overall conclusion about our ability to literally impact matter and space with our minds is a bit far. QM's interpretation is still a mystery, with many holding views close to the interpretation cited in the film and some holding views closer to mine. Consciousness is also a puzzle, with clear-thinking people on each side of the Idealist-Realist debate. However, just because QM and consciousness are both unexplained doesn't mean that they are related to one another in any way, let alone by a causal tie as blunt and direct as the one the film proposes.

The Arrow of Time

One of the contributors pointed out the peculiar asymmetry of time. Most (I don't know whether we can say 'all') of the mathematical formulae that so accurately describe the world around us are indifferent to the direction of time, working just as well backwards as forwards. Yet we can only experience time in one direction. We can (if we can trust our memories) have knowledge of the past but not of the future. We are troubled by the thought of not living into the unending future, but we have no problem with the fact that we were not alive for the many thousands of years before our birth. Some (but not this contributor) have suggested that time's arrow is tied to the second law of thermodynamics, which says that in a closed system, entropy increases over time. If entropy MUST increase as time moves forward, then perhaps this irreversibility drives the same irreversibility in time. But upon closer inspection, entropy's increase is not absolutely necessary: it is only probabilistic.
It just so happens that the universe began in a relatively ordered state. Since there are vastly more disordered states than there are ordered ones, entropy's march is staggeringly probable, NEARLY assured. Yet that is not the same as being necessary, absolute. So... if we are to tie time to entropy, we would also have to accept that time's direction is not irreversible in theory, but is only practically guaranteed by the high probabilities discussed above.

The Brain and learning, habituation and addiction

Several contributors discussed the role of neural pathways or networks in our behaviour. We reinforce the formation of certain sets of connections through our habits. The reinforced sets 'wire' themselves to respond to the frequent call for their combined performance. Other possible combinations, if not called upon, do not wire themselves up. We can, through conscious habituation, rewire some of these networks (e.g. the ones associated with more positive outlooks, more pleasant moods, more confident postures and more successful behaviours). And this electrical component is accompanied by a chemical one, with parts of the brain creating (or causing to be created) different chemicals for different needs. Just as we can become addicted to external drugs, we can become addicted to some of these internal, homemade concoctions. We then engage in the behaviours and nurture the states of mind that give us our fix. The important point is that a bit of us can stand outside the fray, perhaps up on the mental balcony, observing and intervening to break the vicious cycle. But we have to recognise and support that bit, exercise it and have confidence in it. I don't know the science well enough to comment on the accuracy of this 'folk' version of it, but it doesn't seem outlandish; in fact, it jibes quite well with the rough understanding I have from some previous reading.
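The probabilistic point made under 'The Arrow of Time' above can be checked just by counting. In a toy 'gas' of 100 particles, each of which may sit in the left or right half of a box, the number of microstates realising a given left-count k is the binomial coefficient C(100, k), and the disordered middle utterly dwarfs the ordered extremes. A quick sketch (the toy model is mine, not the film's):

```python
from math import comb

# Toy model: n particles, each either in the left or right half of a box.
# A 'macrostate' is just the count k of particles on the left; the number
# of microstates realising it is the binomial coefficient C(n, k).
n = 100
ordered = comb(n, 0)        # all particles on one side: exactly 1 microstate
balanced = comb(n, n // 2)  # evenly mixed: ~1e29 microstates

total = 2 ** n  # all microstates, each equally likely
print(f"P(all on the left) = {ordered / total:.3e}")
print(f"P(50/50 split)     = {balanced / total:.3e}")
```

Entropy's increase is 'only' a matter of these odds: nothing forbids all the particles drifting to one side, but with a single microstate out of 2^100, you will wait far longer than the age of the universe to see it.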
I'll see it when I believe it

One scene is built around the story that the natives in the Caribbean did not see Columbus's ships as they sailed in, because they had no visual or mental construct for a ship. The more general point is that we cannot see or accept things that do not already exist in our mental model or paradigm. To be honest, I don't buy the foundational story at all. I can accept that the natives would not know that the ships WERE ships. I can understand that they would be confused as to what these dark patches on the horizon were, confused by the shapes they became as they drew closer. But I cannot believe that they literally did not SEE them. Like everything else in the film, though, it is thought-provoking. It calls to mind a vague picture I have of how we deal with sensory input, and with anomalies in particular. We are bombarded with sensory input, far more than we can process, in every waking moment. Our brains are partially hardwired through evolution (i.e. natural selection) to help discern the useful info from the 'white noise', and our particular experiences further shape the more plastic aspects of that filter. From our earliest days, we begin to assemble our working model of the world. What matters? What does not? What framework allows us to maintain internal consistency across the broadest range of our experience, to make sense of the world? When new input arrives that is labelled as irrelevant, we do not attend to it (unless perhaps we pay the price for ignoring it and our brains pick up on that fact and adjust the framework). When new input fits the paradigm and is labelled as important, we attend to it. But what happens if new information is so far outside our accumulated experience, and reasonable extrapolation from it, that we can make no sense of it at all?
We tuck it away into a certain bit of the brain where it sits in a cache; at night, while we sleep and dream, among the routine brain maintenance that takes place is a reassessment of the framework (or paradigm) in the light of any new anomalous information. What is the smallest adjustment that can be made to the overall model in order to accommodate, to make sense of, this new input? Do we need to scrap the whole model and start anew (when rocks begin to talk, or we find out that we are just carnival entertainment for some other, alien and invisible race!)? If the accommodation necessary is too large, we may well end up just disregarding the anomaly (the Red Sox didn't REALLY win the World Series) and continuing with the paradigm intact. So you can see that I identify more than a grain of truth in the film's underlying point here.

Junky cells

Back to the addiction theme: another segment looked at it from the somatic cell perspective. Every cell has loads of receptors for receiving information from its environment, including the chemical concoctions discussed above. If the receptors are incessantly bombarded by some protein 'hit', they shrink and become less responsive to it, meaning it takes more of it to give the same 'fix'. Cells can then become so engrossed in getting their next 'high' that they neglect other important functions, like communication with neighbouring cells and even elimination of their own waste products. Keep in mind that I'm talking about 'internal', not 'external', drugs here (although I wouldn't be surprised if the story were much the same for external ones). You can tell by my overuse of analogy that I'm not up to speed with the proper science here, so I can't judge the accuracy of the scientific claims. It does, though, appeal to common sense. (Yeah, yeah, I know, common sense often leads us astray when we venture away from the normal life scales and conditions in which it developed.)
Conscious Cells

One contributor spoke (a bit too loosely, I think) about cells being not only alive (an assertion with which I wholly agree) but also conscious. She said a cell was conscious because it interacted with its environment and processed chemical information. This doesn't, for me, suggest consciousness. Or, to put it another way, if it DID qualify as consciousness, then we would have to admit that computers and computer networks are conscious. Perhaps we should...

No good, no bad

Several contributors pointed out that there is no objective good or bad 'out there' in the world. This is a belief with which I am in agreement, as you can read in my articles: Right and Wrong, Sources of Morality, Ethical Notes, Disobedience, Pragmatic Ethics and Nietzsche's Call to Creativity.

No Personal God

All the contributors who discussed religion found the creation of a person-like personal God harmful to mankind and in many instances antithetical to what they understood spirituality to be. I tend to agree that, however powerful, insightful and well-intended the original spiritual messages are, when organised religion accretes around them, the foibles of man dilute, pollute and hijack them. This isn't at all to say that all clergy are guilty or that all followers are silly. I just think that the more organised a belief structure is, the more likely it is to lose sight of the wood for the trees.

Recap

As I said, there were a number of things with which I agreed and a number of things, including the major thesis, with which I didn't. Still, I can heartily recommend 'What the Bleep Do We Know' as an interesting, challenging, thought-provoking film that may well make you want to sit down and put your thoughts to paper.

First posted 15 January 2006. Barbour's picture is about as close to my worldview as you can get from a materialist perspective. I 'just' replace his infinite set of particle configurations with an infinite set of experiential moments.
I've just read Julian Barbour's The End of Time, a good history of the physics and philosophy of time that also puts forward a radical view of time itself.

Update to the Block View

Barbour's historical account helped me to realise that the view of time I put forward in All the Time in the World (or more specifically a couple of aspects of it) is Newtonian and doesn't recognise a couple of truths from the relativity revolution. In that post, when distinguishing between the space dimensions and the time dimension in spacetime, I said that although we can easily imagine orienting a three-axis grid however we like in space, we cannot think of 'rotating' the time axis at some angle. In fact, special relativity and Minkowski spacetime do away with absolute simultaneity and DO in fact allow rotating the time axis in 'trade-offs' with the space ones. (Special relativity also gives us light cones and timelike, spacelike and lightlike relationships among events, but I'll save that for another post.) General relativity goes even further by getting rid of the idea of 'clean' Euclidean planes of simultaneity altogether, at least in the vicinity of mass and the distorting effect it has on spacetime. Although I think I've got my head around the special relativistic implications for my view, I can't quite claim to be on top of what adjustments general relativity requires.

Getting rid of the vessel and defining a new space

Barbour doesn't adopt the 'block view' of time that appeals to me. Instead he asserts that time does not exist. Einstein's radical step was to dismiss the notion of absolute simultaneity and to take the very practical approach of asking what we could know via the use of rods and clocks. Barbour thinks a further radical step is necessary, and he denies not only the existence of temporal becoming, which is denied by many others, but also the existence of any spacetime 'vessel' for holding all the things that exist.
In the absence of that vessel, all that matters is the relative positions of the things themselves. The physicist Ernst Mach, whose influence on Einstein was great but fell away in Einstein's final analysis, is the father of this view. Barbour believes that a Machian world is strongly suggested by Einstein's general relativity, but that Einstein didn't quite carry things through far enough to see it. He goes further to say that such a view may be exactly what is necessary to reach an acceptable theory of quantum gravity, reconciling and uniting Einstein's cosmological theory with the weird microscopic world of quantum dynamics. For Barbour the building blocks of reality are the infinite possible relative positional configurations of existing particles. Each of these configurations might be thought of as an 'instant'. (Certainly, in a layman's evolutionary view of time, each instant corresponds to some configuration of particles, a snapshot of the universe, although one might ask just how long an instant is.) These snapshots are not layered neatly onto one another to create a book or block. They are just in a jumble, like wooden shapes in a bag. So Barbour's view might be called the 'timeless bag' view. Now if we can imagine a mathematical space of huge (indeed, infinite) dimensionality, we could match each possible configuration of the universe to one point in that space. Barbour calls this space Platonia, and it is central to his story.

The case for Structuralism

The term structuralism, as applied here, is mine, not Barbour's. His theory is structural in that the 'shape' of configuration space (Platonia) determines how the wave function, that messy thing (or mathematical construct, depending on your view) central to quantum mechanics, hovers over it, making some configurations more likely than others. So causality exists not 'vertically' (i.e. through time) but rather 'horizontally' (i.e. purely through eternal relations and resonances).
Barbour reminds us that Schrödinger's first derivation of his wave equations was actually a time-independent one. It predicted with great accuracy the energy levels in the Bohr model of the atom. Schrödinger went on to develop time-dependent equations, which came to be viewed as the more fundamental relations. For Barbour, the first is the more fundamental, and it is strong support for a timeless universe. He also cites what is known as the Wheeler-DeWitt equation. This equation is controversial in that its derivation may be flawed, in that it is mathematically incomplete (new techniques must be developed to solve it), and in that its interpretation is not clear. But Barbour thinks it could be an early version of the ONE equation that describes the whole universe, uniting the two great theories of relativity and the quantum. And he thinks that it points to the 'timeless bag' view. Philosophically, Barbour's interpretation of quantum mechanics has much in common with Everett's Many Worlds interpretation, in that it gives positive ontological status to all possible configurations of the universe. But whereas Everett thought in terms of infinitely branching histories, all of which are 'real', Barbour sees no 'real' histories, since histories require time, the existence of which he denies. No, the infinite possibilities are simply configurations: those wooden shapes loose in the bag, those individual points in Platonia. Every possible configuration exists, and the quantum wave function for the universe determines the 'number of copies' of each (which is a timeless way of thinking about the frequency of occurrence of each). And because there is no time dimension, all of these instants exist simultaneously (or more properly, eternally).

Reconciling with our experience of temporal becoming

But why does it seem to us that only a tiny percentage of these possible states has been or will be realised?
And why is it that they seem to us to be strung together by a causal history (notwithstanding apparent quantum indeterminacy)? Finally, why do we FEEL like we are moving through time, or that events are flowing from the future to the present to the past? The answer starts with this: all of the most probable states (again, as determined by the universe's wave function) are what Barbour calls 'time capsules', in that they contain 'records' referencing an apparent past. These records include fossils, books, empirical test results and artefacts. More important, they include human brain configurations that contain memories. Barbour reminds us that any conscious access to 'the past' is through CURRENT brain states. In any instant a brain state is nothing but a small bit of the universe's overall configuration, and but one of the many records potentially contained in that instant. Barbour must also intend (although I don't recall him saying it explicitly) that 'time capsules' are specifically those configurations that contain overwhelmingly (although not necessarily perfectly) CONSISTENT records. Otherwise, if we lacked a significant degree of intersubjective agreement, or if physical records pointed randomly in different directions, life would be very difficult and confusing. Another thing Barbour doesn't (as far as I could make out) make clear is whether the configurations to which records refer are themselves time capsules. If not, then the configurations (and the records' referents) still EXIST, because all possible configurations exist, but the records are in a sense false, because the apparent past to which they point is not a likely one. If the configurations to which time capsules point ARE time capsules themselves, then there is a linked path of time capsules through Platonia, and our memories are 'true' in a sense they would not otherwise be.
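Barbour's 'number of copies' idea can be caricatured as a timeless Born rule over a finite ensemble: each configuration carries a weight given by the squared magnitude of a wave-function amplitude, and if the time capsules (the mutually consistent configurations) get the large amplitudes, they carry almost all of the weight. A toy sketch, with made-up names and numbers throughout:

```python
# Toy 'Platonia': a handful of configurations, each with a made-up
# wave-function amplitude. The probability (or 'number of copies') of
# each is |amplitude|^2, normalised over the whole ensemble.
amplitudes = {
    "capsule_A": 0.70,  # rich in mutually consistent records
    "capsule_B": 0.70,
    "jumble_X": 0.10,   # records pointing in random directions
    "jumble_Y": 0.05,
}

norm = sum(a**2 for a in amplitudes.values())
probs = {name: a**2 / norm for name, a in amplitudes.items()}

# If the wave function favours time capsules, nearly all of the 'copies'
# are capsules, which is why experience looks historical and consistent.
capsule_weight = probs["capsule_A"] + probs["capsule_B"]
print(f"weight carried by time capsules: {capsule_weight:.3f}")
```

Nothing here is Barbour's actual mathematics, of course; it only illustrates how a static weighting over configurations could play the role that 'frequency of occurrence' plays in a temporal picture.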
As for our feeling of temporal becoming, Barbour puts that too down to brain states and the notion of the specious present, familiar to phenomenologists. I wrote about the specious present (retention, protention and the immediate now) in The Anatomy of Experience, so I won't go into any detail here. The bottom line is that each instantaneous brain state contains not only the current input from the world but also a record of the last brain state, and within that a record of the one before, etc. This recursive encoding of records in our brain states accounts for our subjective experience of the flow of time, of temporal becoming. But, says Barbour, just like longer-term memories, this feeling is actually wholly contained in each single instant, so long as that instant is one of the probable ones he calls time capsules.

Do I buy this?

Now, although I have a lay interest in physics and read a fair bit about it, I obviously am not at its cutting edge, so I can't comment on the technical aspects of Barbour's theory. But here is what I think of it at the level at which I can engage. I am on record elsewhere as arguing for realism and determinism in the interpretation of quantum dynamics. Barbour's view is a probabilistic one and therefore not deterministic, so my instincts lean away from it. But his equating of our experienced states with those that are most probable (maybe by a vast margin) goes some way toward alleviating my discomfort. If time capsules are, by definition, the states 'awarded' the highest probability by the wave function (as it overlays the structure of the configuration space, Platonia), then our specific history IS given a special, privileged position, one that arises directly from the structure of Platonia. But I'm not quite sure whether Barbour is saying this. My biggest questions are those alluded to three paragraphs above. Are the many instants referenced in the records at instant T themselves time capsules, as instant T is?
Are the records in instant T-3 the same as those in instant T, except that they exclude reference to instants T-2 and T-1? If the answers to these questions are 'yes', then I think Barbour's proposal could be a sensible one, and I'm willing to throw away the dimension of time. My comfort arises from the fact that Barbour would then be assigning to the wave function (as influenced by the shape of Platonia) the ordinal role we normally assume to be performed by time. From a practical perspective, the upshot would be the same as in the block view of time that I have endorsed: all that IS, exists eternally. The STRUCTURE of the universe is set, and it is a peculiarity of our experience that we seem to flow through a time dimension. In either case, the god's-eye view is of an eternal structure (one in Platonia, the other in 4-dimensional spacetime). I'm not saying that there are no differences between the two (there are very big ones, especially for physics), but at the level of philosophical implication, I think that they are quite close. If the answer to either question above is 'no', then I have a hard time swallowing the theory. (This of course doesn't mean the theory is wrong, and it needn't bother Barbour at all!) If the instants, the configurations, of existence are not related to one another consistently, whether via a neatly stacked vertical causation or via consistent reference among configurations jumbled loosely 'in a bag', then there might as well be (and may indeed be!) only one instant. This is certainly philosophically possible, but it makes me want to go to bed and not get up again. (But this is what the determinism that I expound does to many other people, so there you go!) For a more coherent and/but much more technical analysis of The End of Time by a philosopher of science, see Jeremy Butterfield, of Oxford University.

First posted 31 Oct 2003. Another wrinkle to follow, if you're interested: Beables and Changeables.
At the end of If you think you understand this, then you don't, I mention Bell's Inequality. Experimental tests of the theorem show that no local hidden variables theory can be consistent with quantum mechanics' well-documented and highly accurate predictions. Or do they? Reading the article below in Nature Science Update, I learned that a new theory may show how a deterministic universe is consistent with QM's mathematical formalism and its experimental findings. Einstein may be smiling in his grave. Having made one of the breakthroughs that led to the quantum revolution, he expressed great dissatisfaction with QM's indeterminacy. He believed that it was simply incomplete, and that a more comprehensive theory would show the indeterminacy to have been simply a result of our earlier ignorance. This looked impossible until recently. Under the headline 'Physicist proposes deeper layer of reality', the article reports that Nobel winner Gerard 't Hooft says that QM's apparent indeterminacy is due to information loss as we zoom outward from the Planck scale, where deterministic physical laws rule. Although the article says he is not attempting to resurrect hidden variables, he is quoted as saying: "Contrary to common belief, it is not difficult to construct deterministic models where quantum mechanics correctly describes stochastic behaviour, in precise accordance with the Copenhagen doctrine." The Copenhagen doctrine is the most widely accepted interpretation of QM, stating that the world really is probabilistic at quantum levels: that there are some things we simply cannot know, some things that we cannot even say exist independent of observation. 't Hooft splits measurable properties into two classes: Beables, which maintain their precision as one moves from Planck scales to laboratory scales, and Changeables, which don't. Changeables really aren't observable properties but rather ways of describing how a system behaves when it is perturbed.
The trick is that we can't know in advance whether a property is a Beable or a Changeable. No, I don't understand all of this, but it is good to see that the determinism debate is still alive and well in science as well as philosophy.

First posted 21 Sep 2004. Reading back through it now, this seems a decent summary of about as far as I got in understanding the debates on the interpretation of quantum mechanics. I'm not at all sure things have progressed in the last 13 years. I know that my understanding hasn't!
I find it hard to ascertain exactly what the current status of the quantum mechanics (QM) interpretation debate is. I know that Einstein believed that QM was incomplete as a theory because it suggested an irreducibly random atomic world with 'spooky action at a distance'. He believed that God would do better by us than that, that the world must be causally deterministic and local. The seeming randomness behind QM, according to him, really came down to our lack of understanding. Future theoretical and experimental work would discover currently hidden variables that would put QM back onto causal, ontological grounds. Niels Bohr won the day, though, with his interpretation, rooted in logical positivism. QM, he said, does not describe a quantum world but rather only our observations and measurements of subatomic activity. We cannot know and should not be concerned with what, if any, metaphysical entities underlie those observations. In bolder moments, he seemed to suggest the stronger claim that no such entities did exist independent of observation. J.S. Bell, hoping to swing the argument back in the direction of Einstein's thinking, developed a test for whether any hidden variables approach might conform with experimental results. Alain Aspect devised the actual experiment to carry these tests out, and the resulting data supported Bohr's view, which has since come to be called the Copenhagen Interpretation (CI). Many tests, some similar and others quite different, have confirmed those initial findings. But let's look more closely at just what those findings were. Bell's theoretical test and the experiments based on it were meant to decide whether any LOCAL and DETERMINISTIC hidden variables theory could fit our observations. David Bohm came up with a wholly deterministic, ontological interpretation of QM that passes Bell's test. 
Bohm's theory did not wholly satisfy Einstein, though, because it was still nonlocal (allowed action at a distance) - intentionally and explicitly so. [Nonlocality is demonstrated by the entanglement of particles that have interacted with one another, where the choice of measurement on one impacts the other (via conservation laws) instantaneously, even at a great distance. I need to understand this better, as it seems to me that the measurement of the one particle only affects WHAT WE CAN KNOW about the second particle, which is very different from having a causal impact on the particle itself. Anyway, many people say that such instantaneous 'action' is not a problem, as long as it doesn't involve the transmission of information at above light speed (which it doesn't)]. The scientific community has never liked Bohm's approach. As far as I can tell, this is for two reasons. First, it gives a privileged place to position vis-à-vis momentum, two characteristics of any particle to which the CI gives equal weight. This asymmetry lacks elegance. Second, it imparts a physical reality to the 'wave function,' which in CI is simply a mathematical tool for describing the evolution of an unobserved system. CI itself is unable to explain the collapse of this wave function at the time of measurement, but Bohm's theory runs afoul of Ockham's Razor, which says that we should not call into existence any more entities than absolutely necessary in explaining a phenomenon. Still, Bohm's interpretation is perfectly consistent with the mathematical formalism of QM and with the observed behaviour of subatomic systems. In 1995, Tim Palmer pointed out that, strictly speaking, J.S. Bell's test and the results to date do NOT preclude there being a local, deterministic hidden variables theory behind QM. Chaos theory has introduced new and powerful concepts to the stage. 
There exist wholly deterministic nonlinear dynamic systems that are noncomputable - that is, there is no algorithmic way to 'solve' for their results. These systems have interlaced riddled basins (a basin is the 'region' of initial conditions that 'flows' to a given attractor). Palmer showed that just such a system could fit perfectly with the seemingly random elements of QM and the 'collapse' to concrete states upon measurement. Palmer updated and amended his approach in a later paper, but with the same conclusion - that a local deterministic interpretation of QM is NOT impossible. Of course, neither does that mean that QM necessarily IS local or deterministic. It just means we cannot, as so many orthodox QM theorists have wanted to, close the door on the possibility. I have so far not found any responses to Palmer's papers, so I don't know whether the 'community' sees the debate as an open one or not. Anton Zeilinger noted in his survey of the interpretations of QM that it did indeed seem to be lacking some fundamental, unifying principle. He went on to nominate just such a principle: that 'An elementary system carries one bit of information.' Here, an elementary system is a fundamental building block of the material world - like the spin of an electron. What does this principle do? Well, it explains why our experience of the microscopic world is quantised - because information, the only access we have to that world, is itself quantised. We interrogate the subatomic world with yes-no questions, and the answers cannot be broken down into anything simpler (e.g. smoother) than the 0s and 1s that our computers use. Zeilinger's principle also provides a more intuitive explanation of the uncertainty principle and entanglement. Zeilinger himself subscribes to Bohr's CI and believes that his information-based view gives additional support to that interpretation. Of course I realise that I understand very little of the guts of information theory and quantum theory. 
I still can't help pointing out that the implications drawn from Zeilinger's principle concern what we can know about the world rather than what the world is actually like. It is epistemological rather than ontological in nature. This may be entirely appropriate, since what is beyond our knowing is beyond our knowing - full stop. Yet I still feel that we should make some commitment to what our best inference therefore points to as regards what the world IS like. David Bohm took such an ontological approach. He also died believing that information stood alongside matter and energy as a fundamental component of nature. I am anxious to reconcile (at my surface level of understanding) his views with the emerging insights from Zeilinger's work. First posted 29 March 2005 - Bohm's eastern influence was primarily Indian / Hindu, but I see strong parallels now with Taoism as well. An amazing man trying to unite worlds that we too easily assume are distinct, incommensurate and irreconcilable.
At the end of my post on Tim Palmer, I related his model to that of David Bohm. There's a lot more to say about Bohm, and this post will be my attempt to pull it together. David Bohm's name is associated with many things these days - his communist ideology (which cost him his academic post and nearly his freedom during the McCarthy witch hunts), his turn to Indian mysticism and close relationship with an Indian guru, his development of a new technique of dialogue for reaching more creative group solutions to problems and his call for a new scientific order. He did groundbreaking work in plasma physics and made important contributions to quantum theory (proposing the first EPR experiment, for instance), yet most of his work in quantum physics is viewed as outside the canon, ignored or embarrassingly dismissed by the physics community. Bohm always wanted to understand EVERYTHING, and not just in its details but also in its WHOLENESS. His scientific, mystic and social views were inextricably linked. The most useful metaphor for his model of the universe is that of the hologram. A hologram is sort of like a photograph, in that it is a visual representation of reality. But while a photograph captures only two dimensions, a hologram captures all three. When light is shone through a holographic plate, a three-dimensional image is projected into the space before it. As you move around the image, you capture it from a different perspective, just as if it were the original object it represents. If the holographic image is of a person, you stand in front of it to see the face and chest; from the side you get a profile view; from the rear you see the back. Yet there is something I think is even more interesting: if you drop the holographic plate and break it, EACH resulting piece of the former plate can still serve to project the entire image. Shine light through a small piece, and you'll still get the full three-dimensional image, just at a lower resolution. 
The smaller the piece of plate, the less well-defined the projected image. Contrast this with what happens when you rip a photograph in half: each half only shows you half of the image. So, each piece of a holographic plate contains information about the ENTIRE three-dimensional image. How does this relate to the universe as a whole? Bohm believed that each particle in the universe contained information about the universe as a whole. I've put this sloppily, so let's look in greater detail at what he said. One of the greatest mysteries in science (perhaps the greatest) is demonstrated by the double slit experiment. I've explained the experiment elsewhere, so I won't go further into it here. Suffice it to say that the experiment suggests that particles fired individually through the test apparatus 'know' how they would have interacted (interfered) with one another had they been sent through together. Physics has twisted itself into some amusing contortions (including the well-known ostrich head-in-the-sand trick) to account for this. Bohm believed that the particles themselves only 'knew' this because they were guided by a new field that he introduced, called the quantum potential. This quantum potential was holographic in its effects in that at any point in the universe, it contained information about the entire universe. Unlike other forces and fields in physics, its effects did not diminish with distance, so even very remote particles were in a sense linked by it. (Those steeped in quantum theory will recognise the link to the phenomenon of entanglement.) This potential was essentially a source of active information, intricately and infinitely enfolded (per chaotic nonlinear dynamic systems) into scales below our ability to detect it. This enfolded order that lay under the seemingly random behaviour of subatomic particles Bohm called the implicate order, which he differentiated from our observable universe, the explicate order. 
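To get a rough feel for the fringes at stake in the double slit experiment mentioned above, here is a back-of-envelope sketch of the textbook far-field intensity pattern, I ∝ cos²(πd·sinθ/λ). The slit spacing and wavelength below are made-up illustrative numbers, not taken from any actual apparatus:

```python
# Back-of-envelope two-slit interference pattern (idealised far field,
# made-up slit spacing and wavelength - just to show where the fringes fall).
import math

wavelength = 500e-9   # 500 nm light (illustrative)
d = 10e-6             # slit separation of 10 micrometres (illustrative)

def intensity(sin_theta):
    """Relative intensity at angle theta, normalised to 1 at the centre."""
    return math.cos(math.pi * d * sin_theta / wavelength) ** 2

print(intensity(0.0))                   # central bright fringe: 1.0
print(intensity(wavelength / (2 * d)))  # first dark fringe: ~0.0
print(intensity(wavelength / d))        # next bright fringe: ~1.0
```

The dark fringes at half-wavelength path differences are exactly what individually fired particles build up over many detections, which is what makes the experiment so strange.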
The implicate order was deeper and more fundamental than the explicate one, but only bits of it could ever be unfolded at one time (hence Heisenberg's Uncertainty Principle). In a later, quantum field theoretical version of the same basic thinking, Bohm added another, yet deeper order, the super-implicate order. In this model, the particle was replaced by the quantum potential as the fundamental building block of nature. Particles were just focused knots within the quantum potential itself, and the evolution of the quantum potential over time was guided by the super-quantum potential. In fact, Bohm reckoned there could be (and probably were) an infinite number of these levels. Since the super-quantum potential was sensitive to pseudo-particle-level phenomena, a feedback loop arose, and this calls for another metaphor. If you think of a video game, the screen images themselves are the explicate order (the particles, etc. that we see in our world), the computer programme that dictates how the screen image alters as the game is played is the implicate order (the quantum potential), and the person playing the game and sending signals to the computer programme is the super-implicate order (the super-quantum potential). The loop is completed as the player adjusts his actions based on his perceptions of the screen. This super-implicate order is now the home of active information (but please don't take it, by analogy with the human player above, to be an actual conscious thing). Bohm was able to express all of this mathematically and to relate it to the more conventional mathematical formulae of quantum mechanics. His theory predicts observed behaviour just as well as the conventional methods. Yet it never caught on. There are aesthetic grounds for this rejection, in that Bohm's interpretation gave a certain prominence to a particle's position (as opposed to its momentum). 
Penrose has said that Bohm's model essentially assumes that every measurement is a measurement of position. But the simplest explanation is that since the conventional view was already operationalised in the scientific community, and since Bohm's model made no predictions different from the conventional view's, physicists should just stick with what they had. More cynically, you could say that physicists no longer cared about the ontological implications of the theories that provided their predictions. One thing that strikes me as odd is that Bohm himself did NOT view his system as mechanical (deterministic). He felt that the feedback loops (per the video game metaphor) opened room for contingency. I just cannot square this. Feedback loop or not, the dynamics are deterministic, even if noncomputable. Palmer's approach, which arrives at much the same place (active, holograph-like information enfolded minutely and hidden from view) albeit with a bit less metaphysical baggage, does not shirk from this. What IS attractive about both - and let's remember that they are entirely consistent with experimental results - is the holism they bring to the universe. This holism brings nearly commonsensical answers to most of quantum theory's mysteries, and it does so in a way that does not violate the spirit of Einstein's relativity. Everything is connected, not in some new-age way but ACTUALLY interrelated. Doesn't this just seem to FIT well with the notion of everything having started with the Big Bang? If the entire universe started in a quivering instability the size of a dime, it would be hard to imagine bits that were NOT related to the rest. We are all connected - to one another, to all living things, to everything that exists. A universe undivided. Originally posted 15 Mar 2005 - Phew, I can't believe I got that much to grips with the technical discussion back when I was more 'into it'. 
As I've mentioned in previous posts (I recommend you read If You Think You Understand This, Then You Don't and Bell's Inequality and Bell Revisited before reading this post), I'm not so exercised now about whether the world is deterministic and local. It seems quite likely that it is at least nonlocal, which fits with my best intuition at this point anyway.
The Man

Beginning with his 1995 paper, Tim Palmer, from the European Centre for Medium-Range Weather Forecasts (ECMWF), questioned the binding force of Bell's Inequality and demonstrated that wholly deterministic (although noncomputably chaotic) nonlinear dynamical systems could produce the apparent randomness of quantum state measurement while keeping our understanding of the universe on a local and real footing. He has refined his thinking and presented it in further papers in 2004 and 2005. I think that he is onto something real and big. Through the happy chance of working with someone whose partner works with Tim at ECMWF, I got the opportunity to meet him and talk a bit about his thinking. Keep in mind that Tim's day job is in meteorological research, so his physics work is done in his spare time. Although I clearly lack schooling in the range of mathematical tools necessary to follow all of the technical details, through reading his papers and talking for that hour or so, I've got a pretty good idea what he's up to.

The core points

There are two common and related themes to his physics work:
Riddled Basins

Although the evolution of the state vector through time is a deterministic one, the reduction of the system to an observable state appears to be random. Conventional QM takes this indeterminacy as given. Palmer thinks that the apparent randomness hides a chaotic dynamic that is simply too messy to untangle, which makes his approach what is known as a 'hidden variables' one. Chaos theory uses the concepts of attractors and basins when speaking of how different initial conditions migrate via iterations of some nonlinear operation toward some resting place. A resting place is called an attractor, and the collection of initial states that migrates to that attractor is called its basin. What I have just said is, of course, a gross oversimplification. Not all nonlinear systems converge to an attractor at all. Some just explode towards infinity. Nor does an attractor necessarily constitute a single number at which the system settles forever. An attractor may be a cyclical one, whose cycle may involve simply flipping regularly between two numbers or may involve cycling through a sequence of numbers so long that it would not repeat in the history of the universe to date. Also, not all basins are defined by smooth outlines. An attractor's basin may be very messy indeed, with any point within the basin having other points arbitrarily close to it that DO NOT belong to the basin. Such basins are said to be riddled. Now imagine a system with two attractors: whose basins collectively cover the entire possible set of initial conditions; whose basins are of equal area (or volume, if the space is three-dimensional) to one another; and whose basins are jointly riddled (that is to say, intertwined) as above. It is possible to construct such a system that is so riddled that (given truncation errors) it is impossible to compute algorithmically which basin a given set of initial conditions belongs to. 
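As a toy flavour of this - emphatically not Palmer's actual construction, just my own illustrative sketch - even the humble logistic map is fully deterministic yet defeats practical prediction of a coarse, bivalent outcome:

```python
# Toy sketch: a deterministic chaotic map yielding apparently random
# bivalent outcomes. This is NOT Palmer's riddled-basin system - just an
# illustration of deterministic dynamics that defeat prediction in practice.
import random

def outcome(x0, steps=100):
    """Iterate the logistic map x -> 4x(1-x) and report a bivalent result."""
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return +1 if x > 0.5 else -1  # 'spin up' / 'spin down'

# Sensitivity: two starting points differing by one part in a trillion
# diverge to completely different trajectories.
a, b = 0.3, 0.3 + 1e-12
max_sep = 0.0
for _ in range(100):
    a, b = 4*a*(1-a), 4*b*(1-b)
    max_sep = max(max_sep, abs(a - b))
print(f"max separation: {max_sep:.3f}")  # order 1, despite the 1e-12 start

# Statistics: over many initial conditions the two outcomes come out ~50/50.
random.seed(0)
ups = sum(outcome(random.random()) == +1 for _ in range(10_000))
print(f"fraction 'up': {ups / 10_000:.3f}")
```

Deterministic dynamics, yet a roughly 50/50 split of outcomes that no finite-precision computation can forecast: that is the flavour of what Palmer's far more sophisticated riddled-basin systems deliver.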
Given the equal size of the basins, there is a 50% chance that any set of initial conditions belongs to either basin. It is also possible to construct this system in such a way that it is consistent with other aspects of the formalism of QM for the measurement of bivalent properties like spin, and Palmer shows this. There may be more work to do, but the point is that Palmer has shown that a deterministic system may exist that is consistent with QM.

What about locality?

But isn't such a system bounded by Bell's inequality, which is known to be violated by both QM prediction and experimental evidence? No, says Palmer, because Bell's proof makes an implicit assumption about certain counterfactual propositions having definite (yes or no) truth values. Where does this notion of counterfactual reasoning enter Bell's proof? Let's remember the experiment that tests it. Zero angular momentum electron pairs (Right and Left) are emitted from a special source. One device measures the spin of each Right electron along some axis in the plane that is orthogonal (perpendicular) to the electrons' path. Another device measures the spin of each Left electron along one of two axes, each of which constitutes a different rotation (say x for one and z for the other) from the axis of the Right device. Bell's inequality is then a relationship among the measurements taken at these three (R, Lx, Lz) orientations.

Counterfactual assumption

The important thing to remember, though, is that for any given pair of electrons, only TWO of these measurements can be taken (R & Lx, or R & Lz). The theorem makes the assumption that the measurements among many pairs of electrons can be lumped together and then relates correlations within the large set. So, in fact, the relationship observed for any GIVEN pair of electrons is one of two:
The elements in italics are the counterfactual ones. In reality, ONLY x OR z can be chosen as the orientation for the Left member of any given pair. The assumed measure of what it would have been were the other angle chosen is taken from the statistical behaviour of the pairs whose Left element was measured at the other angle.

Determinism, Free Will, and the observer as part of the system

What is the upshot of all of this? I want to (try to) go into a bit more of the technical detail in a minute, but it is possible to think about this initially at a philosophical level. IF the universe is deterministic in the philosophical sense, then everything that happens (everything that has ever happened and will ever happen) happens NECESSARILY. It COULD NOT have happened any other way. Palmer shows with his demonstration of a particular chaotic system that determinism is consistent with QM observations. So, in effect, we're saying that the observer only measured, say, R and Lx for a particular electron pair. And we're saying that the universe has evolved in such a way that - however free the observer felt himself to be in his choice of the L measurement orientation - he COULD NOT have chosen it to be z. So introducing a counterfactual proposition about what MIGHT have happened HAD he chosen z is meaningless. Even though it feels like a small hypothetical change in the context of a large universe, it is simply not within the set of possible states of the world. As uncomfortable as many feel with determinism, because of its implications for our pure notion of free will, this is hard to get around. Neither the electron pair nor the observer can be taken outside the universe itself. 
And if the evolution of that universe is deterministic (as it is if it can be modeled by a nonlinear dynamical system) then not only the spin measurements but also the orientations at which they are made follow necessarily from the initial conditions of the universe and the laws that govern its evolution.

Over our heads

Now, Tim Palmer expresses all of this in a much more disciplined way. He gives an example of a universe defined by a famous attractor, known as the Lorenz Attractor (named after Edward Lorenz, the father of nonlinear dynamics, who discovered it). This attractor is defined by three differential equations on three variables. If the initial conditions of the universe sit on the attractor, and if these differential equations govern the universe's evolution, then the smallest of perturbations to one of the variables will move the system off of the attractor (given the attractor's fractal nature), thereby violating the laws of the universe. But Palmer needs to bridge a gap here. The wave function of quantum mechanics (defined by Schrödinger's equation) uses complex (i.e. involving 'i', the square root of -1) linear dynamics. Palmer is talking about real (i.e. no square root of -1) NONlinear dynamics. How can his system do the work of Schrödinger's? At this point, it gets pretty hairy for us nonmathematicians. Palmer introduces a new definition of i as an operator on a sequence of real numbers. Quantum states can be defined by sets of these sequences, and Palmer shows how his i operator performs in a way analogous to the maths of the upward cascade of fluctuations in a turbulent flow (something from his meteorological world). The effect of these steps is to present a way of describing the state function in granular terms (like the quantum world itself) rather than in the continuous terms of the Hilbert space that is used in conventional QM. 
Applying Palmer's granular picture to the test of Bell's inequality means that we can't pick any angle in a continuum but are instead confined to a finite (but as large as we wish) set of angles. Palmer proves that there is no way that measurements for both the Lx and Lz angular differences from the R orientation can be simultaneously defined. All of this amounts to a more rigorous and mathematical proof of the point I made philosophically and sloppily in the section above. The bottom line is that any real physical state must be associated with a computable real number (even if the only way to compute it is to let nature 'integrate' it through a physical experiment!).

Repercussions

Where does this take us? If we reinterpret the wave function as a set of binary sequences as described above, we can think of the elements of those sequences as 'real' bits of quantum reality, which means that even in the absence of a measurement, we take the quantum state to have definite values rather than a superposition of possible values. Also, a sequence itself encodes information not just about the system it describes but also about that system's relationship to the whole. Palmer uses an analogy with the DNA in our bodies' cells. This hearkens back to the explicate and implicate order in David Bohm's interpretation of quantum theory. Look for more on Bohm in an upcoming post. I've just finished reading The Tao of Physics by Fritjof Capra. I won't undertake a general review of the book, but I can enthusiastically recommend it.
Instead, I want to concentrate on one specific point Capra discusses in the Afterword of his Second Edition. (The book was originally published in 1975, and the second edition was released seven years later.) Capra discusses the ramifications of the empirical results from tests of John Bell's inequality. As a staunch proponent of a deterministic, local, realist interpretation of quantum mechanics in an earlier day, I had come across Bell's Inequality before. I even wrote about it, which you can verify at the link above. I'm not going to recap that whole post here, so you may want to visit it before continuing. Capra helped me realise that I hadn't really got it, though. Bell set out a testable relationship that must hold if subatomic reality is both local and deterministic. Tests have been conducted, and the results are conclusive: quantum reality must be nonlocal, nondeterministic or both. Einstein believed to his core that reality must be both local and deterministic. I paraphrase his words on each point:
For reality to be local means that no force or action can act with a speed faster than the speed of light. My pushing a button while standing on the sun and immediately changing the channel on my TV would violate this, since it takes light about eight minutes to traverse that distance. For reality to be deterministic means that every effect has a cause. Taken to its logical conclusion, the notion is embodied in the clockwork universe. If one knew all the initial conditions at the 'start of time' and all of the 'laws' that the universe followed as it evolved from one moment to the next, then one could know all that would happen for all time. We use probabilities in everyday life (for instance, at the roulette wheel) because either our understanding of all the (local) forces at work or the exactness with which we can measure initial conditions is insufficient to calculate outcomes exactly. Quantum mechanics raises the possibility that even with perfect understanding and measurement, we still could not accurately predict concrete subatomic events. I was particularly exercised by the issue of determinism, so when I read about outcomes from tests of Bell's Inequality, I focused on that element of the interpretation. I admitted, sadly, that not every effect has a cause. I need not have done so, had I paused to think sufficiently about the second condition. It could be that tests failed the inequality because the causes were nonlocal. I could, in theory, have held on to determinism by accepting spooky action at a distance. But in a way, admitting nonlocality undermines predictability just as much as inherent nondeterminism does, because an observer cannot know, and therefore account for, forces at great remoteness in the universe that are impacting what he sees. Even if he had appropriate sensors throughout the cosmos, they could not transmit their data to him any faster than the speed of light. Meanwhile, the spooky action at a distance will have had immediate effect. 
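To make Bell's 'testable relationship' concrete, here is a small numeric sketch of its CHSH form (my own illustration, not from Capra's book): quantum mechanics' predicted correlation for an entangled singlet pair, E(a,b) = -cos(a-b), pushes the CHSH quantity to 2√2, while a simple local hidden-variable toy model cannot beat the classical bound of 2.

```python
# Sketch of the CHSH form of Bell's inequality (illustrative only).
# QM's singlet-state correlation E(a,b) = -cos(a-b) violates the classical
# bound of 2; a simple local hidden-variable toy model stays within it.
import math
import random

def chsh(E, a, ap, b, bp):
    """CHSH combination; local deterministic models keep |S| <= 2."""
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

# Quantum prediction for spin measurements on a singlet pair.
E_qm = lambda a, b: -math.cos(a - b)
angles = (0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)  # optimal settings
print(chsh(E_qm, *angles))   # 2*sqrt(2) ~ 2.828, above the classical bound

# A local hidden-variable toy: a shared hidden angle lam fixes each outcome
# as sign(cos(lam - setting)). Correlations estimated by Monte Carlo.
random.seed(1)
samples = [random.uniform(0.0, 2 * math.pi) for _ in range(200_000)]
def E_lhv(a, b):
    out = lambda lam, s: 1.0 if math.cos(lam - s) >= 0 else -1.0
    return -sum(out(l, a) * out(l, b) for l in samples) / len(samples)
print(chsh(E_lhv, *angles))  # hovers at or below ~2
```

Experiments agree with the first number, not the second, which is exactly why locality, determinism or both must go.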
As it happens, today I believe that reality is not determined, local or objective. Most interpretations of quantum mechanics agree, but my angle is primarily philosophical rather than scientific. A worldview consistent with eastern wisdom traditions sees reality as an undivided whole, so the confirmation of nonlocality is no surprise. In a way, you can think of it as rotating causality 90 degrees in spacetime. Instead of explaining a current event by appealing to causes that preceded it, you sometimes have to explain it in terms of the state of everything else right now. Sometimes, the best that we can do is say that something is the way it is because everything is the way it is. Any one 'thing' is like a puzzle piece, which, in order to fit into the overall puzzle, can have one and only one shape - the shape of the 'hole' left when every other piece is in place. There are limits to reductionism, and we have touched them. At least some truths are irreducible. Of course, this philosophical sleight of hand doesn't help with predicting the future. We just have to accept that, as we push the boundaries of our understanding of the universe, sometimes the best we can do is approximate or confine our answers to a range rather than a point. There are limits to our knowledge and to the control we can exercise with it. First posted 31 Oct 2003. I definitely still think there is infinite diversity out there, including many versions of 'me'. I tend now to think of it more as the existence of every possible experience from every possible perspective.
See Scientific American: 'Parallel Universes' (Cosmology): Not just a staple of science fiction, other universes are a direct implication of cosmological observations. Buckle in, 'cause this one's a helluva ride - a real head spinner. You can read the whole article via the link above, but I'm dying to try to summarise it, to see how much of it I 'get'. The gist is that our universe is just part of a multiverse, or actually a part of four tiers of multiverses, with the result being that (based on simple probabilities) an infinite number of you's and me's are 'out there' living through every permutation of our lives. I hope only a few of the me's out there are suffering from my cold at the moment.

Our Hubble Space

The article starts by defining our universe as our visible universe. Since light travels at 300 million metres per second, and since the Big Bang happened about 14 billion years ago, our visible universe (or Hubble space) is a spherical space with a radius of about 10^26 metres. (The article says 4x10^26.) This visible universe's radius grows (by definition) by one light year (roughly 10^16 metres) each year, and the light we see emanating from the edge of our Hubble space was emitted at the beginning of time.

Level 1 Multiverse = The Universe

The first level of multiverse, then, is the collection of Hubble spaces. This is what I have always considered and shall continue to call The Universe. If we assume (as current observations suggest) that the overall universe is infinite and that matter is roughly evenly spread throughout it, then there are an infinite number of these Hubble spaces, all with the same physical laws as ours. Now, pick one of those other Hubble spaces. What is the probability that the interaction of fundamental particles and forces over the lifetime of that space just happens to produce another you? Surely unbelievably small. But is it zero? 
It seems irrational to assume that the probability of another you in any other given Hubble space is absolutely zero, since you have already appeared once, after all. Let's say the chance is one in a gazillion, with a gazillion being the biggest number imaginable. Then there must be another you out there, because any nonzero probability, no matter how small, when applied to an infinite number of cases, will (almost surely) be realised. Saying that there must be other you's living through every possible permutation of your life is just a special case. We could simply say that anything that is possible (i.e. has nonzero probability) exists. So, somewhere out there is a you with 9 fingers, a you with a squeaky voice, a you who didn't propose to your wife, a you who likes Abba songs, a you who only carries 20p pieces in his pockets. There is a you who slept late on your 19th birthday, not to mention a you whose life is exactly the same as yours in this Hubble space. Now you see just how infinite infinity is!

Level 2 Multiverse = The Multiverse

There are a couple of ways to think of a level 2 multiverse, which I will call just The Multiverse. At heart, The Multiverse (i.e. Level 2) is a collection of an infinite number of Universes (i.e. Level 1s), each of which can have very different physical 'constants'. We can think of these Universes as bubbles of nonstretching space within the eternally inflationary Multiverse. Alternatively, we can view The Multiverse as a cycle of continuous birth and destruction of Universes, perhaps with black holes as the agents of mutation and birth. From either angle, we could never communicate between Universes (or gain information about another one), because they are either moving apart from one another at faster than the speed of light or only 'touch' at singularities, through which no information can pass. Still, the existence of The Multiverse would explain the otherwise tricky and highly improbable fine tuning of our own Universe. 
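Backing up to the Level 1 argument for a moment: the claim that any nonzero probability will eventually be realised is just the limit P(at least one hit in N tries) = 1 - (1-p)^N → 1 as N grows. A quick sanity check with a deliberately made-up p:

```python
# The probability of at least one 'another you' among N independent Hubble
# spaces, for a tiny per-space probability p: 1 - (1-p)^N. The numbers
# below are made up purely to show the limiting behaviour.
import math

p = 1e-12                       # one-in-a-trillion chance per space (made up)
for N in (1e9, 1e12, 1e15):
    at_least_one = -math.expm1(N * math.log1p(-p))   # 1 - (1-p)^N, stably
    print(f"N = {N:.0e}: P(at least one) = {at_least_one:.6f}")
```

Whatever tiny p you pick, enough Hubble spaces push the probability of at least one duplicate arbitrarily close to certainty; infinitely many make it a probabilistic certainty.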
If ours is just one of an infinite number, then it is no longer surprising that so many specific variables (density fluctuation, relative weights of elements, etc.) have values just right for allowing life to emerge and evolve. As regards the multi-me's and multi-you's, if the infinite size of our Universe guaranteed that they were out there, then the existence of The Multiverse, containing an infinite number of Universes, really cements it!

Level 3 Multiverse = Many Worlds (from quantum mechanics)

A level 3 multiverse is another name for the infinite collection of worlds in the 'Many Worlds' interpretation of quantum mechanics. Each quantum event causes a split between worlds, with one world proceeding along possible route 1 and the other along possible route 2. Each of those 'worlds' contains an entire Multiverse (i.e. Level 2). Since there are an infinite number of quantum events, there are an infinite number of such splits and an infinite number of worlds, one for each thread that winds its way forward through one 'choice' after another. I (as in this specific Doug in this specific Level 1 and 2) have access to, and experience, only one of those threads. The other me's see only their specific threads. But all threads exist. The me's in different Hubble spaces live separate but parallel lives in a different part of spacetime. The different me's within the many worlds (Level 3) are not separated from me in spatial terms but in dimensional terms, within the overall wave function for the Level 3 multiverse. You can think of these worlds as perpendicular worlds, as opposed to parallel ones. Jointly, they require an infinite number of dimensions, which the 'Hilbert space' of the wave function has. Ouch, that hurts my head. Anyway, the existence of this level depends on whether the wave function's evolution through time is unitary (no, I don't know what that means), which is as yet uncertain but is consistent with observations and wider theory.
In one sense, it doesn't matter, because if physics is unitary, then Level 3 adds nothing that doesn't already exist in Levels 1 and 2. No more possibilities are generated, just additional copies of ones that already exist.

Level 4 Multiverse = All possible mathematical structures (or, a bridge too far)

Just a few words on a final, highest level of multiverse. The wave function of quantum theory and the Level 3 multiverse is a mathematical structure, and most physicists today see the universe as fundamentally mathematical. Why not, then, allow for an infinite number of Level 3 multiverses corresponding to any imaginable set of (mathematical) laws? Fine with me, but you'll need to check with all the other multi-me's individually.

Originally posted 14 Mar 2005. I'm not aware that anything has changed on this front, but I have to admit that I've not kept a close eye on it.
The big questions

Is there a reality independent of subjective observation? Is the universe deterministic in a 'clockwork' sense, or is it irreducibly random at heart? Are the world's interactions local (unable to propagate at a speed greater than that of light, per Einstein), or is there 'spooky action at a distance'?

EPR

After Einstein, Podolsky and Rosen (EPR) put forward their paper arguing that quantum theory was as yet incomplete, these questions were central to physics. Einstein believed that our world is both real (in that its states have real values independent of observation) and local. EPR attempted a reductio ad absurdum argument, pointing out strange nonlocal effects implied within Niels Bohr's interpretation of quantum theory and claiming that this proved the incompleteness of the theory. Bohr seized on some loose language in the EPR paper to issue a forceful rebuttal. Einstein and Bohr's ensuing debate never reached a resolution, but the world (and certainly the mainstream physics community) eventually adopted Bohr's view. Why?

An accidental killing

In 1964, a gifted physicist named John S. Bell developed a mathematical proof of a testable inequality that must hold in a real, local and deterministic world. The predictions of existing quantum theory violated the inequality, so either quantum theory was wrong or the world was not a local, deterministic one. Bell was himself in the Einstein camp and hoped that experimental evidence would settle the debate in favour of local realism. It did not. All experimental measurements have violated Bell's inequalities, vindicating Bohr's Copenhagen interpretation and seemingly proclaiming the death of locality. (Science is more willing to sacrifice locality than realism / determinism.)

General derivation

Bell's derivation of the testable inequality has been hailed as one of the greatest feats of modern physics, yet it is relatively easy to understand.
First, let's do it without worrying about the actual physics. Pick three characteristics that an object might have, and call them A, B and C (although they could be something like red, round and soft). Now, imagine that you observe a large number of these objects to see which ones have which characteristics. Some objects might have all three characteristics; some might have only one; some might have combinations of two.

Before going further, let's develop a shorthand notation. To indicate that an object has a characteristic, we write a '+' after the characteristic's name (like A+). To say that an object does not have a characteristic, we write a '-' after the characteristic's name (like B-). So, an object that has characteristics A and C but lacks characteristic B would be described as (A+, B-, C+). One thing that is definitely true is that in a large set of these objects, the number that are (A+, B-) plus the number that are (B+, C-) is greater than or equal to the number that are (A+, C-), or:

Number (A+, B-) + Number (B+, C-) >= Number (A+, C-)

There is a proof of this at the bottom of this post, but if you're willing to accept it for now, let's move on to apply it more specifically to the EPR question.

Application to quantum world

The objects we're now going to observe are electrons, and we're going to observe them in pairs, for reasons that will become clearer in a minute. Electrons, like other subatomic particles, have a property called spin (which we'll measure as equalling either +1 or -1), and the A, B and C we're going to observe are the spin measurements at three different angles in the same plane. These pairs of electrons will be emitted by a special source that sends them in opposite directions to one another. The special source also ensures that their spins 'add' to zero. For example, if the right electron (R) has a spin of +1 at a given angle, then the left one (L) necessarily has a spin of -1 at that angle.
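The counting inequality above is easy to sanity-check by brute force: generate a large pile of random objects with the three yes/no characteristics and compare the counts. This is just a sketch (the probabilities are arbitrary; the inequality holds for any mix):

```python
import random

random.seed(0)
# Each object is a random (A, B, C) triple of yes/no characteristics.
objects = [(random.random() < 0.5,
            random.random() < 0.3,
            random.random() < 0.7) for _ in range(100_000)]

n_ab = sum(a and not b for a, b, c in objects)   # Number (A+, B-)
n_bc = sum(b and not c for a, b, c in objects)   # Number (B+, C-)
n_ac = sum(a and not c for a, b, c in objects)   # Number (A+, C-)

# True for any collection: every (A+, C-) object is also either
# (A+, B-) (if it lacks B) or (B+, C-) (if it has B).
print(n_ab + n_bc >= n_ac)
```

However you tweak the three probabilities, the final line prints True, which is exactly what the proof at the bottom of the post establishes.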
The weird thing, and the thing that upset Einstein, is that according to quantum theory, between the time that the electron pair is emitted and the time at which some measurement is made, they constitute a single system that can be modelled by a single quantum state function. These electrons can travel very far from one another in this state. However, whenever one is measured, both it and its paired electron drop into a concrete state, IMMEDIATELY, no matter how distant they are from one another. And knowing the spin of the measured one allows you to know the spin of the distant one. This seems to violate the spirit of Einstein's relativity principle, which states that nothing can travel at speeds greater than that of light. This whole experiment is designed to test whether one electron does in fact affect the other in a nonlocal way.

Next, we have to consider the measuring devices, one for R and one for L. Let's say that the right device measures each R's spin at an orientation that we'll call zero (0). The left one is a bit more fancy; it will measure each L's spin at one of two angles: one that makes an angle of size x with 0, or another that makes an angle of size z with 0. The left device can't measure any one L's spin at BOTH the x and z angles, because the first measurement will decouple the relationship that that particular L has with its R twin.

Now, as long as one assumes, as Bell did, that the world is a realistic and local one, then one can substitute the following quantum state measurements into our simpler non-quantum inequality as follows:

* For A+, substitute R(+1), meaning that the right device measures the R electron of a given pair as having a spin of +1.
* For B-, substitute Lx(+1), meaning that the left device measures the L electron of a given pair as having a spin of +1 at angle x (so that, by the pairing, the R electron would have had a spin of -1 at angle x).
* For B+, substitute Lx(-1).
* For C-, substitute Lz(+1), meaning that the left device measures the L electron of a given pair as having a spin of +1 at angle z.
From this, we can express Bell's Inequality as:

Number [R(+1), Lx(+1)] + Number [Lx(-1), Lz(+1)] >= Number [R(+1), Lz(+1)]

Because quantum theory implies different correlations among the electron pairs at the various angles of measurement, its predictions violate this inequality. Remember that Bell's only assumptions were of realism and locality. This implies that quantum theory is incompatible with at least one of these two assumptions. Devices like those I've described do exist, and the experiment has been run many times. In every case, the experimental results have violated Bell's Inequality, thereby supporting quantum theory and driving a stake through the heart of locality (and, some say, realism). But have we (and more importantly generations of the world's top physicists) missed something?

The final word?

This remained the final word until 1995, when a meteorologist named Tim Palmer used the understanding from his physics PhD as well as his deep knowledge of nonlinear dynamics from meteorology to show that, in addition to the assumptions of realism and locality, there is an implicit assumption about counterfactual definiteness embedded in Bell's proof, and that that assumption may well not hold. It in effect calls into doubt the legitimacy of steps 1, 3 and 6 in the proof below. Look out for my upcoming post on Tim Palmer's work for more detail about his efforts to reground quantum theory in a deterministic and discrete nonlinear dynamics.

Proof of Bell's Theorem (using 'N' for 'Number')

1. N (A+, B-) = N (A+, B-, C+) + N (A+, B-, C-); since an object must have the characteristic C or not have it.
2. So N (A+, B-) >= N (A+, B-, C-); since N (A+, B-, C+) cannot be smaller than zero.
3. N (B+, C-) = N (A+, B+, C-) + N (A-, B+, C-); similar reasoning to step 1.
4. So N (B+, C-) >= N (A+, B+, C-); similar reasoning to step 2.
5. So N (A+, B-) + N (B+, C-) >= N (A+, B-, C-) + N (A+, B+, C-); adding inequalities 2 and 4 together.
6. But N (A+, B-, C-) + N (A+, B+, C-) = N (A+, C-); similar reasoning to steps 1 and 3.
7. So N (A+, B-) + N (B+, C-) >= N (A+, C-); which completes the proof.
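The quantum violation itself can be seen with a few lines of arithmetic. For anticorrelated spin-1/2 pairs, the standard quantum (singlet-state) prediction is that the fraction of pairs for which R reads +1 at one angle while the R electron would read -1 at an angle theta away is (1/2)sin^2(theta/2). Plugging that into the inequality above, with illustrative angles x = 30 and z = 60 degrees (my choice for the sketch, not taken from any particular experiment):

```python
import math

def frac(theta_deg):
    """Quantum prediction (singlet state): fraction of pairs that count
    towards a term whose two angles differ by theta_deg degrees."""
    return 0.5 * math.sin(math.radians(theta_deg) / 2) ** 2

x, z = 30, 60                 # illustrative angles, in degrees
lhs = frac(x) + frac(z - x)   # N(A+, B-) + N(B+, C-), as fractions of pairs
rhs = frac(z)                 # N(A+, C-), as a fraction of pairs
# Here lhs < rhs, so quantum theory violates Bell's Inequality.
print(lhs, rhs, lhs >= rhs)
```

With these angles the left-hand side comes out well below the right-hand side, which is the violation the experiments have repeatedly confirmed.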
