Wednesday, February 11, 2015

Many Worlds: the "loss of coherence" and decoherence

What problem does the "many worlds interpretation" try to solve? The core idea is captured by the infamous Schrödinger's cat thought-experiment. The cat ends up in a superposition:

α |cat-alive> + β |cat-dead>           (where α, β are complex amplitudes)

but in reality we never observe such a superposition. What we see is one of two definite outcomes: either the Geiger counter has triggered, releasing the poison, and the cat is dead; or there was no particle emission and the cat remains alive. Making the act of measurement explicit: on observation the observer becomes entangled with the cat-box and joins the superposition thus (with updated amplitudes α' and β'):

α'|Live cat> ⊗ |Observer sees live cat> + β'|Dead cat> ⊗ |Observer sees dead cat>.

The overall situation is still a superposition, but an Everettian would say that since the observer has 'split', each 'copy' doesn't see a superposition but just one outcome. If world-splitting is not your thing, then you have to postulate (as an extra axiom) that 'measurement' somehow causes the superposition to collapse to one or the other outcome, with probabilities |α'|² and |β'|².
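To make the Born-rule bookkeeping concrete, here is a minimal Python sketch (the amplitudes are chosen arbitrarily for illustration): the complex phases drop out and only the squared magnitudes survive as probabilities.

```python
import cmath

# Hypothetical amplitudes for |cat-alive> and |cat-dead> (illustrative only)
alpha = 0.6 * cmath.exp(1j * 0.3)
beta = 0.8 * cmath.exp(1j * 1.1)

# Normalize so that |alpha|^2 + |beta|^2 = 1
norm = (abs(alpha) ** 2 + abs(beta) ** 2) ** 0.5
alpha, beta = alpha / norm, beta / norm

# Born-rule probabilities: the phases drop out, only |amplitude|^2 matters
p_alive = abs(alpha) ** 2
p_dead = abs(beta) ** 2
print(round(p_alive, 2), round(p_dead, 2))  # 0.36 0.64
```

Note that changing the phases 0.3 and 1.1 leaves the probabilities untouched; phases only matter when amplitudes are added before squaring, which is exactly where interference, and its loss, comes in below.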

We now focus more precisely on how the process of 'measurement' destroys superpositions - a topic called 'decoherence'. The formal treatment of decoherence involves graduate-level concepts and is formidably inaccessible, while analogies such as 'phase information leaking into the environment' are unhelpful at best. I will say more about a 'simplest possible model' another day.

However, something can be said even at undergraduate level, and for this we are indebted to non-Everettian Eric L. Michelsen and his excellent "Quirky Quantum Concepts: Physical, Conceptual, Geometric, and Pictorial Physics that Didn't Fit in Your Textbook (Undergraduate Lecture Notes in Physics)". This is available as a PDF but in a fit of guilt I bought it. An extract, [somewhat annotated] follows (page 51 and following).

---

"In theoretical QM, we usually focus on perfect systems, and pure states. We frequently say that a measurement “collapses” the quantum state vector to one agreeing with the measurement, and this is often a useful simplification of the measurement process. However, in practice, the measurement process is more complicated than that, because most measuring equipment, and all observers, are macroscopic. The “decohered” state is the norm; you must work hard to achieve even an approximately pure entangled state.

We show here that elementary QM can explain some of the features of real measurements, however, the full explanation of decoherence is beyond our scope. (The term “decoherence” has a specific meaning: the process of a system becoming entangled with its environment in irreversible ways, resulting in the loss of a consistent phase relationship between components of the system state. We therefore use the more general term “loss of coherence” for both decoherence and other processes.)

Most macroscopic measurements do not show quantum interference [as in the two-slit experiment]. Why not? One reason is that macroscopic bodies suffer unknowable, and unrepeatable energy interactions, i.e. they gain or lose an unknowable amount of energy due to uncontrollable interactions with their environments. In other words, they are subject to simple “noise.” This results in the loss of a consistent phase relationship between components of a superposition state. We discuss below how such a loss of consistent phase leads to classical probabilities.

Let us walk through a plausible measurement, and consider the elementary quantum mechanics involved. [The system pictured below shows the famous Stern-Gerlach experiment, which first demonstrated the quantization of angular momentum.]



Suppose we start with a particle which can be in either of two states, |s1> or |s2>, such as polarization (horizontal or vertical), or spin (up or down). A general particle state is then:

|ψ> = a|s1> + b|s2>     where a, b are complex coefficients and |s1>, |s2> are basis states.


This is called a coherent superposition, because a and b have definite phases. (This is in contrast to a mixed state or incoherent mixture, where a and b have unknown phases.) All that is required for loss of coherence is for the relative phases of a and b to become unknown. For simplicity, we take |s1> and |s2> to be energy eigenstates, and the particle is spread throughout our measurement system [i.e. it is in a spatial superposition].

According to the Schrödinger equation, every energy eigenstate time-evolves with a complex phase determined by its energy, so our 2-state system time-evolves according to:

|ψ(t)> = a e^(−iE1t/ħ) |s1> + b e^(−iE2t/ħ) |s2>.

Since the energies E1 and E2 are quantized, the complex phases multiplying |s1> and |s2> maintain a precise (aka coherent) relationship, though the relative phase varies with time.
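[To see numerically what a "precise relationship" of phases means, here is a small Python sketch, with ħ set to 1 and the two energies chosen arbitrarily: the relative phase advances deterministically with time, which is exactly what coherence requires.]

```python
import cmath

E1, E2 = 1.0, 1.5                    # assumed energy eigenvalues, units with hbar = 1
a, b = 1 / 2 ** 0.5, 1 / 2 ** 0.5    # equal-weight coherent superposition

def coefficients(t):
    """Time-evolved coefficients a e^{-i E1 t} and b e^{-i E2 t}."""
    return a * cmath.exp(-1j * E1 * t), b * cmath.exp(-1j * E2 * t)

for t in (0.0, 1.0, 2.0):
    c1, c2 = coefficients(t)
    # The physically meaningful quantity is the relative phase -(E2 - E1)t, mod 2*pi
    rel = cmath.phase(c2 / c1)
    print(t, round(rel, 3))  # -(E2 - E1) * t: 0.0, -0.5, -1.0
```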

When we measure the particle state, the state of the measuring device becomes entangled with the measured particle. Let |M1> and |M2> be states of the whole measuring system in which either detector 1 detected the particle, or detector 2. If we look directly at the indicator lights, we will observe only state 1 or state 2, but never both. This means |M1> and |M2> are orthogonal. As the measuring system first detects the particle, the combined state of the particle/measuring-device starts out as a coherent superposition: [this is the same as the Schrödinger's cat case above]

|Ψ> = c|M1>|s1> + d|M2>|s2>       where c, d are complex coefficients.

The combined system time evolves according to its new energies:

|Ψ(t)> = c e^(−i(E1 + EM1)t/ħ) |M1>|s1> + d e^(−i(E2 + EM2)t/ħ) |M2>|s2>

If the energies of the two measuring device states fluctuate even a tiny bit, the two components of the superposition will rapidly wander into an unknown phase relation. They will lose coherence.

Every macroscopic system suffers unrepeatable and unknowable energy fluctuations due to its environment.

We estimate a typical coherence loss rate shortly.

[So what does this loss of phase coherence mean in practice?] Let us examine the effects of various kinds of energy transfers between a system and its environment. In our two-path experiment, [I think he means a variant experiment where we don't 'look at' (i.e. measure) the indicator lights 1 and 2, thus allowing the interference pattern to emerge on the screen in the figure to the right] the interference pattern is built up over many trials, by recording detections on film. Now suppose one path suffers an energy transfer to/from its environment before recombining and interfering. There are four possibilities:

  1. The energy transfer is knowable and repeatable. Then one can predict and see an interference pattern in the usual way.
  2. The energy transfer is unknowable, but repeatable. Then we can record an interference pattern, and from it, determine the relative phases of the two paths (mod 2π), and therefore the relative energies (mod 2πħ/t) from Δφ = (ΔE/ħ)t.
  3. The energy transfer is knowable for each trial, but not repeatable. Essentially, each trial has its own position for the interference pattern. One can then divide the detection region into intervals of probability calculated for each trial, and then show consistency with QM predictions, but contrary to classical probability.
  4. The energy transfer is unknowable and unrepeatable. Then there will be no interference pattern, and repeated trials do not allow us to measure any quantum effects, since the phase is unknown on each trial. Therefore, the measurements are equivalent to classical probabilities: it is as if a single path was chosen randomly, and we simply don’t know which path it was.
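[Case 4 is easy to simulate. A minimal Python sketch (the phase θ at one screen point is chosen arbitrarily): a repeatable phase preserves the fringe value, while a fresh, uniformly random phase on each trial averages it to the classical 50/50 result.]

```python
import math
import random

random.seed(0)
theta = 1.0          # fixed path-difference phase at one point on the screen
trials = 200_000

# Repeatable phase (cases 1-2): every trial sees the same pattern cos^2(theta/2)
coherent = sum(math.cos(theta / 2) ** 2 for _ in range(trials)) / trials

# Unknowable, unrepeatable phase (case 4): a fresh random phi on each trial
incoherent = sum(
    math.cos((theta + random.uniform(0.0, 2 * math.pi)) / 2) ** 2
    for _ in range(trials)
) / trials

print(round(coherent, 3))    # cos^2(0.5) ~ 0.77: interference survives
print(round(incoherent, 2))  # ~ 0.5: classical probabilities, no fringes
```

The incoherent average lands at 1/2 no matter what θ is, which is the precise sense in which the fringes are "in a random place on each trial" and wash out.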

This fourth condition, of unknowable and unrepeatable energy transfer, causes loss of coherence, the randomization of phase of components of a superposition. Loss of coherence makes measurements look like the system behaves according to classical probabilities, with no “wave” effects. Loss of coherence destroys the interference pattern when we try to measure through “which slit” a particle passes. Full loss of coherence leads to classical probabilities.

Our example process leading to loss of coherence follows directly from the Schrödinger equation and unknown energy transfers. There is no need to invoke any “spooky” quantum effects.

Note that even accounting for loss of coherence, quantum theory still requires the axiom of collapse of the wave-function upon observation. When a particle’s wave splits, then passes through both detector 1 and detector 2, and then loses coherence because of entanglement with a macroscopic measuring device, the system is still left in a superposition of both slits:

|Ψ(t_after)> = f|M1>|s1> + g|M2>|s2>

we just don’t know f or g. We can’t generate an interference pattern from multiple trials, because each trial has a different phase relation between f and g, putting the peaks and valleys of any hoped-for interference pattern in a random place on each trial. These shifts average over many trials to a uniform distribution. Nonetheless, each trial evolves in time by the Schrödinger equation, which still leaves the system in a superposition. Once we “see” the result, however, the unobserved component of the wave-function disappears, i.e. the wave-function collapses.

Collapse of the wave-function is outside the scope of the Schrödinger equation, but within the scope of QM, because collapse is a part of QM theory. It is one of our axioms. Some references confuse this issue: they try to avoid assuming such a collapse as an axiom, but cannot derive it from other axioms. From this, they conclude that QM is “incomplete.” In fact, what they have shown is that the axiom of collapse completes QM.

Note that once the measuring system fully loses coherence, we could just as well say that the wavefunction has then collapsed, because from then on the system follows classical probabilities (equivalent to a collapsed, but unknown, wave-function). However, we now show that a binary model of “collapse or not” cannot explain partial coherence.

Partial coherence: What if we start with a microscopic system but replace our microscopic atoms with mesoscopic things: bigger than microscopic, but smaller than macroscopic? Mesoscopic things might be a few hundred atoms. These are big enough to lose coherence much faster than single atoms, but still slowly enough that some amount of interference is observed. However, the interference pattern is weaker: the troughs are not as low, and the peaks are not as high. A superposition leading to a weak interference pattern is called partially coherent. We describe partial coherence in more detail in section 8.4. The simple model that the wave-function either collapsed or didn’t cannot describe the phenomenon of partial coherence.

The larger the mesoscopic system, the more uncontrollable interactions it has with its environment, the faster it loses coherence, and the less visible is any resulting interference pattern. We can estimate the time-scale of coherence loss from our example energy fluctuations as follows: a single 10 μm infrared photon is often radiated at room temperature. It has an energy of ~0.1 eV = 1.6 × 10⁻²⁰ J. This corresponds to ω = E/ħ ~ 2 × 10¹⁴ rad/s. When the phase of the resulting system has shifted by an unknowable amount > ~2π, we can say the system has completely lost coherence. At this ω, that takes ~4 × 10⁻¹⁴ s. In other words, thermal radiation of a single IR photon causes complete loss of coherence in about 40 femtoseconds. In practice, other effects cause macroscopic systems to lose coherence in dramatically shorter times.
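[Michelsen's numbers check out. A quick Python verification of the arithmetic, using standard SI constants:]

```python
import math

# Physical constants (SI)
h = 6.626e-34      # Planck constant, J*s
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

wavelength = 10e-6               # 10 micrometre infrared photon
E = h * c / wavelength           # photon energy in joules
omega = E / hbar                 # angular frequency, rad/s
t_loss = 2 * math.pi / omega     # time for the phase to wander by ~2*pi

print(E / eV)      # ~0.12 eV (rounded to ~0.1 eV in the text)
print(omega)       # ~1.9e14 rad/s (~2e14)
print(t_loss)      # ~3.3e-14 s, i.e. tens of femtoseconds
```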

Summary: A measurement entangles a measuring device with the measured system. The entangled state of device and system time-evolves according to the Schrödinger equation. Macroscopic devices lose coherence, due to interactions with the environment. Lack of coherence prevents any interference pattern within the system. Therefore, measurement by a macroscopic device produces subsequent results that are classical, as if the system collapsed into a definite state upon measurement, but observers only “see” which state when they look at the measuring device. Any observation by a person is necessarily macroscopic, because people are big. Such an observation collapses the (incoherent) device/system/world state to that observed. Quantum interference can only be seen if it occurs before any entanglement with a macroscopic system (and therefore before any loss of coherence in the system).

The model of “collapse of the wave-function” is a binary concept: either the wave-function collapses or it doesn't. Such a model cannot account for the phenomenon of partial coherence. Loss of coherence is a continuous process, taking a fully coherent state through less and less partially coherent states and eventually to incoherent (aka “mixed”) states. Continuous loss of coherence fully explains partial coherence and the varying visibility of interference patterns.
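[This continuous picture is easy to illustrate. A minimal Python sketch, assuming Gaussian phase noise of width σ (my modelling choice, not Michelsen's): the fringe visibility falls smoothly as e^(−σ²/2), interpolating between a fully coherent state (σ = 0, visibility 1) and a fully mixed state (large σ, visibility 0).]

```python
import math
import random

random.seed(1)

def visibility(sigma, trials=100_000):
    """Monte-Carlo estimate of the mean fringe term <cos(phi)> with
    phi ~ Normal(0, sigma); analytically this equals exp(-sigma^2 / 2)."""
    return sum(math.cos(random.gauss(0.0, sigma)) for _ in range(trials)) / trials

# Visibility shrinks continuously with the phase-noise width sigma
for sigma in (0.0, 0.5, 1.0, 3.0):
    print(sigma, round(visibility(sigma), 2), round(math.exp(-sigma ** 2 / 2), 2))
```

A binary "collapsed or not" model would only allow visibility 1 or 0; every intermediate value in the table is a partially coherent state.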

Some quantum effects, such as the spectrum of atoms, do not rely on interference, and are therefore macroscopically observable. In fact, measurement of such effects led to the development of QM."