Entanglement And Retrocausal Signaling
Even though Albert Einstein was one of the founders of quantum theory, he was never comfortable with the randomness and lack of determinism it implied. Over several years, he came up with many thought experiments to show that, if quantum mechanics (QM) wasn’t exactly wrong, it was at least incomplete. The pinnacle of the attack came in 1935 with the publication of “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” by Einstein and two colleagues, Boris Podolsky and Nathan Rosen. As the original title rolls off the tongue like a Mack truck, this thought experiment is commonly referred to as the EPR Paradox, after the authors.
EPR showed that quantum mechanics predicted objects would have to communicate with each other instantaneously (i.e. much faster than light), which is forbidden by special relativity, and that as such, QM must be incomplete. Decades later, the connections between particles were shown to exist, turning Einstein’s argument against QM into possibly one of his greatest discoveries.
There is a lot of debate as to whether this connection can be used to send information, but there’s good reason to believe it’s possible. If so, not only could instantaneous communication become possible but signals could be sent backwards in time, allowing for some incredible applications, but also illustrating some very strange and frightening aspects of the foundations of our universe.
“Quantum Entanglement” is the term given to the connection between objects, predicted by the EPR paradox. Properties such as momentum or quantum spin can be linked, so that the objects can’t be treated separately, even if there is a lot of distance between them. Another common term is “nonlocality,” which refers to the idea that parts of a system are spatially separated but have linked responses to measurements.
A Brick And A Cinder Block In Outer Space
Imagine that a common, red brick and a cinder block are motionless in outer space (note: the color is unimportant). Between them is a compressed spring with a mechanism to trigger it to uncompress, pushing the brick and cinder block apart. Initially, before the trigger, the momentum of the “system” is zero since nothing is moving. Momentum is essentially mass times velocity, where velocity includes direction, so momentum can be negative depending on the direction of motion. After the spring is triggered, the brick and cinder block will fly away from each other, one in a “positive” direction and the other in a “negative” direction, depending on how the coordinate system is defined. Momentum is always conserved, meaning the final momentums of the brick and cinder block will sum to the initial momentum before the trigger, which is zero. The brick, having less mass, will end up with a higher speed than the cinder block, so the magnitudes of their momentums will be equal, although opposite in sign.
After these two fine examples of modern building materials have had time to separate a significant distance from each other, imagine someone measures the momentum (mass, speed, and direction) of the brick. The observer will immediately know the momentum of the cinder block to be the opposite, and if its mass is known, they will be able to calculate its velocity.
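The bookkeeping can be sketched in a few lines of Python. The masses and the brick’s velocity below are made-up illustrative values, not anything from the text:

```python
# Conservation of momentum for the brick and cinder block.
# Masses and the brick's velocity are made-up illustrative values.
m_brick = 2.0     # kg
m_block = 10.0    # kg

# Suppose the spring happens to impart this velocity to the brick:
v_brick = 5.0     # m/s, the "positive" direction

# Total momentum must remain zero, so the cinder block's velocity follows:
v_block = -(m_brick * v_brick) / m_block    # slower, since it's heavier

p_total = m_brick * v_brick + m_block * v_block
print(v_block)    # -1.0 m/s, the opposite direction
print(p_total)    # 0.0
```

Measuring the brick’s momentum is exactly this last step run in reverse: knowing one side’s mass and velocity pins down the other side’s.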
Although this is a simple and straightforward physics problem in Newtonian mechanics, matters are complicated in a very interesting way when quantum mechanical considerations are taken into account.
Quantum Events And Randomness
Normally in the macroscopic world, every effect has a cause: a trash can goes flying because somebody kicked it, the Earth is at a certain position because it’s in orbit, and its velocity took it to that point at the given time, or a plant grew up to be a beautiful sunflower because its seed was planted some months ago. If cause can’t be determined, it’s often due to the complexity of the processes creating the effect – too many things affect the outcome, or a slight change in one factor can have a huge effect on the system, as described by chaos theory. In the quantum world, many events are truly random, meaning that they happen with some probability, but that there’s no deeper structure that would make it possible to predict the event with certainty.
Some of the commonly known quantum events are the emission of light, radioactive decay, and the decay of unstable particles into other particles. When light is created, an electron in an atom falls from an excited state – an orbit of more energy than the ground state – back down, releasing energy in the form of a photon (a “particle” of light – see picture below). The time it takes the electron to fall back to the ground state is random, usually on the order of a microsecond. For materials used in lasers, electrons will stay in excited states much longer, about a millisecond, or three orders of magnitude larger, but the actual time each electron stays excited is still random. Radioactive decay involves the nuclei of atoms and the expulsion of nuclear particles. Predicting exactly when an atom will decay is impossible; decay can only be described in probabilistic terms: half a sample of (a whole lot of) atoms will decay within a certain amount of time – the half-life for the specific element. Unstable particles such as tau leptons or top quarks have average times to live before they decay into other particles, different for each type of particle. Some will live longer and some shorter, but the time for each particle is random.
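The half-life statistics can be sketched with a Monte Carlo simulation: each atom’s decay time is individually random (exponentially distributed), yet about half of any large sample decays within one half-life. The half-life value and sample size below are arbitrary, not those of a real element:

```python
import math
import random

# Monte Carlo sketch of radioactive decay: each atom's decay time is
# random and exponentially distributed. The half-life is an arbitrary
# illustrative value, not that of a real element.
random.seed(0)  # seeded only for reproducibility

half_life = 10.0        # seconds
n_atoms = 100_000
mean_lifetime = half_life / math.log(2)

decay_times = [random.expovariate(1 / mean_lifetime) for _ in range(n_atoms)]

# No individual decay time is predictable, but the aggregate is:
fraction_decayed = sum(t <= half_life for t in decay_times) / n_atoms
print(fraction_decayed)   # ≈ 0.5 after one half-life
```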
Superposition And Schrödinger’s Cat
Intrinsic randomness is only the beginning of the weirdness of QM, however. Quantum events happen, initially, in all possible ways. A radioactive atom will decay after all possible times out to infinity, so that it is forever in both states, decayed and not decayed. However, when the atom or its ejected nucleon interacts with something else, the state will change fully to either decayed or not decayed. In the language of QM, the “wave function collapses” into one state. It’s almost as if the universe is a lazy manager which avoids making a decision until it absolutely has to. Of course, interactions can put larger systems into undetermined states. A radioactive atom and all the neighboring atoms could be in a state of both not decayed and decayed, and the effects due to the direction in which a nucleon was ejected could be in any of many states (left, right, up, down, everything in between). The particle or system is in a “superposition” of all possible states.
The wave function, which obeys an equation developed by Erwin Schrödinger, is a mathematical function which, when its magnitude is squared, gives the probability of some value. Applied to momentum, it might show a range of possibilities with varying probabilities for each; applied to position, it might show multiple possible positions. After measurement, however, the wave function will show one value, indicating that it has collapsed.
In the thought experiment of Schrödinger’s Cat, a cat is placed in a box along with a vial of poison. A hammer is set up so that it can break the vial, releasing the poison and killing the cat, depending on whether a radioactive sample decays and triggers the hammer to fall. After such a time that the sample has had a 50% chance of decaying, the cat is equally dead and alive, and it’s only when someone looks in the box, thus collapsing the wave function, that the cat will become fully alive or dead. Prior to observation, nobody knows whether the cat is alive or dead, and in fact there is no answer. It’s not possible to see the cat in the superposition of dead and alive because once it has been observed, it has been observed, thus destroying the quantum state of the system (note that this makes it useless to actually perform the experiment, so don’t call PETA).
Exactly when and why the wave function collapses has long been a topic of heated arguments. Some physicists believe that the interaction of a system with sensing equipment and eventually a human brain causes the collapse; in other words, intelligence plays a key role of forcing the universe to be one particular state. Others believe that quantum superposition propagates across the entire universe so that it’s in all states all the time. Since people only perceive one reality, there are (infinite) copies of every person in an infinite multitude of universes, each perceiving one of the possible versions.
Another possibility has become popular more recently. As quantum particles and systems interact with the universe, their wave functions go out of phase with each other and “decohere.” It’s like this: if water waves are all in sync, they add up to a larger wave, but if they’re out of sync, they interfere with each other and add up to nearly nothing. One argument to be made for this point of view is that people are made of atoms and other stuff from the universe, so why would an observation by a person (an intelligent mind) have the super ability to collapse the universe into a single state while other things in the universe, made up of the same stuff, would not? People are also inside the universe, as opposed to being outside, so there would be nothing to keep the universe as a whole in a single state (unless the Flying Spaghetti Monster is using his noodle to tame it).
When objects interact, they can become linked to each other in that certain properties are “shared” between both. Imagine that two rubber balls are moving along in a superposition of several velocities when they collide. There will then be multiple possibilities for the final velocities of the balls, but each will depend on the other when finally measured – one ball will not be found to have been severely deflected while the other continues almost unaffected. To put it another way, there are still multiple possible outcomes and the balls will be in a superposition of all of them until measured, but each possible outcome specifies complementary velocities for each ball.
After the interaction, the wave functions for each ball are not independent. If a measurement is made on one ball, thus collapsing its wave function into a single value, the wave function for the other ball will also collapse. In effect, this means that properties such as momentum, position, and many others can be entangled so they add up to some value – they are complementary to each other.
As another example, imagine that a particle of matter collides with its anti-particle, annihilating both and producing two photons, probably gamma rays. The frequency, or wavelength, of both photons could be random, but the sum of energy of both photons must add up to the original energy of the two particles – 2mc^2 (and the sum of momentums for the created particles has to add up to the momentum of the original).
When the energy of one photon is measured, that photon “collapses” into a single value from the multiple possibilities. Since both are part of one system, the other photon immediately assumes a single value for energy. Both photons then have real values for their energy, and since the frequency of a photon is directly related to its energy, the photons also have definite values for frequency. This is a very important point: measuring one photon immediately affects the other, regardless of distance. The two photons could be on separate sides of the galaxy and, assuming neither had been absorbed by a brick along the way, observing one would affect the other instantly.
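The bookkeeping can be made concrete with an electron-positron pair at rest, the simplest, symmetric case. The constants below are the standard values:

```python
# Energy bookkeeping for matter-antimatter annihilation, using an
# electron-positron pair at rest as the concrete case (standard constants).
m_e = 9.1093837015e-31    # electron mass, kg
c = 2.99792458e8          # speed of light, m/s
h = 6.62607015e-34        # Planck's constant, J*s

total_energy = 2 * m_e * c**2    # the 2mc^2 shared by the photon pair

# Suppose one photon is measured at energy E1 (the symmetric case here);
# entanglement immediately pins its partner to the remainder:
E1 = total_energy / 2
E2 = total_energy - E1

f1 = E1 / h    # frequency follows directly from energy, E = h*f
E1_keV = E1 / 1.602176634e-19 / 1000
print(E1_keV)    # ≈ 511 keV per photon, the well-known annihilation line
```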
The Brick And Cinder Block Revisited
Returning to the brick, cinder block, spring, and trigger: the brick is moving in one direction, and the cinder block (along with the spring and trigger) is moving in the other direction, such that the momentums of each add up to the original momentum – zero. If the spring could be controlled such that it would push the brick and cinder block apart with more or less force, resulting in a different speed of separation, a quantum event could be used to select that value and would put the velocities of the brick and cinder block into a superposition of values. (A compressed spring holds energy, which would be imparted to the two objects and would specify the final velocity and kinetic energy. A more complicated system would be needed to set the velocities of the brick and cinder block to different values.)
At some later time, the momentum of the brick is measured – its mass and velocity – and collapses into a single value. Simultaneously, the momentum of the cinder block (along with the spring and trigger or other device) also collapses into a single value. It almost seems as if the two pieces are in instantaneous communication with each other, which is what Einstein objected to.
Doing this experiment with macroscopic objects such as bricks would probably not work. Interference between the pieces, between the different parts of the brick, or simply insufficient isolation from the rest of the universe would likely collapse the state prematurely. Entanglement can be achieved with particles, atoms, or even molecules in very carefully isolated systems. In other words, don’t try this with a home.
What Does It Matter?
A perfectly reasonable question to ask is why all this stuff about systems in superposition matters when observation will collapse it into an ordinary, non-quantum state. There are cases in which particles in superposition will behave differently before they’re detected, altering the final result. Prematurely observing the particles or system will change the results of the test, and this will be illustrated below.
At this point, it’s tempting to believe that particles really do have well-defined values for their properties regardless of what quantum mechanics says. Einstein called this “objective reality” and argued strongly for it (“I like to think the moon is still there when I’m not looking at it.”), believing that QM was incomplete in that it wasn’t able to predict properties even though particles had them. These are referred to as “hidden variables” theories. In the early 1960s, John Bell published a theorem – Bell’s inequality – that could be used to show that, if the inequality were violated, particles couldn’t have defined properties before measurement. A few years later, experiments were performed showing that Bell’s inequality was, in fact, violated, and refinements have been made over the decades to both the theory and the experiments, all showing that quantum mechanics is correct.
For the brick experiment, a hidden variables theory would state that the brick and cinder block really did have values for momentum regardless of whether they had been measured. If this were the case, the correlation between the momentums would be nothing special – just conservation of momentum from the time when the spring pushed them apart.
Spin And Polarization
A very useful property for illustrating Bell’s inequality is particle spin. Spin is a measure of angular momentum for particles, although it’s definitely not the same as angular momentum in the classical sense – the picture above is misleading. First of all, the amounts are quantized – to half-integer values for fermions (particles such as quarks, protons, neutrons, and leptons), and integer values for bosons (photons, gluons). Angular momentum, in this case, does not refer to a chunk of matter that is spinning and has some rotational inertia, since particles are, theoretically, geometrical points, and it doesn’t make sense to think of them as rotating or to calculate rotational inertia in the traditional way. Also, if charged particles had volume, the spinning charge would continuously undergo acceleration and would radiate energy. Quantum particles are, by definition, not classical and don’t behave in classical ways.
Another interesting aspect of spin is that, when measured, it’s always one value or another but never in between. Using an electron as an example, it will be either spin-up or spin-down, resulting in the electron’s generating a magnetic field in one direction or the opposite (that an electron generates a magnetic field is a big reason for the name “spin” since a rotating charge would classically generate a magnetic field). The measurement actually causes the spin axis to line up with the measurement, either parallel or antiparallel. When an electron moves through a uniform magnetic field, it will curve in one direction or the other depending on its spin. The key point is that it will always curve by the same amount, one way or the other, but nothing in between. The value for spin is either up or down in relation to the magnetic field, not at an angle to it and as such, there are only ever two possible values for spin.
Angular momentum is still conserved in particle reactions. Certain types of particles have no angular momentum but decay into particles that do have it, so the angular momentum of the created particles has to add up to zero – the value for the parent particle. Spin of the child particles will initially be in a state of superposition of both possible values, but the spins of both particles are entangled. When spin is measured for one particle, it will become either up or down in the direction of the measurement and the other particle will automatically become the opposite value. Spin can be measured at any angle, and both particles will have opposite values relative to the angle of the measurement.
It should be noted that there are unstable particles with non-zero spin which decay, and the child particles can have the same values for spin in these cases. When spin is measured on one, the other particle will immediately take on the same value for spin, in the angle at which the measurement is made.
Photons also have values for spin but this is more commonly referred to as polarization. As photons travel in a direction, an electric field oscillates perpendicular to the direction of motion and a magnetic field oscillates perpendicular to the electric field.
A polarizing filter will allow photons with their electric fields parallel to the direction of the filter to pass through, while blocking photons with their electric fields orthogonal to the filter. This constitutes a measurement of spin, so the photons are limited to two possible values – parallel or orthogonal to the filter. These values are 90 degrees apart as opposed to electrons, whose spin values are 180 degrees apart.
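The two-valued outcome still follows quantum probabilities: a photon polarized at angle θ to the filter passes with probability cos²θ – the single-photon version of Malus’s law. A small Monte Carlo sketch, with arbitrary angles and trial count:

```python
import math
import random

# Single-photon Malus's law: a photon polarized at angle theta to a
# polarizing filter passes with probability cos^2(theta). The angles
# and trial count are arbitrary illustrative choices.
random.seed(1)  # seeded only for reproducibility

def passes_filter(photon_angle_deg, filter_angle_deg):
    theta = math.radians(photon_angle_deg - filter_angle_deg)
    return random.random() < math.cos(theta) ** 2

# A photon polarized at 45 degrees to the filter passes about half the time:
trials = 100_000
fraction = sum(passes_filter(45, 0) for _ in range(trials)) / trials
print(fraction)   # ≈ 0.5
```

A photon aligned with the filter always passes, and one orthogonal to it essentially never does; the in-between angles are where the probabilistic, two-valued collapse shows up.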
A Simple Version Of Bell’s Inequality
Bell’s inequality is entirely based on logic and the derivation has nothing to do with physics. Imagine a set of objects, each of which can or can not have any combination of three different properties, A, B, and C. One simple version of Bell’s inequality states the following:
Count(A, not B) plus Count(B, not C) is greater than or equal to Count(A, not C)
As an example, say that the group is a mob of people and the three characteristics are height above 5’8”, female, and green eyes. Each property applies or does not apply to each person, and the quantities of people for each of the terms of the inequality can be counted. It’s easy to show why the inequality will always hold true with a simple table. Each of the characteristics covers part of the table: quality C, green eyes, is the upper row while not having quality C (^C pronounced “not C”), eyes of some other color, is the lower row. Quality B (female) has the left two columns, not B (male) has the right two columns. Quality A, tall, alternates each column.
In the table below, the cells covered by each term of the inequality are marked. The first term on the left, A not B, covers the “not B, A” column; the second term, B not C, covers the two B cells of the “not C” row; the right side of the inequality, A not C, covers the two A cells of the “not C” row:

            B, A         B, not A     not B, A     not B, not A
C           –            –            (A,^B)       –
not C       (B,^C)       (B,^C)       (A,^B)       –
            (A,^C)                    (A,^C)
The reason the inequality will always hold true should be clear from the above table. Any people that are part of the collection A not C (tall with non-green eyes) are also in either of the other two groups (tall and male, or female with non-green eyes). For any collection of people or other (regular) objects with any three properties, this inequality will always be true.
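Because the inequality is pure counting, it can also be checked by brute force: assign an arbitrary head count to each of the eight possible property combinations and verify that the first two terms never sum to less than the third. A short sketch with random, illustrative head counts:

```python
import random
from itertools import product

# Brute-force check of Count(A, not B) + Count(B, not C) >= Count(A, not C)
# for arbitrary collections: assign a random head count to each of the
# eight property combinations and verify the inequality every time.
random.seed(2)  # seeded only for reproducibility

for trial in range(1000):
    combos = product([False, True], repeat=3)
    counts = {combo: random.randint(0, 50) for combo in combos}

    def count(pred):
        return sum(n for (a, b, c), n in counts.items() if pred(a, b, c))

    a_not_b = count(lambda a, b, c: a and not b)
    b_not_c = count(lambda a, b, c: b and not c)
    a_not_c = count(lambda a, b, c: a and not c)
    assert a_not_b + b_not_c >= a_not_c   # holds for every classical mob

print("inequality held for all 1000 random collections")
```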
To apply Bell’s inequality to electrons, three independent properties have to be measured. If the spin of an electron is measured along one axis, quantum mechanics says that the measurement changes the electron, so measuring along a different axis wouldn’t give an independent result. However, with entanglement, the spins along two different axes could be measured on different particles. Without entangling three particles, which has been done recently but is harder, a third property can’t be measured on one set. However, experiments can be done on many pairs of particles, getting the counts of each of the terms of the inequality and doing the calculation with statistically significant samples.
Applying the inequality to electron spins, entangled pairs are measured in combinations of different spin angles where each angle represents property A, B, or C. Once again, spin is either up or down in relation to the axis to which it’s measured, and the spins are correlated (either both the same or opposite) when measured along the same axis. When measured along different axes, however, the measurements are not the same and so constitute different properties.
When this experiment was done, it violated Bell’s inequality. Since the measurements were done over different sets of particles, there is a chance that the results were all flukes, but the numbers of particles were huge, like billions, and there have been many different experiments performed which all violate Bell’s inequality. Instead, the results follow the probabilities dictated by quantum mechanics.
What does this mean? When one particle in a pair is measured, it affects the measurement of the other particle in the pair so the right term of the inequality, A not C, is not included in the left two terms, A not B and B not C. In terms of the example with people, that’s like saying measuring someone’s height affects their eye color. With entangled pairs, making one measurement on one particle does affect a different measurement on another particle, and the measurements were done when the particles were far enough away from each other that a classical signal (i.e. one traveling at most, at the speed of light) would not have time to influence the other particle. Therefore, the particles can’t have had values for all possible directions of spin measurements before they were actually measured.
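The violation can be reproduced numerically. The sketch below uses the standard quantum prediction for a spin-1/2 singlet pair – the probability of finding one particle spin-up along angle x and its partner spin-up along angle y is ½ sin²((x − y)/2) – with three arbitrarily chosen measurement angles:

```python
import math

# Quantum prediction for a spin-1/2 singlet pair: the probability that one
# particle measures spin-up along angle x while its partner measures
# spin-up along angle y is (1/2) * sin^2((x - y) / 2). Since the pair is
# anti-correlated, "partner up along y" plays the role of "this particle
# NOT up along y", giving the three terms of the inequality.
def p_up_up(x_deg, y_deg):
    return 0.5 * math.sin(math.radians(x_deg - y_deg) / 2) ** 2

A, B, C = 0.0, 45.0, 90.0   # three measurement angles (an arbitrary choice)

lhs = p_up_up(A, B) + p_up_up(B, C)   # Count(A, not B) + Count(B, not C)
rhs = p_up_up(A, C)                   # Count(A, not C)
print(lhs, rhs)   # ≈ 0.146 vs 0.25 -- the inequality is violated
```

No assignment of pre-existing spin values to the particles can produce these proportions, which is exactly what the experiments confirm.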
Taking a step back for a minute, consider what measuring spin means. It’s a measurement of angular momentum which, as stated earlier, is always conserved and because of that, it makes perfect sense that particle spins of pairs created from the same reaction (such as the decay of a parent particle into two electrons) would always be correlated. The addition of particles not having definite values before measurement is what makes the correlation interesting.
Since the experimental results violate Bell’s inequality, one or more of the assumptions behind the proof must be incorrect. What are the assumptions? First, it was assumed that particles – electrons – have properties whether or not those are measured: physical reality exists. It was also assumed that information does not travel faster than light, so the electrons could not signal each other: locality, the idea that objects only affect each other by touching, or by exchanging things that transmit forces or information, all of which travel at or slower than light. The last assumption is that logic is valid. Quantum mechanics correctly predicts the results of these experiments – how many electrons will pass each of the filters – which indicates that QM also violates at least one of these assumptions.
Entanglement And Information
It would seem that entanglement breaks the light-speed barrier since entangled particles have to settle on a value for angular momentum at the time one of them is measured, and they have to act together to make sure conservation of momentum isn’t violated. The first thought is to make a modem to use entanglement to send information, but careful examination shows this isn’t easy. In fact, many physicists believe it’s not possible.
Why It Doesn’t Work
Imagine using the entangled electron pairs from the discussion of Bell’s theorem to send information. Out in space, the Death Star emits pairs of entangled electrons to distant observers. When Leia receives an electron, she checks whether it is spin-up at zero degrees – the direction perpendicular to the galactic plane following the right-hand rule (hopefully she’s in a spiral galaxy). Luke is set to measure his electrons at zero degrees as well. Leia’s electron has a 50% chance of measuring spin-up and a 50% chance of spin-down. How does she get a message to Luke? Luke has the same odds of getting a spin-up electron if he measures his first. Each knows the other got the opposite result, but neither controls which. What if Leia decides to measure her electron to send a “1” or not measure it to send a “0”? Regardless of whether she reads the spin or not, Luke still has a 50/50 chance of getting a spin-up measurement. Leia would have more success sticking a post-it note to her dad and tossing him to Luke, even though delivery is neither quick nor assured.
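The dead end shows up in a toy simulation: the pair is anti-correlated, but whether or not Leia measures, Luke’s local statistics never change. (This is an illustrative model of the statistics only, not a full quantum treatment.)

```python
import random

# Toy sketch of why raw entanglement statistics carry no message:
# whatever Leia does, Luke's local results stay 50/50.
# Illustrative model of the statistics, not a full quantum treatment.
random.seed(3)  # seeded only for reproducibility

def emit_pair():
    # anti-correlated pair; which side is spin-up is random
    return ("up", "down") if random.random() < 0.5 else ("down", "up")

def luke_fraction_up(leia_measures, trials=100_000):
    ups = 0
    for _ in range(trials):
        leia_spin, luke_spin = emit_pair()
        if leia_measures:
            _ = leia_spin               # Leia looks at her electron...
        ups += luke_spin == "up"        # ...but Luke's odds never change
    return ups / trials

with_measurement = luke_fraction_up(True)
without_measurement = luke_fraction_up(False)
print(with_measurement, without_measurement)   # both ≈ 0.5
```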
Many physicists are uncomfortable with the idea of being able to send information faster than light (FTL). Since it automatically makes it possible for effect to precede cause in some reference frames, FTL communications open the universe to paradoxes. Proofs have been written that entanglement can’t carry information, but as Dr. John Cramer of the University of Washington notes, at least some of these are tautological – the assumptions, however subtly, contain the result. It may be that wishful thinking on the part of physicists not wanting to have to deal with paradoxes plays a deeper role in these proofs than was intended.
In the early 19th century, Thomas Young performed the double slit experiment. Light was shone through two close slits to see whether the waves of light coming from each slit would interfere with each other, which they did, producing a characteristic set of light and dark bands. At the time there was a long-standing argument as to whether light was made of waves or particles, and the results of the experiment definitively supported the wave theory.
The heart of the debate was that, if light was made of waves, what were the waves “waving” in? Water waves are displacements of water and sound waves are transmitted through air as pressure waves, but there was no obvious medium for light to wave through. A common concept was that the luminiferous ether, an otherwise undetectable sort of fluid permeating the universe, was the sea that light was waving in. However, in the late 19th century the Michelson-Morley experiment failed to detect any difference in the speed of light coming from different directions, as there should have been if the Earth were moving relative to the ether. After more experiments, the concept of the ether was dropped.
When Einstein explained the photoelectric effect, the argument over whether light was composed of particles or waves swung back toward particles. What Einstein showed was that light had to be of at least a certain frequency before it could dislodge electrons in certain materials to produce a current – these were basically early solar cells. Light of a lower frequency, corresponding to the red end of the spectrum, didn’t have enough energy even when the light source was highly intense. Higher frequency light, corresponding to the blue end of the spectrum, could dislodge electrons and produce a current.
If light was made of particles, how would it create the interference pattern? It turns out that the pattern isn’t generated by the interference of the electromagnetic waves of photons but rather by the probability waves associated with their positions. The creation of a photon is a quantum event, subject to probability. The directions in which photons are emitted are (at least partially) random, and they go off in all possible directions, only settling down to a single direction when they are detected. When a photon goes through the slits, it’s in a superposition state such that it goes through both, and the probability waves associated with the photons’ positions interfere with each other.
The double slit experiment has been done such that one photon is emitted at a time, and the pattern builds up over time as photons’ positions collapse into definite locations with the probability matching the interference pattern. It’s also been done with particles, including electrons, protons, and atoms, indicating that particles are also waves, just as photons (waves) have some particle-like properties. This effect is referred to as wave-particle duality.
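The fringe pattern follows from adding the complex amplitudes of a photon’s two possible paths, one per slit. The sketch below computes the detection probability (up to normalization) at a given screen position; the wavelength, slit separation, and screen distance are arbitrary illustrative values, not taken from any particular experiment:

```python
import cmath
import math

# Two-slit interference sketched by summing the complex amplitudes of a
# photon's two possible paths. Wavelength, slit separation, and screen
# distance are arbitrary illustrative values, not from any experiment.
wavelength = 500e-9     # m
slit_sep = 50e-6        # m, center-to-center slit separation
screen_dist = 1.0       # m, slits to screen
k = 2 * math.pi / wavelength

def intensity(x):
    # path lengths from each slit to screen position x
    d1 = math.hypot(screen_dist, x - slit_sep / 2)
    d2 = math.hypot(screen_dist, x + slit_sep / 2)
    amp = cmath.exp(1j * k * d1) + cmath.exp(1j * k * d2)
    return abs(amp) ** 2   # detection probability, up to normalization

print(intensity(0.0))     # ≈ 4: bright central fringe (paths in phase)
print(intensity(0.005))   # ≈ 0: dark fringe (paths half a wave apart)
```

Each photon lands at a single random spot; it’s only the probability of landing, accumulated over many photons, that traces out this pattern.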
If the double slit experiment is modified so that some form of detector senses when a photon goes through one slit or the other, the interference pattern disappears. This was initially attributed to slight alterations to the path of the photons by the detector, but that has since been shown not to be the case. The detector at one slit prematurely collapses the wave functions (position probability functions) of the photons to single values, so they can no longer interfere and produce the pattern.
Double-Slit Plus Entanglement
In 1998, Austrian physicist Birgit Dopfer performed an experiment for her doctoral thesis using entangled photons. It essentially created an interference pattern with one set of photons and a double slit, then attempted to gain the which-way information (which slit the photons went through) by measuring the entangled partners.
To create the entangled pairs, an ultraviolet laser of 351.1nm was shone at a crystal of beta-barium-borate (BBO). The crystal absorbed some of the UV photons and would emit two sets of three concentric cones of light, each with different wavelengths. The two inner cones were twice the wavelength (half the frequency) of the original light, or 702.2nm and at the two points where those two cones crossed, pairs of entangled photons were generated.
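The wavelength doubling is just energy conservation: the two daughter photons’ energies must sum to the pump photon’s. A quick check, for the degenerate case where both daughters carry equal energy:

```python
# Energy conservation in parametric down-conversion: one pump photon at
# 351.1 nm splits into two entangled photons whose energies must sum to
# the pump's. Shown here for the degenerate (equal-energy) case.
h = 6.62607015e-34      # Planck's constant, J*s
c = 2.99792458e8        # speed of light, m/s

def energy(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9)

pump_nm = 351.1
daughter_nm = 2 * pump_nm    # half the frequency -> twice the wavelength

print(daughter_nm)    # 702.2 nm, matching the inner cones
assert abs(2 * energy(daughter_nm) - energy(pump_nm)) < 1e-25
```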
One beam went to a double slit, where a detector could check whether there was an interference pattern. The number of photons generated at just the right place for the entangled pairs was already low, and a lot never made it through the double slits but instead hit the opaque area, so individual photons had to be registered with a “cascade” detector. Furthermore, the detector would measure events for some time, move slightly, measure events for another period of time, move again, and so on. It would trace the distance along which the interference pattern would build up, counting photons at each spot. Eventually, the data would be assembled to see whether the interference was there.
The other beam went to a detector so that the momentum could be determined. However, a lens could focus photons such that they would always go to the same spot, regardless of whether their momentum indicated one slit or the other. In this way, the “which way” information could be erased.
Another important piece was a “coincidence” detector which would check for photons from both beams arriving at the same time. The purpose was to filter out non-entangled photons that could swamp out the entangled pairs and their behavior.
The experiment was a success. When the lens destroyed the which-way information, the interference pattern was measured on the far side of the double slit. Without the lens, the which-way information was preserved and the interference pattern disappeared.
These results seem to indicate that information is actually being transmitted. In this case, the detector was the double slit producing an interference pattern that depended on the stream of photons being in a superposition state, which in turn depends on whether a measurement was made to collapse the photons into definite positions. That measurement could be made on separate photons – the entangled beam – rather than with the photons going through the slits. When the entangled beam was measured, both they and their double-slit partners collapsed into single values for momentum.
There is considerable disagreement in the physics community as to whether information is being passed or something else is happening. One argument is that the coincidence detector plays a big role and that without it the experiment wouldn’t work – above and beyond its intended function of filtering out non-entangled photons. Later, an argument will be presented predicting that the experiment, and the information transfer, will still work (full disclosure: this may be wishful thinking on the author’s part).
Variations of the entangled double slit experiment move the detector and lens further from the BBO crystal, thus delaying the choice as to whether the which-way information is preserved until after the interference has been generated. Another experiment, called the quantum eraser, puts filters on the slits to, in principle, make the which-way information available and destroy the interference pattern. On the entangled beam, another polarization filter can be used to erase that information, thus restoring the interference. There is a version of the quantum eraser that delays the choice to erase the polarization and that, too, works.
Both of these show that measurements of the momentum – the which-way information – can be captured or erased after the other beam of photons has interacted with the double-slit and produced an interference pattern on the screen, or failed to. In essence, a future measurement affects the result of the double-slit experiment.
Rather than just moving the detector on the entangled beam (not the beam at the double slit) farther away for additional delay, the entangled beam could go through an identical double slit coupled to a pair of fiber-optic cables, increasing the delay to the order of tens of microseconds. Photons coming out of the fibers could be focused onto a single spot – erasing the which-way information – or sent to a screen or detector, preserving it.
To really do experiments, and to create applications that exploit the jumps back in time, the system needs to be fast and robust, far beyond what Dopfer’s experiment achieved. The main limiting factors are how long the which-way measurement can be delayed without losing data, how many entangled pairs can be generated per second, and how many photons it takes to determine whether the interference pattern is there. Obviously the coincidence detector would need to be eliminated while still limiting, if not completely removing, non-entangled photons. Switching the optics to preserve or destroy which-way information would need to be fast as well.
Delaying beams with fiber optics is a great improvement over moving a detector away from the source of photon pairs. Fibers aren’t perfect, though, and often, communication equipment will use repeaters to strengthen signals between stretches of cable. It may be possible to build repeaters that preserve entanglement as well.
The length of fiber will determine how long a delay is possible, given an acceptable error rate and the luminosity of the entangled source. Ultimately, enough photons have to be detected to discern interference from non-interference, which will factor into how long the fibers can be.
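As a sanity check on those lengths, the delay scales linearly with the fiber. A minimal sketch, assuming a group index of about 1.468 for standard silica fiber (the index is an assumption for illustration, not a value from any experiment described here):

```python
# Back-of-the-envelope delay through a fiber spool.
C = 3.0e8          # speed of light in vacuum, m/s
N_FIBER = 1.468    # assumed group index of standard single-mode fiber

def fiber_delay_us(length_m: float) -> float:
    """Delay in microseconds for light traversing length_m of fiber."""
    return length_m * N_FIBER / C * 1e6

# A 10 km spool gives a delay on the order of tens of microseconds,
# matching the scale mentioned above.
print(round(fiber_delay_us(10_000), 1))   # 48.9
```

At roughly 49 µs per 10 km, reaching the tens-of-microsecond delays mentioned earlier requires spools of a few kilometers and up.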
Reading Interference Patterns
Obviously it’s important to be able to detect an interference pattern, or the lack of one, quickly. In Dopfer’s experiment, a single-photon detector was moved from one spot to the next, and photons were counted at each over a relatively long period of time, on the order of a minute. To get a reading quickly – within microseconds – multiple detectors would be used across the width of the interference pattern, and the rate of entangled-pair generation would need to increase.
The conversion rate of beta-barium borate (BBO) is small, and many of the converted photons go into light cones containing no entangled pairs. Other types of crystals may exist now, or soon will, with a higher production rate of entangled pairs. Dr. Cramer has a crystal that should produce a million pairs per second, pumped by a 404 nm laser – a Blu-ray laser diode. Other sources, such as quantum dots, may be able to produce entangled pairs at much higher rates.
The detectors need to get enough photons to have a high chance of discerning between interference and non-interference. Using a regular detector for light as opposed to cascade detectors would reduce the cost greatly, but would also greatly increase how many entangled pairs were needed. Whether getting enough luminosity in the beams is even possible may dictate what detectors to use. Of course, using cascade detectors even if not strictly necessary will always mean faster detection, which may be worth the cost.
Enough photons need to be collected to have a good chance of accurately distinguishing interference from non-interference. A small number may be sufficient: if five photons all land well within the bright interference bands and none outside, a reasonable guess can be made that interference is present. A better method would be to analyze the interference and non-interference patterns to determine exactly where photons appear in only one case. The dark bands of the interference pattern would likely receive much more light under non-interference. Similarly, interference will have bright bands outside the two-spot pattern of non-interference. So careful placement of detectors could significantly reduce the time needed to discern the patterns.
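The detector-placement idea can be sketched as a toy classifier: watch only the dark fringes, where interference predicts (nearly) zero counts and non-interference predicts plenty. The band positions and threshold below are illustrative assumptions, not Dopfer’s actual geometry:

```python
# Toy discriminator: decide "interference" vs "no interference" from a handful
# of photon positions, using only the regions where the two patterns differ
# most -- the dark fringes of the interference pattern.

DARK_BANDS = [(-0.75, -0.25), (0.25, 0.75)]   # assumed dark-fringe intervals (mm)

def hits_in_dark_bands(positions):
    """Count photons landing inside the interference pattern's dark fringes."""
    return sum(1 for x in positions
               if any(lo <= x <= hi for lo, hi in DARK_BANDS))

def classify(positions, threshold=1):
    """With interference, dark fringes stay (nearly) empty; without it,
    the broad two-spot pattern fills them in."""
    return ("no interference"
            if hits_in_dark_bands(positions) >= threshold
            else "interference")

print(classify([0.0, 1.0, -1.1, 0.05]))    # all hits avoid the dark fringes
print(classify([0.5, -0.4, 0.6, 0.0]))     # dark fringes are populated
```

With real counting statistics the threshold would be set from the expected background rate, but the placement principle is the same.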
Double-Slit vs. Interferometer
Production of interference patterns with double slits is highly inefficient. Many photons run into the non-slit areas and are wasted. Another method of generating interference is to use an interferometer, which sends photons down alternate paths (they go down both paths due to superposition) and the partial probability waves recombine again and interfere. Since no photons are absorbed between or around slits, fewer overall are needed to generate enough of an interference pattern to be detected.
Reading And Writing Signals On A Computer
Tens or even hundreds of microseconds are not enough for a human to do anything useful. However, modern computers could do a significant amount of work. Still, slow detection could eat up the time delay, leaving the computer with little or negative time to do anything useful. If detection is fast enough, a computer could do something with the information it detected before it’s sent out.
Switching the optics to preserve or destroy the which-way information needs to be done quickly as well. A (very) fast LCD could block the output of one fiber, which should preserve the which-way information; without it blocked, the photons would get focused onto one spot. It may not even be necessary to have an actual detector on the entangled beam since, in principle, the which-way information is there regardless. In Dopfer’s experiment, the detector on the entangled beam was there to feed the coincidence detector, to filter out non-entangled photons.
All these considerations determine how quickly a signal can be captured and how often it will be wrong. Probability can be used to estimate how likely each factor is to cause an error, and the effects can then be combined into an overall error probability for the signal. Increasing robustness then proceeds by determining which parts of the system can be improved for the greatest effect, and which are overengineered and can be relaxed.
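A sketch of that calculation, assuming the error sources are independent (the stage names and probabilities below are invented for illustration):

```python
# Combine independent per-stage error probabilities into one figure for the
# whole link: the chance that at least one stage corrupts the signal.

def combined_error(p_errors):
    """1 minus the probability that every independent stage succeeds."""
    p_ok = 1.0
    for p in p_errors:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

stages = {
    "photon lost in fiber":     0.02,
    "detector misses photon":   0.05,
    "non-entangled background": 0.01,
}
p = combined_error(stages.values())
print(f"{p:.4f}")   # 0.0783 -- the detector dominates, so improve it first
```

Ranking the stages by their contribution is exactly the “where to improve for the greatest effect” step described above.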
Predictions Of Experimental Results
Time has been a confusing subject going back at least to Socrates and the Greeks, and probably earlier. Newton thought of time as absolute and universal, marching unrelentingly forward for everyone equally, but Einstein showed that it passes differently depending on relative motion or the presence of gravitational fields. The year-end 2011 issue of Scientific American is dedicated to time. The articles point out that physicists do not know why time exists – it doesn’t emerge from any physical theories or equations. In fact, time tends to cancel out of equations and disappear.
The possibility of sending signals back in time, even for very small amounts like 50 microseconds, offers us a way to probe the nature of time in ways that have never been possible before. Whether the past can be changed or is write-once memory is the most basic question. If it can be changed, the door is open to violating other laws of physics, and that could tell us whether those laws are truly fundamental or are emergent aspects of more basic laws.
Special relativity says that nothing can travel faster than light. As matter moves closer to the speed of light, its mass increases asymptotically toward infinity, and hence it would take infinite energy to actually reach light speed. However, general relativity predicts the existence of wormholes – shortcuts from one place to another through a region that is not part of ordinary space. There are some engineering problems with constructing wormholes, but in principle they could be used to beat the light-speed barrier, so the applicability of the law that nothing can move faster than light is limited.
It has long been known that relativity and QM conflict with each other. The laws of relativity break down on the scales of the very small and QM doesn’t describe the large universe very well, certainly not predicting the results of relativity. Research into merging both theories is very active, spawning theories like string theory and loop quantum gravity, but these are still in their infancy.
Single Pass Experiments And Loops
Birgit Dopfer’s experiment, along with the quantum eraser and its variations, are examples of “single-pass” experiments. They show the effects of entanglement and how events are correlated, and could ultimately allow FTL communication. Quantum encryption is another type of single-pass experiment, in which entangled pairs provide a shared stream of random bits to two parties so they can encrypt and decrypt messages.
The other class of experiment involves “loops” in time and encompasses, among other things, paradoxes. The simplest version would be a one-bit counter that sends a 1 whether it receives a 0 or a 1. This would be rather stupid, but if it could be extended to two bits, a counter from 0 to 3 could be constructed. The key concept is that some piece of data is sent back in time, which then affects what is sent back on the “next pass.”
Consistent Laws Of Physics
One possible tool for predicting the results of experiments is to say that the laws of physics can’t be violated. In the case of electron spin, conservation of angular momentum must hold, which is why the spin values for each electron have to be opposite (when they are the products of a 0-spin particle decaying), no matter which direction spin is measured in. In conjunction with Bell’s theorem, which appears to prove that the electrons don’t have well-defined values of spin prior to measurement, the “law” that nothing can travel faster than light has to be violated. So which is the more fundamental law, and which is the one that is only mostly true?
Although this principle will provide some insight into most experiments, it fails completely when applied to paradoxes. In the most basic case, if a 0 is received, a 1 is sent, and vice versa. A temporal infinite loop is then created in which, on any one pass, the received value will not agree with the sent value, and therefore conservation of momentum will be violated. It would be really comforting to believe that setting up a paradox like this would somehow be prevented. If not, and it’s possible to break such seemingly fundamental laws, one’s faith that the universe will continue to function smoothly according to the rules could be badly shaken. Einstein really disliked QM because, with fundamental events being truly random, his belief in an ordered, deterministic universe was wrong.
A similar example would be if it were possible to create naked singularities. With a normal black hole (the author realizes that “normal black hole” is a bit of an oxymoron), at a certain distance from it the escape velocity becomes greater than light speed, meaning no rocket could throw stuff out its back side fast enough to accelerate away. It might be that objects could be pulled by gravity to faster than light speed, time would go backwards, and all manner of unpleasantness would be created. Fortunately for us, all this is hidden behind the event horizon (where the escape velocity reaches light speed), which keeps the chaos from infecting the rest of the universe. If it were possible to create a naked singularity – a singularity without an event horizon – the fear is that the rest of space would be subjected to this breakdown of physical laws: effect could precede cause, which would “break the universe.”
Quantum entanglement experiments pose the same threat in a smaller way, in that they guarantee that some laws will get broken – conservation of momentum for the basic paradox. Rather than fear the results, however, one can understand that they are an avenue to greater understanding of the universe – whether momentum is fundamental, the nature of time, etc. Entanglement illustrates communication between spatially separated objects. No known form of communication explains the connection, and physicists talk about “nonlocality” – the idea that even though two objects may be spatially distant, they are still, in essence, one object.
This implies that space doesn’t exactly separate things from one another completely. Indeed, entanglement and QM in general seem to hint at a whole reality underlying the universe that we know and (imperfectly) understand.
Experiments And Applications
Assuming that transmission of information using entanglement works, there are many experiments to test the limits and effects of entanglement. Applications would include FTL communication, and computers could be enhanced with back-in-time loops, opening the possibilities for a new class of applications of potentially unlimited computing power.
Demonstrating Faster-Than-Light Communications
This most obvious application depends on how quickly bits can be decoded vs. how far apart the signaler and receiver are. In itself, the demonstration of FTL communication would shake the entire world, particularly the physics community. Since faster-than-light effects automatically imply that effect can precede cause in some reference frames, a purely consistent universe – and logic itself – would be in danger.
Imagine what this would do for high-frequency trading in the stock markets. Brokerages already pay millions to co-locate servers in the same building as the computers that do the trading. There are proposals to lay fiber between the U.S. and Europe with air cores, since light travels slightly faster in air than in glass, which would translate to a several-millisecond advantage in trading. Instantaneous communication from anywhere in the world would be highly valuable, and extending the idea to backwards-in-time signaling would take high-speed trading to a completely new level.
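The scale of that advantage is easy to estimate. Assuming a ~6,000 km New York–London route (a round figure assumed here) and typical refractive indices, light in an air core beats light in glass by several milliseconds each way:

```python
# Rough latency advantage of an air-core transatlantic fiber over glass.
# Distance and indices are illustrative assumptions.
C = 3.0e8            # speed of light in vacuum, m/s
DISTANCE_M = 6.0e6   # ~6000 km assumed cable length
N_GLASS = 1.468      # standard silica fiber
N_AIR = 1.0003       # air core, nearly vacuum

def one_way_ms(n):
    """One-way transit time in milliseconds at group index n."""
    return DISTANCE_M * n / C * 1e3

advantage = one_way_ms(N_GLASS) - one_way_ms(N_AIR)
print(round(advantage, 1))   # 9.4 -- roughly nine milliseconds one-way
```

Shaving even a fraction of that is already worth millions to traders, which is why the air-core proposals exist at all.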
Implied but not explicitly discussed is the idea of sending multiple bits across temporal links instead of just one on/off signal. For the basic paradox, in which the logical complement of a received bit is what gets sent, multiple bits could be sent instead, allowing for a cycling counter. Two bits would allow for numbers from 0 to 3, and three bits would extend that to 7. If a 2-bit counter were set up to always send the received value plus one, or to send 3 if it received 3, then no matter what the input, the counter should read 3 at the end of the experiment – values would keep being sent back in time and incremented until the final value was reached. This would prove not only that history could be overwritten, but also that each “iteration” through time could be used to do some work.
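A minimal sketch of that 2-bit counter, simulating the temporal passes as ordinary iterations that stop when the sent value agrees with the received one:

```python
# Simulate the 2-bit temporal counter described above: each "pass" receives a
# value from the future and sends back received+1, capped at 3. Because the
# rule has a fixed point, history should settle on 3 from any starting value.

def next_pass(received):
    """Value sent back in time, given the value received."""
    return min(received + 1, 3)

def settle(initial):
    """Iterate passes until the sent value matches the received one."""
    value = initial
    while next_pass(value) != value:
        value = next_pass(value)
    return value

print([settle(v) for v in range(4)])   # [3, 3, 3, 3]
```

The contrast with the complement-sending paradox is that here `next_pass` has a fixed point, so a consistent history exists.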
Sending multiple bits depends heavily on how fast bits can be decoded vs. the total delta of time in the jump. The faster bits can be sent and received, the more bits can be used for a packet while still leaving some time for the computer to do actual work on the data in the packet. Extending the delay in the fiber optic by making it longer will also help, but may increase the likelihood of errors if photons are either lost or they lose entanglement over greater distances. For multi-mode fibers at least, there are different paths that photons can take through the fiber, resulting in different latencies which could damage synchronization of signals.
Temporal For-Next/While-Wend Loops
If packets can be big enough to hold a sufficient amount of information, temporal loops could be used to do partial or iterative calculations. In each loop, a partial result would be sent back along with a flag to indicate whether the calculation was done, and the problem could then be solved in, literally, zero time.
Using temporal loops to do calculations has an interesting consequence: energy is reused in each loop, and in the end, the result will appear before even one loop was executed (the last loop sends back a result and a flag indicating completion, which arrives in time before execution begins). Entropy is generated when energy becomes more diffuse in the course of doing work. For instance, creating memories in a computer ultimately comes from taking concentrated energy in the form of fossil fuels, nuclear material, or even sunlight, and converting it to less-concentrated energy, specifically heat. The creation of the memory (organization) always results in higher entropy overall (disorganization), but by using temporal loops, order could be created without generating entropy. That would violate the second law of thermodynamics – entropy always increases. The second law is statistical, however, and on rare occasions entropy may decrease, so the law is not fundamental.
One example would be breaking encryption based on the product of two prime numbers. This wouldn’t even require a computer but could instead be done with Boolean or programmable logic. The product of the two primes would be the input, along with a counter initially set to 2. Logic would divide the product by the counter and check the remainder against zero. If the remainder were zero, the counter would be a factor. If not, the counter would be incremented and sent back through the loop to divide the encryption number again.
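The loop body amounts to nothing more than trial division. Simulated with a conventional while loop standing in for the temporal one, it looks like this:

```python
# Sketch of the trial-division loop described above. The "packet" carries
# (n, counter); each pass divides and either sets the done flag or increments
# the counter and sends it back. Here the passes run as an ordinary loop.

def factor_via_loop(n):
    counter = 2                 # initial packet value
    while n % counter != 0:     # remainder checked against zero
        counter += 1            # increment and "send back in time"
    return counter              # a factor found; done flag would be set

# RSA-style modulus built from two (tiny, illustrative) primes.
print(factor_via_loop(101 * 103))   # 101, the smaller prime factor
```

On a temporal loop every pass lands in the same instant, which is what makes even this naive algorithm interesting.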
Other, more complicated problems could be solved using this technique if those problems could be broken down such that partial data could be sent back within a packet. One possibility would be to do ray tracing for computer graphics. A ray could be traced a little further each iteration through the loop until it hit an object or went outside of the virtual world. Each ray would take zero time but the overhead of setting up the loops might still be significant. If, however, the image being built on the screen could be part of the packet data, all rays could be done in zero time. Sending back a screen full of data would take a lot of bits, though, so packet sizes would have to be very large.
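The per-pass body for the ray idea might look like the following sketch, where the packet carries the ray’s progress and a done flag (the scene – a single sphere – plus the step size and world bounds are all invented for illustration):

```python
# One temporal-loop pass for iterative ray marching: advance the ray a step,
# set done on a hit or when it leaves the virtual world.

STEP = 0.1          # distance marched per pass
WORLD_LIMIT = 10.0  # beyond this, the ray has left the virtual world

def sphere_hit(pos, center=(5.0, 0.0, 0.0), radius=1.0):
    """Has the ray entered the (single, assumed) scene object?"""
    return sum((p - c) ** 2 for p, c in zip(pos, center)) <= radius ** 2

def advance(packet):
    """One pass: step the ray along +x from the origin, update the done flag."""
    steps, done = packet
    if done:
        return packet
    steps += 1
    x = steps * STEP
    done = sphere_hit((x, 0.0, 0.0)) or x > WORLD_LIMIT
    return (steps, done)

# Simulate the loop: the ray meets the sphere's near surface at x = 4.0.
packet = (0, False)
while not packet[1]:
    packet = advance(packet)
print(packet[0] * STEP)   # 4.0
```

Each pass does a constant amount of work, so the packet stays small; only the final image, if carried in the packet, would blow up its size as noted above.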
Double Temporal Jumps
If two of these temporal jumps were constructed, another possibility opens up. Each of the jumps could be set up so that they’re offset by half a jump delta from each other. Packets would go one period back in time and be received, then half a period later the packets would be sent over the other jump. By switching back and forth between jumps, packets could be sent arbitrarily far back in time, limited by when the system was constructed and the error rate.
Assuming the system were robust enough, it would be a “temporal bus” in which packets could be sent back to specific points in time. Each packet, when placed on the bus, could have a data portion along with a number indicating how many jumps to go back. Each time it was transmitted, the jump counter would be decremented until it was at zero, at which point the data for the packet would be “delivered” to the computer. Multiple packets could be on the bus at the same time as long as the total bandwidth for a transmission across jumps was not exceeded.
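A sketch of the bus bookkeeping, with the jumps simulated as rounds of an ordinary loop (the packet format and delivery order are assumptions of this sketch):

```python
# "Temporal bus" bookkeeping: each packet is (jumps_back, data). On every
# transmission the jump count is decremented; at zero the data is delivered.

def run_bus(packets):
    """Return delivered data in delivery order (fewest remaining jumps first)."""
    delivered = []
    in_flight = list(packets)
    while in_flight:
        still_flying = []
        for jumps, data in in_flight:
            if jumps == 0:
                delivered.append(data)          # delivered to the computer
            else:
                still_flying.append((jumps - 1, data))
        in_flight = still_flying
    return delivered

print(run_bus([(2, "late"), (0, "now"), (1, "soon")]))
```

From the receiving end, of course, a packet with a larger jump count arrives *earlier* in wall-clock time; the simulation only tracks the bookkeeping, not the temporal ordering.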
Any sort of time travel opens the possibility of paradoxes. Science fiction has explored time travel and paradoxes fairly well, and has even, on occasion, made a serious attempt to predict the results. Robert Forward wrote a book in which time travel is made possible by the discovery of matter with negative mass, making wormholes practical to build. By applying relativity carefully to a wormhole in which one mouth moves relative to the space around it but not relative to the other mouth (the two being connected by their own “space”), each mouth becomes connected to the universe at a different time.
Forward assumed that time-like loops were possible, but that they would iterate until a constant result was reached. This would be similar to a calculation loop in a spreadsheet that ends when recalculation no longer changes any values. Feedback of this type is negative: it tends to shrink the changes over iterations and drive the result to a single, final value. However, positive feedback is also possible; in that case, the changes between iterations grow, and no single result is ever reached. Economic systems have both kinds. If a price is too high for a product, few people will buy it, and the producer may be forced to lower the price. But getting a good product to market, or even just being first, can produce positive feedback that makes it very hard for competitors to break in. Microsoft produced Windows which, for all its faults, was easy for people who weren’t computer geeks to use, and thus captured a big part of the market. Even though Apple’s OS is easy to use, they’ve had a difficult time breaking into the OS market due to people’s familiarity with Windows.
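The two regimes can be contrasted with a toy linear update, where the gain plays the role of the feedback (the update rule and numbers are purely illustrative):

```python
# Negative feedback (gain < 1) converges to a fixed point; positive feedback
# (gain > 1) diverges, so the loop never settles on a final value.

def iterate(gain, x0, steps=50):
    x = x0
    for _ in range(steps):
        x = gain * x + 1.0    # linear update with feedback gain
    return x

print(round(iterate(0.5, 0.0), 6))   # 2.0 -- converges to a consistent value
print(iterate(2.0, 1.0) > 1e9)       # True -- blows up, no final value
```

Forward’s assumption corresponds to the gain-below-one case; the paradox machines described below are the gain-above-one case.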
The point being, time-like loops would likely create situations of both positive and negative feedback, making history an ever-changing story. Time and history would then be subject to the laws of evolution: any history that was likely and could sustain itself against other changes would be the one to survive. The system would take no time to evolve into its final state – if there even were a final state – since the changes would all happen over the same period of time. If negative feedback won out, the result would be a consistent history; if not, well, how would we know?
Going back to a simpler case, take the simple paradox of a system that always sends the complement of the bit it received. As noted earlier, there would be no way for the laws of physics, particularly conservation of momentum, to hold. In every cycle the value sent would not match what had been received one cycle earlier, so it would look as if, even though the which-way information was detected, the wave function didn’t collapse – or, conversely, that even though the which-way information was not read, the wave function collapsed. Furthermore, there would be no “result.” Even after the experiment, the bits would continue to flip back and forth within the time period of the experiment, preventing a value from ever being reached and “remembered” in the future. What would we see when we looked at the result in the computer? Some possibilities include reading a random value each time we look, or the value in memory being unreadable – digital logic stuck in between – although this doesn’t seem likely. Any history would show the laws of physics being broken, and no data yet exists to predict the result.
Beyond the questions about what a computer or person would remember from the basic 1-bit paradox, the result of the read value could be connected to a mechanism to trigger some significant action. Receiving a 1 could control a lamp, resulting in the light level of a room being both bright and dark. Connecting it to a bomb could leave a house in both states of being whole and destroyed, simultaneously, which would be disconcerting. With just a little programming, an e-mail could be both sent and not sent to homeland security, calling the director a meatball, possibly affecting the actions of many people and changing the future into two very distinct versions.
Quantum entanglement definitely exists, and the experiments of Birgit Dopfer, along with the quantum eraser, seem to indicate that entanglement can carry information. Researchers are trying to figure out how to use entanglement to transfer the states of quantum bits – qubits – from one particle to another as part of quantum computers. Another area of research has been into biological systems, specifically the chlorophyll used by plants. Light shouldn’t be able to reach the center of the cell quickly enough to be converted into energy, and quantum entanglement has been suggested as the mechanism that speeds it up. If this turns out to be true, then entanglement is carrying energy – admittedly across a very short distance, but also in a non-isolated system: the cells of the plant. If particles and systems could be kept sufficiently isolated, it may prove possible to transmit significant amounts of energy through entanglement. Sending energy to satellites or rockets could be highly useful. This, of course, is something that would definitely irritate Einstein.
The systems described above rely on being able to distinguish photons that are in a superposition state with respect to their momentum from photons that are not. Are there other forms of superposition that could be detected, such as electron spin being both up and down? It seems likely that other forms of superposition would produce distinguishable effects, as the double-slit experiment does. If so, one of these other forms may make it easier to produce entangled pairs, increasing the number of particles and the number of bits that can be sent. Photons are nice, especially since they are fast and can be guided by fiber optics, but other systems may eventually prove easier to work with.
If problems can be broken into chunks where the data is relatively small, those problems are candidates for the effectively infinite computing power of temporal loops. It seems one could get a great deal of calculation without using extra power in the computer – the same energy is used during each loop – and this would, at some point, violate the second law of thermodynamics. The second law, however, is probabilistic: on extremely rare occasions it may be violated, and entropy may decrease. So the second law isn’t exactly a law but more an emergent property of the way the universe works according to more basic laws. Certainly, infinite computing would have far-reaching consequences, both for applications and for our understanding of the universe and time.
Quantum mechanics is weird. Many people have struggled to find meaning in the laws of QM, only to give up and change to a different vocation, or to accept the rules as they are and merely apply them to problems, never daring to ponder their meaning and sacrifice their sanity (“shut up and calculate” is a common, if somewhat derisive, description of that school of thought). It seems, however, that QM is providing us an insight into a deeply different universe underlying our own. Understanding it completely may take a very long time, but each experiment will reveal a little bit more, hopefully leading us to new experiments.