So, branes, eh? What's that all about?
To begin with, we need to cover what motivated string theory in the
first place, which will require a potted history of physics. The best
place to start here is with General Relativity.
General relativity is a theory of gravity. Newton's theory, which had
stood for well over two centuries, was taken to be the final answer, but it
faced a problem in 1905, when a paper under the unassuming title On the
Electrodynamics of Moving Bodies was published in the German physics journal Annalen der Physik (the Annals of Physics).
This paper turned out to be a revolution in thought about the nature of
space and time. Nowadays, we simply call the theory the Special Theory
of Relativity. One of the interesting things about relativity is the
popular misconception that Einstein had formulated a theory in which
everything had suddenly become subjective, but this wasn't actually the
case. Einstein himself had wanted to call it invariance theory. What
he'd actually done was to make the subjective objective.
So, what was all the hoo-ha about, and why did a theory that had
nothing to do with gravity present a problem for Newton, whose theory of
gravity had withstood all scrutiny for more than two centuries? Well,
Newton's theory was based on some assumptions. Among these were the
assumptions that space and time were absolute and immutable. In other
words, if it's 9am on Monday 4th October 1672 in Cambridge, it's 9am
Monday 4th October 1672 on an as-yet-undiscovered planet in the
Andromeda galaxy and in the heart of a star in the Kalium galaxy
(strictly, although this picture has been fairly comprehensively undermined,
we still keep a universal standard of time, namely UTC, which is pretty
much Greenwich Mean Time universalised).
Furthermore, because simultaneity held between all bodies in the universe, and because the
range of gravity was infinite, gravity had to propagate
instantaneously. This has some interesting implications, not least that,
if the sun were to pop out of existence right this second, Earth, along
with all the other bodies that are gravitationally bound to the sun,
would go careening instantly off into space, likely with a few
collisions along the way (although not as many as you might think; space
is big; really big; I mean, you might think it's a long way down the road to
the chemist's...)
Einstein's paper changed all that. The Special Theory of Relativity
comprehensively demolished the assumption that space and time were
immutable and absolute. Einstein noted that the speed of light appears in
Maxwell's equations for electromagnetism as a constant, introduced purely
for mathematical consistency as far as we can tell, with no reference to
how the source or the observer might be moving. He ran with the conclusion
that light must travel at the same speed for all observers, and worked out
what that would have to mean.
The result was that space and time must move around, and stretch and
squeeze, in order to accommodate this constancy of light. It wasn't
about gravity, but now we had a new picture of space and time in which
neither space nor time existed independently but were different facets
of the same entity, spacetime, and Newton's theory was simply not
compatible with it, not least because it placed a limitation on the
speed at which gravity could propagate. What this means is that if, as
discussed above, the sun were to pop out of existence this instant,
Earth would happily continue in its orbit for some 8 minutes or so
before careening off, as the change in the curvature of space propagated outward at the speed of light.
This troubled Einstein, and for the next 10 years he worked on producing a theory of gravity that was compatible with this new picture of space and time. He said that the moment when it all made sense was when he thought about an elevator falling in its shaft, and the implication for an observer inside the elevator. He worked out that being immersed in a gravitational field and acceleration were basically the same thing (the equivalence principle). He extrapolated this to the General Theory of Relativity, which was published in 1915.
So now we had a successful theory of gravity, and people were really happy with it. People went and played with it for a bit, and some interesting results popped up. For example, Theodor Kaluza, an unknown German mathematician, was mucking about with the equations for GR and decided to try them out in 5 dimensions. To his surprise, what fell out was ordinary four-dimensional gravity together with Maxwell's field equations for electromagnetism. This wasn't the first time something like this had happened; Gunnar Nordström, a Finnish physicist who'd formulated his own competing theory of gravitation, had worked out in 1914 that a five-dimensional theory could accommodate both gravity and electromagnetism in four dimensions, and was working toward unifying the two on that basis. This unification was dropped when General Relativity was published, because it comprehensively superseded Nordström's picture of gravity. A few years later, Kaluza's result was recast in a quantum setting by Oskar Klein (not the Klein of the 'Klein bottle', incidentally; that was the mathematician Felix Klein). The upshot is that the idea of extra dimensions was firmly with us, albeit in a form that didn't fully resurface for many years.
There was still a problem, though. Some work had been going on in a different field for some years, starting with Max Planck and the problem of black-body radiation. He'd been trying to work out the energy inside an oven. He'd begun by adding up all the frequencies that should be contributing and, to his surprise, discovered that the energy should be infinite. This was clearly bunk, or we'd never have had any use for microwaves. Clearly something was wrong, but what was it? After much mucking about with the equations, he realised something interesting, and it's all to do with how waves behave.
Look at this picture. It shows a periodic sine wave. You can see that the wave cycle begins at one edge of the image and ends at the other. It also illustrates the zero point, where the amplitude of the wave is zero. What Planck realised was that, if he included in his calculations only those frequencies whose wave returned to the zero point exactly at the wall of the oven, the calculations worked out and gave the correct energies.
The principle allows any frequency whose wave returns to the zero point at the wall, even if that point falls halfway through a cycle.
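If you want to see the standing-wave condition in action, here's a minimal sketch in Python. The one-dimensional treatment and the 0.3 m cavity width are illustrative assumptions of mine, not anything fixed by the post; the point is just that only those frequencies for which a whole number of half-wavelengths fits between the walls make the cut.

```python
# Minimal sketch of the standing-wave condition, assuming a hypothetical
# one-dimensional cavity of width L and waves travelling at c. A mode is
# allowed only if a whole number of half-wavelengths fits between the
# walls, i.e. the wave returns to the zero point exactly at each wall.
C = 299_792_458.0   # speed of light, m/s
L = 0.3             # cavity width in metres (illustrative value only)

def allowed_modes(n_max):
    """Return (wavelength, frequency) for the first n_max allowed modes."""
    modes = []
    for n in range(1, n_max + 1):
        wavelength = 2 * L / n       # n half-wavelengths span the cavity
        frequency = C / wavelength   # f = c / lambda
        modes.append((wavelength, frequency))
    return modes

for wavelength, frequency in allowed_modes(5):
    print(f"lambda = {wavelength:.3f} m, f = {frequency:.3e} Hz")
```

Anything between those values simply doesn't return to the zero line at the wall, so it doesn't get to contribute.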
Planck realised that this meant energy was quantised: it came in discrete
units. If you couldn't get back to the zero line at the wall, you
couldn't join the party. This meant that any of the following were perfectly acceptable.
While the following are not:
This was the birth of Quantum Mechanics. Now, QM presents a bit of a
problem. Underpinning QM is a principle known as Heisenberg's
Uncertainty Principle, after Werner Heisenberg, who formulated it and
whom we'll be meeting again soon. This principle tells us
that, for any quantum entity, there are pairs of variables, known as conjugate variables,
related by a rule governing what we can know about
them.
Here's the critical equation, our first:

Δp · Δx ≥ ħ/2

where Δ (delta) denotes uncertainty, p denotes momentum, x denotes position, and ħ (h-bar) is the reduced Planck constant. The Planck constant is given in joule-seconds and has the value 6.626×10⁻³⁴ J⋅s. The reduced Planck constant (also known as Dirac's constant) is obtained by dividing this by 2π, giving 1.055×10⁻³⁴ J⋅s.*
The pair of conjugate variables most discussed is the momentum
and position of a particle, but there are many such pairs, such as the
value and rate of change of a field, angular momentum and orientation,
energy and time, etc. What the
equation tells us is that the uncertainty in momentum multiplied by the
uncertainty in position can never be less than this tiny number, ħ/2. In a nutshell, the more accurately we can pin down one of these
values, the less certain we can be about the other. My current favourite
illustration is a photograph. If we take a photograph of, say, a
housefly, with a high shutter speed, we can pin down the position of the
fly to extreme accuracy, but we can't know much about its momentum. If,
on the other hand, we use a slow shutter speed, we can get some sense
of how fast the fly is moving but, because of the blur, we can't tell a
great deal about its position.
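To get a feel for just how small ħ/2 is in practice, here's a rough numerical sketch of the relation above. The 1-nanometre position uncertainty and the choice of an electron are illustrative assumptions of mine, not anything fixed by the post.

```python
# Back-of-the-envelope use of the uncertainty relation dp * dx >= hbar/2.
# The 1 nm position uncertainty and the electron are illustrative choices.
HBAR = 1.054_571_817e-34          # reduced Planck constant, J*s
ELECTRON_MASS = 9.109_383_7e-31   # kg

delta_x = 1e-9                        # assumed position uncertainty, metres
delta_p_min = HBAR / (2 * delta_x)    # smallest allowed momentum uncertainty

print(f"delta_p >= {delta_p_min:.3e} kg*m/s")
print(f"delta_v >= {delta_p_min / ELECTRON_MASS:.3e} m/s for an electron")
```

Pin an electron down to a nanometre and its velocity becomes uncertain by tens of kilometres per second; pin it down further and things only get worse.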
All well and good, but why is this a problem for General Relativity? Well, General Relativity requires spacetime to be smooth and, at small enough scales, effectively flat. The uncertainty principle, applied to the conjugate variables 'value' and 'rate of change' for the field 'spacetime', tells us that, at the smallest scales, spacetime is anything but smooth and flat. It's a seething, roiling mess. Now, for the most part, this isn't an issue. Physicists working in QM generally stick to what they're doing, working with the very small, and physicists working with GR generally stick to what they're doing, working with the very large. Everybody knew there was a problem, but it wasn't causing any major difficulties. A few people toyed around with trying to get the two to play nice together, but what almost invariably resulted was infinities. Infinities aren't necessarily fatal but, given that the outputs of these calculations were basically probabilities, something had clearly gone wrong, because a probability cannot exceed 1, let alone reach infinity. So peeps got on with their work, aware that there was a problem looming on the horizon, but not massively troubled by it.
Fast-forward to the 1940s and Werner Heisenberg. He was attempting to construct a theory of particle interactions that was independent of local notions of space and time, because he thought such notions were problematic at quantum scales, not least in the context of point particles. He employed an S-matrix, a tool that had been introduced by John Wheeler a few years previously. Heisenberg's calculations turned out not to be in accord with observations, and in fact were off by miles, but the S-matrix approach would resurface later and prove important on the road to a quantum theory of gravity.
Fast forward again, this time to the 60s, and we see the emergence of string theory proper, as a theory of the strong interaction (interactions between hadrons, composite particles whose constituents are bound by the strong nuclear force). It was never very successful in this context, but it started the ball rolling.
String theory went through several revisions, and eventually emerged as a theory in which all fundamental particles are actually little vibrating strings. The basic idea is extremely straightforward. We know that particles have mass, and that mass corresponds to energy. The idea underpinning string theory, then, is that these strings vibrate with different energies and patterns, each of which corresponds to a particular particle. Vibrating one way, a string has the mass and charge of one particle; vibrating another way, it corresponds to a different particle. One of the key things about these strings is that they have a minimum length, the Planck length. Two things got everybody excited: one of the string's vibrational configurations corresponds to a graviton, the boson thought to transmit gravity in the same way that the photon transmits the electromagnetic force, and the minimum length imposed by the strings is just enough to smooth spacetime out sufficiently for General Relativity to hold. This is why many physicists talk about it as the only contender for a quantum theory of gravity.
One of the early issues with string theory was that the name didn't actually fit very well, because there wasn't just one string theory, there were five. This was a cause of some consternation. Then, in 1995, Edward Witten, one of the pioneers of string theory, noticed something about the theories. Each theory has a feature called the 'coupling constant'. When doing calculations in any of the theories using a perturbative approach, a large coupling constant makes the calculations horrendously difficult, while a small coupling constant makes them considerably easier. What Witten noticed was that the theories contained dualities, with the result that a large coupling constant in one theory corresponded to a small coupling constant in another. All of the theories were dual with each other (except one which, it turned out, was self-dual). This allowed theorists to unify all the string theories, along with a framework called 11-dimensional supergravity, into a single framework, which became known as M-Theory.
One of the results that came up early on in this newly unified framework was the suggestion that the constituent entities, the strings themselves, needn't be restricted to one dimension. Physicists started to play around with higher-dimensional versions of these strings and came up with 'branes', as in 'membranes', a membrane being a two-dimensional brane. This was generalised to branes of any number of dimensions, with the number of dimensions denoted 'p', giving p-branes. Also, they needn't be restricted in scale, so they could be any size down to the Planck length. This finally brings us to cosmology.
Physicists Paul Steinhardt, one of the pioneers of inflationary theory, and Neil Turok, then professor of mathematical physics at Cambridge, were playing around with the idea of branes when an idea struck them: what if the universe we experience actually resides on a 3-brane?
What they came up with is the idea that the Big Bang was simply the collision of two 3-branes. The beauty of this idea is that it completely removes the singularity, known to be problematic since shortly after Hawking and Penrose presented their singularity theorem in 1970, as already discussed. Moreover, it provides a ready explanation for all sorts of things.
So, in content, the theory basically says that the Big Bang was the collision of two 3-branes that were (and are) separated by an additional dimension of space, but one so small that we can't detect it. The classic analogy for how this works is a garden hose seen from a distance. From a long way away, the hose looks 1-dimensional, but as you get closer, you can see that it has girth. The additional dimensions of M-Theory are the same, massively compactified, so small that they lie below our ability to detect them, not least because the most powerful particle accelerator we currently have, the Large Hadron Collider, can only probe to around 10⁻¹⁹ metres, while the Planck length is around 10⁻³⁵ metres. To probe that scale would take a particle accelerator about the size of the solar system which, as Hawking put it, is unlikely to be built in the current economic climate.
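To put some rough numbers on that gap, here's a sketch using the rule of thumb that the length scale a collider can resolve is of order ħc divided by the collision energy. The 13 TeV figure for the LHC is an illustrative assumption on my part, and the real resolvable scale depends on the details of the collision; the point is only the sheer size of the shortfall.

```python
# Rough energy/distance trade-off: resolvable length ~ hbar*c / E.
# The 13 TeV collision energy is an illustrative figure for the LHC.
HBAR_C = 1.973_269_8e-16   # hbar*c in GeV * metres (i.e. 197.327 MeV*fm)
PLANCK_LENGTH = 1.616e-35  # metres

def resolvable_length(energy_gev):
    """Approximate smallest length scale probed at a given energy."""
    return HBAR_C / energy_gev

lhc_energy_gev = 13_000.0  # ~13 TeV
scale = resolvable_length(lhc_energy_gev)
print(f"LHC scale     ~ {scale:.1e} m")
print(f"Planck length ~ {PLANCK_LENGTH:.1e} m")
print(f"Shortfall     ~ a factor of {scale / PLANCK_LENGTH:.0e}")
```

That shortfall of some fifteen orders of magnitude is why nobody expects to probe the Planck scale directly any time soon.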
Anyhoo, the energy input at the Big Bang was simply the collision of these branes. Strings can be open or closed. Open strings are tethered at their ends to the brane on which they reside, while closed strings can travel through the bulk between the branes. This provides a ready candidate for a dark matter solution: gravitons are closed strings, so everything is transparent to gravity, which matches our experience. In this framework, dark matter is simply ordinary matter residing on the adjacent brane. Photons are open strings, which is why we can't see anything on the other brane; any photons over there are tethered to it. That's why the only interaction we can detect is via gravity.
Once the branes have collided, expansion proceeds in pretty much the same way as in inflationary theory. The thing that distinguishes the two is their explanation for the inhomogeneities in the CMBR. In inflation, these are caused by quantum fluctuations during the inflationary period being stretched to macro scale. In the brane model, they're caused by the branes rippling slightly on approach, meaning that some bits of the branes make contact before others. This has observable consequences that should allow sensitive experiments to distinguish between the two. The first is that, because of the way the inhomogeneities are generated, the primordial gravitational waves are predicted to be skewed toward the blue in the brane model as compared to the inflationary model. Also, because of the way these inhomogeneities are generated in the brane model, the B-mode polarisation we discussed in the context of inflationary theory will not be observed. If we observe that B-mode polarisation in the CMBR, brane-worlds is falsified. If we observe the primordial gravitational waves to be toward the red end of the spectrum, brane-worlds is falsified. If they're toward the blue end, inflationary theory is falsified.
I heartily recommend Steinhardt and Turok's book on the subject, Endless Universe: Beyond the Big Bang.
It's also worth noting at this point that the eternal inflationary theory is also rooted in string theory.
That'll do for now, I think. Feel free to raise any questions.
*Some points on notation:
Because we're working with extremely large and extremely small numbers, we'll use a condensed notation in which exponents are used, just like real physicists. Thus, where a 10 is followed by a positive exponent, it denotes the number of zeroes after the 1, so 10³⁴ is 1 with 34 zeroes after it. Where 10 is followed by a negative exponent, it denotes the number of zeroes before the 1, including the zero to the left of the decimal point, so 10⁻³⁴ is 0.0000000000000000000000000000000001.
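If you want to sanity-check the notation, here's a trivial sketch; nothing in it is specific to the post, it's just the standard library doing the counting for us.

```python
# Sanity check of the exponent notation described above.
print(10**34)              # a 1 followed by 34 zeroes
print(f"{10**-34:.34f}")   # 34 zeroes counting the one left of the point, then a 1
print(f"{6.626e-34:.3e}")  # the Planck constant in J*s, in exponent form
```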
Edit: added an animation illustrating the answer to a question in the comments below.
As you can see, all three waves are moving at the same velocity, which we can take to be c. However, their peaks are passing our marker at different times. Those with more peaks in the same time interval carry more energy. This difference in energy we perceive as colour. Einstein showed, with his 1905 paper dealing with the photoelectric effect, that increasing the intensity of the light (this would be analogous to amplitude) didn't trigger the effect; only an increase in frequency (bluer light) did.
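To tie the animation to actual numbers, here's a small sketch using E = hf with f = c/λ. The red and violet wavelengths (700 nm and 400 nm) are round illustrative figures of mine, not values taken from the animation.

```python
# Frequency and energy of single photons, E = h*f with f = c / lambda.
# The 700 nm and 400 nm wavelengths are illustrative round numbers.
H = 6.626_070_15e-34   # Planck constant, J*s
C = 299_792_458.0      # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of one photon with the given wavelength."""
    return H * C / wavelength_m   # same speed; shorter wavelength -> more energy

for name, wavelength in [("red", 700e-9), ("violet", 400e-9)]:
    frequency = C / wavelength
    print(f"{name:>6}: f = {frequency:.2e} Hz, E = {photon_energy(wavelength):.2e} J")
```

Both photons travel at the same c; the violet one just packs more peaks, and therefore more energy, into the same stretch of space and time, which is exactly the point of the animation.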
Particles have mass, which corresponds to their energy, which is the frequency at which their respective strings vibrate. But what about massless particles like photons? Are they also made of strings? And if they are, how can their frequency be measured when they have zero mass? A string has a minimum length, which is the Planck length. But as that is the smallest detectable measurement currently possible, could there be strings smaller than that which cannot be detected at this point in time? And as the Planck length is the smallest measurement currently possible, then presumably something that small can be measured without necessarily having to build a collider the size of the solar system to measure it now.
Some misconceptions there. Firstly, nobody's ever detected a string (if they had, all the objections to string theory would evaporate). Secondly, we can't currently probe to anything like the Planck length. Thirdly, photons don't have zero mass; they have zero rest mass, and this is important. Their mass is associated with their motion. You can think of it loosely like kinetic energy, although, if you run too far with this, it gets misleading very quickly.
There's a clear correlation between energy and distance. This is fairly obvious if you think carefully about it. Think about one of those pinart toys, in which you impress your face into the pins on one side and an image appears on the other. The resolution attainable is directly related to the size of the pins. Now try to imagine probing one of these for structure, first by throwing tennis balls at it and recording the rebounds. You're not going to measure an awful lot of structure in there unless the size of the pins is comparable to that of the tennis balls. Now try some ping-pong balls, and you'll find that a little more structure is revealed, though not a huge amount. Now try again, this time with small ball bearings. Each time, the ball you're throwing at the pinart toy is smaller, until you get to the scale of the pins, at which point you get exactly the same resolution as you have in the toy. Go smaller still, and you'll even be able to discern the curvature on the top of each pin.
Now we can move on to looking at smaller and smaller objects by the same method. There's an inverse relationship between energy and distance when we're talking about, for example, photons. Lower energy photons have longer wavelengths (infrared, etc), while higher energy photons have shorter wavelengths (x-rays, etc). This energy relationship is most directly expressed in quantum mechanics, in which the Planck-scale measures are made concrete. You're aware, for example, that the Planck length (1.616 x 10^-35 metres) and the Planck time (5.391 x 10^-44 seconds) are both tiny, while the Planck mass is huge (2.176 x 10^-8 kg, compared to the proton mass at 1.673 x 10^-27 kg). This is a direct result of the energy/distance relationship.
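For anyone curious where those Planck-scale numbers come from, here's a minimal sketch. The only inputs are standard values of ħ, G and c; the combinations below are the usual dimensional-analysis definitions, nothing specific to the discussion here.

```python
# The Planck units built from hbar, G and c by dimensional analysis.
import math

HBAR = 1.054_571_817e-34   # J*s
G = 6.674_30e-11           # m^3 kg^-1 s^-2
C = 299_792_458.0          # m/s

planck_length = math.sqrt(HBAR * G / C**3)   # ~1.616e-35 m
planck_time = math.sqrt(HBAR * G / C**5)     # ~5.39e-44 s
planck_mass = math.sqrt(HBAR * C / G)        # ~2.18e-8 kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```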
QM also tells us that the same relationships that hold with photons also hold for other particles thus, even when we're using electrons, lead ions, or whatever in our particle accelerators (we tend to use charged particles because they're much easier to accelerate, using magnets), the same energy/scale relationship holds. Thus, accelerating to higher energies in our particle accelerators effectively increases resolution, just as in the pinart example above.
Finally, the limitation at the Planck length isn't based on something we've detected, it's rooted in considerations of what happens to our physical theories when we start to get below that scale, namely that the solutions to our equations tend to infinity, meaning that it's unclear whether it's possible to glean any information from below that scale, or indeed whether any questions concerning such scales are even meaningful or coherent.
Ultimately, the questions rely on an intuitional approach to the topic that we're fully aware simply doesn't apply to this subject matter. It's middle-world thinking.
I did not know that the energy level of a photon varies according to its wavelength. I simply thought of it as a particle that can travel indefinitely in a vacuum at c with no deviation from that. Though I do know that wavelengths vary, I just did not think this applied to visible light. I also know that most wavelengths are relatively tiny apart from radio waves, whose wavelengths can be measured in kilometres, as opposed to most other electromagnetic phenomena, whose wavelengths can be measured in millimetres. I wonder if there is a reason for such a discrepancy.
OK, had a think about this, and I think I can clear it up without a new post. I did try to make an animation showing this, but it turned out to be problematic on my archaic laptop.
It's slightly misleading to think of the energy level of a photon as varying according to its wavelength (there's another problem with the way that you've expressed that, which reveals an erroneous understanding, but I'll come back to that shortly); the way you should think of it is that the energy and the wavelength are essentially two expressions of the same thing.
The way you've expressed it above, as 'a particle that can travel indefinitely in a vacuum at c with no deviation from that', reveals that you're thinking slightly askew about the relationship. Light always travels at c in a vacuum, with no deviation, but the wavelength can vary hugely. It most certainly does apply to visible light (which is no different to the rest of the electromagnetic spectrum, except that it's the frequency range that the opsins in our eyes respond to; as always, we have to be aware that light is a bit of a red herring here, being no more nor less than the thing we're aware of and can point to that propagates at c).
Have a look at the waves in the images above dealing with QM. All those images are the same dimensions, which makes life easier. You can actually think of those waves as moving at the same speed, but with different wavelengths, thus the top image, showing one complete wave-cycle, is travelling at exactly the same speed as all the other images. If you think of that image as passing an imaginary vertical line at the right-hand end of the image in a specific amount of time, and the other images doing exactly the same, and in the same amount of time, you can see how the speed of light can remain the same while the frequency varies. The other images are covering exactly the same distance in both space and time, but the time between peaks varies. Hope that clears that up.
Incidentally, your comment 'I also know that most wavelengths are relatively tiny apart from radio waves whose wavelengths can be measured in kilometres as opposed to most other electromagnetic phenomena whose wavelengths can be measured in millimetres' is incorrect. The electromagnetic spectrum covers the entire range of frequencies. Indeed, you can see the variation in visible light quite easily, because that's what we see as colour.
I've made a crude animation and tagged it to the end of the post illustrating what I was talking about here.
Is it known if the gravitational waves that were detected last year and confirmed this year were red-shifted or blue-shifted? Even if it is, would it be of any use in falsifying or confirming the brane hypothesis? Since the black holes they originated from were only 1.3 billion light years away, which is nowhere near the Cosmic Microwave Background Radiation, which occurred just 380 000 years after the Big Bang.
So there is a direct correlation between the wavelength of a photon and the colour it emits. This is evidence that light is not one colour but many, namely red / orange / yellow / green / blue / indigo / violet, as they appear on the electromagnetic spectrum of visible light. Although it was Newton who discovered this, not Einstein, when he split light up into its constituent parts by passing it through two prisms. This was after he pushed a bodkin into his own eye to see the effect more close up. Before then light was thought to only be white and nothing else.
I wouldn't even say there is a correlation (remember that correlation is not causation); the colour we perceive from a photon is a direct result of its wavelength.
As for Newton, I recommend the biography of him by James Gleick.
I should have said causation not correlation. I have the Gleick but have not yet read it. I did not want to buy it initially because I thought it too slim given the subject matter. But after you raved about it, that persuaded me to purchase it. Though the book I would love to get is Never At Rest by Richard Westfall, which is regarded as the definitive Newton biography. I think however it is out of print, which is somewhat surprising. Amazon have it on Kindle but as I only buy physical books that is not much use to me.