This essay was written for a Christ Church college essay competition seeking ‘to recognise quality and comprehensibility of English prose in a piece of technical writing’. It won.
Cosmology and astrophysics are different to many sciences because, whilst still firmly grounded in empirical fact, they are not ‘experimental’ as such. Information in cosmology can only be obtained passively; we can only observe what it is that the Universe is doing. Given that space is a vacuum, this restricts us entirely to observing different kinds of radiation incident on the Earth from various celestial sources.
Questions about the Universe are also prone to unique statistical difficulties. Any particle collider or epidemiological study can rely on taking many readings to even out anomalies. Astronomers do not have such a luxury; there is only one Universe. A study concerned with analysing differences between small areas of sky or individual stars has plenty of individuals to sample. However, any attempt to study the whole Universe inevitably leaves you with just one real sample, supplemented in recent times by computer models. This inconvenience is known as cosmic variance.
However, cosmology is not entirely without hope. Observational astronomy can do one thing which no other science can: it can look back in time. The time light takes to travel to us from distant objects means that we are quite literally seeing them as they were tens, thousands or even billions of years ago.
There is also a huge range of stars and galaxies out there waiting to be analysed. Almost every conceivable scenario is already set up: the hard part for an astronomer is not devising an experiment, but finding it.
Spectra rank amongst the most useful and interesting data we can obtain from our surroundings, and were responsible for the first compelling evidence of our current cosmological paradigm. A spectrum is an analysis of incoming electromagnetic radiation by wavelength: light from cosmic objects is split by a device not dissimilar to a prism into its constituent colours, and the relative intensity of each is recorded.
In the first years of the twentieth century, quantum mechanics and experimental atomic physics were allowing scientists to document atomic spectra to previously unknown levels of accuracy. Common elements such as hydrogen and helium were isolated in the lab and the light they gave off recorded. These spectra were compared with those obtained from galaxies.
The redshift of the galaxies was discovered independently by three astronomers in 1917: they observed that spectral lines were in a different position to those of elements measured on Earth, shifted, as the name implies, towards the red end of the spectrum.
Given that we assume atomic spectra are uniform throughout the Universe and that nothing in between is interfering in some systematic way with the radiation, the only explanation for an observed wavelength change is that the sources, in this case galaxies, are moving away from us, dragging the light waves along behind them as they go, and thus stretching the waves towards the red end of the spectrum.
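As a brief aside in symbols, none of which appear in the original argument: the redshift is usually quantified by comparing the observed wavelength of a spectral line with the wavelength measured for the same transition in the laboratory,
\[ z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}, \]
so a positive z means the lines have shifted towards longer, redder wavelengths, and a larger z means a faster-receding source.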
However, the significance of this discovery was unknown until the discoveries of Hubble put it into context. His first finding, announced in 1924, was that these galaxies, a subset of nebulae (originally a classification encompassing all extended astronomical objects), were in fact distinct in that they were beyond our galaxy, the Milky Way.
He then went on to measure galaxies’ redshifts, this time looking for a relationship between their speed of recession (which is directly proportional to the redshift for velocities small compared to the speed of light) and their distance from us. His conclusion was that the further away a galaxy is, the faster it is receding, with the two quantities related by a value now known as Hubble’s constant.
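Written out as a formula, in modern rather than Hubble’s original notation, the relationship is
\[ v = H_0\, d, \qquad z \approx \frac{v}{c} \quad \text{for } v \ll c, \]
where v is a galaxy’s recession speed, d its distance, c the speed of light and H_0 Hubble’s constant, today measured to be of the order of 70 kilometres per second for every megaparsec of distance.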
This seemed to imply that the Universe is expanding. However, the dominant theory at the time asserted that it was in a steady state, and what ensued was not a sudden conversion to a new school of astronomy which accepted the expansion of the Universe, but massive debate.
The next great blow was dealt to steady-state cosmology in the 1960s, when Penzias and Wilson discovered the cosmic microwave background, or CMB. They had constructed a large, horn-shaped microwave antenna designed for radio astronomy but found that, no matter where they pointed it, there was interference from some kind of background noise.
After investigating various possible origins, from checking expected radio emissions from nearby New York to confirm that the signal was not terrestrial, to cleaning out the antenna after a pair of pigeons and their droppings were discovered inside it, the final conclusion was that the radiation bathing the device was the CMB: an electromagnetic echo from the early Universe.
Subsequent satellite-based observations, first by COBE (the ‘Cosmic Background Explorer’) in the early 1990s and, more recently, WMAP (the ‘Wilkinson Microwave Anisotropy Probe’), have significantly refined this observation. We now know that the CMB spectrum is very much like that of a radiating black body; the same shape as the spectrum given off by a lump of iron glowing red-hot, but very much colder at around 2.7K (−270°C). That the background is like a very cold black body dovetails neatly with the Big Bang model of the Universe.
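A rough calculation, using Wien’s displacement law (not mentioned above, but a standard property of black bodies), shows why ‘microwave’ is the right word: a black body at 2.7K emits most strongly at a wavelength of about
\[ \lambda_{\text{peak}} = \frac{b}{T} \approx \frac{2.9\times 10^{-3}\,\text{m K}}{2.7\,\text{K}} \approx 1\,\text{mm}, \]
squarely in the microwave region of the spectrum.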
It is generally agreed that the Universe began as a tiny, perhaps point-like singularity some 13.7 billion years ago. Quite what form this hugely dense matter took, physics is currently unable to say: the two theories which would be applicable in a region of such intense gravity (general relativity) and on such tiny scales (quantum mechanics) are, so far, mutually incompatible ways of describing reality. However, there is a greater consensus about the order of subsequent events.
The huge fireball of pure energy rapidly expanded, cooling as it did so, its constituents coming slowly to resemble the Universe as we see it today. Free protons and electrons formed, then the protons bound to one another, some turning into neutrons by beta decay, in a process known as nucleosynthesis. This filled the Universe with light nuclei (mostly hydrogen and its isotopes, with a moderate amount of helium and tiny quantities of heavier elements such as lithium), but the Universe was still far too hot for atoms to coalesce from these free electrons and nuclei.
The heavy elements observed in the modern Universe must come from either this Big Bang nucleosynthesis or their subsequent production inside stars, and the close correlation between theory and observation of the gross chemical composition of the Universe is regarded as the third great empirical coup of Big Bang theory, along with the receding galaxies and the CMB mentioned earlier.
It was not until 300,000 years after the Big Bang that space had cooled enough for radiation to pass freely through it. The plasma of electrons and nuclei which had filled space until this point absorbed light very effectively, meaning that photons did not travel significant distances before being re-absorbed. However, once the matter cooled enough that the plasma could undergo recombination into conventional atoms, the largely-hydrogen Universe suddenly became transparent. At that moment it was radiating like a hot body, and hence the observed spectrum of the CMB fits in neatly.
However, it was a lot hotter than 2.7K: there are plenty of stable atoms on Earth and not a lot of plasma, and the average temperature here is some 288K! In fact, recombination occurred when the Universe was at around 3,000K—rather hot—so why does the observed radiation now correspond to a body so cold?
Hubble’s observations can explain this conundrum. Since the CMB is radiation emitted at the time the Universe became transparent, it is the oldest light in the Universe. The CMB was produced at all points in space, but photons which were produced nearby have long since shot past; the only ones visible are those which were produced sufficiently long ago that they are just arriving at Earth now. This necessitates that they are very far away—the CMB represents the furthest distance that anyone could see in the Universe, because before it was emitted, the Universe was opaque. Thus, according to Hubble’s data, this light should be massively redshifted due to its huge distance from us, and it is this which makes the CMB so very much ‘colder’ now than when it was emitted.
Just as an object glowing blue-hot is warmer than one glowing red-hot, massive redshift has taken the wavelength up, and thus the temperature associated with the radiation down, by a factor of about 1,000 such that it’s now quite cool: microwave-hot.
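The arithmetic behind that factor is simple, given the standard result that a black body’s temperature falls in step with the stretching of its wavelengths:
\[ T_{\text{now}} = \frac{T_{\text{recombination}}}{1+z} \approx \frac{3000\,\text{K}}{1100} \approx 2.7\,\text{K}, \]
so the factor of about a thousand quoted above corresponds to a redshift of roughly 1,100 for the CMB.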
Thus, the three predictions made by Big Bang theory fit together into a complementary triplet of well-corroborated observations about what the modern Universe should look like.
After recombination, the Universe as we know it today began to take shape: slight density anomalies, which can still be observed as faint fluctuations in the CMB, were amplified by gravity, the dominant force in the Universe at large scales, and collapsed to form the first stars and galaxies.
The Universe we see today is not so different from what it was like then; we are now seeing perhaps the second or third generation of stars since the first self-luminous bodies collapsed from the gas clouds, and the only major difference is the increased abundance of heavier elements, such as the carbon, nitrogen and oxygen so instrumental for life on Earth, created in supernovae as the first stars died.
So, elegantly intertwined observations give us a self-consistent picture of the Universe’s past: what does this tell us about its future? If the galaxies are receding and thus space is still riding the wave of the initial Big Bang explosion, where do things go from here?
This question is the domain of general relativity, the theory governing gravity on the huge scales of space and time over which the Universe can evolve. It was initially proposed in 1915 by Albert Einstein, who took his special theory of relativity, which had so rocked what were thought to be the firm foundations of late-nineteenth-century physics, and applied it to accelerating objects and gravitation.
The equation relevant to the cosmos was derived from general relativity in 1922 by Friedmann, and is consequently known as the Friedmann equation. The fate of the Universe is mapped out in terms of just three mathematical quantities: one corresponding to the density of matter, one to the curvature of space, and one embodying the somewhat mysterious cosmological constant. Establishing the future of all space as we know it simply requires evaluation of the terms in this equation. Unfortunately, it turns out not to be as simple as that.
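For the curious, one common modern form of the equation (the notation is not used elsewhere in this essay) is
\[ H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}, \]
where a is the scale factor describing the size of the Universe, H the expansion rate, ρ the density of matter, k the curvature of space and Λ the cosmological constant: precisely the three quantities just described.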
The density of matter has obvious implications for the fate of the Universe: if the stars and galaxies are heavy enough, their mutual gravitational attraction will overcome the galaxies’ current momentum and pull everything back together again. The curvature of space is a more complicated concept, but is ultimately determined by the density of matter. Thus, working out these two quantities simply requires that we know the average amount of stuff per unit volume in the cosmos.
In 1933, Zwicky, an astronomer at Caltech, was studying the motion of galaxies in the Coma Cluster. He estimated the mass of the cluster by adding up the galaxies’ luminosities and assuming they were composed of stars similar to those well documented in our own galaxy. He found that galaxies at the edge of this group were moving significantly faster than would be expected on the basis of this visible matter alone, and thus concluded that around four hundred times the visible mass was present in the cluster.
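A rough version of the argument, a textbook sketch rather than Zwicky’s own working, uses the virial theorem: for a cluster of radius R whose galaxies move with typical speed σ, gravitational balance requires a total mass of roughly
\[ M \sim \frac{\sigma^2 R}{G}, \]
so measuring how fast the galaxies move fixes how much mass must be present, whether or not that mass shines.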
Similar experiments have since been performed investigating instead the motion of stars in nearby galaxies, and the same problem arises; though the visible matter is largely clumped at the galaxies’ centres, the stars within them move as though in a disc uniformly filled with mass.
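The same logic applies to a single star orbiting at radius r within a galaxy: assuming a roughly circular orbit, its speed is set by the mass enclosed within that orbit,
\[ v^2 \approx \frac{G\,M(<r)}{r}. \]
If nearly all the mass sat at the centre, v would fall away with distance; the observed ‘flat’ rotation curves, with v roughly constant far out, instead imply that M(<r) keeps growing with r, that is, unseen mass spread throughout and beyond the visible disc.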
There are several proposed explanations, including alteration of the law of gravitation at long range, but the most popular and most publicised answer is simply that this extra matter is there and we just can’t see it: in steps the much-vaunted dark matter.
With its existence inferred, the gauntlet has been thrown down to theorists and experimentalists alike to predict what exotic substance might make up this dark matter. Whilst the search has not produced any concrete answers, it has spawned two of the silliest acronyms in physics: the matter sought is loosely grouped into MACHOs and WIMPs.
MACHO expands into the somewhat contrived MAssive Compact Halo Object: heavy bodies in galactic haloes which might make up some of this excess mass. Contenders include small black holes, which are extremely heavy but give off no light of their own.
WIMPs are Weakly Interacting Massive Particles, subatomic particles which haven’t yet been detected because they don’t interact strongly with matter, but possess a nonzero mass, meaning that, in total, they might make up some of this mass deficit.
The recent discovery that neutrinos, which are certainly weakly interacting particles, have a tiny mass suggested one possible solution. Billions of them pass through your body every second from the nuclear reactions in the Sun, and a single supernova explosion can produce thousands of times as many as the Sun will in its whole ten-billion-year lifespan. Could these all-permeating particles provide a candidate for the missing mass?
Unfortunately, neutrinos have serious flaws as the cure-all WIMP: their minuscule mass means that even the tiniest amount of energy imparted to them as they are created sends them hurtling through space at near the speed of light. They simply wouldn’t stay put for long enough for galaxies and clusters of galaxies to congregate around them. Also, despite their being so very numerous, many estimates place the total mass of all the neutrinos in the Universe well short of the missing mass. Experiments are now underway to detect new particles which may better fill this rôle.
So, summation of the Universe’s constituent mass poses vast difficulties. However, the final term of the Friedmann equation, the cosmological constant, is perhaps more problematic still.
It was originally introduced by Einstein into his theory of relativity because he found one of its consequences distasteful. Einstein believed the Universe to be static, and general relativity showed that it should either be expanding or contracting, not in the equilibrium coveted by proponents of a steady state. There was no way that it could remain at rest, because gravity would cause any momentarily stationary universe to collapse. Thus, he introduced this enigmatic term into his equations to provide a repulsive force to balance gravity and allow the Universe to continue forever static, as he thought it had done and would do for all eternity.
He was later to describe this ad hoc modification of his theory as the “biggest blunder” of his life. However, this fabricated fudge-factor has undergone something of a resurgence in popularity in modern cosmology and, in an ironic tribute to its original purpose, it is being employed anew by physicists desperately seeking a force to counter gravity, with no real understanding as yet of what might be causing it.
A recent experiment sought to measure Hubble’s constant to new levels of accuracy. It did so by measuring the light emitted from a certain type of supernova: the redshift indicated how fast each was moving away, and its brightness, predicted by comparison to nearby events of the same type, gave its distance from the observer. However, unlike Hubble’s study, which was limited by the technology of the time to comparatively local objects, this survey reached out to massive distances, and there it found that speed and distance were no longer related by a constant Hubble constant!
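The distance half of that measurement rests, in simplified terms, on the inverse-square law: if these supernovae all reach roughly the same peak luminosity L, then the flux F we receive reveals how far away each one is,
\[ F = \frac{L}{4\pi d^2} \quad\Longrightarrow\quad d = \sqrt{\frac{L}{4\pi F}}, \]
and comparing that distance with the redshift, supernova by supernova, traces out how the expansion has changed over cosmic time. (The full analysis involves cosmological corrections omitted here.)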
The implication of this discovery is that some unknown force is causing the Universe’s expansion not to slow down but to accelerate—a disturbing concept for the original Friedmann equation, which only contained the density and curvature terms.
This led to the reintroduction of Einstein’s failed cosmological constant, and now there is the theoretical and experimental challenge of explaining this accelerating expansion. The so-called dark energy postulated to drive this acceleration is even more exotic than dark matter, and attempts to explain it so far have not even stretched to ridiculous acronyms.
Despite our uncertainty about what these ‘dark’ substances may be, observations of CMB fluctuations from WMAP and other data have put some constraints on the parameters describing the Universe: around 4% of it is conventional, visible matter, some 22% is dark matter and the remaining 74% is the utterly unexplained dark energy.
So, with 96% of the Universe composed of things which we have probably never observed, what can be said about its past and future? The two scenarios laid out by the Friedmann equation are collapse, a closed Universe ending in what is dubbed a Big Crunch, or expansion forever as either a flat or open Universe.
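In the simplest, matter-only picture, the dividing line between these fates is the critical density, the density at which space is exactly flat:
\[ \rho_c = \frac{3H_0^2}{8\pi G}, \qquad \Omega = \frac{\rho}{\rho_c}, \]
with Ω greater than one giving a closed Universe that recollapses, less than one an open Universe, and exactly one the flat, critical case. The percentages quoted above are fractions of this critical density; the cosmological constant, as described, complicates this simple link between density and fate.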
A Big Crunch would be very much like the origins of our Universe, described earlier, but in reverse, with matter being crushed into an ever-hotter, ever-smaller ball after an initial, leisurely collapse over a few billion years.
Expansion forever is little more optimistic; thermodynamics predicts that eventually anything still in the Universe will run out of usable energy, as entropy increases and all of creation is homogenised into a cold soup of iron nuclei, Universe-filling in extent. However, some theorists do not think the Universe would make it as far as the many, many billions of years necessary to undergo this thermodynamic heat death.
As the Universe accelerates, some posit, the distance at which parts of the Universe are moving at the speed of light relative to any given observer will decrease. Our current, very distant limit of vision will close in on us, galaxies winking out of visibility as they drop over the horizon and their light cannot travel fast enough to outrun the expansion of the intervening space. Eventually, this horizon will be small enough to rip apart individual galaxies, then planetary systems, then the individual stars and planets, and finally, an atomic nucleus will no longer be able to ‘see’ its attendant electrons, and matter as we know it will be destroyed forever.
So, modern cosmology can only speculate as to the future of the Universe. It can propose possible scenarios, but until we know more about this mysterious dark matter and dark energy, putting numbers to the terms in the Friedmann equation gives us little certainty about our fate.
We mustn’t forget, of course, that even this speculation is founded on our assumption of the truth of relativity—perhaps we simply need to wait until another young patent clerk tears down the established physics and starts the whole cycle again.