We live in a three-dimensional Universe. Perhaps the defining characteristic of celestial objects in general is how far away they were when they sent their portraits to us. It can further be argued that distance is the defining problem of astronomy. The three-dimensional arrangement of the objects we see on the sky, and the quantified relationships between them, set the ground and mark the field of play.

There are a number of techniques in use, declining in effectiveness as remoteness increases. Within the Solar System, we are somewhat spoiled for choice. Radar (bouncing a radio beam off a remote object and timing its return) is excellent: not only does it give us a phenomenally accurate measure of distance and relative velocity, but it can also be used to scan surfaces and produce relief maps (for example, of the surface of Venus).
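
The radar arithmetic is about as simple as distance measurement gets; here is a minimal Python sketch (the 4.6-minute echo delay for Venus near closest approach is a rough illustrative figure, not a value from the text):

    C_KM_PER_S = 299_792.458          # speed of light in km/s

    def radar_distance_km(round_trip_seconds: float) -> float:
        """Echo ranging: distance = c * (round-trip time) / 2."""
        return C_KM_PER_S * round_trip_seconds / 2.0

    # Venus near closest approach: the echo returns after roughly 4.6 minutes
    print(f"{radar_distance_km(4.6 * 60):.2e} km")   # ~4.1e+07 km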

However, it can be used only on objects that are, in cosmological terms, right here in the neighbourhood, not least because of the turnaround time of a light signal (in terrestrial years, double the one-way distance in light years). A radar map of Andromeda would take about 5 million years to get back to us, and that is the closest spiral to Earth. Parallax, using the trigonometric properties of triangles, is likewise useful only at very close range. We can use triangulation (with satellite assistance) to measure distances out to about 1 kpc (a kiloparsec, that is, 3,260 light years). This may be regarded as the most reliable method for extra-solar objects too far for radar.
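
For the parallax rung, the small-angle trigonometry reduces to a one-line formula; a minimal sketch, assuming the conventional definition that one parsec is the distance at which the Earth-Sun baseline subtends one arcsecond:

    def parallax_distance_pc(parallax_arcsec: float) -> float:
        """Small-angle relation: distance [parsecs] = 1 / parallax [arcseconds]."""
        return 1.0 / parallax_arcsec

    # Proxima Centauri shows an annual parallax of roughly 0.77 arcseconds
    d_pc = parallax_distance_pc(0.77)
    print(f"{d_pc:.2f} pc = {d_pc * 3.26:.2f} light years")   # about 1.3 pc, or 4.2 LY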

Then we have the period-luminosity relationship of Cepheid variables, discovered in the early 1900s by the pioneering astronomer Henrietta Swan Leavitt. We can be reasonably confident of these measurements out to 10 kpc (32,600 light years), and that embraces the nearest galaxies. The period-luminosity method relates the time a star takes to cycle through its brightness fluctuation to its intrinsic luminosity; knowing how strongly such a standard candle really shines, and comparing that with how bright it appears, gives its distance. This is now an established and well-tested astrophysical method, but we should not overlook the fact that Hubble-type expansion theory applies exclusively beyond the range over which the period-luminosity method can test it.
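
To make the logic concrete, here is a minimal sketch of how such a calibration is typically applied; the period-luminosity coefficients below (-2.43 and -4.05) are assumed illustrative values, not figures from the text, and the distance step uses the standard distance modulus m - M = 5 log10(d / 10 pc):

    import math

    def cepheid_absolute_magnitude(period_days: float,
                                   a: float = -2.43, b: float = -4.05) -> float:
        """Illustrative V-band period-luminosity relation:
        M_V = a * (log10(P) - 1) + b  (coefficients assumed for illustration)."""
        return a * (math.log10(period_days) - 1.0) + b

    def distance_pc(apparent_mag: float, absolute_mag: float) -> float:
        """Distance from the distance modulus m - M = 5 * log10(d / 10 pc)."""
        return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

    # A hypothetical Cepheid: 10-day period, apparent magnitude 14.0
    M = cepheid_absolute_magnitude(10.0)
    print(f"M_V = {M:.2f},  d = {distance_pc(14.0, M):.0f} pc")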

Less reliable methods include seeing where a star fits on the Hertzsprung-Russell diagram, and also the now controversial use of Type Ia supernovae as standard candles (I think we can by now safely assume that Type Ia SNe have been debunked as standard candles). Cepheids and supernovae were used to estimate the distance to the nearest large cluster of galaxies, the Virgo Cluster, about 10,000 kpc (32,600,000 light years) away.

It’s important to trace the growth of the distance ladder from an historical perspective, so that we can see how, for no good reason at all, we came to employ so-called redshift-velocity as an indicator of large-scale remoteness. The history of the art goes back to ancient times, when a small group of visionaries realised the possibility of calculating distances beyond the reach of mechanical devices (rods, footsteps, a day’s ride, and so on). These clairvoyant characters surmised that the extent of the Earth could well be measured, even beyond the horizon of vision.

That’s where it started: the Earth. This vast planet, so impossibly huge on the paltry scale of people, is where the science of lines, angles, and ratios representing objects in our field of view, later descriptively called geometry, was born. It came out of agricultural necessity on the river banks of Egypt. The Pharaoh’s surveyors, appropriately named rope stretchers, established the principles in a practical sense, but left it there. Let’s not downplay the magnificence of their achievement. Nearly 5,000 years ago they built the great pyramids, and calculated the vertical height of the apex to an accuracy of centimetres. That’s testimony enough.

However, in order to extend its reach, geometry needed to be raised to the level of abstraction, of mathematics for mathematics’ sake. Unbeknown to the Egyptians, the Babylonian number-crunchers had beaten them to a robust theory of numbers by several centuries. The Babylonian mathematicians were the founders of pure mathematics, and their legacy is indelibly written into everything we do in the 21st century. Perhaps most significantly, they developed the sexagesimal (base-60) system of numbers, and that is the foundation upon which our entire method of rotational measurement, and therefore the standard calibration of time, has been built.

Spin is arguably the most fundamental property of material structure, and those Babylonian scholars recognised that. They saw circles everywhere, and recognised, quite correctly, that uniform rotation is the pulse of the Universe. So they set about measuring it. They took the bold and very practical step of ignoring their fingers and toes. The number base they settled on, 60, has so many whole-number divisors that it could express a wide range of subdivisions without resorting to fractions. They divided circles into multiples and factors of 60, and the rest is history. We have circles of 360 degrees, days of 24 hours, and both have minutes and seconds in groups of 60.
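
To see why base 60 was such a convenient choice, a trivial illustration (my own, not from the text) compares its whole-number divisors with those of our familiar base 10:

    def divisors(n: int) -> list[int]:
        """All whole-number divisors of n."""
        return [d for d in range(1, n + 1) if n % d == 0]

    print(divisors(10))   # [1, 2, 5, 10]
    print(divisors(60))   # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
    # A circle of 360 degrees can likewise be split into halves, thirds, quarters,
    # fifths, sixths, eighths, ninths, tenths ... all without fractions.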

That’s the background. The initial geometrical measurements by land surveyors and architects assumed a flat Earth as the rigid plane underpinning constructions. Locally, curvature was insignificant, and once again, we need look only as far as the pyramids for confirmation.

It was left to the great Greek philosophers, from the 6th and 5th centuries BC onward, and in particular to the scholars who directed the grand library at Alexandria, to raise geometry to the abstraction of pure mathematics. They deduced that the Earth was spherical, and set about measuring it. After some effort, it was done: Eratosthenes, in about 240 BC, compared the noon shadows of two sticks on the summer solstice, one in Alexandria and the other some distance directly south at Syene, where the Sun stood overhead. It was brilliant and meticulous; his result is equivalent to a radius for the Earth of about 6,400 km. That’s as near as dammit to the figure we have today. It was a truly remarkable achievement.
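
A back-of-the-envelope reconstruction of the calculation, using the round figures usually quoted (a 7.2-degree shadow angle at Alexandria and roughly 800 km between the two sites; both numbers are illustrative, not taken from the text):

    import math

    shadow_angle_deg = 7.2        # Sun's angle from the vertical at Alexandria (about 1/50 of a circle)
    arc_length_km = 800.0         # approximate Alexandria-Syene separation

    circumference_km = arc_length_km * (360.0 / shadow_angle_deg)
    radius_km = circumference_km / (2.0 * math.pi)

    print(f"circumference = {circumference_km:.0f} km, radius = {radius_km:.0f} km")
    # circumference = 40000 km, radius = 6366 km: close to the modern value of roughly 6,371 km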

The Greeks had in fact deduced the geometric tools to calculate the Earth-Moon distance (in Earth radii), and thence the Earth-Sun distance. Such was their purity of motive as mathematicians, however, that they disdained to do the sums. Nevertheless, the formulae for triangulation and parallax survived 2,000 years and are in use right now. We can today check the quantities obtained geometrically against sophisticated satellite laser-ranging equipment, and we are thus able to verify the principles involved. The point I wish to emphasise here is that it was the comparison between two methods, each proven in practice in our terrestrial environment, that allowed us to adopt these techniques as rungs on the ladder and, equally importantly, to calibrate them for aberrations and systematic anomalies.

This is the full extent, the very limit, of physically tested methods. From here on out, we would rely on untested theoretical models to establish the remoteness of celestial objects.

The next rung on the distance ladder reverted to the oldest assumption about a source of light: that remoteness is an inverse function of brightness. It is a naïve principle, suggesting that, as a rule, bright stars are closer than dull ones. There are several problems to be overcome. Firstly, we needed to define luminosity and develop instruments to measure it. Luminosity is the rate at which a source radiates energy; what an instrument actually registers is flux, the energy arriving per unit area per unit time. Measuring flux with an instrument means that we measure locally, at one end of the process. Therefore we measure apparent luminosity, as perceived on Earth.
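
The underlying relation is the inverse-square law; a minimal sketch, assuming an isotropic source of intrinsic luminosity L:

    import math

    def flux_from_luminosity(L_watts: float, d_metres: float) -> float:
        """Inverse-square law for an isotropic source: F = L / (4 * pi * d**2)."""
        return L_watts / (4.0 * math.pi * d_metres ** 2)

    def distance_from_flux(L_watts: float, F_watts_per_m2: float) -> float:
        """Invert the same relation to recover distance, given the intrinsic luminosity."""
        return math.sqrt(L_watts / (4.0 * math.pi * F_watts_per_m2))

    # Sanity check with the Sun: L = 3.828e26 W observed from 1 AU = 1.496e11 m
    F = flux_from_luminosity(3.828e26, 1.496e11)
    print(f"F = {F:.0f} W/m^2")   # about 1361 W/m^2, the solar constant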

That raises the second issue: perceived brightness. Optical brightness, registered by the naked eye, is biased by human vision towards the yellow/green part of the spectrum; the magnitude measured through a filter matched to that response is called the V-magnitude. Intensity varies significantly across the wavebands, and we put a number to this by taking the difference between the blue (B) magnitude and the visual (V) magnitude. This vitally important quantity is termed the colour index of a star, and is written B – V.

The apparent magnitude scale is approximately logarithmic and runs in reverse order, based on the six-point scheme devised by Hipparchus: the brightest stars are magnitude one, and the dullest, at the limit of eyesight, magnitude six. On this basis a classification of stars by brightness was set in place. The zero point of the modern scale was set on the star Vega, and that star became the reference standard against which other magnitudes are measured.
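
In its modern form, the scale ties magnitude differences to flux ratios logarithmically; a minimal sketch of that relation (the particular numbers in the example are illustrative):

    import math

    def magnitude_difference(flux_ratio: float) -> float:
        """m1 - m2 = -2.5 * log10(F1 / F2); five magnitudes is a factor of 100 in flux."""
        return -2.5 * math.log10(flux_ratio)

    def flux_ratio(m1: float, m2: float) -> float:
        """Inverse relation: F1 / F2 = 10 ** (-0.4 * (m1 - m2))."""
        return 10.0 ** (-0.4 * (m1 - m2))

    print(magnitude_difference(0.01))   # 5.0: a hundredfold dimmer source is 5 magnitudes fainter
    print(flux_ratio(6.0, 1.0))         # 0.01: a sixth-magnitude star is 100x fainter than a first-magnitude one
    # The colour index B - V mentioned above is simply the difference of two such magnitudes.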

The issue of extragalactic distance came to a head in 1920. The occasion was the famous “Great Debate” between the prominent astronomers Heber Curtis and Harlow Shapley. It took place under the banner “The Scale of the Universe”, and tellingly, no conclusion was reached. The question they sought to answer was the distance to the so-called spiral nebulae, most of which were subsequently identified as offshore galaxies with enormous, gravitationally bound stellar populations of their own.

In one of the most important declarations of intent in recent times, Eric Bell and colleagues laid out the mainstream approach to galaxy astrophysics. Model-dependent bias is clear from the outset, as are the assumptions upon which such studies will ostensibly be based over the next decade or so at least:

“In order to link galaxy populations at different redshifts, we must not only characterize their evolution in a systematic way, we must establish which physical processes are responsible for it […] Galaxy redshifts out to z ~ 1.4 can be obtained from optical spectra. At higher redshifts, the doublet [OII]λ3727 is no longer accessible with standard optical spectrographs and one enters the so-called ‘redshift desert’ […] One cannot begin to study the evolution of galaxies unless one has some idea of the redshift at which they lie.” (arXiv:0903.3404)
Ah, but that’s exactly the problem. The galaxies do not lie at redshifts. It is, in a manner of speaking, the redshifts that lie.

(Taken, with the author’s kind permission, from the forthcoming blockbuster The Static Universe).
