Back to Eternity
The nature of laws
I need to prepare your mind for a thought so absurd it might even be true. Science has progressed through revolutions, sometimes called ‘paradigm shifts’, when what had been taken to be common sense or was simply a prevailing attitude has been displaced by something seemingly closer to the truth. Aristotle provides one example, Copernicus another. Thus, Aristotle from his marble armchair reflected on the flight of arrows and inferred that they were pushed along by vortices in the air behind them. He also took the more general view, by noticing how carts were kept in motion by the constant exercise of effort by oxen, that motion had to be sustained by effort. Galileo and then Newton saw through the effects of atmosphere and mud and replaced Aristotle’s vision by its opposite, in which motion continues effortlessly unless quenched by effort. The air inhibits the flight of arrows, and although Aristotle could not know it, arrows fly even better in a vacuum where there can be no sustaining vortices. He ought to have noticed, but did not have the opportunity, that carts on ice instead of getting bogged down in mud do not need the pull of oxen to sustain their motion. Copernicus, as is well known, effected a cosmic revolution, and a profound simplification of understanding (a significant marker of
nearing the truth), when he rejected the common-sense daily revolution of the Sun about the Earth and settled on a central, stationary Sun and an orbiting, spinning Earth.
More subtle but no less far-reaching revisions were to come with the intellectual revolutions of the early twentieth century, dwarfing the nearly contemporary political upheavals and those of a century or so earlier. The common-sense view that events can be regarded as simultaneous had to be abandoned after 1905 once Albert Einstein (1879–1955) had transformed our perception of space and time, blending them together into spacetime. That blending stirred time into space and space into time to an extent that depended on the speed of the observer. With time and space so entangled, no two observers in relative motion could agree on whether two events were simultaneous. This fundamental revision of the arena of our actions and perceptions might seem to be a heavy price to pay for approaching greater understanding, but it too turns out to yield a simplification of the mathematical description of the physical world: no longer did explanations of phenomena need to be cobbled together from the bricolage of the concepts of Newtonian physics; they emerged naturally from the melding of space with time.
At about the same time, the newly hatched class of quantum theorists transformed thought in another direction by showing that Newton had been deceived in yet another way and that even Einstein’s migration of Newtonian physics into his new arena of spacetime was fundamentally false. In that case, although he had scraped the effect of mud off movement, Newton had been trapped in the common-sense, farmyard-inspired vision that to specify a path it was necessary to consider both position and velocity. How aghast were the classical physicists, even those who had learned to
live contentedly in spacetime, when it turned out that this notion had to be discarded. In the popular mind the icon of that discarding is the uncertainty principle, formulated by Werner Heisenberg (1901–76) in 1927. The principle asserts that position and velocity cannot be known simultaneously, and so seemed to undermine all hope of understanding by eliminating what had been taken to be the underpinning of nature, or at least the underpinning of the description of nature. Later in the book I shall argue against the view that Heisenberg’s principle undermines the prospect of understanding and complete description.
There was apparently worse to come (but like many conceptual upsets, that worse was actually better in worse’s clothing). Common sense distinguished unhesitatingly between particles and waves. Particles were little knobbly jobs; waves waved. But in a revolution that shook matter to its core the distinction was found to be false. An early example was the discovery in 1897 of the electron by the physicist J. J. Thomson (1856–1940) with all the attributes of a particle, but then, in 1927, the demonstration by his son G. P. Thomson (1892–1975), among others, that on the contrary the electron had all the attributes of a wave. I like to imagine the father and son sitting icily silent at their breakfast.
More evidence accumulated. A ray of light, without doubt a wave of electromagnetic radiation, was found to share the attributes of a stream of particles. Particles undulated into waves according to the type of observation made on them; waves likewise corpusculated into particles. Once quantum mechanics had become established (in 1927), largely by Heisenberg monkishly isolated on an island and Erwin Schrödinger (1887–1961), as he reported, in an outpouring of erotic passion while on a mountain with a mistress, no longer could this fundamental distinction be preserved:
in a totally non-commonsensical way all entities—from electrons upwards—had a dual, blended nature. Duality had usurped identity.1
I could go on. The more the deep structure of the world is exposed, the less it seems that common sense—by which I mean intuition based on local, uncontrolled, casual experience of the everyday environment, essentially the undigested food of the gut rather than the intellectual food of assessing collective brains, in other words the controlled, detailed inspection of isolated fragments of the world (in short, experiments)—is a reliable source of information. More and more it seems that deeper understanding comes from sloughing off each layer of common sense (but of course, retaining rationality). With that in mind, and your mind I hope prepared to relinquish common sense as an investment in attaining future comprehension, I would like to displace one further aspect of common sense.
I would like to assert that not much happened at the Creation. I am aware, of course, of the compelling descriptions that have endowed this moment with awesome drama, for surely the birth of everything ought to have been cosmically dramatic? A giant cosmic cataclysm of an event. A spectacular universe-wide burst of amazing primordial activity. A terrific explosion rocking spacetime to its foundations. A pregnant fireball of searing, space-burning intensity. Really, really big. The ‘Big Bang’, the name itself, evokes drama on a cosmic scale. Indeed, Fred Hoyle (1915–2001) introduced the term dismissively and sardonically in 1949, favouring his own theory of ongoing, continuous, serenely perpetual creation, eternal cosmogenesis, world without beginning entailing world without end. The Bang is viewed as a massive explosion filling all space, in fact creating all space and all time, and in the tumult of heat, the entire universe expanding from a mere dot of unimaginable temperature and density into the far cooler and huge, still-expanding extent we regard as our cosmic home today. Add to that the current vogue for considering an ‘inflationary era’, when the universe doubled in size every tiny fraction of a second before reaching, after less than a blink of an eye, the middle-aged, relatively demure expansion, with temperatures of only a few million degrees, that began the era we know today.
Not much happening? Yes, it is a big step to think of all that hyperactivity, energy, and emergence of fundamental stuff in general as representing not much happening. But bear with me. I would like to explore the counter-intuitive thought that nothing much happened when the universe came into existence. I am not denying that the Big Bang banged in its dramatic way: there is so much evidence in favour of it, and considerable evidence in favour of the inflationary era, that it would be absurd to reject it as an account of the primeval universe just under 14 billion years ago. I am suggesting reinterpretation.
The motivation for this view is a step towards confronting one of the great conundrums of existence: how something can come from nothing without intervention. One role of science is to simplify our understanding of Nature by stripping away misleading attributes. The awesomeness of everyday complexity is replaced by the awesomeness of the interconnectedness of an inner simplicity. The wonder at the delights of the world survives, but it is augmented by the joy of discovering the underlying simplicity and its potency. Thus, it is much easier to comprehend Nature in the light of Darwinian natural selection than simply to lie back and marvel at the richness and complexity of the biosphere: Darwin’s simple idea provides a framework for understanding even though the
complexity emerging from that framework may be profound. The wonder remains, and is perhaps intensified, that such a simple idea can explain so much. Einstein simplified our perception of gravitation by his generalization of his special theory of relativity: that generalization interpreted gravity as a consequence of spacetime being warped by the presence of massive bodies. His ‘general theory’ is a conceptual simplification even though his equations are extraordinarily difficult to solve. By weeding out the unnecessary and focusing on the core, science moves into a position from which it is more able to provide answers. To express the point more bluntly, by showing that not much happened at the Creation, it is more likely that science can resolve what actually did happen.
The weasel words in my stated aim, of course, are the ‘not much’. Quite frankly, I would like to replace the ‘not much’ by ‘absolutely nothing’. That is, I would like it to be the case that absolutely nothing happened at the Creation and that I can justify the claim. Gone activity, then gone agent. If absolutely nothing happened, then science would have nothing to explain, which would certainly simplify its task. It could even claim in retrospect that it had already been successful! Science has sometimes advanced by demonstrating that a question is meaningless, as in asking whether moving observers can agree on the simultaneity of events, leading to special relativity. Although it is not in the remit of science, the question of how many angels can dance on a pin head is eliminated if it can be shown in some manner or other that angels don’t exist, or at least through some physiological or anatomical defect are incapable of dancing. So the elimination of a question can be a legitimate way to provide an answer. That might be a step too far, and perceived as a dereliction of scholarly duty, a cheat, a typical
scientific cop-out—call the evasion what you will—for you to accept at this stage, so I shall confine my argument to an assertion that ‘not much’ happened when the universe came into existence, and in due course explain how much was not much.
The point of all this preamble is that the evidence I shall bring forward for not much happening is that the laws of nature stem from it. I shall argue that at least one class of natural law stems from not much happening on day dot. It seems to me that that amounts to powerful evidence for my view, for if the machinery of the world, the laws that govern behaviour, emerge from this view, then the need to have that elaborate hypothesis of a lawgiving agent, commonly known as a god, is avoided. The laws that do not emerge from indolence, I shall argue, spring from anarchy, the imposition of no laws. In the course of planning this account, I came to realize that anarchy might in some cases be too restrictive, but then I saved the concept in all its potency by allowing anarchy to form an alliance with ignorance. You will come to see what I mean. At one stage I shall even invoke ignorance as a powerful tool for achieving knowledge.
I must stress that in this account I have in mind only physical laws, the laws that govern tangible entities, balls, planets, things in general, stuff, intangible radiation, fundamental particles, and so on. I leave aside moral law, which some still ascribe to a god promoted to God, the supposed inexhaustible, incomprehensible, free-flowing fount of all goodness, the arbiter of good and evil, the rewarder of the sheep and forgiver of the goat. My position, for the sake of clarity, is that biological and social phenomena all emerge
from physical law, so if you were to mine down into my belief, you would find my view to be that an appeal to indolence and anarchy embraces all aspects of human behaviour. But I shall not develop that thought here.
So, what are laws of nature? What am I trying to explain by discovering their origin? Broadly speaking, a law of nature is a summary of experience about the behaviour of entities. Laws are sophistications of folklore, like ‘what goes up must come down’ and ‘a watched pot never boils’. Folklore is almost invariably wrong in some degree. What goes up won’t come down if you hurl it up so fast that it goes into orbit. Watched pots do boil eventually. Natural laws are typically improvements on folklore, as they have been assembled by making observations under controlled conditions, isolating the phenomenon they seek to explain from extraneous influence (the mud of Aristotle’s cart, for instance, or the air around his arrow).
Natural laws are believed to be spatially universal and timeless. The ubiquity and perhaps eternal persistence of laws means that any law of nature is thought to be valid not only from one corner of a laboratory to another, but across continents and beyond, throughout the universe. Maybe they fail in regions where the concepts of space and time fail, as inside black holes, but where space and time are benign, a law valid here and now is a law valid there and then.
Laws are established in laboratories occupying a few cubic metres of the universe but are believed to apply to the entire universe. They are formulated in a period comparable to a human lifetime but are believed to apply to something like eternity. There
are grounds for these beliefs, but caution must be exercised in embracing them whole-heartedly.
On the little scale of human direct experience, a tiny fraction of the time for which the universe has existed and a vanishingly small fraction of its volume, laws have been found to be the same wherever and whenever they have been tested, on Earth at least. On the bigger scale of human experience they have been tested by the ability of astronomers to observe phenomena at huge distances from Earth, in other galaxies, and likewise far back in time. Unless distances in space and time are conspiring to trick us by somehow jointly cancelling deviations, no deviations from Earth-established laws have been detected. On the short timescale of a few billion years of past time there is no reason to suspect that for a similar period into the future our current laws will change. Of course it might be the case that over the next few trillion years, or even at midnight tomorrow, currently hidden dimensions lurking but suspected in spacetime will uncurl to augment our handful of familiar dimensions and transform our well-trodden laws into unrecognizability. That we do not know, but one day—such is the power of a law of nature—we might be able to predict on the basis of laws we are establishing now. Laws have within them the seeds of their own replacement.
Almost all laws, though not all, are approximations, even when they refer to entities that have been insulated from external, adventitious influence (mud). Let me bring into view here one figure from history and the first of the little simple laws that I shall use to introduce a variety of points. (Later I distinguish between big laws and little laws; this is a little law.) The very clever, inventive, and industrious Robert Hooke (1635–1703) proposed a law relating to the stretching of springs.2 As was not uncommon in that day, he expressed it as an anagram in order to claim priority while at the same time buying time to explore its consequences without fear of being outpaced by others. Thus, in 1676 he wrote cryptically and alphabetically ceiiinosssttuv, only later revealing that he really meant ‘Ut tensio, sic vis’. In the more direct language of today, his law states that the restoring force exerted by a spring is proportional to how far it is pulled out or compressed. This law is a very good description of the behaviour of springs, not only of actual springs but also of the spring-like bonds between atoms in molecules, and it has some fascinating consequences wholly unsuspected by Hooke or even by his contemporary Newton. However, it is only an approximation, for if you pull the spring out a long way the proportionality fails even if you stop before it snaps: ceiiinnnoosssttuv. Nevertheless, Hooke’s law is a good guide to the behaviour of springs, provided it is kept in mind that it is valid only for small displacements.
There might be some laws that are exact. A candidate is ‘the law of the conservation of energy’, which asserts that energy cannot be created or destroyed: it can be converted from one form to another, but what we have today is in total what we shall have forever and have had since forever in the past. So powerful is this law that it can be used to make discoveries. In the 1920s it was observed that energy seemed not to be conserved in a certain nuclear decay, and one set of proposals revolved around the suggestion that perhaps in such novel and hitherto unstudied events energy is not conserved. The alternative view, proposed by the Austrian theoretical physicist Wolfgang Pauli (1900–58) in 1930, was that energy is conserved but some had been carried away by an as yet unknown particle. Thus was stimulated the search for, and in due course the successful detection of, the elementary particle now known as the neutrino. As we shall see, the law of the conservation of energy is at the heart of the understandability of the universe in the sense that it lies at the root of causality, that one event can cause another, and therefore lies at the heart of all explanation. It will figure large in what is to come.
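For readers who like to see the mechanism beneath the words, Hooke’s law is compactly written in the notation of today (this form, and the symbol k for the ‘spring constant’ that measures the stiffness of the spring, are modern conventions, not Hooke’s own):

```latex
% Hooke's law: the restoring force F is proportional to the
% displacement x of the spring from its natural length.
F = -kx
```

The minus sign records that the force opposes the displacement: stretch the spring and it pulls back, compress it and it pushes out. The failure of the law at large displacements corresponds to terms in higher powers of x, negligible for small stretches, becoming significant.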
There are other laws that seem to have a similar status to Hooke’s law (that is, are approximations and pleasant to know because they help us to make predictions and understand matter) and others that resemble the conservation of energy (are not approximations but lie deep-rooted in the structure of explanation and understanding). That suggests to me that there are two classes of laws, which I shall term inlaws and outlaws. Inlaws are the very deep structural laws of the universe, primary legislation, the foundation of understanding, the bedrock of comprehension. The conservation of energy is, in my view, an inlaw, and although I hesitate to say it, perhaps the mother of all inlaws. Outlaws are their minor relatives, like Hooke’s and the others that will shortly come our way. They are secondary legislation, being little more than elaborations of inlaws. We can’t do without them, and in many cases science has progressed through their discovery, application, and interpretation. But they are corporals in an army with generals at its head.
There is a special type of law that I need to acknowledge and draw to your attention: a law that applies to nothing at all yet is very useful. I need to unravel this perplexing remark. As I have already said, outlaws are typically approximations. However, in some cases the approximation becomes better and better as the material it is purporting to describe becomes less and less abundant. Then, taking this progression of diminishing abundance to an extreme, the law becomes accurate (maybe exact) when the amount of material it describes has been reduced to zero.
Here, then, is a so-called ‘limiting law’, one that achieves complete accuracy in the limit of having nothing to describe.
Of course, the way I have presented it makes it sound as though the law is vacuous, applicable only to nothing. But such limiting laws are of enormous utility, as you will see, for in effect they scrape the mud off their own internal workings. Let me give an example to clarify what I have in mind.
The Anglo-Irish aristocrat Robert Boyle (1627–91), working in a shed just off the High in Oxford in about 1660 (where University College stands but possibly on land owned by my own college, Lincoln), acting perhaps at the suggestion of his industrious assistant Richard Towneley and in collaboration with the aforementioned intellectually ubiquitous Robert Hooke, investigated what was then regarded as the ‘spring of the air’, its resistance to compression. He established a law of nature that seemed to account for the behaviour of the gas we know as air.3 That is, he found that for a given quantity of air, the product of the pressure it exerts and the volume it occupies is a constant. Increase the pressure, and down goes the volume, but the product of the pressure and the volume is a number that retains its initial value. Increase the pressure again, and down further goes the volume: and the product of the two retains its initial value. Thus Boyle’s law (which the French call Mariotte’s law) is that the product of pressure and volume is a constant for a given quantity of gas and, we would now add, at a set temperature.
The law is in fact an approximation. Squirt in some more gas, and the law is less well respected. Suck out some, and it gets better. Suck out some more, and it is better still. Suck out almost all, and it is well-nigh perfect. You can see where this is going: suck it all out, and it is precise. Thus, Boyle’s law is a limiting law,
exactly applicable when there is so little gas present that it can be regarded as absent.
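The progression just described can be sketched numerically. The little program below is my own illustration, not anything Boyle could have run: it models a real gas with the van der Waals equation (a standard textbook refinement that puts back the molecular ‘mud’; the constants a and b used here are rough values for carbon dioxide) and watches the ratio pV/nRT creep towards 1, the Boyle value, as gas is progressively sucked out of a one-litre vessel.

```python
# A sketch of Boyle's law as a limiting law. We compute the pressure of
# n moles of a van der Waals gas,  p = nRT/(V - nb) - a*n^2/V^2,  and
# compare pV with the ideal (Boyle) value nRT as n shrinks towards zero.

R = 8.314     # gas constant, J K^-1 mol^-1
a = 0.364     # van der Waals 'a' for CO2 (approximate), Pa m^6 mol^-2
b = 4.27e-5   # van der Waals 'b' for CO2 (approximate), m^3 mol^-1
T = 298.0     # temperature, K
V = 1.0e-3    # volume of the vessel, m^3 (one litre)

def vdw_pressure(n, V, T):
    """Pressure of n moles of a van der Waals gas in volume V at temperature T."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

for n in (1.0, 0.1, 0.01, 0.001):
    ratio = vdw_pressure(n, V, T) * V / (n * R * T)  # exactly 1 for a Boyle gas
    print(f"n = {n:6.3f} mol   pV/nRT = {ratio:.5f}")
```

Sucking the gas out, the ratio climbs from roughly 0.90 at one mole towards 1 at a thousandth of a mole: the less gas there is, the more nearly Boyle’s law is obeyed, exactly as a limiting law requires.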
There are two points I need to make in this connection. First, we now understand (Boyle didn’t, and could not have, because it depends on knowing about molecules, and that knowledge lay in his future) why the accuracy of the law improves as the abundance of the material declines. I won’t go into it in detail, but essentially the deviations from the law stem from the interactions between the molecules. When they are so far apart (as in a sample that consists of only a wisp of gas), these interactions are negligible and the molecules move chaotically, independently of each other (the words ‘chaos’ and ‘gas’ stem from the same root: they are etymological cousins). The interactions are the internal mud that the reduction in the quantity of material wipes away to leave the clean perfection of chaos.
The second point is rather more important but closely related to the first. A limiting law identifies the essence of the substance, not the mud that its feet collect as it tramps through reality. Boyle’s law identifies the essence of perfect gassiness, the elimination of the interactions between molecules that confuse the issue for actual gases in real life, so-called ‘real gases’. A limiting law is the starting point for understanding the nature of the substance itself free from any accretion of details that distract and confuse. Limiting laws identify the perfection of behaviour of materials and will be the starting point for a number of our investigations.
Another important initial aspect of natural laws is that some are intrinsically mathematical and the others are adequately verbal. When I need to show an equation to substantiate a point, I shall send it to the Notes at the end of the book, to make it available for those who like to see the mechanism at work beneath the
words. The advantage of a mathematically expressed law (Einstein’s of general relativity, to choose an extreme example) is that it adds precision to the argument. In its place I shall make every effort to distil the essence of the argument. Indeed, it can be argued that the extraction of the verbal content, the physical interpretation, of an equation is an essential part of understanding what it means. In other words, a possible view is that not seeing the equation is a deeper form of comprehension.
Not all natural laws are mathematical, but even the ones that aren’t acquire greater power once expressed mathematically. One of the deepest questions that can be asked about natural laws, apart from their origin, is why mathematics appears to be such a perfect language for the description of Nature. Why does the real world of phenomena map so well on to this extreme product of the human mind? I have explored this question elsewhere, but it is so central to our perception and comprehension of the world that I shall return to it later (in Chapter 9). I suspect that all truly deep questions about the nature of physical reality (the only reality apart from the inventions of poets), such as the viability of mathematics as a description of Nature, probably have answers that are bound together in a common source and need to be considered in the round.
Finally, I need to say a few words about a variety of handmaidens of the laws of nature. As I have said, a law of nature is a summary of observations about the behaviour of entities. There are then two steps in accounting for a law. First, a hypothesis may be proposed. A hypothesis (from the Greek word for ‘foundation’, in the sense of groundwork) is simply a guess about the underlying reason for the observed behaviour. That guess might receive support from other
observations and so gradually mature into a theory (from the Greek for ‘speculation allied with contemplation’; the word shares its origin with ‘theatre’). A theory is a fully fledged hypothesis, with foundations perhaps embedded in other sources of knowledge and formulated in such a way as to be testable by comparison with further observations. In many cases the theory suggests predictions, which are then subjected to verification. In many cases, the theory is expressed mathematically and its consequences are teased out by logical deduction and manipulation (and interpretation) of the symbols. If at any stage a hypothesis or a theory conflicts with observation, then it is back to the drawing board, with a new hypothesis being required and then developed into a new theory.
Although I have set out this procedure—the cycle of observation followed by hypothesis maturing into theory, tested against experiment—as a kind of algorithm that scientists follow, the practice is rather different. The scientific method is a liberal polity and the gut plays an important role in the early stages of comprehension. Scientists have hunches, make intellectual leaps, certainly make mistakes, nick ideas from others, muddle through, and then just occasionally see the light. That is the actual scientific method, despite the idealizations of the philosophers of science. Their idealization is like a limiting law, identifying the essence of the scientific method stripped of its human mud, a human activity practised in the limit of the absence of humans and their frailties. The central criterion of acceptability, though, is maintained within the stew-pot of procedures: there is almost invariably the comparison of expected outcome with experimental observation. As Max Planck once said, ‘the only means of knowledge we have is experiment: the rest is speculation’.