In The Improbability Principle, the renowned statistician David J. Hand argues that extraordinarily rare events are anything but. In fact, they're commonplace. Not only that, we should all expect to experience a miracle roughly once every month.
But Hand is no believer in superstitions, prophecies, or the paranormal. His definition of "miracle" is thoroughly rational. No mystical or supernatural explanation is necessary to understand why someone is lucky enough to win the lottery twice, or is destined to be hit by lightning three times and still survive. All we need, Hand argues, is a firm grounding in a powerful set of laws: the laws of inevitability, of truly large numbers, of selection, of the probability lever, and of near enough.
Together, these constitute Hand's groundbreaking Improbability Principle. And together, they explain why we should not be so surprised to bump into a friend in a foreign country, or to come across the same unfamiliar word four times in one day. Hand wrestles with seemingly less explicable questions as well: what the Bible and Shakespeare have in common, why financial crashes are par for the course, and why lightning does strike the same place (and the same person) twice. Along the way, he teaches us how to use the Improbability Principle in our own lives―including how to cash in at a casino and how to recognize when a medicine is truly effective.
An irresistible adventure into the laws behind "chance" moments and a trusty guide for understanding the world and universe we live in, The Improbability Principle will transform how you think about serendipity and luck, whether it's in the world of business and finance or you're merely sitting in your backyard, tossing a ball into the air and wondering where it will land.
David J. Hand is an emeritus professor of mathematics and a senior research investigator at Imperial College London. He is the former president of the Royal Statistical Society and the chief scientific adviser to Winton Capital Management, one of Europe's most successful algorithmic-trading hedge funds. He is the author of seven books, including The Information Generation: How Data Rules Our World and Statistics: A Very Short Introduction, and has published more than three hundred scientific papers. Hand lives in London, England.

Excerpt. © Reprinted by permission. All rights reserved.
Fortune brings in some boats that are not steer’d.
In the summer of 1972, the actor Anthony Hopkins was signed to play a leading role in a film based on George Feifer’s novel The Girl from Petrovka, so he traveled to London to buy a copy of the book. Unfortunately, none of the main London bookstores had a copy. Then, on his way home, waiting for an underground train at Leicester Square tube station, he saw a discarded book lying on the seat next to him. It was a copy of The Girl from Petrovka.
As if that was not coincidence enough, more was to follow. Later, when he had a chance to meet the author, Hopkins told him about this strange occurrence. Feifer was interested. He said that in November 1971 he had lent a friend a copy of the book—a uniquely annotated copy in which he had made notes on turning the British English into American English (“labour” to “labor,” and so on) for the publication of an American version—but his friend had lost the copy in Bayswater, London. A quick check of the annotations in the copy Hopkins had found showed that it was the very same copy that Feifer’s friend had mislaid.1
You have to ask: What’s the chance of that happening? One in a million? One in a billion? Either way, it begins to stretch the bounds of credibility. It hints at an explanation in terms of forces and influences of which we are unaware, bringing the book back in a circle to Hopkins and then to Feifer.
Here’s another striking incident, this time from the book Synchronicity, by the psychoanalyst Carl Jung. He writes: “The writer Wilhelm von Scholz … tells the story of a mother who took a photograph of her small son in the Black Forest. She left the film to be developed in Strassburg. But, owing to the outbreak of war, she was unable to fetch it and gave it up for lost. In 1916 she bought a film in Frankfurt in order to take a photograph of her daughter, who had been born in the meantime. When the film was developed, it was found to be doubly exposed: the picture underneath was the photograph she had taken of her son in 1914! The old film had not been developed and had somehow got into circulation again among the new films.”2
Most of us will have experienced coincidences rather like these—if not quite so extraordinary. They might be more akin to thinking of someone just before she phones you. Strangely enough, while I was writing part of this book, I had precisely this sort of experience. A colleague at work asked me if I could recommend some publications on a specific aspect of statistical methodology (the so-called “multivariate t-distribution”). The next day, I did a little research and managed to identify a book on exactly that topic by two statisticians, Samuel Kotz and Saralees Nadarajah. I had started to type an e-mail to my colleague, giving him the details of this book, when I was interrupted by a phone call from Canada. During the conversation, the caller happened to mention that Samuel Kotz had just died.
And so it goes on. On September 28, 2005, The Telegraph described how a golfer, Joan Cresswell, scored a hole in one with a fifty-yard shot at the thirteenth hole at the Barrow Golf Club in Cumbria in the UK. Surprising, you may think, but not outlandishly so—after all, holes in one do happen. But what if I tell you that, immediately afterward, a fellow golfer, the novice Margaret Williams, also scored a hole in one?3
There’s no getting away from it: sometimes events occur which seem so improbable, so unexpected, and so unlikely, they hint that there’s something about the universe we don’t understand. They make us wonder if the familiar laws of nature and causality, through which we run our everyday lives, occasionally break down. They certainly make us doubt that they can be explained by the accidental confluence of events, by the random throwing together of people and things. They almost suggest that something is exerting an invisible influence.
Often such occurrences merely startle us, and give us stories to tell. On my first trip to New Zealand, I settled down in a café, and noticed that the notepaper being used by one of the two strangers at the neighboring table was from my own university back in the UK. But at other times, these uncanny events can significantly alter lives—for the better, as with a New Jersey woman who won the lottery twice, or for the worse, as with Major Summerford, who was struck by lightning several times.
Humans are curious animals, so we naturally seek the underlying cause of strange coincidences. What was it that led two strangers from the same university to travel to the far side of the world and end up sitting at neighboring tables in the same café at exactly the same time? What was it that led the woman to pick those two winning sets of lottery numbers? What was it that brought huge electrostatic forces to hit Major Summerford time and time again? And what steered Anthony Hopkins and The Girl from Petrovka through space, and through time, to the same seat in the same underground station at the same moment?
Beyond that, of course, how can we take advantage of the causes underlying such coincidences? How can we manipulate them to our benefit?
So far all my examples have been very small-scale—at the personal level. But there are countless more-profound examples. Some seem to imply that not only the human race, but the very galaxies themselves wouldn’t exist if those very unlikely events hadn’t occurred. Some relate to how sequences of tiny random changes in our genetic constitution could end up producing something as complicated as a human being. Others relate to the distance of the earth from the sun, the existence of Jupiter, and even the values of the fundamental constants of physics. Again the question arises as to whether blind chance is a realistic explanation for these apparently staggeringly unlikely events, or whether there are in fact other influences and forces directing the course of events behind the scenes.
The answers to all these questions hinge on what I call the Improbability Principle. This asserts that extremely improbable events are commonplace. It’s a consequence of a collection of more fundamental laws, which all tie together to lead inevitably and inexorably to the occurrence of such extraordinarily unlikely events. These laws, this principle, tell us that the universe is in fact constructed so that these coincidences are unavoidable: the extraordinarily unlikely must happen; events of vanishingly small probability will occur. The Improbability Principle resolves the apparent contradiction between the sheer unlikeliness of such events, and the fact that they nevertheless keep on happening.
We’ll begin by looking at prescientific explanations. These often go far back into the mists of time. Although many people still hold to them, they predate the Baconian revolution: that is, the idea that the way to understand the natural world is to collect data, conduct experiments, take observations, and use these as test beds through which to evaluate proposed explanations for what’s going on. Prescientific notions predate the rigorous evaluation of explanations through scientific methods. But explanations which have not been or cannot be tested can have no real force: they are simply anecdotes, or stories, with the same status as a child’s bedtime tale about Santa Claus or the tooth fairy. They serve the purpose of reassuring or placating those who are unwilling or unable to make the effort to dig deeper, but they don’t lead to understanding.
Understanding comes from deeper investigation. In this deeper investigation, thinkers—researchers, philosophers, scientists—have sought to devise “laws” that describe the way nature works. These laws are shorthand summaries encapsulating in simple form what observation shows about how the universe behaves. They are abstractions. For example, the progress of an object falling from a tall building is described by Newton’s Second Law of Motion, which says that the acceleration of a body is proportional to the force acting on it. Natural laws seek to get to the heart of phenomena, stripping away the superfluous, crystallizing the essence. The laws are developed by matching predictions with observations, that is, with data. If a law says that increasing the temperature of an enclosed volume of gas will increase its pressure, is this what actually happens, is this what the data show? If a law says that increasing the voltage will increase the current, is this what we actually see?
We’ve been extraordinarily successful in understanding nature by applying this process of matching data to explanation. The modern world, the cumulation of the awesome achievements of humanity’s science and technology, is a testament to the power of such descriptions.
Of course, some people seem to think that understanding a phenomenon takes away its mystery. This is true in the sense that understanding means removing obscurity, obfuscation, ambiguity, and confusion. But a grasp of the cause of the colors of the rainbow doesn’t detract from its wonder. What such a grasp brings is a more profound appreciation, and indeed awe, of the beauty underlying the phenomenon being studied. It shows us how all the pieces come together to give us the amazing world we live in.
Borel’s Law: Sufficiently Unlikely Events Are Impossible
Émile Borel was an eminent French mathematician, born in 1871. He was a pioneer of some of the more mathematical aspects of probability (of so-called measure theory), and several mathematical objects and concepts are named after him—such as Borel measure, Borel sets, the Borel-Cantelli lemma, and the Heine-Borel theorem. In 1943 he wrote a nonmathematical introduction to probability called Les probabilités et la vie, translated as Probabilities and Life. As well as illustrating some of the properties and applications of probability, in this book he introduced what he called the single law of chance, nowadays often simply called Borel’s law. This law says, “Events with a sufficiently small probability never occur.”4
Clearly, the Improbability Principle looks as if it is at odds with Borel’s law. The Improbability Principle says that events with a very small probability keep on happening, while Borel’s law says they never happen. What is going on?
Now, your first reaction on reading Borel’s law may well have been the same as mine when I first came across it: Surely it’s nonsense? After all, you might think (as I did) that events with very small probability certainly occur, just not very often. That’s the whole point about probability, and about small probabilities in particular. But when I read further into Borel’s book, I saw that he meant something rather more subtle.
He illustrated what he intended by referring to the classic example of the monkeys who, randomly hitting the keys of a typewriter, happen by chance to produce the complete works of Shakespeare.5 In Borel’s words: “Such is the sort of event which, though its impossibility may not be rationally demonstrable, is, however, so unlikely that no sensible person will hesitate to declare it actually impossible. If someone affirmed having observed such an event we would be sure that he is deceiving us or has himself been the victim of fraud.”6
So Borel is relating “very small probabilities” to human scales, and that’s what he means: in human terms the probability is so small that it would be irrational to expect ever to see it happen; it should be regarded as impossible. And, indeed, after stating his “single law of chance” (which, you’ll recall, was that events with a sufficiently small probability never occur) he added the comment, “or, at least, we must act, in all circumstances, as if they were impossible” [his italics].7
He gave a further illustration later in his book: “For every Parisian who circulates for one day, the probability of being killed in the course of the day in a traffic accident is about one-millionth. If, in order to avoid this slight risk, a man renounced all external activity and cloistered himself in his house, or imposed such confinement on his wife or his son, he would be considered mad.”8
Other thinkers had said similar things. For example, in the 1760s Jean d’Alembert had questioned whether it is possible to observe a very long run of occurrences of an event in a sequence in which occurrence and nonoccurrence are equally probable. A century before Borel, in 1843, in his book Exposition de la théorie des chances et des probabilités, Antoine-Augustin Cournot had discussed the actual, as opposed to theoretical, probability of a perfect cone balancing on its vertex.9 The phrase “practical certainty” has been associated with Cournot, being contrasted with “physical certainty.” Indeed, the idea that “It is a practical certainty that an event with very small probability will not happen” is sometimes called Cournot’s principle. Later, in the 1930s, the philosopher Karl Popper wrote, in his book The Logic of Scientific Discovery, “the rule that extreme improbabilities have to be neglected … agrees with the demand for scientific objectivity.”10
Given the other illustrious thinkers who have described similar concepts, we might ask why it’s Borel’s name that’s generally attached to the idea. The answer probably lies in Stigler’s law of eponymy. This law says that “no scientific law is named after its original discoverer” (and then has the corollary, “including this one”).
There’s an analogy between Borel’s law and the points, lines, and planes we learn about when we study geometry in school. We learn that these geometric objects are mathematical abstractions, and that they don’t exist in the real world. They are merely convenient simplifications—which we can then think about and mentally manipulate, and hence draw conclusions about the real-world objects we’re representing with them. Similarly, it’s a mathematical ideal that, although incredibly small probabilities are not actually zero, they may be treated as if they were zero because, in real practical human terms, events with sufficiently small probability never occur. That’s Borel’s law.
Here’s Borel again: “It must be well understood that the single law of chance carries with it a certainty of another nature than mathematical certainty, but that certainty is comparable to one which leads us to accept the existence of an historical character, or of a city situated at the antipodes, of Louis XIV or of Melbourne; it is comparable even to the certainty which we attribute to the existence of the external world.”11
Borel goes on to give a scale showing what might be meant by a probability being “sufficiently small” that an event would never occur. Here are (slightly paraphrased) versions of the definitions he gives of the points on his scale. In each case, I’ve tried to convey the sizes of the numbers involved by giving some examples.
Probabilities which are negligible on the human scale are smaller than about one in a million. The probability of being dealt a royal flush in poker is about 1 in 650,000, roughly one and a half times a one-in-a-million chance. There are just over thirty million seconds in a year, so, in terms of Borel’s scale, if you and I each randomly pick a second in which to do something, the chance that we will do it at the same time is negligible on the human scale.
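The numbers behind Borel's human-scale threshold are easy to verify. The sketch below (in Python, purely illustrative and not from the book) reproduces the figures quoted above: the royal-flush odds, the seconds in a year, and the chance of two people independently picking the same second:

```python
from math import comb

# A royal flush: 4 suits give 4 such hands out of all C(52, 5)
# possible five-card hands.
total_hands = comb(52, 5)        # 2,598,960
p_royal_flush = 4 / total_hands  # 1 in 649,740 -- "about 1 in 650,000"

# Seconds in a (non-leap) year: "just over thirty million".
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000

# If two people each pick one second of the year uniformly at random,
# the chance they pick the very same second:
p_same_second = 1 / seconds_per_year

print(f"royal flush: 1 in {total_hands / 4:,.0f}")
print(f"seconds per year: {seconds_per_year:,}")
print(f"same second: {p_same_second:.2e}")
```

Both probabilities fall at or below Borel's one-in-a-million cutoff, which is the sense in which he calls them negligible on the human scale.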
Probabilities which are negligible on the terrestrial scale are s...
Scientific American / Farrar, Straus and Giroux, 2014. Hardcover. 269 pp.