Superforecasting: The Art and Science of Prediction
Philip Tetlock and Dan Gardner
Penguin Random House. First published 2015.
Philip Tetlock is a professor at the Wharton School of the University of Pennsylvania, so right away we have an idea of the target audience. I don't, I hope, fit the profile of a typical member of that audience, but all the same the book made interesting reading, even if the take-away lessons (a book of this sort is obviously built around such take-away lessons) were not particularly exciting.
Tetlock and his collaborators ran a series of sociological experiments over a period of many years. They recruited thousands of volunteers from all walks and stations of life - well, not all; surely the uppermost and the very lowest branches of society would be vastly under-represented - and asked them to make predictions about all sorts of real-world events: the situation in West Asia, the price of oil, the possibility of conflict in Africa, North Korea going nuclear, and so on. The events were tracked, and the predictions were matched against the actual outcomes. As far as I could make out from the book, the questions were formulated as binaries - yes or no. But the answers, especially in the later, more sophisticated rounds, had probabilities attached to them. The experimenters devised ways of rigorously and quantitatively evaluating the answers, and could score each participant on how much better than chance their predictions were. They could thus identify some participants who did very well over a period of time. They went back to these people, whom they dubbed 'superforecasters', studied their habits, and came up with a list of common characteristics that, so they say, could help anyone make more meaningful predictions about anything at all.
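One standard way to score probabilistic forecasts against binary outcomes - the book discusses the Brier score - works roughly as follows. A minimal sketch (the function name and the one-sided 0-to-1 form are my own illustration; the original Brier formulation sums over both outcome categories and ranges from 0 to 2):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (1 = event happened, 0 = it did not).
    0.0 is perfect; a constant 50% forecast always earns 0.25;
    1.0 is the worst possible score in this one-sided form."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who is confident and right beats one who always hedges at 50%:
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

The scheme rewards both accuracy and calibration: hedging everything at 50% can never do better than 0.25, while confident wrong answers are punished heavily, which is what makes sustained low scores a meaningful signal.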
As would be expected in such a book, each rule is given a chapter to itself. One or two personal anecdotes, sometimes involving a superforecaster, introduce each chapter. Then the rule is stated and explained, with a great deal of padding. (After all, without the stories and the repetitive explanations, this would not be a book, only a rather extended review article in a scientific journal.) At the end of the book, the authors helpfully summarize the take-away lessons, which I quote below.
1. 'Triage', i.e. work only on questions that you think are possible to answer.
2. 'Break seemingly impossible problems into tractable sub-problems'.
3. 'Strike the right balance between inside and outside views', i.e. those of experts in the field and those of complete lay-persons. Note the key word here is 'right', and each question will have its own 'right balance'.
4. 'Strike the right balance between under-reacting and over-reacting to evidence'.
5. 'Look for clashing causal forces at work for each problem'.
6. 'Strive to distinguish as many degrees of doubt as the problem permits, but no more'.
7. 'Strike the right balance between under- and overconfidence, between prudence and decisiveness'.
8. 'Look for errors behind your mistakes, but beware of rear-view mirror hindsight biases'.
9. 'Bring out the best in others and let others bring out the best in you'.
10. 'Master the error-balancing bicycle', i.e. practice predictions by using errors to correct just the right amount.
11. 'Don't treat commandments as commandments', i.e. know when to go beyond these rules.
These eleven rules, or commandments, are almost all pure management-speak. There is, however, some truth in them, and while following them is unlikely to dramatically improve one's personal or professional life, or one's finances, reading this book is not a bad way to spend a day or two.