Superforecasting by Philip E. Tetlock and Dan Gardner

The Art and Science of Prediction

Buy book - Superforecasting by Philip E. Tetlock and Dan Gardner

What exactly is the subject of the Superforecasting book?

Superforecasting (2015), based on decades of study and the findings of a large, government-sponsored forecasting tournament, explains how to improve the accuracy of your forecasts, whether you're attempting to foresee changes in the stock market, politics, or your everyday life.

Who is the target audience for the Superforecasting book?

  • Anyone interested in learning how forecasting works
  • Critical thinkers
  • Businesspeople who wish to enhance their forecasting abilities

Who are Philip E. Tetlock and Dan Gardner, and what do they do?

Phil Tetlock, the Annenberg University Professor at the University of Pennsylvania, is a political scientist and psychologist who specializes in political psychology. He is a founder and leader of the Good Judgment Project, a forecasting research program whose work has resulted in more than 200 papers published in peer-reviewed journals.
Dan Gardner is a journalist, author, and speaker. Besides writing the acclaimed books Risk: The Science and Politics of Fear and Future Babble, Gardner has spoken around the world for governments and for companies such as Google and Siemens.

What exactly is in it for me? Learn how to make genuinely accurate predictions.

Forecasts and predictions are made about a wide range of topics: the weather, the stock market, next year's budget, who will win this weekend's football game, and many more besides. Because we are so invested in our predictions, we get upset when events do not unfold the way we anticipated. So, can predictions be made more accurate than they are today? They can. Over weeks and months, forecasts can be trimmed and realigned with each new piece of information, then evaluated and improved after the predicted event has occurred; that is what superforecasting means. In these notes, we'll look at the difficult but fascinating skill of producing such predictions.

Here you'll learn what Microsoft's former CEO really predicted about the iPhone's market share; how a forecaster correctly predicted the outcome of Yasser Arafat's autopsy; and why groups of forecasters are more effective than individuals at predicting the future.

Forecasting has certain limits, but that should not be used as an excuse to reject it.

Forecasting is something we do all the time, whether we're planning our next career move or making an investment decision. In essence, our predictions reflect our expectations of what the future will bring. But forecasting has limits, because even small events can have unanticipated effects. We live in a complex world in which a single individual can trigger enormous consequences. Take the Arab Spring: Mohamed Bouazizi, a Tunisian street vendor, set himself on fire after being humiliated by corrupt police officers, and that act set off a chain reaction across the region.

There is also a theoretical reason why such events are hard to anticipate. In nonlinear systems like the Earth's atmosphere, even minute changes can have a significant influence, as the American meteorologist Edward Lorenz showed. Chaos theory, popularly known through the butterfly effect, explains this phenomenon: if the wind's direction changes by less than a fraction of a degree, long-term weather patterns can be dramatically altered. To put it another way, the flap of a butterfly's wing in Brazil may set off a tornado in Texas.
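To get a feel for Lorenz's point, here is a minimal Python sketch, not taken from the book, that integrates the classic Lorenz equations from two starting points differing by one part in a hundred million; the step size and parameter values are standard textbook choices assumed for illustration.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # one starting point
b = (1.0, 1.0, 1.0 + 1e-8)   # the same point, nudged by one part in a hundred million

for step in range(4001):
    if step % 1000 == 0:
        print(f"step {step:4d}: difference in z = {abs(a[2] - b[2]):.2e}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
# The gap starts at 1e-08 and grows to the same order as the state itself:
# an imperceptible change in the starting conditions yields a different "forecast".
```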

However, just because forecasting has its limits, we should not abandon it entirely. Take Edward Lorenz's own field, meteorology. Weather forecasts issued a few days in advance are reasonably accurate. Why? Because weather forecasters check the accuracy of their predictions after the fact. By comparing their forecasts with the weather that actually occurred, they build a better understanding of how the weather works. The problem is that people in most other fields rarely evaluate the accuracy of their predictions. To improve our forecasting, we must get serious about comparing what we predicted with what really takes place, and that requires a genuine commitment to measurement.

Avoid using ambiguous language and strive to be as specific as possible.

If you think about it, measuring predictions seems like a no-brainer: gather the forecasts, evaluate their correctness, do the computations, and voila! In reality, it is not that simple. Before you can judge whether a prediction was accurate, you have to understand what it actually meant. Consider Microsoft CEO Steve Ballmer, who predicted in April 2007 that the iPhone would fail to gain a significant share of the market. Given the size Apple's market capitalization later reached, and the fact that the iPhone came to hold 42 percent of the US smartphone market, Ballmer's prediction looked ludicrous, and people laughed at him. But wait a minute: let's look at what he really said.

He said that, yes, the iPhone might make a lot of money, but that it would never capture a significant portion of the worldwide cell-phone market; his prediction was between two and three percent. Instead, the software developed by his own firm, Microsoft, would come to dominate. Judged against the global market he was talking about, his iPhone forecast held up better than the mockery suggests: in the third quarter of 2013, according to Gartner figures, the iPhone's worldwide share of mobile phone sales hovered around six percent, more than Ballmer anticipated, but not dramatically so. His forecast that Microsoft's own software would dominate the market, on the other hand, did not come true. The broader lesson: forecasts should avoid ambiguous language and instead rely on numbers to improve accuracy.

When predicting, it is common to use vague terms such as "could," "may," or "likely." Research has shown, however, that different people attach very different meanings to such phrases. To communicate probability clearly, forecasters should use percentages or other numerical measures of likelihood. When American intelligence agencies such as the NSA and the CIA stated that Saddam Hussein was concealing weapons of mass destruction, a claim later shown to be false, it was a catastrophic failure for the United States government. Had those agencies reasoned with percentages, the United States might not have attacked Iraq in 2003. Suppose the assessed odds of Iraq possessing WMDs had been 60 percent: that still leaves a 40 percent chance that Saddam had none, a weak rationale for going to war, to put it mildly.

If you wish to increase the accuracy of your predictions, keep track of your results.

So, how can we avoid catastrophic mistakes like the WMD failure? Clearly, we need to improve the accuracy of our predictions, so let's look at some methods for doing so. The most effective method is to keep score. To that end, the author's research team created the government-sponsored Good Judgment Project, which drew thousands of volunteers who answered more than one million questions over the course of four years; its results form the basis of this book. The researchers believed that by keeping score they could measure, and thereby increase, forecast accuracy.

Participants answered questions such as "Will Tunisia's president flee to a comfortable exile in the next month?" and "Will the euro fall below $1.20 in the next twelve months?" Each forecaster assigned a probability to their answer, adjusted it as necessary after reading relevant news, and, when the predicted time arrived, each prediction received a Brier score indicating how accurate it had been. The Brier score, named after Glenn W. Brier, is the most commonly used way of measuring the accuracy of a prediction. The lower the number, the more accurate the forecast: a flawless forecast receives a score of 0, a random guess results in a Brier score of 0.5, and a forecast that is totally wrong earns the maximum score of 2.0.

How to interpret a Brier score depends on the question being asked. A Brier score of 0.2 seems excellent, yet the forecast behind it may still be poor. Say we're forecasting the weather. If the weather in Phoenix, Arizona is reliably hot and sunny, a forecaster could simply predict hot and sunny every day and earn a Brier score close to zero, which makes 0.2 look unimpressive. But for Springfield, Missouri, which is known for its unpredictable weather, a score of 0.2 would make you a world-class meteorologist.
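Since the book pins the scale down precisely (0 for a perfect forecast, 0.5 for guessing, 2.0 for the maximum error), here is a minimal Python sketch of that calculation; the example forecasts are invented for illustration.

```python
# Brier score as described above: for each question, sum the squared error over
# both outcomes ("it happened" and "it didn't"), then average across questions.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities assigned to 'yes'; outcomes: 1 if 'yes' occurred, else 0."""
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

print(brier_score([1.0], [1]))          # 0.0  -> perfect forecast
print(brier_score([0.5], [1]))          # 0.5  -> random guess
print(brier_score([0.0], [1]))          # 2.0  -> completely wrong
print(brier_score([0.6, 0.3], [1, 0]))  # 0.25 -> a mixed record
```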

Superforecasters begin by breaking down issues into smaller pieces in order to better understand them.

Are all superforecasters brilliant thinkers with access to top-secret intelligence? Not at all. So how do they make such precise forecasts? To tackle a question, a superforecaster first breaks apparently intractable problems down into manageable sub-problems. This is referred to as Fermi-style thinking. Enrico Fermi, a scientist who played a key role in the development of the atomic bomb, could estimate with remarkable accuracy things like the number of piano tuners in Chicago, despite having hardly any data at his disposal, as the sketch below illustrates.
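Here is a hedged Python sketch of that kind of Fermi-style decomposition applied to the piano-tuner question; every input number is an assumption chosen for illustration rather than a figure from the book.

```python
# Fermi-style decomposition of "How many piano tuners are there in Chicago?"
# The value lies in the decomposition, not in the exact inputs.

chicago_population = 2_700_000     # rough population of Chicago
people_per_household = 2.5         # assumed average household size
piano_ownership_rate = 1 / 20      # assume 1 in 20 households owns a piano
tunings_per_piano_per_year = 1     # assume each piano is tuned once a year
tunings_per_tuner_per_day = 2      # assume a tuner can service two pianos a day
working_days_per_year = 250

pianos = chicago_population / people_per_household * piano_ownership_rate
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed_per_year / tunings_per_tuner_per_year))  # ~108 tuners
```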

He accomplished this by separating the knowable from the unknowable, which is also the first step superforecasters take. Consider an example. When Yasser Arafat, the head of the Palestine Liberation Organization, died of unexplained causes in 2004, many people speculated that he had been poisoned, though nothing was proven at the time. Then, in 2012, researchers discovered dangerously high levels of polonium-210, a radioactive substance that can be deadly if it enters the body, in his personal effects. This finding gave the poisoning theory new traction, and his body was exhumed and examined in both France and Switzerland. The Good Judgment Project asked its forecasters whether scientists would find elevated amounts of polonium in Yasser Arafat's remains. Bill Flack, a volunteer forecaster, tackled the question in the manner of Enrico Fermi, breaking it down into parts.

First, Flack noted that polonium decays quickly, which meant that if Arafat had been poisoned, there was a good chance the polonium would no longer be detectable in his remains, given that he died in 2004. Flack researched polonium testing and concluded that it could still be detected under certain circumstances. He then weighed the possibility that Arafat had Palestinian adversaries who might have poisoned him, as well as the possibility that the remains had been contaminated in order to implicate Israel in his death. He predicted a 60 percent probability that polonium would be found in Arafat's body, and he was right. Flack began by establishing the knowable basics before moving on to the more speculative assumptions, which is exactly what a good forecaster should do.

Start with the outside view and then switch to the inside view for a more precise prediction.

Because every scenario is different, you should avoid making snap judgments about a case too soon. To tackle a question effectively, start from an objective perspective, which means determining the base rate. What that means is easiest to see with an example. Consider an Italian family living in a small home in the United States. The father works as a bookkeeper and the mother works part-time at a daycare center. Their child's grandmother also lives with them.

If you were asked what the odds are that this family owns a pet, you might try to answer by seizing immediately on the particulars of the family or their living circumstances. But then you would not be thinking like a superforecaster. A superforecaster would not begin with the specifics. Instead, she would start by finding out what proportion, or "base rate," of American households own a pet, which a quick search can tell you in a couple of seconds. This is the outside view. Only after that do you take the inside view, using the details of the case to adjust the base rate accordingly.

Starting with the outside view gives a first estimate for the Italian family: since roughly 62 percent of American households own a pet, there is a 62 percent probability that this family has one. After that, you make the number more precise; for example, you might look at the percentage of Italian-American households that keep a pet. The rationale for the outside view rests on the notion of anchoring. An anchor is the first figure you put down before making any adjustments. If you start with the small details instead, your forecast is far more likely to drift miles away from any sensible anchor, as the sketch below illustrates.
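The following Python sketch illustrates the anchor-and-adjust logic; apart from the 62 percent base rate cited above, the adjustments are invented assumptions, not figures from the book.

```python
# Anchor on the outside view (the base rate), then adjust with inside-view details.

base_rate = 0.62   # outside view: share of US households that own a pet

adjustments = {
    "small home, less room for a large pet": -0.05,
    "a child lives in the household": +0.05,
    "grandmother at home to help with care": +0.02,
}

estimate = base_rate + sum(adjustments.values())
estimate = min(max(estimate, 0.0), 1.0)      # keep the result a valid probability
print(f"anchored estimate: {estimate:.2f}")  # 0.64
```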

Stay up to date even after reaching your initial conclusion, and adjust your predictions in light of new facts.

We've seen how superforecasters get started, but once you've made your initial forecast, you can't simply sit back and wait to see whether you were right. Every new piece of information requires you to update and refine your earlier judgment. Remember Bill Flack? After predicting that polonium would be found in Yasser Arafat's body, he kept watching the news and revised his prediction whenever new information warranted it. At one point the Swiss research team announced that more testing was needed and that its findings would be released later than planned. Because Flack had researched polonium thoroughly, he reasoned that the team had probably found polonium and needed further tests to pin down its source. As a result, Flack raised his prediction to 65 percent.

As it turned out, the Swiss team did report polonium in Arafat's body, leaving Flack with a final Brier score of 0.36. Given the difficulty of the question, that is an outstanding performance. You must, however, exercise caution: new information can be valuable, but it can also be harmful if it is misread. In one example, the Intelligence Advanced Research Projects Activity (IARPA) of the United States government asked whether there would be less Arctic sea ice on September 15, 2014, than there had been a year earlier. Doug Lorch, a superforecaster, initially concluded that there was a 55 percent probability the answer would be yes. Then Lorch came across a month-old report from the Sea Ice Forecast Network that swayed him enough to raise his prediction from 55 percent to 95 percent, a huge shift based on a single piece of information.

When September 15, 2014, arrived, there was more Arctic ice than the year before. Lorch's first prediction had given that outcome a 45 percent chance; after his revision, it was down to a paltry five percent. Skillful updating means separating subtle but relevant signals from irrelevant noise. Don't be afraid to change your mind, but think carefully about whether a fresh piece of information is genuinely useful before acting on it.
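One common way to formalize this kind of incremental updating is Bayes' rule, which converts a prior probability into a posterior in the light of new evidence. The Python sketch below is illustrative rather than the book's own procedure: the 0.60 prior is Flack's initial forecast from the text, while the likelihood ratio assigned to the Swiss team's announcement is an invented assumption chosen to reproduce his modest revision.

```python
def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability given a prior and a likelihood ratio
    (how much more likely the evidence is if the hypothesis is true)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Assume the "more tests are needed" announcement is judged 1.25 times more
# likely if polonium had in fact been found than if it had not.
posterior = bayes_update(prior=0.60, likelihood_ratio=1.25)
print(f"updated forecast: {posterior:.2f}")   # ~0.65, a modest, measured revision
```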

Working in groups can improve forecasting, but only if it is done correctly.

Perhaps you are familiar with the term "groupthink." It was coined by psychologist Irving Janis, who theorized that people in small groups build team spirit by unconsciously creating shared illusions that interfere with critical thinking. The interference arises because individuals fear conflict and simply agree with one another instead of dissenting. Yet deviating from the consensus is a source of genuine value: independent thinking and frank discussion are great assets in any team. So the research team at the Good Judgment Project decided to investigate whether collaboration could improve accuracy. They did this by building online forums through which forecasters assigned to particular groups could interact with one another.

At the outset, the research team offered insights about group dynamics and warned the online groups against falling into the trap of groupthink. When the first year's results came in, they showed that, on average, those who worked in groups were 23 percent more accurate than those who worked alone. In the second year, the research team placed superforecasters, rather than ordinary forecasters, into teams, and these teams outperformed the regular groups by a significant margin. Group dynamics still caused trouble, however. Elaine Rich, a superforecaster, was unhappy with how her team worked: everyone was very polite, and there was little critical debate of opposing views or counterarguments. To remedy the situation, the groups went out of their way to show that they welcomed constructive criticism.
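A simple way to combine a team's forecasts is to average the individual probabilities; the Good Judgment Project also experimented with "extremizing" the pooled estimate, pushing it away from 0.5. The Python sketch below shows both steps; the team's probabilities and the extremizing exponent are invented for illustration.

```python
def pool(probabilities, extremize_exponent=1.0):
    """Average a team's probability forecasts, optionally extremizing the result."""
    avg = sum(probabilities) / len(probabilities)
    odds = (avg / (1 - avg)) ** extremize_exponent   # extremize in odds space
    return odds / (1 + odds)

team = [0.55, 0.70, 0.65, 0.60]
print(f"simple average:       {pool(team):.2f}")                        # 0.62
print(f"extremized (power 2): {pool(team, extremize_exponent=2):.2f}")  # 0.74
```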

Precision questioning, which pushes people to re-examine their arguments, is another technique for improving a team's performance. This is hardly a new idea; great teachers have been practicing it since the time of Socrates. Precision questioning means digging into the specifics of an argument, for example by asking what a particular term means. Even when there are strong differences of opinion, this kind of questioning exposes the reasoning behind a conclusion and opens the door to further scrutiny.

Final summary of the book Superforecasting.

The most important lesson in this book is that superforecasting is not reserved for computers or geniuses. It is a trainable skill that involves gathering evidence, keeping score, staying up to date on new facts, and being patient.

Actionable advice: keeping up with the latest developments puts you one step ahead. Superforecasters update themselves on news relevant to their predictions far more frequently than regular forecasters do. One way to monitor changes is to set up notifications, for example via Google Alerts, which will email you as soon as fresh information on the subject at hand becomes available.

Further reading: Forecast by Mark Buchanan. Forecast is a critique of contemporary economic theory that exposes its main flaws. Buchanan, a physicist, takes a careful look at the scientific assumptions that underpin our economic knowledge and, with keen analytical skill, shows where they go wrong. In the second part of the book, he discusses a number of scientific advances that, in his view, could ultimately help improve contemporary economic theory.

Buy book - Superforecasting by Philip E. Tetlock and Dan Gardner

Written by BrookPad Team based on Superforecasting by Philip E. Tetlock and Dan Gardner
