“Experts” are Often Wrong!

Experts favored the jejuno-ileal bypass, stomach stapling, and the lap band. Nearly everyone now agrees these were poor choices.
When they’re wrong, they’re rarely held accountable, and they rarely admit it, either.
Bariatric surgery is filled with controversy. Despite that diversity of opinion, a relatively large number of surgeons still state their opinions with great conviction and certainty. Objective benchmarks of surgeons' opinions over the past 35 years show consistent, persistent errors in bariatric surgeons' judgment and expert opinions.

Surgeons delayed adoption of breast-conserving surgery
Despite expectations that the use of BCS would climb during the years immediately after the 1985 publication of the 5-year results of the NSABP B-06 trial, the overall use of BCS increased little, if at all, between 1985 and 1990.

“Unfortunately, many surgeons continue to apply much more stringent criteria when recommending BCT than those in published guidelines.”

For over a decade many leading British surgeons failed to recognize the merit of the antiseptic system, and much acrimonious criticism was directed at Lister and his method. When he visited the United States in 1876 to deliver an address at the International Medical Congress in Philadelphia, he was not received with any enthusiasm. The Americans were slow to accept Listerism, and as late as the meeting of the American Surgical Association in 1882, the Lancet reported that “Anti-Listerians were in the majority; . . . they relied for support upon the statements of others. . . . Surely it is too late in the day (for them) to contest the truth of the germ theory.” Levi Cooper Lane, who began his surgical career prior to Listerism, never fully accommodated to the restrictions imposed by the antiseptic and aseptic methods and gave as the reason: “You can’t teach an old dog new tricks.”

Erring is an inevitable part of being human. We are finite animals for whom probability is as close as we can come to certainty (even though certainty is what we want). Since life demands that we make decisions based on what we think will happen in the future, it is simply inevitable that some of these will be wrong. That is not and should not be a recipe for skepticism, which is a lazy attempt to fend off error. The author argues that the only way to crack down on error, paradoxically, is to admit its inevitability. Being aware of the mistakes we make that lead to error is the only way to curb it: recognize that fallibility is a part of life (not stupidity), make an effort to "hear the other side," and phrase your predictions provisionally and treat them as such. The more we realize that error is a human quality that leads to opportunities for growth, the more we can, to some degree or other, embrace it as part of who we are.

Amplify’d from www.newyorker.com

Everybody’s An Expert

Putting predictions to the test.

by Louis Menand

December 5, 2005

Prediction is one of the pleasures of life. Conversation would wither without it. “It won’t last. She’ll dump him in a month.” If you’re wrong, no one will call you on it, because being right or wrong isn’t really the point. The point is that you think he’s not worthy of her, and the prediction is just a way of enhancing your judgment with a pleasant prevision of doom. Unless you’re putting money on it, nothing is at stake except your reputation for wisdom in matters of the heart. If a month goes by and they’re still together, the deadline can be extended without penalty. “She’ll leave him, trust me. It’s only a matter of time.” They get married: “Funny things happen. You never know.” You still weren’t wrong. Either the marriage is a bad one—you erred in the right direction—or you got beaten by a low-probability outcome.

It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton; $35), that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.

“Expert Political Judgment” is not a work of media criticism. Tetlock is a psychologist—he teaches at Berkeley—and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate.

Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
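[The scoring method described above can be made concrete with a toy calculation. The sketch below uses the multi-category Brier score, a standard way to grade probability forecasts (lower is better); the specific forecast profiles and hit rates are invented for illustration, not taken from Tetlock's data. It shows how an overconfident forecaster can score worse than someone who simply splits the odds evenly across the three possible futures.]

```python
# Toy illustration of probability-forecast scoring with the Brier score
# (lower is better). The forecaster profiles and hit rates below are
# invented for illustration; they are not Tetlock's actual data.

def brier(forecast, outcome):
    """Multi-category Brier score.

    forecast: list of probabilities over the three futures
              (status quo, more of something, less of something).
    outcome:  index of the future that actually occurred.
    """
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(forecast))

# "Dart-throwing monkey": equal 1/3 odds on each of the three futures.
# Its score is the same no matter which outcome occurs.
uniform_avg = brier([1/3, 1/3, 1/3], 0)

# Hypothetical overconfident expert: puts 0.8 on a favored outcome
# but is right only 4 times out of 10.
expert_hit  = brier([0.8, 0.1, 0.1], 0)   # favored outcome occurred
expert_miss = brier([0.8, 0.1, 0.1], 1)   # one of the 0.1 outcomes occurred
expert_avg  = (4 * expert_hit + 6 * expert_miss) / 10

print(f"uniform forecaster:   {uniform_avg:.3f}")  # ~0.667
print(f"overconfident expert: {expert_avg:.3f}")   # 0.900 -- worse
```

Under these assumed numbers the even-odds forecaster beats the confident expert, which is the shape of the result Tetlock reports: the experts performed worse than they would have by assigning every outcome a thirty-three-per-cent chance.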

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”

People who are not experts in the psychology of expertise are likely (I predict) to find Tetlock’s results a surprise and a matter for concern. For psychologists, though, nothing could be less surprising. “Expert Political Judgment” is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse. In one study, college counsellors were given information about a group of high-school students and asked to predict their freshman grades in college. The counsellors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate. There are also many studies showing that expertise and experience do not make someone a better reader of the evidence. In one, data from a test used to diagnose brain damage were given to a group of clinical psychologists and their secretaries. The psychologists’ diagnoses were no better than the secretaries’.

Read more at www.newyorker.com

 
