Expert Political Judgment: How Good Is It? How Can We Know? - Softcover

Tetlock, Philip E.

Synopsis

Since its original publication, Expert Political Judgment by New York Times bestselling author Philip Tetlock has established itself as a contemporary classic in the literature on evaluating expert opinion. Now with a new preface in which Tetlock discusses the latest research in the field, the book explores what constitutes good judgment in predicting future events and looks at why experts are often wrong in their forecasts.

The synopsis may refer to a different edition of this title.

About the Author

Philip E. Tetlock is the Annenberg University Professor at the University of Pennsylvania. His books include Counterfactual Thought Experiments in World Politics (Princeton).

From the Back Cover

"This book is a landmark in both content and style of argument. It is a major advance in our understanding of expert judgment in the vitally important and almost impossible task of political and strategic forecasting."--Daniel Kahneman, Princeton University, Nobel Laureate in Economics

"It is the somewhat gratifying lesson of Philip Tetlock's new book … that people who make prediction their business … are no better than the rest of us."--Louis Menand, New Yorker"The definitive work on this question." --Gavyn Davies, Financial Times

"[This] book … marshals powerful evidence to make [its] case.Expert Political Judgment… summarizes the results of a truly amazing research project…. The question that screams out from the data is why the world keeps believing that ‘experts' exist at all."--Geoffrey Colvin, Fortune

"This is a marvelous book--fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed."--Robert Jervis, Columbia University

"This book is just what one would expect from America's most influential political psychologist: Intelligent, important, and closely argued. Both science and policy are brilliantly illuminated by Tetlock's fascinating arguments."--Daniel Gilbert, Harvard University

Excerpt. © Reprinted by permission. All rights reserved.

Expert Political Judgment

How Good Is It? How Can We Know?

By Philip E. Tetlock

PRINCETON UNIVERSITY PRESS

Copyright © 2005 Princeton University Press
All rights reserved.
ISBN: 978-0-691-17597-3

Contents

Acknowledgments ix
Preface xi
Preface to the 2017 Edition xvii
CHAPTER 1 Quantifying the Unquantifiable 1
CHAPTER 2 The Ego-deflating Challenge of Radical Skepticism 25
CHAPTER 3 Knowing the Limits of One's Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs 67
CHAPTER 4 Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs 121
CHAPTER 5 Contemplating Counterfactuals: Foxes Are More Willing than Hedgehogs to Entertain Self-subversive Scenarios 144
CHAPTER 6 The Hedgehogs Strike Back 164
CHAPTER 7 Are We Open-minded Enough to Acknowledge the Limits of Open-mindedness? 189
CHAPTER 8 Exploring the Limits on Objectivity and Accountability 216
Methodological Appendix 239
Technical Appendix, by Phillip Rescober and Philip E. Tetlock 273
Index 313


CHAPTER 1

Quantifying the Unquantifiable

I do not pretend to start with precise questions. I do not think you can start with anything precise. You have to achieve such precision as you can, as you go along.

— Bertrand Russell


Every day, countless experts offer innumerable opinions in a dizzying array of forums. Cynics groan that expert communities seem ready at hand for virtually any issue in the political spotlight — communities from which governments or their critics can mobilize platoons of pundits to make prepackaged cases on a moment's notice.

Although there is nothing odd about experts playing prominent roles in debates, it is odd to keep score, to track expert performance against explicit benchmarks of accuracy and rigor. And that is what I have struggled to do in twenty years of research soliciting and scoring experts' judgments on a wide range of issues. The key term is "struggled." For, if it were easy to set standards for judging judgment that would be honored across the opinion spectrum, and not glibly dismissed as another sneaky effort to seize the high ground for a favorite cause, someone would have patented the process long ago.


The current squabble over "intelligence failures" preceding the American invasion of Iraq is the latest illustration of why some esteemed colleagues doubted the feasibility of this project all along and why I felt it essential to push forward anyway. As I write, supporters of the invasion are on the defensive: their boldest predictions of weapons of mass destruction and of minimal resistance have not been borne out.

But are hawks under an obligation — the debating equivalent of Marquess of Queensberry rules — to concede they were wrong? The majority are defiant. Some say they will yet be proved right: weapons will be found (so be patient), or Baathists snuck the weapons into Syria (so broaden the search). Others concede that, yes, we overestimated Saddam's arsenal, but we made the right mistake: given what we knew back then — the fragmentary but ominous indicators of Saddam's intentions — it was prudent to over- rather than underestimate him. Yet others argue that the ends justify the means: removing Saddam will yield enormous long-term benefits if we just stay the course. The know-it-all doves display a double failure of moral imagination. Looking back, they do not see how terribly things would have turned out in the counterfactual world in which Saddam remained ensconced in power (and France wielded de facto veto power over American security policy). Looking forward, they do not see how wonderfully things will turn out: freedom, peace, and prosperity flourishing in lieu of tyranny, war, and misery.

The belief system defenses deployed in the Iraq debate bear suspicious similarities to those deployed in other controversies sprinkled throughout this book. But documenting defenses, and the fierce conviction behind them, serves a deeper purpose. It highlights why, if we want to stop running into ideological impasses rooted in each side's insistence on scoring its own performance, we need to start thinking more deeply about how we think. We need methods of calibrating expert performance that transcend partisan bickering and check our species' deep-rooted penchant for self-justification.
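A standard way to make "calibrating expert performance" concrete, and the idea behind the calibration and discrimination scores in chapter 3's title, is to decompose a probability-scoring rule such as the Brier score. The Python sketch below shows the classic Murphy decomposition; it illustrates the technique rather than reproducing the book's exact scoring procedure, and the function and variable names are my own.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Murphy decomposition of the mean Brier score:
    Brier = calibration - discrimination + uncertainty.

    forecasts -- probabilities in [0, 1] assigned to an event
    outcomes  -- 1 if the event occurred, 0 otherwise
    """
    n = len(forecasts)
    base_rate = sum(outcomes) / n

    # Group outcomes by the probability the forecaster assigned,
    # rounded here to a 0.0, 0.1, ..., 1.0 response scale.
    buckets = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        buckets[round(f, 1)].append(o)

    calibration = 0.0     # lower is better: do stated odds match reality?
    discrimination = 0.0  # higher is better: do odds separate outcomes?
    for f, obs in buckets.items():
        hit_rate = sum(obs) / len(obs)
        calibration += len(obs) * (f - hit_rate) ** 2 / n
        discrimination += len(obs) * (hit_rate - base_rate) ** 2 / n

    uncertainty = base_rate * (1 - base_rate)
    return calibration, discrimination, uncertainty
```

Calibration measures how far stated probabilities drift from observed frequencies; discrimination measures how sharply those probabilities separate events that happen from events that do not. Scores of this kind give the "explicit benchmarks of accuracy" that the studies apply across the opinion spectrum.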

The next two sections of this chapter wrestle with the complexities of setting standards for judging judgment. The final section previews what we discover when we apply these standards to experts in the field, asking them to predict outcomes around the world and to comment on their own and rivals' successes and failures. These regional forecasting exercises generate winners and losers, but they are not clustered along the lines that partisans of the left or right, or of fashionable academic schools of thought, expected. What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled with a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin's prototypical fox — those who "know many little things," draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life — than to Berlin's hedgehogs — those who "know one big thing," toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems. The net result is a double irony: a perversely inverse relationship between the indicators of good judgment documented here and the qualities the media prizes in pundits — the tenacity required to prevail in ideological combat — and the qualities science prizes in scientists — the tenacity required to reduce superficial complexity to underlying simplicity.
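The "willingness to admit mistakes" has a precise benchmark in chapter 4's reputational bets: when the forecast period ends, does an expert revise a belief as much as Bayes' rule says a dispassionate bettor should? A toy sketch of that benchmark follows; the numbers are invented for illustration, not taken from the book's data.

```python
def bayes_update(prior, p_evidence_if_right, p_evidence_if_wrong):
    """Posterior belief in a forecast after the evidence arrives."""
    joint_right = prior * p_evidence_if_right
    joint_wrong = (1 - prior) * p_evidence_if_wrong
    return joint_right / (joint_right + joint_wrong)

# A confident (0.9) forecast meets evidence three times more likely if
# the forecast is wrong than if it is right. A dispassionate Bayesian
# bettor should retreat to 0.75; the reputational-bet exercises ask
# whether experts actually concede that much when events go against them.
posterior = bayes_update(0.9, p_evidence_if_right=0.2, p_evidence_if_wrong=0.6)
print(f"{posterior:.2f}")  # 0.75
```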


Here Lurk (The Social Science Equivalent of) Dragons

It is a curious thing. Almost all of us think we possess it in healthy measure. Many of us think we are so blessed that we have an obligation to share it. But even the savvy professionals recruited from academia, government, and think tanks to participate in the studies collected here struggle to define it. When pressed for a precise answer, a disconcerting number fell back on Potter Stewart's famous definition of pornography: "I know it when I see it." And, of those participants who ventured beyond the transparently tautological, a goodly number offered definitions that were in deep, even irreconcilable, conflict. However we set up the spectrum of opinion — liberals versus conservatives, realists versus idealists, doomsters versus boomsters — we found little agreement on either who had it or what it was.

The elusive it is good political judgment. And some reviewers warned that, of all the domains I could have chosen — many, like medicine or finance, endowed with incontrovertible criteria for assessing accuracy — I showed suspect scientific judgment in choosing good political judgment. In their view, I could scarcely have chosen a topic more hopelessly subjective and less suitable for scientific analysis. Future professional gatekeepers should do a better job stopping scientific interlopers, such as the author, from wasting everyone's time — perhaps by posting the admonitory sign that medieval mapmakers used to stop explorers from sailing off the earth: hic sunt...

"About this title" may refer to a different edition of this title.
