ABSTRACT
Across a wide range of tasks, research has shown that people make poor probabilistic predictions of future events. Recently, the U.S. Intelligence Community sponsored a series of forecasting tournaments designed to explore the best strategies for generating accurate subjective probability estimates of geopolitical events. In this article, we describe the winning strategy: culling off top performers each year and assigning them into elite teams of superforecasters. Defying expectations of regression toward the mean 2 years in a row, superforecasters maintained high accuracy across hundreds of questions and a wide array of topics. We find support for four mutually reinforcing explanations of superforecaster performance: (a) cognitive abilities and styles, (b) task-specific skills, (c) motivation and commitment, and (d) enriched environments. These findings suggest that superforecasters are partly discovered and partly created, and that the high-performance incentives of tournaments highlight aspects of human judgment that would not come to light in laboratory paradigms focused on typical performance.
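Accuracy in geopolitical forecasting tournaments of this kind is conventionally scored with the Brier score, the squared difference between the stated probability and the 0/1 outcome. The abstract does not define the metric, so the following minimal sketch is an illustration, not material from the study:

```python
def brier_score(forecast_prob, outcome):
    """Squared error between a stated probability and the 0/1 outcome.

    Lower is better: 0.0 is a perfectly confident correct forecast, and a
    noncommittal 0.5 forecast always scores 0.25 regardless of the outcome.
    """
    return (forecast_prob - outcome) ** 2

brier_score(0.5, 1)  # 0.25: hedging at 50% caps the penalty but earns no credit
brier_score(0.9, 1)  # ~0.01: a confident, correct forecast scores near zero
```

Averaging this score over many resolved questions gives the accuracy measure against which superforecasters defied regression toward the mean.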
Subject(s)
Forecasting , Judgment , Area Under Curve , Cognition , Environment , Forecasting/methods , Humans , Learning , Models, Psychological , Motivation , Probability , ROC Curve , Time
ABSTRACT
Five university-based research groups competed to recruit forecasters, elicit their predictions, and aggregate those predictions to assign the most accurate probabilities to events in a 2-year geopolitical forecasting tournament. Our group tested and found support for three psychological drivers of accuracy: training, teaming, and tracking. Probability training corrected cognitive biases, encouraged forecasters to use reference classes, and provided heuristics, such as averaging when multiple estimates were available. Teaming allowed forecasters to share information and discuss the rationales behind their beliefs. Tracking placed the highest performers (the top 2% from Year 1) in elite teams that worked together. Results showed that probability training, team collaboration, and tracking each improved both calibration and resolution. Forecasting is often viewed as a statistical problem, but these psychological interventions dramatically increased the accuracy of forecasts, and statistical aggregation algorithms (reported elsewhere) improved accuracy further. Putting both statistics and psychology to work produced the best forecasts 2 years in a row.
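The calibration and resolution that the interventions improved can be made concrete through the Murphy decomposition of the mean Brier score, a standard way to split forecast accuracy into components. The data and function name below are illustrative assumptions of mine, not taken from the study:

```python
from collections import defaultdict

def murphy_decomposition(forecasts, outcomes):
    """Decompose the mean Brier score as reliability - resolution + uncertainty.

    Reliability measures calibration error (lower is better: stated
    probabilities should match observed outcome rates). Resolution measures
    how far each forecast bin's outcome rate departs from the base rate
    (higher is better). Forecasts are binned by their exact stated probability.
    """
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)
    reliability = sum(len(os) * (f - sum(os) / len(os)) ** 2
                      for f, os in bins.items()) / n
    resolution = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2
                     for os in bins.values()) / n
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

# Illustrative data: five stated probabilities and their 0/1 resolutions.
forecasts = [0.8, 0.8, 0.3, 0.3, 0.5]
outcomes = [1, 1, 0, 1, 0]
rel, res, unc = murphy_decomposition(forecasts, outcomes)
mean_brier = sum((f - o) ** 2
                 for f, o in zip(forecasts, outcomes)) / len(forecasts)
# Identity check: mean_brier equals rel - res + unc
```

An intervention that "improves both calibration and resolution" lowers the reliability term while raising the resolution term, so the mean Brier score falls on both counts.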