This example is certainly not the first such failure, and sadly, it will not be the last. The Tet Offensive, the misidentification of a weapons of mass destruction program in Iraq, and 9/11 are among the U.S. failures that had similarly grave consequences. Hundreds of examples, big and small, have vexed intelligence communities around the world. Each is unique, but most come down to human, social, and cultural shortcomings.
Strategic surprises like the crisis unfolding in Gaza do, however, offer critical opportunities for learning, and our paper explores how an alternative approach might have helped avoid several key gaps that contributed to the intelligence failure. We add this caveat: Israel has yet to conduct a full investigation of the intelligence failure surrounding 7 October, and readers should note that some of the reports and accounts used in this paper may prove inaccurate or only part of a fuller story.
We begin by highlighting how difficult it is for analysts and policymakers to challenge their frameworks and models of the world through rigorous review and revision. Turning to the real-world strategic surprise of 7 October, we identify how broadly held assumptions about Hamas’ capabilities and intentions led senior Israeli leaders to discount or dismiss actual signals and warnings of a potential attack. We close with a counterfactual exercise that considers how a method called crowdsourced strategic forecasting might have helped Israel avoid those particular errors, with the goal of providing tools to guard against future strategic surprise.
Crowdsourced strategic forecasting is the process of soliciting ongoing quantitative forecasts (e.g., probabilities) and qualitative rationales about the likelihood of future events and risks from a large, organizationally and demographically diverse group of people, and then aggregating them into a “crowd” forecast. The outputs of this process can help analysts and decision makers by prompting them to consider multiple scenarios that challenge their assumptions, craft trackable forecast questions that inform the likelihood of alternative scenarios, identify areas of consensus and disagreement among different organizations and cohorts, flag minority views and “weak signals,” and provide feedback by measuring the accuracy of individual and collective forecasts.
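To make the aggregation step concrete, the sketch below shows one way a crowd forecast might be computed from individual probabilities. The median pool, geometric mean of odds, extremization exponent, and minority-view threshold are common choices from the forecasting literature that we use here purely for illustration; they do not describe any particular platform or the Israeli intelligence process.

```python
from statistics import median

def aggregate_forecasts(probs, extremize=1.5):
    """Aggregate individual probability forecasts into a crowd forecast.

    probs: probabilities in (0, 1), one per forecaster.
    extremize: exponent pushing the pooled odds away from 0.5, a common
        (illustrative) correction for uncertainty shared across the crowd.
    """
    # Median pool: robust to a handful of extreme forecasters.
    crowd_median = median(probs)

    # Geometric mean of odds, then extremized.
    odds = [p / (1 - p) for p in probs]
    gmo = 1.0
    for o in odds:
        gmo *= o
    gmo **= 1 / len(odds)
    e = gmo ** extremize
    pooled = e / (1 + e)

    return crowd_median, pooled

def minority_views(probs, threshold=0.25):
    """Flag forecasters whose view differs sharply from the crowd median:
    a simple stand-in for the minority views and "weak signals" described
    in the text. The threshold is an assumption for this example."""
    m = median(probs)
    return [(i, p) for i, p in enumerate(probs) if abs(p - m) >= threshold]

# Example: ten forecasters, two of whom see much higher risk than the rest.
forecasts = [0.05, 0.08, 0.10, 0.10, 0.12, 0.15, 0.15, 0.20, 0.45, 0.55]
print(aggregate_forecasts(forecasts))  # headline crowd forecast
print(minority_views(forecasts))       # the two dissenting forecasters
```

The point of the sketch is that the same set of inputs yields both a headline probability and a list of dissenting views worth investigating, which is how a crowd forecast can surface weak signals rather than averaging them away.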