I teamed up with my friend Adam Siegel, whose company, Cultivate, ran the ICPM, to write this piece about why these forecasting tools produce more rigorous analysis and clearer assessments, for both the analysts who contribute and the policymakers who read them. With so much buzz around Polymarket, if you are wondering whether governments and businesses could benefit from tailored solutions, the answer is yes.
In the run-up to the U.S. decision on whether to join Israel in bombing Iranian nuclear sites, there was reporting about disagreements among the U.S. intelligence community (IC), the White House, and the Israeli Prime Minister over whether Iran was actively working to build a nuclear bomb. Netanyahu claimed his own intelligence showed they absolutely were, and were just a few months from completion, while U.S. intelligence was saying something different.
This reported discrepancy underscores the value of the U.S. Intelligence Community’s Prediction Market (ICPM), the internal forecasting capability powered by Cultivate, which operated for more than a decade until its cancellation in 2020. While think tanks like CSIS and Perry World House at the University of Pennsylvania have recommended a return to an internal forecasting capability inside the U.S. Government, no active internal effort exists today, to our knowledge.
To understand how the ICPM was used as part of the IC’s analysis, we first need to understand how the IC is set up organizationally to process analysis and ultimately send it to the President and other principal decision makers.
Typically, when people think of the IC, they just think of the CIA, but there are actually 18 intelligence agencies in the U.S. Government. After 9/11, the government created a new role, the Director of National Intelligence (DNI), to better coordinate intelligence and cooperation among the agencies, after the IC failed to provide sufficient tactical warning to prevent the attacks, in large part because of poor information sharing. The new Office of the Director of National Intelligence (ODNI) inherited the National Intelligence Council (NIC), the IC’s internal think tank and its sole interagency analytic production office. Since its inception in 1979, the NIC has been producing National Intelligence Estimates and other assessments to provide IC-coordinated views to inform the President, senior policymakers, and the military.
“National Intelligence Officers” (NIOs), who have deep subject-matter expertise, lead these assessments for their assigned regional or functional areas. Such analysis is expected to be the gold standard for analytic tradecraft and rigor. Engagement with experts outside the government is a core part of this rigor, designed to broaden IC knowledge and insight.
While the NIC has always led the way in the tradecraft that underpins IC analysis, several spectacular and regrettable intelligence failures underscore that it is not perfect, and clarity in communicating analytic judgments has never been the IC’s strong suit. The challenge, as research has shown, is that the judgments articulated in analysis delivered to stakeholders lack specificity. Probabilistic words like “probably” and “likely” carry wide and varying meanings, both to readers and to the analysts who coordinated and agreed to them.
During its run, the ICPM proved a trailblazer for today’s widely embraced crowd-sourced forecasting and prediction markets, and several National Intelligence Officers used it to strengthen both the rigor behind their judgments and the clarity in communicating them. By posing forecast questions to experts across IC agencies and time zones, the ICPM allowed NIOs to surface and clarify analytic consensus and disagreement, the reasoning and intelligence underpinning forecasts, and early warnings or signals that only a minority of analysts were picking up.
For example, a typical judgment might say “x country is likely/unlikely to agree to give up its nuclear program.” But neither the analysts who approved this language nor its readers share a clear understanding of what “likely” or “unlikely” means. With the ICPM, an NIO could boost analytic rigor and clarity by asking analysts for a numerical forecast on a particular question and their rationale for it. The NIO might then lead discussions about the extent to which intelligence reporting and reasoning supported or contradicted the forecasts, and why.
Taking this theoretical example further, an NIO under pressure to deliver an assessment about a country’s willingness to give up its nuclear program could solicit a numeric probability from each agency instead of coordinating written submissions that use vague probabilistic language. The NIO might immediately see that most agencies were hovering around a similar probability while a handful clustered around a very different view. With that information, the NIO could do two things: 1) quantify and represent the contrarian viewpoint, a critical and often under-reported aspect of analysis for policymakers, and 2) pull apart the basis for the contrarian view and examine whether and how it might be correct, which could inform more creative and successful policies.
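To make this concrete, here is a minimal sketch in Python of how collecting numeric probabilities makes the consensus and a contrarian cluster visible at a glance. The agency names, probabilities, and threshold below are invented for illustration and do not reflect any real ICPM question or method.

```python
from statistics import median

# Hypothetical forecasts: each agency's probability that the country
# gives up its nuclear program. All values are invented.
forecasts = {
    "Agency A": 0.15,
    "Agency B": 0.20,
    "Agency C": 0.18,
    "Agency D": 0.22,
    "Agency E": 0.70,
    "Agency F": 0.65,
}

# Use the median as a simple, outlier-resistant consensus estimate.
consensus = median(forecasts.values())

# Flag forecasts far from the consensus as a potential contrarian cluster
# worth probing for its underlying reasoning and intelligence.
THRESHOLD = 0.25  # arbitrary illustrative cutoff
contrarians = {
    agency: p for agency, p in forecasts.items()
    if abs(p - consensus) > THRESHOLD
}

print(f"Consensus (median): {consensus:.2f}")
print(f"Contrarian cluster: {sorted(contrarians)}")
```

Note the design choice: a median rather than a mean keeps the consensus estimate from being dragged toward the minority view, so the contrarian cluster stands out instead of being averaged away.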
The ICPM was ultimately decommissioned in 2020 for reasons that appear bureaucratic, cultural, and structural. Nevertheless, the concept of the ICPM and the lessons learned from it live on at the RAND Forecasting Initiative and through Cultivate’s work with allied nations outside the U.S.
In intelligence analysis, as in business or financial analysis, there is rarely certainty. Yet leaders (and organizations) who have the humility and rigor to quantify their beliefs, identify areas of consensus and disagreement, and use the results for more rigorous analysis and debate to reduce uncertainty, are far more likely to have the advantage over their competitors. They just need to be willing to ask in the first place.