
Making Risk Models More Understandable

A new approach to assessing uncertainty

by Andrea Costa

When making decisions in high-stakes environments—whether it’s planning a space mission, preparing for natural disasters, or responding to a pandemic—experts rely on complex computer models to predict risks and outcomes. These models process vast amounts of data and simulate different scenarios, but they often function as "black boxes": they produce results without making it clear how different factors influence those results. This lack of transparency can be a major problem, especially when decisions affect lives, safety, and financial investments.

Emanuele Borgonovo and Antonio De Rosa of Bocconi’s Department of Decision Sciences set out to address this challenge. Their new paper, “Direction of impact for explainable risk assessment modeling”, written with Manel Baucells (University of Virginia), Elmar Plischke (Institute of Resource Ecology, Helmholtz-Zentrum Dresden), and John Barr and Herschel Rabitz (both of Princeton University), focuses on improving the way risk models are interpreted, making it easier for decision-makers to understand which factors have the biggest impact on an outcome. Instead of just trusting the numbers a model generates, the authors explore different graphical tools: visual representations that help analysts see how changes in input variables (such as the likelihood of a technical failure in a spacecraft or the transmission rate of a virus) affect the overall risk assessment.

The study examines several popular methods used to visualize these relationships, testing whether they provide clear and reliable insights. Some traditional techniques, like tornado diagrams, have been widely used in risk analysis for years. These diagrams show how much an outcome changes when each variable shifts between its highest and lowest values. However, the researchers found that these methods can be misleading when dealing with more complex models, where multiple factors interact in unpredictable ways.
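To make the one-at-a-time logic concrete, here is a minimal sketch (not taken from the paper) of how the swings behind a tornado diagram can be computed and plotted. The toy risk model and the input ranges are purely illustrative assumptions.

```python
import matplotlib.pyplot as plt

# Toy risk model (illustrative assumption): any function mapping inputs
# to a scalar risk figure could stand in its place.
def risk_model(x1, x2, x3):
    return 0.02 * x1 + 0.05 * x2 * x3 + 0.01 * x3 ** 2

# Hypothetical (low, base, high) values for each input.
ranges = {
    "x1": (0.1, 0.5, 0.9),
    "x2": (0.2, 0.4, 0.8),
    "x3": (0.1, 0.3, 0.6),
}
base = {name: vals[1] for name, vals in ranges.items()}

# One-at-a-time swings: move each input from its low to its high value
# while holding every other input at its base value.
swings = {}
for name, (lo, _, hi) in ranges.items():
    swings[name] = (
        risk_model(**{**base, name: lo}),
        risk_model(**{**base, name: hi}),
    )

# Sort by swing width and draw horizontal bars: the familiar "tornado".
order = sorted(swings, key=lambda n: abs(swings[n][1] - swings[n][0]))
fig, ax = plt.subplots()
for i, name in enumerate(order):
    lo_out, hi_out = swings[name]
    ax.barh(i, hi_out - lo_out, left=lo_out)
ax.set_yticks(range(len(order)))
ax.set_yticklabels(order)
ax.set_xlabel("model output (risk)")
plt.show()
```

The bars rank inputs by how much the output swings when each one moves alone; the limitation the researchers point to is that this ranking can mislead when inputs interact, because the one-at-a-time swings never vary two inputs together.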

To find a more reliable approach, they turn to methods from the world of machine learning and artificial intelligence. One of the most promising tools they examine is the Partial Dependence (PD) function, which reveals the average effect of a particular variable on the model’s outcome. Unlike simpler tools, PD functions remain reliable even when dealing with intricate relationships between variables, making them particularly useful for risk assessment. However, the study also highlights situations where analysts should be cautious—for example, when models have been trained on limited data, which can lead to misleading extrapolations.
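As a rough illustration (again, not the authors' code), a partial dependence curve can be estimated by sweeping one input over a grid and averaging the model's output over the observed values of all the other inputs. The stand-in model and data below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in risk model and input data (assumptions for this sketch); in
# practice these would be the fitted model and its evaluation inputs.
def model(X):
    return 0.5 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.2 * X[:, 0] * X[:, 2]

X = rng.uniform(0.0, 1.0, size=(1000, 3))

def partial_dependence(model, X, feature, grid):
    """Average model output with `feature` clamped to each grid value."""
    averages = []
    for value in grid:
        X_clamped = X.copy()
        X_clamped[:, feature] = value              # fix the input of interest
        averages.append(model(X_clamped).mean())   # average out the others
    return np.array(averages)

grid = np.linspace(0.0, 1.0, 25)
pd_curve = partial_dependence(model, X, feature=0, grid=grid)

# Where pd_curve rises, the averaged effect of input 0 pushes risk up;
# where it falls, it pushes risk down: the "direction of impact".
```

Because each grid value is averaged over the whole dataset, stretching the grid beyond the range of the observed inputs forces the model to extrapolate, which is exactly the situation the study flags as a source of misleading conclusions.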

To illustrate these ideas in action, the researchers apply their methods to two real-world case studies. The first involves NASA's probabilistic safety assessment models, which help evaluate the risks associated with lunar space missions. By using advanced visualization techniques, they show how decision-makers can gain clearer insights into which technical factors pose the greatest risks. The second application looks at an epidemiological model from the early phase of the COVID-19 pandemic, analyzing how different factors influenced disease spread. In both cases, the study demonstrates how the right analytical approach can make risk models more transparent, helping experts make better-informed decisions.

The findings of this research are particularly relevant in an era where machine learning and AI are playing an increasingly large role in risk assessment. While these technologies offer powerful new tools, they also introduce new challenges—especially when it comes to making their predictions interpretable.

Contacts

EMANUELE BORGONOVO

Bocconi University
Department of Decision Sciences

ANTONIO DE ROSA

Bocconi University
Department of Decision Sciences