Assessing the accuracy of predictions

The calibration curve for the accuracy of a probabilistic forecast is aggregated into three metrics: decisiveness, accuracy, and robustness. The y-axis is the forecasted probability; the x-axis is a histogram of the source data used to test the decision algorithm. The green and orange curves show the histogram of model-predicted versus source-measured probabilities. The large brown bubble marks the average accuracy of the forecast, measured by the geometric mean of the probabilities. The smaller bubble above it indicates decisiveness, measured by the arithmetic mean of the probabilities, and the bubble below it indicates robustness, measured by the -2/3 generalized mean. The left graph shows an under-confident algorithm: its forecasted probabilities are closer to 0.5 than actually occurs in the test sample. The right graph shows an over-confident algorithm, whose forecasted probabilities are closer to 0 and 1 than the test sample distribution.

Entropy 2017, 19(6), 286; doi:10.3390/e19060286
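
For concreteness, here is a minimal sketch of how these three summary metrics could be computed from the probabilities a model assigned to the outcomes that actually occurred. All three are instances of the generalized (power) mean with exponents 1, 0, and -2/3 respectively; the function name `generalized_mean` and the sample probabilities below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def generalized_mean(probs, r):
    """Generalized (power) mean of the probabilities assigned to the outcomes that occurred."""
    probs = np.asarray(probs, dtype=float)
    if r == 0:
        # The r -> 0 limit of the power mean is the geometric mean.
        return np.exp(np.mean(np.log(probs)))
    return np.mean(probs ** r) ** (1.0 / r)

# Hypothetical probabilities a model assigned to the events that actually occurred.
p = np.array([0.9, 0.7, 0.6, 0.8, 0.55])

decisiveness = generalized_mean(p, 1.0)      # arithmetic mean (r = 1)
accuracy     = generalized_mean(p, 0.0)      # geometric mean  (r = 0)
robustness   = generalized_mean(p, -2 / 3)   # r = -2/3

print(f"decisiveness={decisiveness:.3f}, accuracy={accuracy:.3f}, robustness={robustness:.3f}")
```

Because the power mean is non-decreasing in its exponent, decisiveness is always at least the accuracy, and the accuracy is always at least the robustness, which is why the decisiveness bubble sits above and the robustness bubble sits below the accuracy bubble in the figure.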

[Figure: over- vs. under-confident model calibration]
