
Evaluating Forecasts
Posted on March 10, 2016 @ 07:19:00 AM by Paul Meagher

This is my third blog related to the book Superforecasting: The Art and Science of Prediction (2015). In my last blog I discussed the importance of updating forecasts rather than just making a forecast and waiting to see whether the forecasted outcome occurs. This naturally leads to the question of how we should evaluate our updated forecasts in light of agreements or discrepancies between predicted and actual outcomes. That is what this blog will attempt to do.

The forecasting example I have chosen to focus on is predicting what my book expenses will be for 2016. I came up with a point estimate of $1920 but pointed out that assigning a probability to a point estimate is tricky and not very useful. Instead, it is more useful to specify a prediction interval [$1920 +- $60] and assign a probability to how likely it is that the actual outcome will fall within that interval (80% probability). Now we have a forecast that is sufficiently specified that we can begin to evaluate our forecasting ability.
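To make this concrete, here is a minimal sketch in Python of how such an interval-plus-probability forecast could be written down and checked against an actual outcome (the variable and function names are my own, not from the book):

```python
# A forecast expressed as a prediction interval plus the probability
# assigned to that interval (names are illustrative only).
forecast = {
    "low": 1920 - 60,   # lower bound of the interval ($1860)
    "high": 1920 + 60,  # upper bound of the interval ($1980)
    "prob": 0.80,       # probability the actual expense lands in the interval
}

def outcome_in_interval(actual, forecast):
    """Return True if the observed outcome falls inside the prediction interval."""
    return forecast["low"] <= actual <= forecast["high"]

print(outcome_in_interval(1935, forecast))  # True
print(outcome_in_interval(2100, forecast))  # False
```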

We can evaluate our financial forecasting ability in terms of whether the probability we assign to an outcome accurately reflects the level of uncertainty we should have in that outcome. If you assign an outcome a high probability (100%) and it doesn't happen, then you should be penalized more than if you had assigned it a lower probability (60%). You were overconfident in your forecasting ability, and when we score your forecast the math should reflect this. If you assign a high probability to an outcome and the outcome happens, then you shouldn't be penalized very much. The way our scoring system will work is that a higher score is bad and a score close to 0 is good. A high score measures the penalty you incur for a poorly calibrated forecast. To feel the pain of a bad forecast we can multiply the penalty score by $100, and the result determines how much money you have to pay out for a bad forecast.

Before I get into the math for assessing how "calibrated" your estimates are, I should point out that this math does not address another aspect of our forecast that we can also evaluate in this case, namely, how good the "resolution" of our forecast is. Currently I am predicting that my 2016 book expenses will be $1920 +- $60; however, as the end of 2016 approaches I might decide to increase the resolution of that forecast to $1920 +- $30 (I might also change the midpoint) if it looks like I am still on track and my forecast might only be off by the cost of 1 book (rather than 2). When we narrow the range of our financial forecasts and the outcome still falls within the range, a scoring system should tell us that we have better resolving power in our forecasts.

The scoring system I will propose addresses both calibration and resolution, and has the virtue that it is very simple and can be applied using mental arithmetic. Some scoring systems can be so complicated that you need to sit down with a computer to use them. David V. Lindley has a nice discussion of Quadratic Scoring in his book Making Decisions (1991). The way Quadratic Scoring works is that you assign a probability to an outcome and, if that outcome happens, you score it using the equation (1-p)², where p is your forecast probability. If the predicted outcome does not happen, then you use the equation p². In both cases a number less than 1 will result, so Lindley advocates multiplying the value returned by 100.
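The rule is only a couple of lines of code. Here is a sketch in Python (the function name is mine; the formula is Lindley's):

```python
def quadratic_penalty(p, outcome_happened):
    """Quadratic scoring rule: (1-p)^2 if the forecasted outcome happens,
    p^2 if it does not, multiplied by 100 as Lindley suggests."""
    if outcome_happened:
        return 100 * (1 - p) ** 2
    return 100 * p ** 2
```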

So, if it turns out that my estimated book expenses for 2016 fall within the interval [$1920 +- $60] and I estimated the probability to be 0.80 (80%), then to compute my penalty for not saying this outcome had a 100% probability, I use the equation (1-p)² = (1-0.8)² = 0.2² = 0.04. Now if I multiply that by 100 I get a penalty score of 4. One way to interpret this is that I only have to pay out $4 for my forecast because it was fairly good. Notice that if my probability was .9 (90%) my payout would be even less ($1), but if it was .6 (60%) it would be quite a bit bigger at $16. So not being confident when I should be results in a bigger penalty.

Conversely, if my estimated book expenses for 2016 didn't fall within the interval [$1920 +- $60] and I estimated the probability to be 0.80 (80%), then to compute my penalty I use the second equation, which is p² = 0.8² = 0.64. Multiply this by 100 and I get a penalty score of $64 that I have to pay out. If my probability estimate was lower, say .60 (60%), then my penalty would be 0.6² = 0.36, times 100 = $36. So being less confident when I'm wrong is better than being confident.
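Using the quadratic_penalty sketch from above, the worked numbers can be checked in a few lines:

```python
# Outcome falls within the interval (good forecasts incur small penalties):
print(quadratic_penalty(0.80, True))   # ~4  -> pay out about $4
print(quadratic_penalty(0.90, True))   # ~1  -> pay out about $1
print(quadratic_penalty(0.60, True))   # ~16 -> pay out about $16

# Outcome falls outside the interval (overconfidence is punished):
print(quadratic_penalty(0.80, False))  # ~64 -> pay out about $64
print(quadratic_penalty(0.60, False))  # ~36 -> pay out about $36
```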

The quadratic scoring rule is summarized in this table:

  Forecast probability p, outcome occurs:          penalty = (1 - p)² x 100
  Forecast probability p, outcome does not occur:  penalty = p² x 100

Source: David Lindley, Making Decisions (1991), p. 24

I hope you will agree that the Quadratic Scoring Rule usefully reflects how penalties should be calculated when we compare our forecasts to actual outcomes. It measures how "calibrated" our probability assignments are to whether the events they predict actually happen. In cases where we are not predicting numerical outcomes, this scoring system would be all we need to evaluate the goodness of our forecasts. Our prediction problem, however, is a numerical prediction problem, so we also need to concern ourselves with how good the resolution of our forecast is.

Intuitively, if our prediction interval is narrower and the actual outcome falls within it, we consider this a better forecast than one with a wider prediction interval. My proposal is simply to measure the width of your interval and add it to your quadratic score. So if my prediction interval is [$1920 +- $60] with 80% confidence and I am correct, then my overall score is 4 (see previous calculation) plus the width of the interval, which is 120. Let's convert this all to dollars: our overall penalty is $4 + $120 = $124. If we narrow our prediction interval to $1920 +- $30, then we get $4 + $60 = $64 as our penalty score.
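As a small extension of the earlier sketch, the combined penalty (quadratic score plus interval width) might look like this, with the function and argument names being my own:

```python
def total_penalty(p, halfwidth, outcome_in_interval):
    """Combined penalty: quadratic (calibration) penalty plus the full width
    of the prediction interval (resolution penalty), both in dollars."""
    return quadratic_penalty(p, outcome_in_interval) + 2 * halfwidth

print(total_penalty(0.80, 60, True))  # ~124 -> $4 + $120
print(total_penalty(0.80, 30, True))  # ~64  -> $4 + $60
```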

In an ideal world we would make exact forecasts (+- 0 as our range) with complete confidence (100%), and the forecasted outcomes would happen exactly as predicted. In that world our penalty scores would be 0. In the real world, however, our predictions often have calibration or resolution issues, so most predictions incur some penalty. It might help to think of this as a cost you have to pay to someone because your predictions are not as perfect as they could be.

With this scoring system you can check in on your forecasts at some midway point to see how you are doing. If you update your forecast, what you are looking for is a reduced penalty score at the next check-in. How much your penalty score improves tells you whether your updates are on the right track. Generally your penalty scores should go down if you update your forecasts on a regular basis like superforecasters do. Superforecasters are quite interested in evaluating how their forecasts are progressing, and some simple math like this helps them figure out how well they are doing.
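One way to read this check-in idea in code (my own interpretation, reusing the sketches above and a made-up final outcome) is to score each updated forecast once the actual outcome is known and watch the penalty fall:

```python
actual = 1900  # hypothetical final 2016 book expense (made up for illustration)

# Each check-in: (interval midpoint, half-width, probability assigned)
updates = [
    (1920, 90, 0.70),
    (1920, 60, 0.80),
    (1910, 30, 0.85),
]

for midpoint, halfwidth, p in updates:
    hit = (midpoint - halfwidth) <= actual <= (midpoint + halfwidth)
    print(round(total_penalty(p, halfwidth, hit), 2))  # ~189, ~124, ~62.25
```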

A book that is on my priority list to read is Simple Rules: How to Thrive in a Complex World (2015). The authors argue that it is often a mistake to use complex rules to solve complex problems (which forecasting problems often are). They document how simple rules are often effective substitutes and can be used more flexibly. It is possible to be more sophisticated in how we evaluate forecasts, but this sophistication comes at a price: the inability to quickly and easily evaluate forecasts in the real world. We often don't need extra sophistication if our goal is to easily evaluate forecasts in order to get some useful feedback and produce better forecasts. I would challenge you to come up with a simpler method for evaluating financial forecasts that is as useful.

If you want to learn more about the motivations, applications and techniques for forecasting, I would recommend the open textbook Forecasting: Principles and Practice.
