California Investment Network



 BLOG >> March 2016

Simple Rules History [Agriculture]
Posted on March 31, 2016 @ 08:15:00 AM by Paul Meagher

In my last blog I introduced the idea of simple rules and much of that blog focused on Herb Simon's thoughts on why simple rules are necessary (e.g., bounded rationality, limited cognitive capacity, pervasive uncertainty).

What can be confusing for people studying simple rules, or cognitive heuristics, is that you will encounter two different research programs on the use of heuristics in reasoning, and they offer different assessments of the value of heuristic reasoning.

One research program starts with Nobel laureate Herb Simon and his views on the importance of heuristics for achieving adaptation to the uncertain environment we find ourselves in. This could be called the "Ecological Rationality" research program.

Another research program starts with Nobel laureate Daniel Kahneman and his collaborator Amos Tversky and focuses on another aspect of heuristics, namely, that they can be fallible: when we use them we don't perform as well as normative rational models confronted with the same information. This could be called the "Heuristics and Biases" research program.

So in one research program (i.e., Ecological Rationality) you have heuristics portrayed as an adaptive, and sometimes optimal, way to deploy our limited cognitive resources to address everyday uncertainties.

In another research program (i.e., Heuristics & Biases) heuristics are portrayed as a major source of fallacious reasoning that we might be able to correct by becoming aware of these heuristic biases.

These are very different ways to regard the value of heuristics, and recent successful business books on behavioral economics have been more focused on showing the downsides of heuristics rather than the upsides. They teach us about heuristics so that we can be wary of supposedly common heuristic reasoning biases.

The notion of Simple Rules is potentially a way to avoid some of the baggage associated with the term heuristics but it is clearly a research program inspired by Herb Simon's framing of the role of heuristics in problem solving as more adaptive than flawed.

The book Simple Rules (2015) is an important contribution to the "Ecological Rationality" research program, especially as it pertains to business and entrepreneurial challenges. Always looming in the background when I read this stuff is the research program of Gerd Gigerenzer, who has been one of the main torch bearers for Herb Simon's bounded rationality research program. A recent paper he co-authored, Heuristics as adaptive decision strategies in management (2014, PDF download), offers a nice account of how the Herb Simon research program has played out in the field of management.

So the point of this blog is to highlight the different influential research programs that have grown up around the notion of heuristic reasoning, and to identify which research program the Simple Rules approach best relates to (i.e., the bounded rationality or ecological rationality research program). You can get very confused if you search out research on heuristics and you don't know this history.

It should also be noted, however, that in a lot of the cognitive literature on heuristics these reasoning strategies are often viewed as baked into the hardware of our brains, whereas Simple Rules are more like high-level rules that we consciously formulate and choose to follow or not. They are also often more domain specific than the notion of cognitive heuristics. These are interesting aspects of the Simple Rules notion that make it not quite the same as the traditional notion of cognitive heuristics, and that is one reason why the notion of Simple Rules potentially interests me - it may provide a more expanded way to account for productive competence in business than just relying upon Gerd's notion of fast and frugal reasoning strategies, which are not as domain specific or consciously adopted.

I'll bring this discussion down to earth in my next blog when we discuss the use of Simple Rules as a way to approach thinking about how a business strategy should be formulated.


Introducing Simple Rules [Decision Making]
Posted on March 29, 2016 @ 07:26:00 AM by Paul Meagher

In today's blog I want to set some groundwork for future blogs related to the Simple Rules (2015) book that I'm currently reading. You can also read the article Simple Rules for a Complex World for a synopsis of some of the ideas contained in the book. The book advocates the use of domain-specific simple rules to manage decision making in those domains. For example, instead of trying to compute an optimal diet using food databases and combinatorial algorithms, you could use Michael Pollan's simple rule: "Eat food. Not too much. Mostly plants." Following the latter rule would likely lead to as much or more success in deciding what to eat than using some diet optimization technique.

The impetus to use simple rules is that the world is complex and simple rules often capture the most significant features to pay attention to. Often they can be shown to be effective if not optimal according to some criterion. Often the optimal decision is not clear and/or we don't have the computational resources to figure it out. Defining and attending to simple domain specific rules can help us to make adaptive decisions in many aspects of our lives.

Herb Simon won a Nobel Prize in Economics in part because he criticized a foundational assumption of economics: that humans are rational actors attempting to make optimal decisions, the so-called "Rational Man" assumption. One of his best critiques of this "Rational Man" viewpoint can be found in chapter 2 of his book The Sciences of the Artificial (3rd Ed., 1996). That chapter is titled "Economic Rationality: Adaptive Artifice".

One of Simon's arguments against the rational man assumption involves a critique of Game Theory as a method to compute an optimal strategic move in business or other interactions. One problem you can run into if you set two big-brained computers against each other is the problem of mutual outguessing. If I think A is going to do X, but A knows that I know she might do X, then I should instead do Y, but A might also anticipate this, so perhaps I should do Z and so on. This chain of reasoning can go on indefinitely when two competitive big-brained computers are trying to find an optimal strategic move. Simon drew the following conclusion from this mutual outguessing problem in Game Theory:

Market institutions are workable (but not optimal) well beyond that range of situations [monopoly and perfect competition] because the limits of human abilities to compute possible scenarios of complex interaction prevent an infinite regress of mutual outguessing. Game theory's most valuable contribution has been to show that rationality is effectively undefinable when competitive actors have unlimited computational capabilities for outguessing each other, but that the problem does not arise as acutely in a world, like the real world, of bounded rationality.

If we are not the optimizing machine that the Rational Man image from economics suggests, then how do we go about solving problems in the real world? Simon calls this the problem of "Adaptive Rationality" and he makes the following suggestions and observations:

If the adaptation of both the business firm and biological species to their respective environments are instances of heuristic search, hence of local optimization or satisficing, we still have to account for the mechanisms that bring the adaptation about. In biology the mechanism is located in the genes and their success in reproducing themselves. What is the gene's counterpart in the business firm?

Nelson and Winter suggest that business firms accomplish most of their work through standard operating procedures - algorithms for making daily decisions that become routinized and are handed down from one generation of executives and employees to the next. Evolution derives from all the processes that produce innovation and change in these algorithms. The fitness test is the profitability and growth rate of the firm. Profitable firms grow by the reinvestment of their profits and their attractiveness for new investment.

Nelson and Winter observe that in economic evolution, in contrast to biological evolution, successful algorithms may be borrowed by one firm from another. Thus the hypothesized system is Lamarckian, because any new idea can be incorporated in operating procedures as soon as its success is observed, and hence successful mutations can be transferred between firms. Transfer is of course not costless, but involves learning costs for the adopting firm. It may also be impeded by patent protection and commercial secrecy. Nevertheless, processes of the kinds just described play a large role in the gradual evolution of an economic system composed of business firms. (p. 48).

The purpose of this blog is to provide some context for the idea of simple rules. Simple rules, bounded rationality, and satisficing can be contrasted with the vision of humans as fully informed and always optimizing - the rational man viewpoint. I do not want to completely dismiss the rational man viewpoint, as we do in fact have a broad range of useful analytic techniques for computing optimal outcomes (inventory management, route planning, scheduling, etc...); however, the rational man viewpoint can be taken too far if we view it as being able to account for or guide all our economic decision making. Given our bounded rationality and the complexity of the decisions we have to make daily, it makes sense to seek out and rely upon simple rules as a method to achieve "economic adaptation". Indeed, sometimes these simple rules perform as well as or better than complex optimizing rules (see Naive Diversification vs Optimization). Finally, Herb Simon made some interesting observations about the importance of standard operating procedures and the Lamarckian nature of business evolution that offer some ideas on what Simple Rules in business might consist of (standard operating procedures, routines) and how they might spread and evolve over time (copying and mutation).


March 2016 Book Order [Books]
Posted on March 25, 2016 @ 06:42:00 AM by Paul Meagher

I'm closely monitoring my book buying habits to see how my 2016 book expense forecast will fare. Tracking and managing my 2016 book expense budget is easier if I batch order my books each month rather than buying them impulsively. I just put in my book order for March and decided to blog about them.

Three of the books I ordered are books I borrowed from a public or university library and have now decided that I want a personal copy to refer to and mark up if necessary. The Risk Savvy book mentioned below is a new one that I haven't read before and was not available for borrowing. It is relevant to my current interests in heuristics, risk, and decision making.

The 4 books I ordered and the rationale I used for each purchase are given below.

1) The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses (2011) by Eric Ries.

I read this influential book on entrepreneurship and refer to Lean Startup Theory often in my blogs or my thinking. This seemed like an essential reference to have in my library.

2) Artificial Intelligence: A Modern Approach (3rd Ed., 2015) by Stuart J. Russell and Peter Norvig.

This is the standard classroom text for learning AI. I read sections of the second edition, and a paperback edition of the third edition was available for around $40 with shipping. The price has been going up lately so I decided to pull the trigger. It is a good reference book to have in my library for formal approaches to cognitive science topics like planning, problem solving, reasoning, and decision making.

3) Risk Savvy: How to Make Good Decisions (2015) by Gerd Gigerenzer.

I've read research papers and books by Gerd Gigerenzer in the past. He is a leading researcher on the use of heuristics in reasoning. I'm very interested in heuristic reasoning, especially from a viewpoint where heuristics are regarded as adaptive rather than as a flawed form of reasoning (which a lot of currently popular books in behavioral economics portray them as). I'm also interested in exploring the role of risk in decision making from a source that has done a lot of research on decision making.

4) Gathering Moss: A Natural and Cultural History of Mosses (2003) by Robin Wall Kimmerer.

I got about halfway through this book before I had to return it to the library. The science writing is some of the best I've ever encountered, and it is filled with wisdom about life and the science of mosses. She discusses her own research on mosses, which exhibits both ingenuity and humor in its observations and experiences. I want to read the last half of the book so I can learn more about moss varieties and ecology, and to have names for some of the moss varieties I encounter.

Here is a 2-minute video of a Moss Heaven spot that I like to visit on my walks. The footage was taken in the fall after deciduous leaves and needles were shed from the trees. The white moss (Cladonia rangiferina) at the start of the video is actually not a moss but a lichen. I took the video to document 4 different types of mosses that I thought were present here. The humidity from the stream below makes this an ideal environment for mosses to thrive. The air quality here is also good (more oxygenated and more negative ions because of the stream turbulence below), which is why the white moss (or reindeer lichen) grows here. Reindeer lichen is considered a biological indicator of good air quality.


Predicting a Unicorn Startup [Startups]
Posted on March 23, 2016 @ 02:40:00 AM by Paul Meagher

Y Combinator is currently the top business incubator in the world so it is a worthwhile exercise to study the W2016 round of startups featured at their recent Demo Day launch event.

TechCrunch is covering the Y Combinator Demo Day launch event and you can read about the 60 startups that presented at Demo Day 1. As you read through them you might ask yourself which of these companies will be the most successful. You might also ask yourself if there is a "Unicorn Startup" in this batch. In Silicon Valley, a company that grows to the size of a Google, Facebook, Twitter, or LinkedIn is called a "Unicorn Startup" or simply a "Unicorn" - a mythical and rare creature, as discussed in this interview. Silicon Valley investors are often hoping to find the next Unicorn. A Unicorn can be defined as a company with the potential to achieve a 1 billion dollar valuation within the next 6 years. Which company in this batch might be the next Unicorn?

In a recent series of blogs (see Financial Forecasting, Updating Forecasts, Evaluating Forecasts) I discussed Philip Tetlock and Dan Gardner's book Superforecasting: The Art & Science of Prediction (2015). One of the take-home messages from that book is that if you want to become a better forecaster you need to make testable forecasts and then evaluate how well those forecasts fare. The quadratic scoring technique is a simple and flexible technique that can be used to evaluate forecast accuracy and to guide you toward becoming a better forecaster.

The Y Combinator Demo Day events provide an ongoing opportunity to hone your forecasting skills in the domain of startup investing. Here is how it might work. Pick one or more startups from this batch that might be a Unicorn startup. Now assign a probability to your forecast. Evaluate that forecast when the next Y Combinator Demo Day event happens, make new Unicorn forecasts, and adjust your previous Unicorn forecasts as necessary. Do this for a while and you will improve your knowledge of, and skill in, Unicorn forecasting.
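To make this workflow concrete, here is a minimal sketch in Python of how you might log and score such forecasts using the quadratic scoring technique mentioned above. The startup names and probabilities are purely illustrative, not actual predictions.

```python
# Hypothetical forecast log for one Demo Day batch (names invented).
forecasts = {
    "Startup A": 0.6,   # p(startup becomes a Unicorn)
    "Startup B": 0.1,
}

def quadratic_penalty(p, happened):
    """Lindley-style quadratic score scaled by 100: lower is better."""
    return round(100 * ((1 - p) ** 2 if happened else p ** 2), 2)

# At the next Demo Day, score any forecasts that can be resolved and
# revise the rest in light of new information.
resolved = {"Startup B": False}   # suppose Startup B didn't pan out
for name, happened in resolved.items():
    print(name, quadratic_penalty(forecasts[name], happened))  # Startup B 1.0
```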

I think there may be a Unicorn hiding in this batch of startups. To make a defensible prediction I would have to read more about these companies but my gut suggests that Magic Instruments might be a possible Unicorn. Magic Instruments is redesigning the guitar interface to make it easier to learn and use on an ongoing basis.

Magic Instruments is charging a reasonable price for the guitar ($299). They have a revenue model that also includes ongoing fees from customers ($6 monthly subscription service for chords and lyrics of popular songs). The trade name, Magic Instruments, is very fitting. Finally, the success of Guitar Hero established that the market for electronic guitars and software can reach up to 6 billion dollars. On the downside, it is one thing to claim to be able to redesign the guitar interface; it is another to actually pull it off. There are competition and patenting issues that are unknowns. I don't know anything about the management or the financial situation of the company. This is really not the best situation in which to make a prediction, but for the purposes of this blog I'll go with my gut and test how good that method is.

So what probability should I assign to Magic Instruments being a Unicorn? If I think they might be a Unicorn then, from a semantics point of view, the starting point for my assignment should be above 50% to make it a non-trivial prediction that can be evaluated. I'm not that confident in this forecast so I'll say it is 60% likely to occur (i.e., p(MI = U) = 0.6). I won't say I'm 56% confident because claiming such precision would be misleading. I'm implicitly claiming accuracy to within 10 percentage points at this stage.

Predicting Unicorns from Y Combinator Demo Day events is a fun pastime that could also be a teachable moment for would-be private investors on picking the next big thing.

At last count, VentureBeat listed 229 unicorn startups, with a disproportionately large number (101) located in California (image below from the VentureBeat article). A non-trivial portion of these were incubated by Y Combinator, so this method of predicting unicorns has a decent chance of succeeding.


Decision Tree Diagrams [Decision Trees]
Posted on March 21, 2016 @ 06:20:00 AM by Paul Meagher

I've been reading quite a bit about decision making lately and one common recommendation for making better decisions is to diagram your decision using decision trees.

In Reid Hastie and Robyn M. Dawes' textbook Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making (2nd Edition, 2010), they use the convention that decision alternatives are indicated by a small square and event alternatives are indicated by a small circle. I used the online diagramming tool draw.io to create the skeleton of a basic decision tree diagram:

In a fully specified decision tree diagram I would also have labels along each horizontal branch, probabilities along each event branch, and consequence values at the terminal end of each event branch (see this previous blog for an example). I won't be going into such details in this blog because I want to focus on the first step in diagramming a decision tree; namely, drawing the skeletal structure of a decision tree.

If you want to diagram decisions it is useful to have a tool that makes it easy to do so. A piece of paper is a good option, but I wanted to use a piece of software to create decision tree diagrams exactly as they appear in the textbook mentioned above. The draw.io tool fits these requirements. In a previous blog I demonstrated a more complex approach to creating decision trees. In this blog I wanted to find a more accessible approach that anyone could use without installing a bunch of software.

Decision trees have lots of symmetrical branches, and drawing them was the main challenge I had in making decision tree skeletons. Duplicating and dragging horizontal or vertical lines were the main actions I ended up using to create the symmetrical decision tree diagram above. The left column in the figure above shows the "General" and "Misc" symbol libraries that I used to create the square, circle, and line shapes. By default they are available when you access the draw.io online application; you just need to click a library name to open it and access the diagramming shapes you want to use.

It would be possible to print off a decision tree template like this so you can write labels and numbers onto it. It would be a ready-made template for a common form of decision problem, namely, a decision with 2 possible choices and 2 possible event outcomes that will determine the expected consequences for each choice.
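To show what the template looks like once it is filled in, here is a minimal sketch in Python that computes the expected consequence of each choice in a 2-choice, 2-outcome decision tree. All of the probabilities and consequence values are hypothetical placeholders.

```python
# Decision node (square) with two choices; each choice leads to an
# event node (circle) with two possible outcomes. Each outcome branch
# carries a (probability, consequence) pair. Values are made up.
decision_tree = {
    "Choice 1": [(0.7, 100), (0.3, -50)],
    "Choice 2": [(0.5, 60), (0.5, 20)],
}

def expected_consequence(branches):
    """Probability-weighted sum of consequences for one choice."""
    return sum(p * value for p, value in branches)

for choice, branches in decision_tree.items():
    print(choice, expected_consequence(branches))
# Choice 1: 0.7*100 + 0.3*(-50) = 55.0
# Choice 2: 0.5*60  + 0.5*20   = 40.0
```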

In this blog my goal was to suggest some software you can use to create decision tree diagrams (draw.io) and to show how you can go about creating nice symmetrical tree diagrams with it. There are probably more efficient techniques than what I'm suggesting, as I just started playing around with draw.io for this purpose and the duplicate-and-drag approach was the first approach that worked ok. We can use decision tree diagrams to explore a wide range of issues in decision making, and in future blogs we'll explore some of these issues.

Here are some useful tips that will get you up to speed quickly using draw.io.


Design & Making [Design]
Posted on March 17, 2016 @ 04:48:00 AM by Paul Meagher

Nobel Laureate Herb Simon wrote an influential book on design (and other topics) called The Sciences of the Artificial (third edition 1996, first published in 1969). In Chapter 5, "The Science of Design: Creating the Artificial", he introduced the importance of design this way:

Engineers are not the only professional designers. Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. The intellectual activity that produces material artifacts is no different fundamentally from the one that prescribes remedies for a sick patient or the one that devises a new sales plan for a company or social welfare policy for a state. Design, so construed, is the core of all professional training; it is the principal mark that distinguishes professions from the sciences. Schools of engineering, as well as schools of architecture, business, education, law, and medicine are all centrally concerned with the process of design (p. 111).

There are three aspects of this statement that I want to highlight in today's blog.

1. Herb claims that the intellectual activity used across these diverse examples is fundamentally the same. The intellectual activity he is referring to is problem solving. Herb believes the problem solving element of design can be rigorously taught. Herb, in collaboration with others, developed an early AI simulation of problem solving called the General Problem Solver (GPS), which was largely an implementation of means-ends analysis, one of the main techniques of problem solving and design.

2. Herb defined design as "courses of action aimed at changing existing situations into preferred ones". This is a good definition but he went a bit further and gave a formal definition of what is involved in design:

The reasoning implicit in GPS is that, if a desired situation differs from a present situation by differences D1, D2 ..., Dn, and if action A1 removes differences of type D1, action A2 removes differences of type D2, and so on, then the present situation can be transformed into the desired situation by performing the sequence of actions A1, A2 ..., An.

This reasoning is by no means valid in terms of the rules of standard logic in all possible worlds. Its validity requires some rather strong assumptions about the independence of the effects of the several actions on the several differences. One might say that the reasoning is valid in worlds that are "additive" or "factorable". (p. 122)

Even if the world does not exhibit the independence that is required, that is often something we don't appreciate until we engage in exploratory design.
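To make Simon's formal definition concrete, here is a minimal sketch in Python of the GPS-style reasoning he describes, under his "additive" assumption that each action removes its difference without affecting the others. The situations, differences, and actions are invented for illustration.

```python
# Situations are modeled as sets of features; a "difference" is a
# feature of the desired situation that the present situation lacks.
# Each difference type is removed by exactly one action (the strong
# independence assumption Simon flags above). Names are hypothetical.
action_for_difference = {
    "door_open": "open_door",
    "light_on": "flip_switch",
}

def means_ends_plan(present, desired):
    """Return a sequence of actions that transforms the present
    situation into the desired one, one action per difference."""
    differences = sorted(desired - present)
    return [action_for_difference[d] for d in differences]

print(means_ends_plan(present={"light_on"}, desired={"door_open", "light_on"}))
# ['open_door']
```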

3. The final point I want to discuss is Herb's statement that "Design, so construed, is the core of all professional training". In this chapter of his book, Herb Simon was concerned with coming up with a curriculum that could be taught to such professionals so that a core element of being a professional, the ability to design, was actually taught to them in a scientific manner. These are the topics he would cover; they are also the topics he discusses in more detail in the chapter.

THE EVALUATION OF DESIGNS
1. Theory of Evaluation: utility theory, statistical decision theory.
2. Computational Methods:
a. Algorithms for choosing optimal alternatives such as linear programming computations, control theory, dynamic programming.
b. Algorithms and heuristics for choosing satisfactory alternatives.
3. The Formal Logic of Design: imperative and declarative logics.

THE SEARCH FOR ALTERNATIVES
4. Heuristic Search: factorization and means-ends analysis.
5. Allocation of resources for search.
6. Theory of Structure and Design Organization: Hierarchic systems.
7. Representation of design problems.

One problem I have with Herb's curriculum is that it is a curriculum for "professional training", but design has become a much more democratized skill these days. I doubt that this curriculum will be used to teach the masses about design. This is not to say it isn't worth studying for powerful ideas about design, but I would suggest that a book like The Maker's Manual: A Practical Guide to the New Industrial Revolution (2015) is a much more relevant and inclusive curriculum for teaching design. Here you will see 3D printers, milling machines, and laser cutters and be introduced to the software for controlling them. You'll learn about GitHub, Processing, Raspberry Pi and Arduino. You'll learn about ways to finance a business startup that might arise from your design and manufacturing work.

Since Herb Simon published his book, the Maker Movement has come on the scene and become increasingly relevant. Its proponents talk about a new industrial revolution happening and the skills required to be a part of it. Should we be teaching design or should we be teaching making instead? And when should we start teaching it, and to whom?


Banking & Accounting [Finance]
Posted on March 16, 2016 @ 06:45:00 AM by Paul Meagher

If you have been self-employed for a while, there is a good chance that you will end up owning more than one business entity from which you derive an income or into which you are investing money to get started (and want to claim those expenses).

I personally have three businesses and resisted the urge to complicate my banking by setting up a separate bank account and a separate credit card for each one. It seemed crazy that I would need 4 bank accounts (including my personal account) and 4 credit cards to manage my accounting properly, so I resisted setting up a bank account for one of my businesses and getting a credit card for two of my businesses. I convinced myself that all I needed was a "personal" and a "business" credit card. I also convinced myself that I didn't need a bank account for one of my businesses because I was investing money from my other businesses into getting it started, and until it was a "real" business I would hold off on setting up accounts.

The fact is that not having all of these bank accounts and credit cards made my life more complicated, because it made it more difficult to create a proper accounting trail of where money was coming from and where it was going. So yesterday I started the process of setting up a new bank account and two new credit cards. In the future, if I decide to set up a new business, it will be with the knowledge that I will also need to set up a bank account and a corresponding credit card for it. The credit card is not because I'll need the extra credit; it is simply that purchasing by credit card is often the most convenient way to make a purchase and record it to a specific business entity.

So my advice to anyone setting up a second business is to set up a new bank account and a new credit card for the new business right away, so you can use them to create an easy-to-follow accounting trail for it. Don't make the mistake I made of imagining that your banking will get more complicated because you have more accounts and cards; it will actually get easier because the accounting side of your business will get easier and more justifiable.

The rule I now follow is:

1 business entity = 1 bank account + 1 credit card.


Evaluating Forecasts [Future]
Posted on March 10, 2016 @ 07:19:00 AM by Paul Meagher

This is my third blog related to the book Superforecasting: The Art and Science of Prediction (2015). In my last blog I discussed the importance of updating forecasts rather than just making a forecast and waiting to see whether the forecasted outcome occurs. This naturally leads to the question of how we should evaluate our updated forecasts in light of agreements or discrepancies between predicted and actual outcomes. That is what this blog will attempt to do.

The forecasting example I have chosen to focus on is predicting what my book expenses will be for 2016. I came up with an exact estimate of $1920 but pointed out that assigning a probability to a point estimate is tricky and not very useful. Instead it is more useful to specify a prediction interval [$1920 ± $60] and assign a probability to how likely it is that the actual outcome will fall within that interval (80% probability). Now we have a forecast that is sufficiently specified that we can begin to evaluate our forecasting ability.

We can evaluate our financial forecasting ability in terms of whether the probability we assign to an outcome accurately reflects the level of uncertainty we should have in that outcome. If you assign an outcome a high probability (100%) and it doesn't happen, then you should be penalized more than if you had assigned it a lower probability (60%): you were overconfident in your forecasting ability, and when we score your forecast the math should reflect this. If you assign a high probability to an outcome and the outcome happens, then you shouldn't be penalized very much. The way our scoring system will work is that a higher score is bad and a score close to 0 is good. A high score measures the amount of penalty you incur for a poorly calibrated forecast. To feel the pain of a bad forecast, we can multiply the penalty score by 100 and treat the result as the number of dollars you have to pay out for a bad forecast.

Before I get into the math for assessing how "calibrated" your estimates are, I should point out that this math does not address another aspect of our forecast that we can also evaluate in this case, namely, how good the "resolution" of our forecast is. Currently I am predicting that my 2016 book expenses will be $1920 ± $60; however, as the end of 2016 approaches I might decide to increase the resolution of that forecast to $1920 ± $30 (I might also change the midpoint) if it looks like I am still on track and my forecast might only be off by the cost of 1 book (rather than 2). When we narrow the range of our financial forecasts and the outcome falls within the range, a scoring system should tell us that we have better resolving power in our forecasts.

The scoring system that I will propose addresses calibration and resolution and has the virtue that it is very simple and can be applied using mental arithmetic. Some scoring systems can be so complicated that you need to sit down with a computer to use them. David V. Lindley has a nice discussion of Quadratic Scoring in his book Making Decisions (1991). The way Quadratic Scoring works is that you assign a probability to an outcome, and if that outcome happens you score it using the equation (1-p)², where p is your forecast probability. If the predicted outcome does not happen, then you use the equation p². In both cases a number less than 1 will result, so Lindley advocates multiplying the value returned by 100.

So, if it turns out that my estimated book expenses for 2016 fall within the interval [$1920 ± $60] and I estimated the probability to be 0.80 (80%), then to compute my penalty for not saying this outcome had a 100% probability, I use the equation (1-p)² = (1-.8)² = .2² = 0.04. Now if I multiply that by 100 I get a penalty score of 4. One way to interpret this is that I only have to pay out $4 for my forecast because it was fairly good. Notice that if my probability had been .9 (90%) my payout would be even less ($1), but if it had been .6 (60%) it would be quite a bit bigger at $16. So not being confident when I should be results in a bigger penalty.

Conversely, if my estimated book expenses for 2016 don't fall within the interval [$1920 ± $60] and I estimated the probability to be 0.80 (80%), then to compute my penalty I use the second equation, p² = .8² = .64. Multiplying by 100 gives a penalty score of $64 that I have to pay out. If my probability estimate had been lower, say .60 (60%), then my penalty would be .6² = .36 × 100 = $36. So being less confident when I'm wrong is better than being confident.
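Here is a minimal sketch in Python of the quadratic scoring rule as just described. It reproduces the worked examples above, using Lindley's multiply-by-100 convention.

```python
def quadratic_score(p, happened):
    """Quadratic scoring rule, scaled by 100 per Lindley:
    100*(1-p)^2 if the forecasted outcome happened, 100*p^2 if not.
    Lower scores (penalties) are better."""
    return round(100 * ((1 - p) ** 2 if happened else p ** 2), 2)

print(quadratic_score(0.8, True))    # 4.0  -> pay out $4
print(quadratic_score(0.9, True))    # 1.0  -> more confident and right
print(quadratic_score(0.6, True))    # 16.0 -> under-confident though right
print(quadratic_score(0.8, False))   # 64.0 -> confident and wrong
print(quadratic_score(0.6, False))   # 36.0 -> less confident and wrong
```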

The quadratic scoring rule is summarized in this table:

Outcome occurs?    Penalty score
Yes                100 × (1-p)²
No                 100 × p²

Source: David Lindley, Making Decisions (1991), p. 24

I hope you will agree that the Quadratic Scoring Rule usefully reflects how penalties should be calculated when we compare our forecasted outcomes to actual outcomes. It measures how "calibrated" our probability assignments are to whether the events they predict actually happen. In cases where we are not predicting numerical outcomes this scoring system would be all we need to evaluate the goodness of our forecasts. Our prediction problem, however, is a numerical prediction problem so we also need to concern ourselves with how good the resolution of our forecast is.

Intuitively, if our prediction interval is smaller and the actual outcome falls within it, then we consider this a better forecast than one with a wider prediction interval. My proposal is simply to measure the size of your range and add it to your quadratic score. So if my prediction interval is [$1920 ± $60] with 80% confidence and I am correct, then my overall score is 4 (see previous calculation) plus the range, which is 120. Let's convert this all to dollars: our overall penalty is $4 + $120 = $124. If we narrow our prediction interval to $1920 ± $30 then we get $4 + $60 = $64 as our penalty score.
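Extending the sketch above, the resolution penalty just adds the width of the prediction interval (in dollars) to the quadratic score. The function below is a minimal implementation of this proposal.

```python
def overall_penalty(p, happened, interval_width):
    """Overall penalty: quadratic score (in dollars) plus the width
    of the prediction interval (the resolution penalty, in dollars)."""
    quadratic = round(100 * ((1 - p) ** 2 if happened else p ** 2), 2)
    return quadratic + interval_width

# $1920 ± $60 is a $120-wide interval; correct at 80% confidence:
print(overall_penalty(0.8, True, 120))   # 124.0 -> $4 + $120
# Narrowing to $1920 ± $30 halves the resolution penalty:
print(overall_penalty(0.8, True, 60))    # 64.0  -> $4 + $60
```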

In an ideal world we would make exact forecasts (± $0 as our range) with complete confidence (100%), and the forecasted outcomes would happen exactly as predicted. In this world our penalty scores would be 0. In the real world, however, our predictions often have calibration or resolution issues, so most predictions incur some penalty. It might help to think of this as a cost you have to pay to someone because your predictions are not as perfect as they could be.

With this scoring system you can check in on your forecasts at some midway point to see how you are doing. If you update your forecast, what you are looking for is a reduced penalty score the next time you check. How much your penalty score improves tells you whether your updates are on the right track. Generally your penalty scores should go down if you update your forecasts on a regular basis like superforecasters do. Superforecasters are quite interested in evaluating how their forecasts are progressing, and using some simple math like this helps them figure out how well they are doing.

A book that is on my priority list to read is Simple Rules: How to Thrive in a Complex World (2015). The authors argue that it is often a mistake to use complex rules to solve complex problems (which forecasting problems often are). They document how simple rules are often effective substitutes and can be applied more flexibly. It is possible to be more sophisticated in how we evaluate forecasts, but this sophistication comes at a price - the inability to quickly and easily evaluate forecasts in the real world. We often don't need extra sophistication if our goal is to easily evaluate forecasts in order to get some useful feedback and produce better forecasts. I would challenge you to come up with a simpler method for evaluating financial forecasts that is as useful.

If you want to learn more about the motivations, applications and techniques for forecasting, I would recommend the open textbook Forecasting: Principles and Practice.


Updating Forecasts [Future]
Posted on March 8, 2016 @ 07:39:00 AM by Paul Meagher

In my last blog I started discussing the book Superforecasting: The Art and Science of Prediction (2015) by Philip Tetlock and Dan Gardner. I suggested that financial forecasting is a useful arena in which to hone forecasting skills, and I used the example of forecasting my book expenses for 2016. I estimated that I would purchase 52 books (an average of 1 per week) and that each book would cost $30, so my overall projected book expenses for 2016 were $1,560.

It turns out that when I actually tally up all the books I purchased from the beginning of the year until Mar 1, 2016, sum their costs, and add taxes, the amount is $583.43 (I don't generally incur any shipping costs). I purchased 20 books in that period, for an average cost per book of $29.17 (very close to my estimate). If I assume that I will keep spending at the same rate over the next 10 months, then my forecasted book expenses would be $3500.57. The difference between my initial estimate of $1,560 and this estimate of $3500.57 is $1940.57. We have quite a discrepancy here.

When you make a forecast, that forecast should not be written in stone. It is based upon the best information you had available at the time. You learn new information and the world changes, so you have to adjust your forecasts to incorporate that new information. When superforecasters update their forecasts, the change from the previous forecast is generally not a big shift, although big shifts can happen. The information up to that point still has some weight in determining what the current forecast should be. Forecasters need to be wary of overreacting to new information by making large changes to their forecasts right away.

Likewise, in light of the new information that my book expenses could be $3500.57, I have to decide how to incorporate it into my current forecast of $1,560. Because my estimate of the cost per book was quite accurate ($30), the question boils down to whether I will end up purchasing 116 books instead of the 52 I estimated. Even though I like books, I can't see myself doubling my current rate of book buying. I don't expect to keep buying at this rate during the spring/summer, as I won't have as much time for reading. So I am inclined to remain close to my original forecast but perhaps bump it up a bit to take into account the hard data I have on how much I've spent so far.

Financial forecasting is subject to events happening in the world, but it is also subject to policy decisions that control costs. My policy decision is to purchase at the rate of 1 book a week; however, I will also sometimes buy books more impulsively if I'm in a bookstore, or, as happened last Saturday, when a local author was at a seed buying event and I purchased her new Permaculture book. So my model of book purchasing consists of a policy component of 1 book a week and another "random" component, which I'll simply assume amounts to 1 book a month over and above my policy. This generates a forecast of 64 books per year at $30 per book, which is $1920. So my forecasted 2016 book expenses have moved from $1560 to $1920 as a result of new information about my actual book purchasing costs to date.
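Here is the updated forecast expressed as a minimal calculation, with the policy and "random" components broken out. The impulse rate of 1 book per month is the assumption made above, not observed data.

```python
policy_books_per_year = 52      # policy: 1 book a week
impulse_books_per_year = 12     # assumed: 1 impulse purchase a month
cost_per_book = 30              # close to the observed $29.17 average

forecast = (policy_books_per_year + impulse_books_per_year) * cost_per_book
print(forecast)  # 64 books * $30 = 1920
```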

I could wait until the end of the year to see how close my forecasted book expenses are to my actual book expenses, but why wait until then? I might want to check in after 6 months, see where I stand, and adjust my forecast accordingly. After six months my expenses should be half of $1920, or $960. So I'll check in again at 6 months and see if my expenses are close to this amount. Superforecasters regularly update their forecasts and will also often do a post-mortem when examining forecast accuracy to figure out what they did right or wrong. Incorporating feedback in this way helps to improve future forecasting in that domain.

Instead of making simple point estimates as I have done, another way to make forecasts is to predict that my book costs will fall within some interval with a certain probability. So I might say that in 6 months my book expenses will fall within ± $60 of $960 with a probability of 80%. The two ways I can improve upon my future forecasts are to 1) narrow my range (to ± $30) and 2) increase my estimate of its probability (to 90%). One method we can use to score such forecasts is quadratic scoring, which penalizes you more for incorrect estimates that are assigned a high probability (90%) of being true compared to a lower probability (60%). I'll leave the discussion of the math used for quadratic scoring for my next blog.

The purpose of this blog was to discuss the idea that being a better forecaster involves updating your forecast as you assimilate new information, rather than just making a forecast and waiting until the forecast date to see if it is correct. Superforecasters update their forecasts regularly, and they generally don't overreact by making big shifts from their previous forecasts. They analyze what went right or wrong when a forecast is checked against actual numbers so they can use this feedback to improve their future forecasts. It is hard to assign a probability to a point estimate, so we introduced the idea of assigning a probability that the forecasted number will fall within some range. In my next blog we will look at the quadratic scoring (or Brier scoring) used to evaluate how good these forecasts are.


Financial Forecasting [Finance]
Posted on March 2, 2016 @ 08:55:00 AM by Paul Meagher

I am halfway through Philip Tetlock and Dan Gardner's book Superforecasting: The Art & Science of Prediction (2015). Bloomberg suggests that this is the third most important book for investors to read over the spring break. With all due respect to Elon Musk and the Originals, I would probably rank this book #1 because of its timeliness.

One current event that makes the book timely is the US Presidential election and the race to predict who will be the next US President, as well as the outcomes of all the skirmishes along the way. A new group of people who will be given credibility in predicting these results are those whom Philip Tetlock's research program has identified, through testing, as superforecasters. These are people who come from different backgrounds but who share the feature that they are significantly above average at predicting the outcomes of world events.

The superforecaster club appears to be an elite group of prediction experts. Philip and Dan analyze what makes them tick in the hopes of helping us all become better at predicting the future. Many of the questions that Philip and Dan ask them are the types of geopolitical questions security intelligence agencies want answers to so that they can properly prepare for the future. Much of the research that is reported was sponsored by intelligence agencies. Intelligence involves a lot of prediction work, so this makes sense.

The second reason why this book is timely is that this is tax time for many people, and there is probably no better time of year to figure out a financial forecast for next year. You have your financial performance from last year clear in your mind, and you are now 2 months into 2016 with some current data to integrate into your forecast for 2016. So around now is a good time to exercise your prediction muscle and come up with a 2016 financial forecast. If you don't seriously exercise your prediction muscle, don't expect your prediction abilities to get any better.

One prediction that small businesses are required to make each year is their expected income, in order to make appropriate quarterly income tax payments. One reason superforecasters are good at predicting the future is that they are good at breaking down complex prediction problems, like yearly financial forecasts, into smaller and easier prediction problems. If asked to evaluate whether Hillary or Trump will win, they don't try to predict the question as it stands. They break it down into all the things that would have to be true for the Hillary or Trump presidential outcome to happen and evaluate the likelihood of those component outcomes. Similarly, to come up with a good financial forecast for next year you need to break your prediction down into expense and income buckets and try to estimate how full those different buckets will be.

One business expense that I claim is the books I purchase for educational or blogging purposes. How much will I spend on books in 2016? If I can nail this down fairly well, and then nail down how much I'll spend on keeping my vehicle on the road and so on, then I should come up with a better forecast of my expenses for 2016 than if I just used my overall expenses from 2015 as my guide. One way to improve your tax season experience is to look at it as an opportunity to improve your forecasting skills in a domain where you can get good feedback on your forecasting accuracy. To become better at forecasting it is not enough to simply make forecasts; you also have to evaluate how well your forecasts did, and this is relatively easy in the case of financial forecasting (next year's income taxes will tell you how accurate you were).

So back to forecasting my book expenses for 2016. Superforecasters are good at taking an outside view of a prediction problem before taking an inside view. The outside view is the objective view of the situation: what are the relevant numbers, statistics, and base rates upon which I can base my forecast? In my case, I can log into Amazon and see how many books I have purchased from them so far in 2016 and use the amount spent to project what I'm likely to spend this year. Once I have these numbers, I can take the inside view and ask whether my rate of reading is likely to persist for the rest of the year. As spring and summer approach, and I spend more time at the farm, I expect my reading to go down. Even though my reading rate may go down, I nevertheless expect to keep investing at the rate of 1 book a week because my new year's resolution was to read a book a week.

So one book a week at an average price of around $30 per book leads me to make an exact prediction of $1,560 (52 weeks x $30 per book) as the expected total for my 2016 book expenses category.

Another feature of superforecasters is that they are not afraid to do a little math. It is hard to assimilate forecast feedback if you don't compare forecasted and actual numbers, and comparing them properly is trickier than just comparing two single numbers. It is at this point, however, that I want to conclude this blog, because I think it will take another blog to address the issue of specifying and evaluating forecasts. I also have the second half of the book to read :-)

To conclude, Superforecasting is a timely book to read. Financial forecasting is arguably one of the best arenas in which to develop forecasting skill, as it involves breaking down a complex prediction problem into simpler prediction problems and offers the opportunity to gather feedback regarding your forecasting accuracy. Financial forecasting is also a very useful skill, so it is a good arena in which to hone forecasting superpowers.

In my next blog on this book/topic I'll address some of the math associated with specifying and evaluating forecasts.

