Posted on July 5, 2016 @ 05:56:00 AM by Paul Meagher
The 2016 Behavioral Economics Guide (PDF) was recently published. The introductory article by Gerd Gigerenzer discusses a research program between his research group and the Bank of England. Some of that research is reported in a paper called Taking Uncertainty Seriously: Simplicity versus Complexity in Financial Regulation (2014). The basic argument in these papers is that we would probably do better at regulating banks if we used simple heuristics to decide whether a bank is at risk of failing, rather than relying upon increasingly complex metrics to arrive at that assessment. Indeed, it can be argued that the complexity of the metrics makes it even more difficult to assess the risk of bank failure.
How do we go about deriving these simple heuristics? One approach is to use a framework of multiple fallible indicators: gather a set of candidate indicators, then measure how well each one predicts whether a bank might fail. Here are some of the indicators they looked at.
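To make the multiple-fallible-indicators idea concrete, here is a minimal sketch of ranking candidate indicators by how well each one, used alone with a simple cutoff, separates failed from surviving banks. The indicator names, thresholds, and data are all invented for illustration; they are not the values from the paper.

```python
# Hypothetical sketch: rank fallible indicators by single-cue accuracy.
# Each record: indicator values plus whether the bank failed (1) or not (0).
banks = [
    {"leverage_ratio": 2.9, "loan_to_deposit": 1.4, "wholesale_funding": 0.45, "failed": 1},
    {"leverage_ratio": 3.1, "loan_to_deposit": 1.6, "wholesale_funding": 0.50, "failed": 1},
    {"leverage_ratio": 6.2, "loan_to_deposit": 0.9, "wholesale_funding": 0.20, "failed": 0},
    {"leverage_ratio": 5.8, "loan_to_deposit": 1.1, "wholesale_funding": 0.40, "failed": 0},
    {"leverage_ratio": 4.0, "loan_to_deposit": 1.1, "wholesale_funding": 0.40, "failed": 1},
    {"leverage_ratio": 7.0, "loan_to_deposit": 0.8, "wholesale_funding": 0.15, "failed": 0},
]

def accuracy(indicator, threshold, fail_if_below):
    """Fraction of banks classified correctly by one thresholded indicator."""
    correct = 0
    for b in banks:
        below = b[indicator] < threshold
        predicted_fail = below if fail_if_below else not below
        correct += int(predicted_fail == b["failed"])
    return correct / len(banks)

# Candidate cues: (indicator, hypothetical threshold, does a LOW value signal failure?)
cues = [
    ("leverage_ratio", 4.5, True),       # thin capital cushion -> vulnerable
    ("loan_to_deposit", 1.2, False),     # high ratio -> funding strain
    ("wholesale_funding", 0.38, False),  # heavy wholesale reliance -> fragile
]

# Rank cues from most to least predictive on this toy data.
ranked = sorted(cues, key=lambda c: accuracy(*c), reverse=True)
for indicator, threshold, fail_if_below in ranked:
    print(indicator, round(accuracy(indicator, threshold, fail_if_below), 2))
```

The ranked list is exactly the raw material a fast and frugal tree needs: the most diagnostic cue goes at the root, the next one below it, and so on.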
Using the indicators that had the most explanatory power, they proposed the following "fast and frugal" decision tree as a basis for evaluating whether a bank is likely to succeed or fail. A fast and frugal decision tree allows a pass/fail decision at each node, so you often don't have to traverse all the nodes before reaching a verdict on the vulnerability of the bank.
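The traversal logic of such a tree can be sketched in a few lines: each node tests one cue and can exit immediately with a verdict. The cue order, indicator names, and thresholds below are hypothetical placeholders, not the values from the Bank of England paper.

```python
# Minimal fast-and-frugal tree sketch for bank vulnerability.
# Cues are checked in order of (assumed) explanatory power; every node
# can short-circuit, so most cases are decided after one or two checks.

def classify_bank(bank):
    """Walk the tree top-down; any node may exit with a red flag."""
    # Node 1: most diagnostic cue first. A thin leverage ratio flags the
    # bank as vulnerable without consulting any further cues.
    if bank["leverage_ratio"] < 4.5:       # hypothetical threshold
        return "red flag"
    # Node 2: heavy reliance on wholesale funding also exits immediately.
    if bank["wholesale_funding"] > 0.38:   # hypothetical threshold
        return "red flag"
    # Node 3: the last cue decides for every bank that got this far.
    if bank["loan_to_deposit"] > 1.2:      # hypothetical threshold
        return "red flag"
    return "green flag"

# This bank exits at node 1: no other indicator is even consulted.
print(classify_bank({"leverage_ratio": 3.0,
                     "wholesale_funding": 0.20,
                     "loan_to_deposit": 0.9}))
```

Note the contrast with a weighted scoring model: there is no arithmetic to combine indicators, just an ordered series of one-cue checks, which is what makes the tree transparent enough for a non-specialist to apply.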
The purpose of this blog is to whet your appetite to read the linked papers. It is also to demonstrate an approach to coming up with your own fast and frugal method for dealing with complex assessment problems: identify indicators, rank order them, and incorporate them into a fast and frugal decision tree or lens model. I'm not aware of this approach having much traction yet among bank regulators, and the authors offered up their model as "illustrative" of a suggested approach rather than a real proposed model. That is too bad, because such a model would allow the general public to gain a sense of how vulnerable a bank might be to failure. We can't do that with more complex approaches, and therein lies one of the problems with overly complex approaches that don't necessarily perform any better at predicting bank vulnerability than simple approaches.