Marked to Model

I recently watched Moneyball, a movie based on the true story of the Oakland Athletics and how they used statistical analysis and modeling (sabermetrics) to put together a team of “misfits” that eventually won a record 20 consecutive games. Sabermetricians combine the rich statistical record kept for every baseball game with detailed statistical analysis and simulation to create advanced metrics that assess the “true” value of players. All of this was necessitated by the Oakland Athletics’ fight to remain competitive against the major teams in Major League Baseball while having one of the lowest player payrolls (more on this later).

As I watched the movie, it got me thinking about modeling and the extent to which we can trust models: Is it really possible that a computer model can find the hidden value in a player who can’t catch or hit, but who somehow adds to the value of the collective? We are clearly moving into an era in which we are inundated with information and data for analysis; however, we need to be mindful not only of developing models to exploit the exponential growth in computing resources, but also of communicating the appropriate use and validation of those models. The world of finance is filled with cautionary tales about excessive faith in models, often with catastrophic results.

In 1998, Long Term Capital Management (LTCM), a hedge fund headlined by two Nobel laureates in economics, Myron Scholes and Robert Merton, and staffed with the crème de la crème of finance minds in industry and academia, required a bailout by other Wall Street firms after it lost roughly $5 billion on bad trades. The fund had used complex mathematical models to take advantage of fixed income arbitrage: Without going into the gory details, the fund made money by identifying small discrepancies between the fair price and the market price of financial instruments. To do this, the fund, and in particular Scholes and Merton, had developed complex models to forecast interest rates, economic activity, and a myriad of other factors. Furthermore, because the price differences were small, the fund had to use massive leverage to generate meaningful profits; that is, it borrowed up to 25 times the money investors had put into the fund. As Myron Scholes put it, LTCM would “operate like a giant vacuum cleaner sucking up nickels that everyone else had overlooked” – true value, once again. It was supposed to be a relatively low-risk endeavor, if one believed the metaphor.
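To make the leverage point concrete, here is a minimal sketch, using purely hypothetical numbers rather than LTCM’s actual positions, of how borrowing magnifies both the gain from a tiny price discrepancy and the loss when the trade moves the wrong way (borrowing costs are ignored for simplicity):

```python
# Illustrative only: how leverage turns a tiny arbitrage spread into a large
# return on investors' capital, and a tiny adverse move into a large loss.
# The figures below are hypothetical, not LTCM's actual positions.

def return_on_equity(spread: float, leverage: float) -> float:
    """Return on investors' capital when a position earns `spread`
    on the total capital deployed (own funds plus borrowed funds)."""
    return spread * leverage

spread = 0.002    # a 0.2% pricing discrepancy captured by the trade
leverage = 25     # $25 of positions for every $1 of investor capital

print(return_on_equity(spread, leverage))    # 0.05  -> a 5% gain on equity
print(return_on_equity(-spread, leverage))   # -0.05 -> a 5% loss on equity
```

At 25-to-1 leverage, an adverse move of only about 4 percent on the total positions is enough to wipe out the investors’ capital entirely, which is essentially what happened in 1998.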

Over its first three years, LTCM returned over 300 percent in nominal terms, an unprecedented performance for the time. As the money rolled in, the model seemed to be validated and the trades became more and more aggressive, until, in 1998, the model failed to anticipate Russia’s default on its international debts. The whole thing collapsed in a hurry, and a bailout ensued. Merrill Lynch, one of the investors in the fund, would later remark in its annual report that mathematical risk models “may provide a greater sense of security than warranted; therefore, reliance on these models should be limited.”

After such a disaster rooted in the belief that models can accurately capture reality, one would think that investors and regulators would have learned the lesson. However, the current financial crisis can be traced to the misapplication of a model for default rates that led to the easy-credit and housing boom and bust. I won’t go into the details here; instead, I refer you to an excellent exposition on the topic at wired.com:
http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all

While I have restricted myself to examples from finance, the literature is replete with misapplications of models, though usually with far less dire consequences. We are moving toward a future in which modeling will touch every facet of our lives (see “The Tech-led Boom” by Julio Ottino and Mark Mills on the Wall Street Journal opinion page), so it is incumbent upon anyone working on models to remain vigilant, to improve models by accepting their fallibility, and to ensure that they are not misapplied.

Finally, back to the A’s: it seems the jury is still out on sabermetrics as a primary method of building a winning team. While the Oakland A’s had a great season that year, they have made the playoffs only twice since implementing their stats-based philosophy. And in the last 17 years, the World Series has been won by a team in the top half of payroll 14 times. I guess you still need talent, and the cash to pay for it, to win ball games.

— Rufaro Mukogo