Overconfidence, bad models and delusion

How bad can a predictive model be before people abandon it?  Intuitively, most people assume a model or decision process has to be right at least 51% of the time.  However, when it comes to innovation and growth investing, people cling to models that are dead wrong 70%, 80% and even 99% of the time.  At what point does this become delusional?

A lot of people know – rationally – that their models are inaccurate.  For example, venture capitalists and innovation managers know the majority of their growth investments fail.  Yet this rarely shakes their confidence in the criteria, methods and processes they use to choose investments.  There can be a mind-blowing disconnect between means and ends.

Think about venture capital.  Seen as the lords of predicting which startups will succeed or fail, VCs and their techniques are emulated by corporate managers, MBAs and entrepreneurs alike.  Meanwhile, three-quarters of US venture capital investments lose money[i] and the entire industry – measured by the national venture capital index – yielded negative returns for the full decade from 2000 to 2010.[ii]  We know what most VCs do.  We know it mostly doesn’t work.  Yet for some reason we continue following their lead.  Imagine the awkward silence if one of the investors on Shark Tank were asked to share his or her predictive accuracy.  The gap between means and ends is so huge that even basic questions like this are unheard of.

Overconfidence

It’s easy to scoff at VCs, but corporate managers are equally guilty.  Depending on the industry, 70% to 90% of new products launched by corporations typically fail.  According to Nielsen, 85% of new consumer products fail within two years.  These are astounding failure rates, but they rarely cause businesses to reevaluate their innovation screening processes or selection criteria.  Companies have a process.  The process mostly doesn’t work.  Yet they keep using the process as if it works.  It’s bizarre.

Even the big boys of Wall Street have this problem.  We all know finance is brimming with quants and modelers.  If you’ve ever met the manager of a large mutual fund, chances are the person didn’t strike you as lacking confidence.  Yet according to the Washington Post, the number of actively managed mutual funds that beat the market last year was zero.  Zero![iii]

This chasm between performance and confidence wasn’t lost on Nobel Laureate Daniel Kahneman, who summed it up:

“Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients.  Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker… the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero…

What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill.  This should have been shocking news to them, but it was not.  There was no sign that they disbelieved us. How could they?  After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out.  We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before.”[iv]

Placing the fate of venture capital, corporate money or mutual funds in poor models is one thing.  What about the global financial system as a whole?  Surely these models are better… right?  Not necessarily.  In fact, they may be among the worst.

For example, according to mainstream economic models, big market fluctuations like the 15% dip in August of 1998 should almost never happen (less than one chance in 20 million)… but it happened, and it wasn’t an isolated event.  The prior year the Dow fell 7.7% in one day (probability: one in 50 billion).  In July 2002 the Dow made moves with odds of one in four trillion.  In one of the worst trading days ever – October 19, 1987 – the Dow fell 29.2% (probability: less than one in 10 to the 50th power).  To put this into perspective, if you traded every minute for 1,000 years, you still wouldn’t expect to see fluctuations like these even once.[v]  Yet they happen relatively often.
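The arithmetic behind these absurd odds is easy to reproduce.  The sketch below – a simplification, not Mandelbrot’s fitted model – assumes daily index returns follow a Gaussian (normal) distribution with a 1% standard deviation, a common textbook assumption, and computes the probability that standard framework assigns to the one-day drops cited above.  The exact odds depend on the volatility you assume, which is why the figures in the text differ; the point is the order of magnitude.

```python
import math

def gaussian_tail(z):
    """P(Z <= -z) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Illustrative assumption: daily index returns are Gaussian with a ~1%
# standard deviation (a textbook simplification, not a fitted parameter).
daily_sigma = 0.01

for drop in (0.077, 0.292):           # the one-day drops cited above
    z = drop / daily_sigma            # size of the move in standard deviations
    print(f"{drop:.1%} one-day drop = {z:.0f} sigma, "
          f"Gaussian probability = {gaussian_tail(z):.1e}")
```

A 29-sigma event under a Gaussian model has a probability so small it is effectively zero on any timescale – which is exactly why observing such moves every few decades indicts the model, not the market.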

If a model predicts something won’t happen in a trillion years, but instead you see it regularly, it’s a good indication that the model sucks.  It’s unnerving to realize the models underpinning the world’s economy can be so miscalibrated.  No wonder the US averages a recession every seven years.

If current models are so bad, why do people keep using them?  In part, it’s hard to unseat established orthodoxy, even if the orthodoxy is dead wrong.  There’s also the problem that, despite the models’ failures, people don’t know what to replace them with.  They can’t think of anything better.

While these can be daunting challenges, I’m struck by how over-confident and even self-righteous people sometimes get about their models – even when they know their models don’t work.  Much as Kahneman observed on Wall Street, VCs know their “gut” doesn’t have a great track record for picking the right startups, but that doesn’t cause them even a moment’s hesitation when using it on the next deal.

While common in business, this type of irrationality sticks out like a flashing light in other domains.  If a meteorologist’s model predicted snow 100 times, and was wrong 99 times, it’s hard to imagine the meteorologist continuing to espouse the model’s supremacy.

It’s time for innovators – be they VCs, corporate managers, entrepreneurs, or even finance and economic wonks – to take a sober look in the mirror.  For the most part, our predictions suck.  In order to address the problem we have to be candid about it.

I’m not writing to recreationally bash existing models, but to insist that people take the first step of admitting when models don’t work, so we can rule those methods out and look for better ways forward as a society.  What’s happened instead is that “experts” hide their poor accuracy while embedding their hackery into the fabric of management thinking and best practice.  Poor models thereby become the orthodoxy, which can take generations to unseat.  All the while, poor innovators, investors and entrepreneurs suffer miserably.  We’re doing the world a disservice when bad expertise is professed, and received, without acknowledging the facts.  No more denial.

To quote Kahneman again:

“Over-confident professionals sincerely believe they have expertise, act as experts and look like experts.  You will have to struggle to remind yourself that they may be in the grip of an illusion.”[vi]

 


[i] See Cage, The Venture Capital Secret: 3 Out of 4 Start-Ups Fail, The Wall Street Journal (Sept. 20, 2012) http://www.wsj.com/articles/SB10000872396390443720204578004980476429190

[ii] See Cambridge Associates LLC, U.S. Venture Capital Index And Selected Benchmark Statistics; Non-Marketable Alternative Assets, (December 31, 2009); Kedrosky, Right-Sizing The U.S. Venture Capital Industry, Ewing Marion Kauffman Foundation, (June 10, 2009)

[iii] Marte, Do any mutual funds ever beat the market? Hardly., The Washington Post (March 17, 2015) https://www.washingtonpost.com/news/get-there/wp/2015/03/17/do-any-mutual-funds-ever-beat-the-market-hardly/

[iv] Kahneman, The Surety of Fools, The New York Times (October 23, 2011)

[v] Mandelbrot, The Misbehavior of Markets (2006)

[vi] Kahneman, The Surety of Fools, The New York Times (October 23, 2011)
