Don’t trust your judgment – trust the data?

This month, in a Harvard Business Review blog post, Andrew McAfee wrote a stirring article called Big Data’s Biggest Challenge? Convincing People NOT to Trust Their Judgment.  He argues that management education and human nature encourage people to trust their guts and instincts in business. This, he says, is a bad thing, calling it “the most harmful misconception in the business world today (maybe in the world full stop).”

Instead of trusting intuition, the article says businesspeople need to build, and rely on, better empirical data and algorithms to guide decision-making. In short, intuition is a poor man’s algorithm.

As a data scientist who uses algorithms to predict business survival or failure, this argument is dear to my heart.  Andrew… go on with your bad self.

The point he makes is, without question, a controversial one.

On the one hand, algorithms have proven far more accurate than intuition at predicting a variety of things. The article lists examples, but for a longer list check out Ian Ayres’s website for the book Super Crunchers or read Daniel Kahneman’s Thinking, Fast and Slow.  There are domains where algorithms are simply better.  End of argument.
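To make that claim concrete, here is a minimal synthetic sketch (not from the article; the “business survival” setup, feature names, and every number in it are invented for illustration). It compares a plausible fixed gut heuristic against a rule whose parameters are fitted to data, on the same held-out examples:

```python
# Toy illustration: a decision rule *fitted to data* versus a fixed
# gut heuristic. All numbers and the survival model are invented.
import random

random.seed(42)

def make_business():
    """Synthetic business: (months of runway, monthly growth, survived?)."""
    runway = random.uniform(0, 24)
    growth = random.uniform(-0.1, 0.3)
    # "True" survival depends on both features, plus noise.
    score = 0.15 * runway + 8.0 * growth + random.gauss(0, 0.8)
    return runway, growth, score > 2.0

train = [make_business() for _ in range(2000)]
test = [make_business() for _ in range(2000)]

def accuracy(rule, data):
    return sum(rule(r, g) == y for r, g, y in data) / len(data)

def gut(runway, growth):
    # Intuition stand-in: "growing businesses survive" (ignores runway).
    return growth > 0

# Data-driven stand-in: grid-search a runway weight and threshold on
# the training set, then freeze the best pair.
candidates = [(w, t) for w in (0.0, 0.05, 0.1, 0.15, 0.2)
              for t in (1.0, 1.5, 2.0, 2.5)]
w_best, t_best = max(
    candidates,
    key=lambda p: accuracy(lambda r, g: p[0] * r + 8.0 * g > p[1], train),
)

def fitted(runway, growth):
    return w_best * runway + 8.0 * growth > t_best

print(f"gut rule accuracy on held-out data:    {accuracy(gut, test):.2f}")
print(f"fitted rule accuracy on held-out data: {accuracy(fitted, test):.2f}")
```

The gut rule isn’t stupid — growth really does matter — but it ignores a feature the data can see, which is exactly the gap a fitted rule closes. Real survival models are of course far richer than this sketch.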

This makes some people very uncomfortable.  Sometimes the discomfort is justified, other times it isn’t.  In either case, there’s a human tendency to rush to intuition’s defense whenever it gets challenged.

Some of the better arguments against algorithmic decision-making are as follows:

  1. The algorithm has to work in the first place, or at least do a better job than the alternatives (obviously you shouldn’t rely on a broken tool); and
  2. The algorithm can carry unintended consequences or moral hazards (mortgage-backed securities… anyone?).

Meanwhile, weaker objections include:

  1. Algorithms are soul-crushing, creativity-smashing bludgeons that sap the joy out of everything (art is always better than science);
  2. Math sucks so I don’t want it to catch on more than it already has;
  3. Embracing algorithms will make everything over-constrained and rigid, making it impossible to be “outside the box”;
  4. Accurate predictions in <insert domain here> are simply impossible, even if it’s actually happening right now (this is called ignorance or denial); and
  5. Human intuition is inherently better all the time, no matter what.

In between strong and weak objections are more nuanced, circumstance-dependent concerns such as:

  1. You can’t take the “human” out of an algorithm – they’re built by humans, run by humans, and their outputs are acted on by humans. Never forget algorithms are only as good as the people surrounding them; or
  2. Algorithms threaten my value or livelihood (fear of being replaced by a robot).

I think any algorithm or technology that solves a big human problem in an empirically better, morally positive way is a good thing. It’s a great aspiration.  In medicine, it’s sometimes called a “cure.”  Viewed on a continuum between perfect knowledge and total ignorance, algorithms and educated guesses are simply different waypoints. We’d all like to know everything, about everything, all the time. Until then, bits of knowledge remain scattered across the varied landscape of progress.

Algorithms aren’t all equal. Not all of them will have the same impacts on their domains and the human lives they touch.  We need to see some algorithms, the good ones, as what they are – a sign of progress (even if it makes us a little uncomfortable at first).  Humans do learn things.  Knowledge moves forward, often taking the shape of algorithms as our understandings of the universe deepen and mature.  Beware of knee-jerk resistance to algorithms because, just as in the case of intuition, sometimes the biggest leaps forward come from unexpected places.

 

(BTW: special thanks to Jared for sending me this HBR article the other day.)

Comments

  1. Kevin Alexander

    People are always overconfident about their instincts. I think one would need a very balanced mind and ego to accept that their instincts are not as accurate as a more methodical approach that uses data.

  2. Keith

    Seems like a bit of a false dichotomy to me. Judgement is best applied where there is not enough data or there isn’t yet a reliable tool.

    Caveat: even reliable tools can be horribly wrong when underlying conditions change. There’s always some fraction of the actual system behavior that isn’t modelled, and when the mis-modelled (or unmodelled ;>) effects suddenly dominate, your tool is wrong until the model is adjusted. So judgement should always be applied to the question of “is our mathematical model still reflecting reality?” Sometimes that’s obvious (say, in a government-regulated industry there’s a major change in regulatory behavior); other times, not so much.

  3. Tebogo

    Debatable

  4. Kevin Lind

    Intuition and number-crunching should be followed up with understanding, especially when they disagree. Once you get an “answer”, you need to think about it to understand why that answer is what it is, and what to do about it. Intuition is often based on experience; but is it the right experience? Data analytics are based on data and algorithms; but what are the right data and algorithms? Decisions are made by people; computers and intuition are tools. Use both wisely.

  5. John

    It is, indeed, a thought-provoking piece, and it engages a raging debate within the economic development profession over the most effective policies to nurture and support nascent entrepreneurial clusters within a community or region. The fundamental dilemma is creating a means to justify the investment of public capacity and resources on behalf of such fledgling clusters through some level of measurement, while not stifling the independent, creative, private-sector forces that started those clusters.

  6. Byron E. Warren

    Sometimes clinical reasoning is presented as a form of evaluating scientific knowledge, sometimes even as a form of scientific reasoning. Critical thinking is inherent in sound clinical reasoning.

  7. Jesse O. Phillips

    We propose an improvement in the random search algorithm called COMPASS to allow it to deal with a single stochastic constraint. Our algorithm builds on two ideas: (a) a novel simulation allocation rule and (b) the proof that this new simulation allocation rule does not affect the asymptotic local convergence of the COMPASS algorithm. It is shown that the stochastic-constrained COMPASS has a competitive performance in relation to other well known algorithms found in the literature for discrete-valued, stochastic-constrained simulation problems.

  8. Reid Floyd

    Recent work in machine learning has examined the complexity of the data as it affects the performance of supervised classification algorithms. Ho and Basu present a set of complexity measures for binary classification problems.
