Algorithms and the Robo-Apocalypse

People like algorithms… sometimes. As humans, we’re okay with algorithms doing math or running electronics and machinery. Yet people can get very uncomfortable (or even furious) when algorithms dare to cross an invisible threshold – into emotion.

This became crystal clear when UC Santa Cruz music professor David Cope gave birth to Emmy, a computer program that composed classical music. Emmy used data from Beethoven, Bach, Chopin and other icons to compose original scores mimicking each composer’s style. As a result, Cope became one of the most loathed people in classical music. Professional musicians refused to play Emmy’s pieces. Critics accused Cope of endangering music itself. After hearing one of Emmy’s compositions at a festival, delighted fans quickly turned irate upon learning they’d been musically seduced by a soulless algorithm.[i]

Similar controversy surrounds the growing efforts to predict hit movies using algorithms, such as those by researchers at Tottori University in Japan, the now-shuttered Worldwide Motion Picture Group (rebooted as C4), Epagogix, Twitter, Google, or even a student at Harvard. The same goes for hit music, with varying approaches from Uplaya, Mixcloud, MusicXray, Section 101 and (now joining the fray) Shazam and Spotify. The idea of using algorithms to find or create hits has made a lot of people mad. As singer-songwriter Kym Tuvim put it, “from an artist’s standpoint, a songwriter’s standpoint, it’s horrifying to me… You’ll find a decreasing amount of any kind of surprise in music.”[ii]

If she isn’t horrified enough already, Tuvim may hate learning that she was right about how musical algorithms can lack a surprising spark, but also that this can be addressed by… simply changing the parameters of the algorithm. For example, David Cope (who invented Emmy, the classical music-composing algorithm) also found his initial algorithm-generated music somewhat flat, predictable and… well… robotic. Realizing that great composers create anticipation and elements of surprise, he later injected some random functions into his code to produce a more lifelike flow with occasional spontaneous moments. “It could mix slow, gentle notes with the brash rumblings of a baroque finale. Cope had devised a pile of code, a machine, that truly captured the spirit of someone as brilliant as Johann Sebastian Bach.”[iii]
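Cope’s fix can be sketched in miniature: take a flat, deterministic sequence of notes and occasionally perturb pitch or duration at random. This is a hypothetical illustration of the general idea, not Cope’s actual code; the note representation and the `humanize` function are invented for the example.

```python
import random

def humanize(notes, surprise_rate=0.1, seed=42):
    """Inject occasional random variation into a deterministic note
    sequence, mimicking the 'spontaneous moments' Cope added.

    notes: list of (pitch, duration) tuples.
    Returns a new list where roughly surprise_rate of the notes have
    been nudged in pitch or stretched in duration."""
    rng = random.Random(seed)  # seeded, so results are reproducible
    out = []
    for pitch, duration in notes:
        if rng.random() < surprise_rate:
            pitch += rng.choice([-2, -1, 1, 2])       # small melodic leap
            duration *= rng.choice([0.5, 1.5, 2.0])   # rhythmic surprise
        out.append((pitch, duration))
    return out

# A monotonous ascending figure, repeated four times…
melody = [(60, 1.0), (62, 1.0), (64, 1.0), (65, 1.0)] * 4
print(humanize(melody))  # …now with occasional surprises
```

Turning `surprise_rate` up or down is exactly the kind of parameter change Tuvim’s worry overlooks: the amount of surprise is itself a dial.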

Music and movies are one thing, but all sorts of arts have been encroached upon by algorithms; even wine-making. For decades, sommeliers have been outraged by an algorithm published by Princeton economist Orley Ashenfelter, which uses basic climate data to predict the quality and prices of Bordeaux wines. Imagine telling a French sommelier that a formula is superior when it comes to judging a wine’s quality! As Yale professor Ian Ayres wrote in 2007, “maybe the world’s most influential wine writer (and publisher of The Wine Advocate), Robert Parker, colorfully called Ashenfelter ‘an absolute total sham.’ Even though Ashenfelter is one of the most respected quantitative economists in the world, to Parker his approach ‘is really a Neanderthal way of looking at wine. It’s so absurd as to be laughable.’”[iv]

As a venture capitalist who uses algorithms to predict which startups to invest in, I also encounter the visceral tension between algorithms and emotion. Business innovation – like music, movies or wine-making – has always been regarded as something of an art. The intangible, emotional, spontaneous and very “human” inspiration and perspiration behind launching a business are treasured ideals. While the world of publicly traded companies has a long history of algorithms (computer algorithms drive an estimated 60–70% of all trading activity on US stock exchanges), some people find it quite upsetting the moment you use algorithms to evaluate privately held startups.

One commenter’s reaction to the very existence of startup investment algorithms was: “we’re living in a world where businessmen don’t even require a ‘gut’. What’s next? Removing a spine? Or did they already not need one of those? I’m starting to understand robo-apocalypse conspiracies.” A common reaction is along the lines of “these models build upon the assumption that we operate in a rational, predictable world. Which is, of course, complete nonsense.”[v] Some folks are against the idea of statistics altogether, saying things like “statistics can say whatever you want them to, so they’re useless.” Others like to point out how any poorly made algorithm, or one using bad data, will fail (garbage in, garbage out). Some people fear algorithms will somehow bring about the death of innovation and creativity itself. One of my favorite objections was: “the age of algorithms is a veiled return to worshiping idols.”[vi] When I started this work I never imagined it could push buttons at Biblical levels.

Yes, faulty algorithms are a bad thing. Yes, bad data is a bad thing. Yes, there is randomness in the world that needs to be acknowledged (as Cope discovered with classical music). Yes, statistics can be manipulated, but they can also be applied in diligent and straightforward ways. While all these topics deserve (and have received) more fleshing out than I’m going to give them here, there’s no denying how tensions flare the moment algorithms tiptoe into emotional realms that have been subconsciously reserved for humans, and humans alone. Ironically, algorithms are made by humans, so the real anger is about humans introducing new toys into each other’s sandboxes.

Personally, even though my work is on one side of the fence (building algorithms), I have mixed feelings about the use of algorithms in emotional and “human” realms. Feeling conflicted, I started thinking about algorithms along a spectrum, instead of as a binary issue.

[Image: algorithm spectrum]

I can see why people worry about using algorithms to evaluate innovations and startups. Done badly, it could lead to shoddy results. It could lead to wasteful gamesmanship as entrepreneurs try to manipulate the algorithms to their advantage. It could lead to an undesirable homogenization of what gets funded and what doesn’t, constraining human advancement in a lot of important areas that don’t necessarily fit an algorithm’s profile. It could lead to economic bubbles and bursts. It could create greater inequality between the haves and the have-nots. And yes… maybe it could somehow lead to a robot apocalypse… I suppose.

I can entertain these ideas, but at the same time other voices in my head object to the false dichotomy. Algorithms are simply rules that have been designed to achieve a desired result. They can be fancy artificial-intelligence, machine-learning, big-data, real-time, super-computing marvels (hyphens intended). They can also be the steps to cooking an omelet (heat pan, break egg, put egg in pan). Anyone who rails against algorithms, just because they’re algorithms – computer-based or otherwise – is a hypocrite. In truth, we all try to get whatever results we care about, and most of us do our best to use every tool at our disposal within the bounds of ethics and legality. We all spend our lives crafting decision rules (i.e. algorithms) as we learn and improve various facets of our lives. You can’t cry foul just because someone makes their rules explicit, learns something you didn’t, or otherwise shows up with tools you didn’t think of first.
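The omelet point can be made literal: written out, the recipe is already an algorithm in the everyday sense – an ordered set of rules aimed at a desired result. A throwaway sketch (the function and step names are purely illustrative):

```python
def cook_omelet():
    """An 'algorithm' in the everyday sense: an ordered list of rules
    designed to achieve a desired result."""
    steps = ["heat pan", "break egg", "put egg in pan", "fold and serve"]
    for step in steps:
        print(step)  # execute each rule in order
    return "omelet"

cook_omelet()
```

Whether the rules live in a kitchen, a trading desk or a venture fund, the structure is the same; only the stakes differ.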

Traditional venture capitalists use mental rules and patterns to try to sort out good innovations from bad. The only difference between this mental process and our statistical algorithms is that our criteria are explicit, objective and statistically vetted (as opposed to ambiguous, subjective and statistically un-vetted). We can’t invest in “Joe” just because “Joe’s a winner,” or fund “plastics” just because “plastics are hot right now,” unless there’s a clear, correlative, causal and statistical basis for doing so.

Frankly, historical approaches to venture capital have created the precise dangers people fear algorithms will lead to (except for, perhaps, the robo-apocalypse). With only a handful of exceptions, venture capital has produced shoddy results for decades (around 75% of venture-backed firms in the US don’t return their investors’ capital).[vii] As an industry, venture capital had negative returns for the entire decade from 2000 to 2010.[viii] The amount of wasteful gamesmanship undertaken by entrepreneurs to win funding can be insane. There’s a strong herd mentality among most VCs and startups, which causes dire homogenization in what gets funded and what doesn’t, constraining human advancement in a lot of important areas. There are bubbles and bursts. There’s been increasing inequality between haves and have-nots. In the face of these trends, it’s hard to imagine how more accurate, vetted, explicit and transparent ways of predicting winning startups could make things much worse.

The more honest concerns we humans share about allowing algorithms to enter our emotional spheres come from fear of being devalued, manipulated or lost in existential uncertainty. It’s scary to leave the known, no matter how bad it is, for the unknown. No traditional fund manager wants to end up replaced by an algorithm. No musical composer wants to be replaced by software. No sommelier wants to be ousted by an equation. Just as the banter between Kirk and Spock reminds us, people consider emotion to be the last, untouchable bastion of what defines humanity and gives meaning to our lives. It’s what makes us special. When these dear qualities can be mimicked at will by a machine, with a fidelity that makes them indistinguishable from blood-and-tissue humanity, a key assumption about who we are is pulled away with nothing obvious or consoling to replace it. We don’t know where that leaves us, and that’s why we’re wary of algorithms in our emotional lives.

My only feeble advice is to proceed with caution. Yes, we need to be cautious about bad algorithms, bad outcomes and bad intentions. However, we need to be just as cautious about knee-jerk opposition to algorithms in circumstances where they can absolutely make the world a better place. Algorithms are mere tools, like a hammer, that can be used in good ways and bad. How we use algorithms, and their potential for good and bad, depends entirely on us humans – which is perhaps the most emotionally difficult part of the whole equation.


[i] Steiner, Automate This: How Algorithms Came to Rule Our World (2012).

[ii] Sydell, New Music Software Predicts The Hits (2009) http://www.npr.org/templates/story/story.php?storyId=113673324

[iii] Steiner, Automate This: How Algorithms Came to Rule Our World, p. 94 (2012).

[iv] Ayres, Super Crunchers: Why Thinking-By-Numbers is the New Way To Be Smart (2008).

[vi] http://my-inner-voice.blogspot.com/2013/11/the-age-of-algorithms-and-rebbe.html
