The Illusion of the Spreadsheet

In the Jan-Feb 2017 edition of Harvard Business Review (am I the only one who saves their copies?), a short article caught my attention: “Algorithms: People Like the Illusion of Control.” It is based on research from Chicago Booth and UPenn Wharton (Dietvorst, Simmons, and Massey, “Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them,” April 2016). As the article states, the premise of the research is that even with the rise of machine learning and artificial intelligence capabilities, people tend to rely on their own judgment – especially when working with algorithms. Researchers refer to this as “algorithm aversion,” and the study seeks to shed light on the topic through various experiments and industry examples, some of which are outlined in a supplemental paper.

Photo by Stephen Dawson on Unsplash

Just as there are dangers in relying too heavily on algorithms, spreadsheets, and dashboards to manage people and projects, there are increasing risks in ignoring or discounting them as well. While this may sound contradictory, what I have recognized for many years (and have said out loud in countless presentations) is that the more you involve people in the process, the more accepting they are of the end result. The same can be said for complex algorithms, as the researchers attempt to show in their paper:

Computer-driven algorithms are becoming adept at making decisions and offering forecasts – in fact, their assessments are frequently better than those of humans. Despite clear evidence of this superiority, studies have shown that people often opt to rely on their own judgment instead. That’s especially true of those who have had some experience with algorithms and have found them to be imperfect. Researchers call this “algorithm aversion.”

A new study explores one way to reduce this aversion: let people tinker with the machine’s results. “The benefits of getting people to use the algorithm may outweigh the costs associated with degrading the algorithm’s performance,” the researchers theorized, reasoning that even a slightly flawed math-driven decision was likely to be more accurate than a human’s prediction.
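The mechanism the researchers describe is easy to picture in code. Here is a minimal sketch in Python of how a “slightly modifiable” forecast might work: the person can nudge the model’s output, but only within a fixed range, so the final number stays anchored to the algorithm. The function name and the specific bound below are my own illustration, not the study’s implementation (the paper varied how much adjustment participants were allowed).

```python
def blended_forecast(model_prediction: float,
                     human_adjustment: float,
                     max_adjustment: float = 10.0) -> float:
    """Return the model's forecast shifted by a human tweak that is
    clamped to +/- max_adjustment (the 'slightly modify' mechanism)."""
    bounded = max(-max_adjustment, min(max_adjustment, human_adjustment))
    return model_prediction + bounded

# Example: the model predicts 62; the user wants to push it to 80.
# The bound keeps the final number close to the algorithm's estimate.
print(blended_forecast(62.0, human_adjustment=18.0))  # -> 72.0
```

The point of the clamp is exactly the trade-off the researchers theorized: you give up a little accuracy in exchange for people actually using the algorithm.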

I’ve had many conversations with experts on this subject, including Microsoft’s Naomi Moneypenny (@nmoneypenny) and tyGraph’s John White (@diverdown1964), among others, about the limitations of AI and machine learning, and whether science will ever replicate the complexity of the human brain. I remember one particularly heated discussion at a SharePint event in downtown Helsinki, where I pushed back strongly on a vision of the future in which robots and AI replace most human labor, freeing mankind to thrive and removing hunger and poverty from the world. Whether any of that happens within my lifetime is up for debate – but I do agree with the premise that giving people some semblance of control over their algorithms and data increases user satisfaction. In general, providing transparency into your processes, data, and policies has a direct impact on your success. And while operating in a closed system may increase the accuracy of the data, it may also decrease participation and trust.

For the data nuts out there, figure this one out: would you be willing to reduce the quality and accuracy of your data (your perceived quality, at least) if it meant an exponential increase in end-user participation and/or satisfaction with that data? Just thinking out loud…
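One way to make that question concrete: if accuracy and adoption trade off, the value delivered is roughly accuracy times participation. A back-of-the-envelope sketch, with entirely hypothetical numbers (nothing here comes from the study):

```python
# Which setup yields more correct, algorithm-informed decisions overall?
# Both percentages are made up, purely to illustrate the trade-off.
closed_system = 0.90 * 0.20  # 90% accurate, but only 20% of users adopt it
open_system = 0.80 * 0.80    # 80% accurate, with 80% adoption

print(round(closed_system, 2))  # 0.18
print(round(open_system, 2))    # 0.64
```

Under those assumptions, the “worse” system delivers several times more value, simply because people actually use it.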

Christian Buckley

Christian is a Microsoft Regional Director and M365 Apps & Services MVP, and an award-winning product marketer and technology evangelist, based in Silicon Slopes (Lehi), Utah. He sits on the board of TekkiGurus, is an advisor for both revealit.TV and WellnessWits, and provides channel and marketing services for Microsoft partners. He hosts the quarterly #CollabTalk TweetJam, the weekly #CollabTalk Podcast, and the Microsoft 365 Ask-Me-Anything (#M365AMA) series.