Book Club: Weapons of Math Destruction

Joining Bryan Alexander’s book club has been on my to-do list for quite a while now. With grad school and other “life stuff” I haven’t been able to make it happen – until now! Cathy O’Neil’s Weapons of Math Destruction was on my must-read list, and when I saw Bryan’s post confirming it as the next selection, I jumped right in. I’ve finished the book now and wanted to share my thoughts and my responses to the provided book club discussion questions.

First off, I did like the book quite a bit. It was illuminating to get a data scientist’s insider perspective on algorithms and the myriad ways they operate just beneath the surface of our everyday lives; they influence our interactions with institutions, mediate our transactions, and shape our perceptions by determining the media content we see. The danger, O’Neil warns us, is when algorithms become Weapons of Math Destruction – automated decision makers that codify human bias or prejudice into unassailable mathematical facts of life. These systems don’t bother to correct misconceptions that lead to unfair outcomes, and their inner workings are kept secret by their corporate masters (or, frighteningly, are ill-understood even by their creators). Most of all, O’Neil contends that these WMDs tend to punish or exploit the poor and marginalized, while favoring the privileged, who can often count on access to an empathetic human decision-maker instead of an indifferent mathematical formula.

Throughout the book, O’Neil cites cases in which algorithms are used to automate and optimize the process of economic and social stratification. From credit scoring and law enforcement to hedge funds and predatory lending, from college admissions to retail worker scheduling – over and over we see systematized processes that make the rich even richer and exclude or abuse the downtrodden.

The book invites, but never quite answers, the question: is technology inherently good or bad? Are the tools, the tool-makers, or the tool-wielders at fault? O’Neil suggests there is plenty of blame to go around: the unstated aim of the powerful and wealthy is often to maintain their privileged position at the top of the heap (Sociology 101!), and ambitious or opportunistic firms are eager to sell algorithmic solutions that promise to “solve” difficult social problems but are not as bulletproof as advertised. Ultimately, O’Neil says, we need to “stop relying on blind faith and start putting the ‘science’ back into data-science.” (p. 219)

Discussion Questions

  • How can political campaigns best use big data and data analytics without causing harm?

Algorithms and analysis of big data can help actors achieve their aims in a more accurate, automated, and efficient way. O’Neil’s big question is “what are those aims?” I saw few examples in the book where the unit deploying an algorithm genuinely aimed to serve the greater good yet saw its noble goals go awry due to bad data science. Perhaps this is because transparency and the opportunity for feedback tend to accompany ethically deployed algorithms.

  • Which educational uses of algorithms actually benefit learners?

I think there is room for algorithms in education when they are complementary to learning – particularly in informal learning scenarios where there is no time/money/opportunity for more in-depth instruction. Duolingo is a great example of a tool that provides additional learning opportunities outside of the classroom that might not exist otherwise.

  • Which actors (agencies, nonprofits, companies, scholars) are best placed to help address the problems O’Neil identifies?

I ran across Gobo from MIT’s Media Lab, which is an interesting example of a counter-algorithm designed to let you customize your social media feeds. Are open source, transparency, and more user control a step in the right direction?
