Over the past six months, I’ve been drowning in evaluation hell. I had to assess 30 students for a prestigious federal award, ranking them against each other on multiple criteria. I also had 200 job applications that needed to be sorted, compared, and whittled down to a shortlist. I did all of this the old-fashioned way—reading files, taking notes, making gut calls. And I did not ask AI for assistance.

But I’m no longer convinced that was the responsible choice. Like Paul Bloom, I am starting to wonder whether not using AI in such cases is really the right thing to do. After all, if I had a well-written rubric with specific criteria, I’m pretty sure AI would do a better job than I would. It would be more consistent, less biased by irrelevant factors, and certainly not influenced by whether the Eagles are playing on the radio while it’s grading. This bothers me. A lot.

It made me think back to a piece I wrote last year about predictive AI, and why I think the anti-AI crowd has it wrong. So here is that essay again, with just a few edits.

It’s fashionable these days to be anti-AI. If you don’t believe me, check out this social media mob that tore into my University of Toronto colleague for having the audacity to promote an AI-powered educational tool.

Critics are right to call out the serious problems with AI, like biases in U.S. healthcare decisions, discriminatory fraud detection in the Netherlands, or even error-prone search results. These issues are real and demand attention. But bias, discrimination, and errors aren’t unique to AI; they’re deeply embedded in human decision-making, too.

If you think humans are the gold standard, I’ve got a bridge to sell you. Our judgments are inconsistent, riddled with bias, and nearly impossible to fix. The real question isn’t whether predictive AI is perfect, but whether it’s better than the messy systems we’re already using. Spoiler: it usually is.

That question—whether AI is an improvement over the status quo—was on my mind when I picked up AI Snake Oil, a book by Princeton computer scientists Arvind Narayanan and Sayash Kapoor. The title alone promised a scathing critique of the technology, and I wanted to see if their arguments would challenge my perspective. Was I just an unapologetic AI stan after all? Determined to keep an open mind, I dove in, ready to hear from experts.

The authors didn’t disappoint in their skepticism. They didn’t just criticize AI; they came out swinging. The book covers a lot of ground, and I found the chapter on the history of AI especially enlightening. I learned a ton. But when they turned to evaluating the technology itself, the gloves came off.

To their credit, they acknowledge AI’s strengths in areas like autocomplete, chatbots, and even some uses of generative AI for coding and images. But when it comes to predictive AI, their critique takes a sharp turn. They don’t just find flaws. They call for the outright abandonment of predictive AI—if not of prediction altogether.

This is where I part ways with them. Prediction itself, even if flawed, is essential. Decisions need to be made. Who gets admitted to an ICU when there aren’t enough beds? Who is likely to reoffend and should not be released on parole? Who gets hired for a job when there are hundreds of applicants? Prediction is how we allocate scarce resources, prioritize care, and maintain justice and fairness.

But what do we mean by prediction here? Prediction is about using data to anticipate outcomes and guide decisions in structured ways. For instance, it could mean predicting which ICU patient is likeliest to recover, which parolee poses the least risk, or which refugee might face persecution if returned to their home country. Prediction isn’t about certainty—it’s about estimating probabilities to make decisions that matter. These decisions are high-stakes, and without prediction, they’d often become arbitrary, relying on gut feelings or flawed human judgment. AI enhances this process by handling complexity, processing massive datasets, and reducing bias. It doesn’t eliminate the challenges of prediction, but it allows us to make better, more consistent decisions than humans alone. This is why prediction isn’t just essential—it’s unavoidable.

When it comes to making better decisions, research makes it clear that algorithms are the way to go. The term algorithm gets thrown around like it’s synonymous with computers and AI, but it originally had nothing to do with machines. An algorithm is simply a set of steps or rules used to make a decision. We could create and implement one with nothing more than a pencil and paper.

Take graduate admissions. Instead of relying on unstructured interviews where we decide based on whether we like someone or deem them a good fit, we could create a decision rule. Assign each applicant a score out of 10 on criteria like undergraduate grades, research experience, writing ability, math skills, ability to work with others, even likability. Then give each criterion a weight, say, 25% for grades, 15% for likability, and so on. A little algebra later, and you’ve combined the scores to compare candidates and select the highest-scoring ones. Congratulations! You’ve just created and implemented an algorithm. It’s not perfect, but it’s far more objective and less biased than gut feelings or intuition.

I know this from personal experience. When I evaluate students or job candidates—which I do regularly—I can feel my judgments shifting based on completely irrelevant factors. The same application looks different if I read it first thing or it’s the last application in my pile. A candidate who reminds me of a former successful student gets an unfair boost. Someone whose writing style annoys me gets dinged for reasons that have nothing to do with their actual qualifications. An algorithm doesn’t have these problems.
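To make the admissions rubric above concrete, here is a minimal sketch in Python of that kind of decision rule. The criteria match the ones mentioned in the example, but the specific weights, applicant names, and scores are placeholders I’ve made up for illustration, not data from any real process.

```python
# A paper-and-pencil "algorithm" made explicit: score each applicant on a few
# criteria, weight the criteria, and rank by the weighted sum.

# Hypothetical weights; they must sum to 1.0, but the exact values are an assumption.
WEIGHTS = {
    "grades": 0.25,
    "research_experience": 0.20,
    "writing": 0.20,
    "math": 0.20,
    "likability": 0.15,
}

# Hypothetical applicants, each scored 0-10 on every criterion.
applicants = {
    "Applicant A": {"grades": 9, "research_experience": 6, "writing": 8, "math": 7, "likability": 5},
    "Applicant B": {"grades": 7, "research_experience": 9, "writing": 6, "math": 8, "likability": 8},
    "Applicant C": {"grades": 8, "research_experience": 7, "writing": 7, "math": 6, "likability": 9},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single number using the fixed weights."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Rank applicants from highest to lowest combined score.
ranking = sorted(applicants.items(), key=lambda item: weighted_score(item[1]), reverse=True)

for name, scores in ranking:
    print(f"{name}: {weighted_score(scores):.2f}")
```

The same arithmetic could be done with a pencil, a calculator, or a spreadsheet; writing it as code just makes the rule explicit, repeatable, and identical for every applicant, which is exactly the property my own tired, coffee-dependent judgment lacks.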