I just became president of a scientific society dedicated to understanding human motivation, and my first official act was to sit through a keynote delivered by one of my heroes that made me want to stand up and shout “I don’t fully agree!”

I recently returned from a trip to Washington, DC, where I attended the annual meeting of the Society for the Science of Motivation. What has stayed with me the most is a talk delivered by one of the OGs of motivation research, Arie Kruglanski, that felt like a lament for the decline of grand theorizing. In his view, social psychology (and motivation science) needs more theory. Big theory. Grand unifying frameworks to organize our scattered findings into something coherent and meaningful. The field, he argued, has become so fixated on rigor and methodological reform that it’s become distracted from the real work of theorizing. All this preregistration and replication stuff? That’s nice, but where are the bold ideas and sweeping explanations? Kruglanski lamented that without big new theories, social psychology is stagnating.

But what if the opposite is true? What if social psychology’s problem isn’t too little theory, but too many theories proposed too hastily?

Anthony Greenwald—the guy who gave us the Implicit Association Test—once suggested that theories can actually stifle progress. Once a theory is out there, you start seeing confirming evidence everywhere, even when it's weaker than the Nescafe “coffee” my mother drinks. And when stronger evidence comes along that doesn't fit? Well, there must be something wrong with the data, not the Big Beautiful Theory.

We all fall prey to confirmation bias, even those of us who study it for a living. Researchers become so invested in their theoretical hunches that they'll keep tweaking experiments, massaging methods, and reframing results until they get what they expect, essentially bending their methods to confirm their pet hypotheses.
It's like that rug in The Dude's apartment: the theory really ties the room together, even when it's covering up stains in the floorboards. And if the theory is appealing—because it is intuitive or elegant or socially compelling—it can be seductive, discouraging dissent and surviving on its rhetorical appeal or supposed importance even as the evidence starts to look threadbare.

I've lived through the spectacular collapse of two theories I once believed in: stereotype threat and ego depletion. Both were elegant, intuitive, and socially important. Both inspired hundreds of studies. And both turned out to be largely bullshit.

Imagine an alternative timeline where these effects were discovered without the grand theorizing. "Huh, some women seem to perform slightly worse on math tests under certain conditions when we remind them of negative stereotypes. Weird. Wonder if anyone else can find this?" If we'd approached it as a curiosity rather than a revolution in understanding inequality, maybe we would have noticed how fragile the initial evidence was. Maybe we would have demanded better replications before theorizing and building entire interventions around it.

But no. We canonized these theories before we rigorously tested them. And, once canonized, theories stop serving truth and start serving the reputations of their creators.

Our top journals actively encourage this theoretical gold rush, demanding that every paper advance or propose a new theory. Try submitting a careful descriptive study and watch reviewers ask: "But what theory does this advance?" As if documenting something interesting about human behavior isn't valuable unless it moves some theoretical ball forward.

I don't mean to imply that theory is unimportant. But when your celebrated theoretical work turns out to be an elaborate castle built on data that was tortured into compliance, you get humble real fast.
Perhaps rushing to theorize—before understanding what you’re explaining—just to get published in top journals is not ideal. What’s the alternative? Maybe we aim for something more modest, but no less important: describing interesting phenomena accurately and measuring them with fidelity.

Paul Rozin reminds us that fields like biology spent centuries in a descriptive phase, patiently collecting facts, before their theoretical revolutions. Scientists often proceed more abductively, guided by informed curiosity: their hunches and speculations guide them as they poke around in the natural world, collecting descriptive data, and only later building models.

I'm not suggesting we can observe without any theoretical hunches. Philosophers of science remind us that all observations are theory-laden[1]. You need some provisional hypotheses to know what's worth studying, what’s worth observing. But there's a crucial difference between having research questions and committing to elaborate theoretical mechanisms. The former is abduction: forming a tentative hypothesis to guide observation, ready to revise based on what is observed. The latter is a theoretical commitment that can blind us to what we're seeing. And once we make that commitment, especially if we have not mapped out and observed the phenomenon fully, psychological forces take over.

But descriptive research alone isn't enough. Anthony Greenwald argues that the key to progress is methodological: we need better tools for discovering things before we start theorizing about them. Think about it: when have breakthroughs occurred in physics and biology? Usually when someone invented a better telescope or microscope, not when they spun a cleverer theory. Great methods beget great theories because they generate the accurate observations that theories need to explain. Indeed, some of psychology's greatest breakthroughs came not from theoretical leaps, but from careful observation and methodological innovation.
The Stroop effect wasn't discovered by someone testing a theory about attention and automaticity—it emerged from J. Ridley Stroop's simple observation that people struggle to name the colors of words when the word and color don't match. That basic descriptive finding and new tool—not theoretical insight—opened up decades of research on cognitive control.

Or consider patient H.M., whose profound amnesia after brain surgery revealed the distinction between different memory systems. McGill University’s Brenda Milner and her colleagues weren't testing