What Motivation Researchers Can Learn from Professional Sports Teams


[Header image: https://www.geckoboard.com/assets/hero-publication-bias.png]

Authored by Eric Ekholm

As a lifelong fan of Washington D.C. sports teams, I can confidently say that there are plenty of aspects of the Washington Redskins that motivation researchers don’t want to emulate: resistance to change, cultural insensitivity, and a recent history dominated by mediocrity and irrelevance, just to name a few. And though we’re better off not following their example in these respects, there is at least one thing we can stand to learn from Washington football. We can learn to be more open about our losses.

On a Monday morning not too long ago, I was sitting at my kitchen table enjoying a cup of coffee, lazily eating my breakfast, and scrolling through my Instagram feed. Amidst posts from friends, family, local breweries, famous French bulldogs (check out Walter Goodboy if you’re not already following him), NPR, and CrossFit athletes, I saw a post from the Washington Redskins. They’d played, and lost, the day before, and their Instagram post acknowledged that loss to the Colts via a picture of several Washington players standing on the sidelines and a caption soberly stating “Final: Colts – 21, Redskins – 9.” At first, I thought this was strange – why would their Instagram account post the final score of a game they’d lost? Why not just let it go unspoken? But the more I thought about it, the more it seemed, if not noble, then at least respectable, that this organization would publicize an outcome that didn’t directly benefit them.

So how does this relate to education? The recent replication crisis has been widely publicized across academic disciplines, and education is no exception. For those unfamiliar, the problem is basically that when researchers use similar samples and copy the methods, measures, and procedures of previous studies, they often fail to reproduce those studies’ results. This has led some to conclude that many of these previous findings were “false,” little more than products of chance (see, e.g., Ioannidis, 2005). One likely mechanism is selective reporting: researchers didn’t publish their null findings but did publish their significant findings, which left the body of literature bloated with studies reporting false positives. Reviews of primary education research and education meta-analyses have found that unpublished studies tend to report significantly smaller effect sizes than published studies do (e.g., Cheung & Slavin, 2016; Chow & Ekholm, 2018), which is consistent with the idea, often referred to as publication bias, that nonsignificant findings often don’t make their way into the published literature, even when they represent the true effect of a given phenomenon.
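
To see concretely how this kind of selection distorts a literature, here’s a minimal simulation sketch. The true effect, the sample sizes, and the rule that only positive, significant results get “published” are all invented for illustration; nothing here comes from an actual study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.1   # small true standardized effect (illustrative assumption)
N_PER_GROUP = 30    # per-group sample size in each hypothetical study
N_STUDIES = 5000    # number of simulated studies

all_effects = []        # effect estimates from every study
published_effects = []  # only studies that clear p < .05 with a positive effect

for _ in range(N_STUDIES):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    # With both groups drawn with SD = 1, the raw mean difference
    # approximates a standardized effect size (Cohen's d).
    d = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(d)
    if p < 0.05 and d > 0:  # the "file drawer": only these get published
        published_effects.append(d)

print(f"True effect:              {TRUE_EFFECT:.2f}")
print(f"Mean of all studies:      {np.mean(all_effects):.2f}")
print(f"Mean of 'published' only: {np.mean(published_effects):.2f}")
```

Even though every simulated study is run honestly, the average of the “published” results lands several times above the true effect, because with only 30 participants per group, only unusually large sample estimates reach significance.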

Let’s return to football for an example of how this might play out. Imagine we want to figure out who the best team in the NFL is. Or, similarly, imagine we want to figure out whether a given team is any good. The simplest way would be to compare a team’s number of wins to its number of losses, although this season I suppose we’d have to consider ties as well. But now imagine that teams didn’t have to report their losses and, consequently, we didn’t know how many games each team had played, so when scanning team records, all we’d get is the W column (I realize that in the NFL, teams play each other, so there’s a zero-sum win-loss thing going on, but just indulge me here). How would we know who’s good? Is a team with 8 wins good? Are they better than a team with 7 wins? What about a team with 11 wins? Is the best team simply the one with the most wins? Without all of the information available, it’s pretty tough to answer these questions definitively, and yet this is essentially what we try to do in much of our research.
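
To make the point concrete, here’s a quick sketch with made-up records. Ranked by the W column alone, the ordering comes out exactly backwards from what we’d get with full records:

```python
# Hypothetical records (invented for illustration, not real NFL results).
records = {
    "Team A": (8, 2),    # 8 wins, 2 losses  -> .800 win rate
    "Team B": (11, 5),   # 11 wins, 5 losses -> .688 win rate
    "Team C": (7, 1),    # 7 wins, 1 loss    -> .875 win rate
}

# Ranking by the W column alone (the only view we'd have if losses went unreported).
by_wins = sorted(records, key=lambda t: records[t][0], reverse=True)
print("By wins only:   ", by_wins)   # ['Team B', 'Team A', 'Team C']

# Ranking by win percentage, possible only when losses are reported too.
by_pct = sorted(records, key=lambda t: records[t][0] / sum(records[t]), reverse=True)
print("By win percent: ", by_pct)    # ['Team C', 'Team A', 'Team B']
```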

But how does this relate to motivation research? As motivation researchers, we understand that motivation is critical for student learning and achievement. Even the brightest students won’t reach their academic potential unless they’re motivated to do so. Accurate understandings of the factors that relate to student motivation, as well as of the effectiveness of interventions designed to foster it, are therefore crucial. If we systematically overestimate the magnitude of these relations or the effects of these interventions, we do a disservice to students who could otherwise benefit from more valid research or better interventions. As Katerina Schenke wrote in her recent MotSig blog, foundations, schools, and government agencies are investing heavily in research and products relating to student motivation and social-emotional learning. Further, teachers consider student motivation so important that they often factor it into students’ grades (Brookhart et al., 2016). Clearly, there is a hunger for motivation research. But, as the providers of this research, we need to be cognizant of what, and how, we’re feeding this hunger, particularly if we want practitioners and policymakers to look to us for sustenance.

I don’t mean to suggest that motivation researchers are more likely to selectively report findings than are other researchers or that issues of publication bias are unique to motivation research. We aren’t and they’re not. What I do want to emphasize, though, is that because there is such a demand for motivation research currently, particularly in STEM domains, the implicit pressure to churn out publications may be greater, and we should be vigilant of how this pressure might affect our research practices.

For better or worse, the NFL is way more popular than motivation research. Expectancy-value theory doesn’t get hours and hours of dedicated Sunday-afternoon programming each fall, and nobody spends their free time in August preparing for their motivation-theorist fantasy draft. As a result of this popularity, the NFL has its own sort of built-in research registry – people know when NFL teams play, and it’s incredibly easy to figure out who won, lost, or… tied. This isn’t so much the case for motivation research. The Society for Research on Educational Effectiveness (SREE) just launched an updated registry of efficacy and effectiveness trials, but many studies will still fall outside it, particularly given that many studies of motivation are not RCTs.

This is where motivation researchers can follow the example of professional sports teams’ social media departments and report our “losses.” Just as the Washington Redskins, Capitals, and Wizards post the final scores of the games they lose as well as the games they win, we need to report all of our results. This means keeping variables with nonsignificant effects in our regression models. It means publishing (or at least trying to publish) the results of intervention studies that didn’t quite yield the effects we expected. It means embracing research as a prolonged enterprise in figuring out what doesn’t work and being open about that. After all, the goal of motivation research is a noble one – to help instill in students a passion for learning. But we may forfeit our claims to any sort of nobility if our reporting practices are less transparent than those of multibillion-dollar, profit-driven organizations.

About the author

Eric Ekholm is currently a PhD student in educational psychology at Virginia Commonwealth University in Richmond, VA. His research centers on writing processes, particularly on writers’ motivational and affective processes, as well as on advancing quantitative methods in primary studies and reviews. Before returning to school to pursue his PhD, Eric taught 8th grade English Language Arts for four years in Chesterfield County, VA. In his free time, Eric enjoys running, reading, and playing with his two dogs.

Special shout out to Jason Chow for reading and giving feedback on an earlier draft of this piece.