The Prevalence of Questionable Research Practices (QRPs) in Education Research

A new study of education researchers shows that use of QRPs is not uncommon

The past decade has been hard on the social sciences, with psychology predominantly in the spotlight for its replication crisis, which became mainstream news in 2015. Although psychology has taken the brunt of public criticism – while also taking the lead on field-wide reforms with the open science movement – other social science disciplines are also beginning to take a hard look in the mirror and evaluate the state of their field.

The blame for the replication crisis in psychology and more broadly the social sciences has fallen largely on methodological practices (with some recent focus on theoretical problems too). Social science research can get messy because there are no hard rules for how to design studies, measure variables, or analyze data. Because there are no hard rules, there are a considerable number of decisions that can be made throughout the research process that can impact the results of the study, or what are sometimes referred to as “researcher degrees of freedom”.

In addition to the endless decision tree across the research process, there are systemic issues across the sciences and academia that incentivize the “wrong” things in science, such as publication quantity and positive results bias in publishing, that to some extent drive the use of methodological practices that fall into “gray areas” of social norms. These practices, now commonly referred to as “questionable research practices” (QRPs), include omitting non-significant variables from analyses or non-significant studies from papers, peeking at data during data collection, and post-hoc hypothesizing about results. Using a combination of these types of practices can nearly guarantee a researcher a positive result that is more likely to be published in scholarly journals.
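To see why these practices matter, consider data peeking (also called optional stopping): testing for significance repeatedly as data come in and stopping as soon as the test is significant. A minimal simulation sketch – not from the paper, just an illustration using a simple z-test on null data – shows how this inflates the false-positive rate well above the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(0)

def z_significant(x):
    # Two-sided z-test of mean = 0 at alpha = .05, assuming known sd = 1.
    z = x.mean() * np.sqrt(len(x))
    return abs(z) > 1.96

def simulate(n_sims=5000, checkpoints=(20, 40, 60, 80, 100)):
    """Compare a single fixed-n test to 'peeking' at several interim points.

    The null hypothesis is true in every simulated study, so any
    significant result is a false positive.
    """
    fixed_hits = peek_hits = 0
    for _ in range(n_sims):
        data = rng.standard_normal(checkpoints[-1])
        # Honest analysis: test once, at the planned sample size.
        fixed_hits += z_significant(data)
        # Data peeking: test at every checkpoint, claim success if ANY hits.
        peek_hits += any(z_significant(data[:n]) for n in checkpoints)
    return fixed_hits / n_sims, peek_hits / n_sims

fixed_rate, peek_rate = simulate()
print(f"fixed-n false-positive rate: {fixed_rate:.3f}")  # near the nominal .05
print(f"with data peeking:           {peek_rate:.3f}")  # substantially higher
```

Five interim looks roughly double-to-triple the false-positive rate; stacking several QRPs compounds the inflation further, which is the sense in which a positive result can be nearly guaranteed.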

Since the impact of these methodological problems on the scientific knowledge base has become well-known, an open science movement has taken hold of the social sciences. Open science practices, such as preregistering hypotheses, sharing materials, incentivizing replication research, and posting pre-prints, have quickly become normative. It is the hope that such open and transparent practices (practices that fields like physics have used for decades) will increase the reliability and credibility of the social sciences.

The changes over the past decade have also spurred an increase in “meta-science”: research on research itself, which studies a field’s researchers and practices to evaluate its current state and shifts in its social norms.

A new study in Educational Researcher has for the first time evaluated the prevalence of QRPs in the education research field. Education research, like psychology research, faces a number of problems, including small effect sizes, low power, and, as this study shows, the use of QRPs.


Makel and colleagues surveyed authors of articles in leading education research journals that were published in the last decade. After sending more than 14,000 emails to authors, a total of 1,488 researchers responded to the survey. The survey aimed to evaluate how often authors reported using QRPs themselves, their estimates of the prevalence of QRPs within their field, and whether each QRP was acceptable to ever use. The authors also evaluated the prevalence and use of new open science practices as a way to gauge how the methodological reform movement is progressing in education research.

The table below summarizes the main results. I have the “abbreviated” QRP label highlighted to make it easier to see (for those unfamiliar with specific QRPs, check out the “Item Stem” column to the left for a description). The three columns of percentages to the right of the highlighted QRP correspond to survey respondents’ average estimate of how prevalent they believe these practices to be in their field, the percent of respondents that reported engaging in the practice at least once, and the percent of respondents that say the practice should never be used.

The results are a bit uninspiring, yet they provide the baseline data needed to advance change in the field. Overall, the estimated prevalence suggests that use of QRPs is pretty common in the education research field, especially practices such as omitting variables, analyses, and/or whole studies from publications (also referred to as “selective reporting”). Other QRPs were estimated to be lower in prevalence, such as filling in missing data without sharing the methods, data peeking, and data exclusion to achieve statistical significance.

(Un)interestingly, respondents on average reported engaging in most QRPs less often than they think their colleagues do. In other words, everyone is above average in research integrity! The notable exceptions to this are selective reporting of analyses, variables, and studies, which until recently were highly accepted and normative practices in science publishing; and “analysis gaming” whereby researchers change analyses to more favorably or accurately describe the data.

Finally, the results pertaining to the percentage of respondents that indicate such QRPs should never be used demonstrate why QRPs are called what they are – questionable – because there is no clear consensus on what practices are always bad! Social science, remember, doesn’t have hard rules for how to conduct research and analyze data so there are many situations where there are valid reasons to change an analysis plan, or omit variables, for instance.

This survey also shows promising results about open science practices. Open science practices are quite common, even if QRPs are, too! And importantly, only a small number of respondents think that open science practices should “never be used”. Why that is the case, I have no idea, but the authors note in their paper that they have another manuscript forthcoming in which they report respondents’ open-ended explanations as to why they think such practices should never be used.

It should be noted, however, that the sample here is self-selected from the education researcher population, so it’s not entirely clear whether the prevalence estimates truly reflect the state of the field, or whether respondents over- or under-report what is actually happening.

This type of meta-research is important for fields to evaluate change in social norms and methodological practices over time. It also shows that education research is, in fact, a social science discipline with similar problems and practices as other fields, like psychology.