The Prevalence of Questionable Research Practices (QRPs) in Education Research

A new study of education researchers shows that the use of QRPs is not uncommon

Nicole Barbaro, Ph.D.
Apr 27, 2021

Actual picture of a scientist (identity concealed) omitting a variable from their analysis.

The past decade has been hard on the social sciences, with psychology predominantly in the spotlight for its replication crisis, which became mainstream news in 2015. Although psychology has taken the brunt of public criticism – while also taking the lead on field-wide reforms with the open science movement – other social science disciplines are also beginning to take a hard look in the mirror to evaluate the state of their fields.

The blame for the replication crisis in psychology, and the social sciences more broadly, has fallen largely on methodological practices (with some recent focus on theoretical problems, too). Social science research can get messy because there are no hard rules for how to design studies, measure variables, or analyze data. Because there are no hard rules, a considerable number of decisions made throughout the research process can impact the results of a study – what are sometimes referred to as “researcher degrees of freedom”.

In addition to the endless decision tree across the research process, there are systemic issues across the sciences and academia that incentivize the “wrong” things in science, such as publication quantity and positive-results bias in publishing, and that to some extent drive the use of methodological practices falling into “gray areas” of social norms. These practices, now commonly referred to as “questionable research practices” (QRPs), include omitting non-significant variables from analyses or non-significant studies from papers, peeking at data during data collection, and post-hoc hypothesizing of results. Using a combination of these practices can nearly guarantee a researcher a positive result, which is more likely to be published in scholarly journals.
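
As a concrete illustration, here is a minimal simulation sketch (not an analysis from any study discussed here) of how just two of these practices, data peeking and selective reporting of outcomes, can push the false-positive rate well above the nominal 5% even when no true effect exists. The group sizes, peeking schedule, and number of outcome variables below are arbitrary assumptions for demonstration.

```python
# Minimal sketch (illustrative only): combining two QRPs, data peeking and
# selective reporting of outcomes, inflates false positives. All parameters
# (group size, peeking schedule, number of outcomes) are arbitrary
# assumptions; the true effect is zero throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def one_study(n_max=100, peek_every=10, n_outcomes=2, alpha=0.05):
    """Run one simulated two-group study with no real group difference."""
    a = rng.normal(size=(n_max, n_outcomes))  # group A, several outcome measures
    b = rng.normal(size=(n_max, n_outcomes))  # group B, several outcome measures
    # QRP 1: peek at the data every `peek_every` participants per group.
    for n in range(peek_every, n_max + 1, peek_every):
        # QRP 2: test every outcome and report whichever comes out "significant".
        for k in range(n_outcomes):
            _, p = stats.ttest_ind(a[:n, k], b[:n, k])
            if p < alpha:
                return True  # stop collecting and report the positive result
    return False

n_sims = 5000
rate = sum(one_study() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with these two QRPs: {rate:.1%}")
# Without the QRPs the rate would sit near 5%; with them it is typically
# several times higher in this setup.
```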

Since the impact of these methodological problems on the scientific knowledge base has become well known, an open science movement has taken hold of the social sciences. Open science practices, such as preregistering hypotheses, sharing materials, incentivizing replication research, and posting pre-prints, have quickly become normative. The hope is that such open and transparent practices (practices that fields like physics have used for decades) will increase the reliability and credibility of the social sciences.

The changes over the past decade have also led to an increase in research that is focused on research itself, referred to as “meta-science”. Meta-science studies research practices and researchers in order to evaluate the state of a field and changes in its social norms.

A new study in Educational Researcher has, for the first time, evaluated the prevalence of QRPs in the education research field. Education research, like psychology research, faces a number of problems, including small effect sizes, low statistical power, and, as this new study shows, the use of QRPs.
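
To put “small effect sizes and low power” in concrete terms, here is a back-of-the-envelope power calculation using illustrative numbers rather than figures from the study: a conventionally “small” effect of d = 0.2 tested with 50 participants per group yields power of only about 17%, and roughly 390 participants per group would be needed to reach the conventional 80% power for that same effect.

```python
# Quick power calculation (assumed numbers, not figures from Makel et al.):
# a "small" standardized mean difference of d = 0.2 with 50 people per group.
from statsmodels.stats.power import TTestIndPower

ttest_power = TTestIndPower()

# Power of a two-group t-test with d = 0.2, n = 50 per group, alpha = .05
power = ttest_power.solve_power(effect_size=0.2, nobs1=50, alpha=0.05)
print(f"Power with d = 0.2 and n = 50 per group: {power:.2f}")  # ~0.17

# Sample size per group needed to reach the conventional 80% power
n_needed = ttest_power.solve_power(effect_size=0.2, power=0.8, alpha=0.05)
print(f"n per group needed for 80% power: {n_needed:.0f}")  # ~393
```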

Selected results from Table 1 of Makel et al. 2021. Full table presented below.

Makel and colleagues surveyed authors of articles published in leading education research journals over the past decade. After the team sent more than 14,000 emails to authors, a total of 1,488 researchers responded to the survey. The survey aimed to evaluate how often authors reported using QRPs themselves, their estimates of the prevalence of QRPs within their field, and whether each QRP is ever acceptable to use. The authors also evaluated the prevalence and use of new open science practices as a way to gauge how the methodological reform movement is progressing in education research.

The table below summarizes the main results. I have highlighted the “abbreviated” QRP label to make it easier to see (for those unfamiliar with specific QRPs, check out the “Item Stem” column to the left for a description). The three columns of percentages to the right of the highlighted QRP correspond to survey respondents’ average estimate of how prevalent they believe each practice to be in their field, the percent of respondents who reported engaging in the practice at least once, and the percent of respondents who say the practice should never be used.

Table 1 from Makel et al.

The results are a bit uninspiring, yet they provide the useful baseline data needed to advance change in the field. Overall, the estimated prevalence suggests that QRPs are pretty common in the education research field, especially practices such as omitting variables, analyses, and/or whole studies from publications (also referred to as “selective reporting”). Other QRPs were estimated to be less prevalent, such as filling in missing data without sharing the methods, data peeking, and excluding data to achieve statistical significance.

(Un)interestingly, respondents on average reported engaging in most QRPs less often than they think their colleagues do. In other words, everyone is above average in research integrity! The notable exceptions to this are selective reporting of analyses, variables, and studies, which until recently were highly accepted and normative practices in science publishing; and “analysis gaming”, whereby researchers change analyses to more favorably or accurately describe the data.

Finally, the results pertaining to the percentage of respondents who indicate such QRPs should never be used demonstrate why QRPs are called what they are – questionable – because there is no clear consensus on which practices are always bad! Social science, remember, doesn’t have hard rules for how to conduct research and analyze data, so there are many situations in which there are valid reasons to change an analysis plan or omit variables, for instance.

Selected results from Table 1 of Makel et al. 2021. Full table presented above.

This survey does show promising results about open science practices. Open science practices are quite common, even if QRPs are, too! And importantly, only a small number of respondents think that open science practices should “never be used”. Why that is the case, I have no idea, but the authors note in their paper that they have another manuscript forthcoming in which they report respondents’ open-ended explanations of why they think such practices should never be used.

It should be noted, however, that the sample here is self-selected from the population of education researchers, so it’s not entirely clear whether the prevalence estimates truly reflect the state of the field or whether they over- or under-report what is actually happening.

This type of meta-research is important for helping fields evaluate changes in social norms and methodological practices over time. It also shows that education research is, in fact, a social science discipline with problems and practices similar to those of other fields, like psychology.
