
Meta-reproducibility crisis: Software edition

It’s not just a ‘replication crisis’, it’s a reproducibility crisis. Forget about falsification if one cannot even reproduce the workflow of another. Efforts are underway to improve the transparency that makes reproduction possible, which is good. But what if, even with the original data, researchers cannot reproduce numerical results simply because of where they are in time and space? What if different statistical programming languages, or different packages and versions of those languages, actually lead to different findings? Based on results of the Crowdsourced Replication Initiative, we demonstrated that even when using the same data and models, only 80% of independent replicator teams could reproduce the numerical results, and that was with access to the original Stata code.

Based on my experiences in this area, I cannot help but have a strong hunch that software and package versions threaten the computational reproducibility of our research. The fact that Stata rounds .5 up while R rounds half to even (so 0.5 becomes 0 but 1.5 becomes 2) already suggests that the same models might produce different results across software simply from rounding variation. Recently, I found another reason to believe this hunch.
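As a quick illustration of the rounding point, here is a minimal sketch in R; the Stata behavior in the final comment is taken from the comparison above rather than run here.

    # R follows IEC 60559 ("round half to even"), so ties do not always round up.
    round(0.5)  # 0
    round(1.5)  # 2
    round(2.5)  # 2
    # Stata, as noted above, rounds .5 up, so the same tie-breaking value can
    # land on a different integer across the two programs.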

I am a participant in SCORE, a massive collaboration to systematically investigate the reproducibility of social science research – led by the Center for Open Science. I agreed to attempt a computational reproduction of the study “Age, Inequality, and Reactions to Marketization in Post-Communist Central and Eastern Europe” by Horvat and Evans (2011).

Despite some potential differences between the data I received from the original authors and those reported in their tables, I was able to produce similar results. Similar, but not especially close. The particular coefficient I was interested in came out at -0.39 in my ordered probit (cumulative link) model, whereas their original was -0.22. As a probit coefficient, or translated into a predicted probability, this is potentially a large difference. I used R, but was not satisfied with my results. The original authors most likely used Stata, as their public data were a .dta file – Stata’s native file storage format. Also, let’s be honest, hardly any social scientist was using R before 2010, when they likely ran their analyses. My curiosity and suspicion led me to run the model in Stata. Here I got a coefficient of -0.21. Almost identical to their original study, and not surprising given the minor variation between the case numbers they reported and those in the public data file. I was still not convinced that these coefficients were actually different, because the models were quite complex. So I plotted the predicted probabilities to be sure that this was not a statistical artifact of some model component (the intercept cut-points, for example).

But my hunch was supported. The same models run in R and Stata led to different predicted probabilities. The figure below shows that, across many post-communist Central and Eastern European countries, those aged 60+ were less optimistic about their living standards over the next 5 years. One of the paper’s critical findings was that this gap increased from 1993 to 2007, in particular because the 60+ group became even less optimistic; although in fairness this is difficult to conclude because of all the other variables in the model. What is clear is that the two groups were 0.45 points apart in 1993 and 0.65 apart in 2007 (see Table 5 in the original study). When I plot the predicted values, I find that the results are similar but not identical in terms of the 1993-to-2007 change, and that the probabilities are quite different across software.

I used Stata v15 with the ‘oprobit’ command, and R’s ‘MASS’ package with the ‘polr’ function. Despite identical data (and case numbers!), the ‘polr’ routine predicts a much higher probability than Stata that respondents answer that their standard of living will fall or fall a great deal. Although the relative change between age groups is similar – with a slightly steeper negative slope in R – the lines are pretty far apart, and even further apart for the 60+ age group. Without unpacking each package and the exact estimation strategy taking place inside it, I cannot yet say why. My own statistical and software abilities are by no means exceptional, but they are certainly above those of the average social scientist. Thus, it would be totally unrealistic to expect any social scientist to understand the entire routine taking place within polr or oprobit. If they did, they would not need these tools and could just write their own cumulative link routine!
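For readers who want to try a comparison like this themselves, below is a minimal sketch of the R side on simulated data; the variable names and response categories are placeholders, not the actual variables from Horvat and Evans (2011).

    library(MASS)

    # Simulated stand-in data; a real reproduction would load the authors' survey file.
    set.seed(1)
    dat <- data.frame(
      age_group = factor(sample(c("under60", "60plus"), 500, replace = TRUE)),
      year      = factor(sample(c("1993", "2007"), 500, replace = TRUE))
    )
    latent <- -0.5 * (dat$age_group == "60plus") + 0.3 * (dat$year == "2007") + rnorm(500)
    dat$outlook <- cut(latent, breaks = c(-Inf, -1, 0, 1, Inf),
                       labels = c("fall a great deal", "fall", "stay the same", "rise"),
                       ordered_result = TRUE)

    # Ordered probit as a cumulative link model; polr() defaults to a logit link,
    # so method = "probit" must be set explicitly to mirror Stata's oprobit.
    fit <- polr(outlook ~ age_group * year, data = dat, method = "probit", Hess = TRUE)
    summary(fit)

    # Predicted probabilities per response category, the quantity plotted above
    # (comparable to obtaining predicted probabilities after oprobit in Stata).
    newdat <- expand.grid(age_group = levels(dat$age_group), year = levels(dat$year))
    cbind(newdat, round(predict(fit, newdata = newdat, type = "probs"), 3))

Exporting the same data, running the analogous oprobit model in Stata, and comparing the two sets of predicted probabilities is the kind of cross-software check described above.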

The implication is that using different software can, by itself, undermine reproducibility. As if we did not have enough to worry about in this area already.

Overcoming replication fears

Fear of rejection, part I

To replicate a study, you need information – probably more information than is fully disclosed in a 6,000-12,000-word journal article. Recent trends toward transparency aside, information such as data and analytical procedures is usually not publicly available. This means you should – or must, if the data cannot be retrieved from some other source – contact the original authors. Be prepared for rejection. One study demonstrated that among the top sociology journals, less than 30% of replication materials were available (even though as many as 75% of articles claimed otherwise). Political science was only marginally better, at around 50% as of 2015. Professors are likely to ignore emails asking for their data and code. One group of sociology students contacted 53 different authors asking for replication materials and only 15 provided them (28%). Ten never responded to the requests at all, despite several follow-up emails. So don’t take it personally; social scientists are not known for being forthcoming in this area.

Verification is not affirmation

Imagine being a student who tries to verify the results of a prolific senior scholar and cannot. If it were me, I would be anxious that I had made a mistake. But the only real mistake would be to treat my failure to verify as a refutation of my own skills. Of course, it is good to double-check everything. Have a colleague look at your work if you are unsure, or a teacher or supervisor if you are a student. Unverifiable results are common; there is no need for self-doubt. Things like reverse-coding biological sex so that women appear less supportive of welfare state policies, or accidentally analyzing values of 88 (a missing-data code) as a real value of coital frequency, leading to a surprising rate for older persons, are actually a normal part of social science.

When replicating a study, just assume there will be at least one mistake. Like a treasure hunt.

Verification comes down to the availability of the materials. If the data and code are not fully available, it really is a treasure hunt, because you cannot be sure what you are going to find or learn. On the other hand, if the data and code are available and in good order, then it is more like cooking than hunting. This often comes down to the difference between teaching replication – the recipe approach, where students should arrive at the same results every time they follow the exact same steps – and replication as a form of social research – the treasure hunt approach, where researchers (i.e., students) may not have a coherent recipe from the original ‘chef’. But make no mistake(!): even fully transparent studies often come with mistakes in the code or data.

Fear of mistakes

If I am not making mistakes, I am not doing research. You will make mistakes, and there is nothing to fear. There are all kinds of reasons that replication results will diverge, and not all of them are mistakes. Recently a well-known and well-respected sociologist retracted his own paper after someone trying to replicate the study identified coding errors. One journal started checking that the data and code of accepted papers actually produced the published results, and almost none were verifiable on the first attempt. In a crowdsourced replication, the replicators – mostly PhD students, postdocs and a few professors – came to an exact verification of the original study only 82% of the time, despite having the original code!

Fear of the unknown

Designing statistical models in software is like learning a new language. Student replications often involve methods unfamiliar to them. This is a great didactic tool – learning by doing. There is nothing to fear here. Professors’ original studies often involve methods that they are not experts in. One extremely famous scholar and his colleague ran a regression with an interaction term and botched the interpretation of the effects; the results were basically the opposite of what they reported.

Science is a process of exploring the unknown. Replications use what is known as a tool for finding what is unknown.

Fear of rejection, part II

Students may be interested in publishing their replications, and they should be, because how else will others put their knowledge into practical use? Again, be prepared for rejection. Journals and reviewers across the social sciences are not very excited about replications. A pair of researchers studied the instructions and aims of 1,151 psychology journals in 2016 and discovered that only 3% explicitly accepted replications. One sociologist pointed out not so long ago that replication is simply not the norm in sociology, and another recently came to the same conclusion. The good news is that we no longer need journals to make useful science, at least in theory. Students can immediately publish their results as preprints and share data and code in a public repository. If a student elects to use the Open Science Framework preprint servers, their work will be immediately discoverable in scholarly search engines.

Fear of ego

Scientists tend to overestimate the impact of a negative replication on their reputations. Ego alert. Assume the scientist worried about a replication is a professor. This is a person who is most likely tenured – the highly cited professors certainly are. This is also a person who “professes” knowledge on a topic, meaning that they should be an expert who teaches students, policymakers, the public and really anyone interested in the topic. If any of this professor’s results were shown to be unreliable or false, that would be a critical piece of information if the professor’s goal were actually to profess knowledge on the topic. Unfortunately, professors regularly suffer from a kind of ‘rock-star syndrome’ or ego-mania in which they do science as a means to gain recognition and fame. This leads them to react aggressively against anything that contradicts them. This is very bad for science. If a student replicator can help deflate a runaway professorial ego through replication, then that student is doing a great service to science.

Fear of not addressing fear

In a typical primary or secondary school chemistry class, students repeat basic experiments on chemical reactions that have been done for hundreds of years. These students are learning through replication. They are gaining knowledge in a way that cannot simply be taught in a lecture or by reading a book. They are also affirming the act of science, and thus developing faith that science works. In the social sciences especially, we face a reliability crisis, if not a public image crisis. Students should be reassured that there is a repetitive and reliable nature to doing social science, whether they continue as social scientists or (most likely) not. Part of this reliability can be a lack of reliability: science is simply a process of trying to understand the unknown, and even to quantify that unknown. I fear that without more student replications, we are diminishing the value of social science and contributing to the perception that social science is unreliable.

Good social science should be reliably able to identify unreliability, and this is best taught through conducting replications.

Behind the specification curve

Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether conscious, unconscious or simple mistakes, we take certain paths in our research processes. All possible paths constitute researcher degrees of freedom, and they can lead to different findings. In fact, we can intentionally exploit these paths to reach the results we want. Replications can work against this by detecting errors or identifying questionable decisions. We have few replications in sociology, and those of us working in IPM are no exception. Thus, even a single replication of any given published study should be a scientific improvement.

But is it?

Let’s imagine a secondary-data study that a replicator wants to reproduce. We now know from a major social science journal that even when researchers provide their code it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code. This requires writing brand new code. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator with the more complicated goals of testing generalizability or scrutinizing the original study. A replication is as prone to researcher-degrees-of-freedom problems as the original study. This means replications might not be as useful as we hoped, and alone they cannot alleviate the ‘crisis of science’.

To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero. Unless they are super-complicated, a computer can run them for us very quickly. Given that researchers accidentally or intentionally report only the models that support their claims, it is a useful exercise to check how alternative specifications might affect the findings. This does not mean running all possible models; doing so throws models into the results pool that are causally impossible. We need to carefully and logically select models based on plausible research decisions. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures. A small sketch of this logic follows below.
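Here is a minimal sketch in R of how a set of plausible specifications might be enumerated and estimated; the data are simulated and every decision (the two poverty lines, weighting on or off, the optional control) is an illustrative placeholder, not a choice from any particular study.

    # Simulated stand-in data.
    set.seed(2)
    n   <- 1000
    dat <- data.frame(
      income    = rlnorm(n, meanlog = 10, sdlog = 0.6),
      weight    = runif(n, 0.5, 2),
      education = rnorm(n),
      attitude  = rnorm(n)
    )

    # Enumerate the plausible research decisions.
    specs <- expand.grid(
      poverty_line = c(0.5, 0.6),      # poverty as a share of median income
      use_weights  = c(TRUE, FALSE),   # apply survey weights or not
      add_control  = c(TRUE, FALSE)    # include education as a control or not
    )

    # Fit one model per combination of decisions and keep the coefficient of interest.
    run_spec <- function(s) {
      dat$poor <- as.numeric(dat$income < s$poverty_line * median(dat$income))
      f <- if (s$add_control) attitude ~ poor + education else attitude ~ poor
      w <- if (s$use_weights) dat$weight else NULL
      as.numeric(coef(lm(f, data = dat, weights = w))["poor"])
    }
    specs$estimate <- sapply(seq_len(nrow(specs)), function(i) run_spec(specs[i, ]))

    # Sorting the estimates across specifications gives the specification curve.
    specs[order(specs$estimate), ]

Plotting the sorted estimates together with indicators for the decisions that produced them yields a figure like the one shown below.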

The figure shows effect sizes (top) and model specifications (bottom) from analyses of the outcome variable “positive reciprocity” (a tendency to pay back favors). It was produced by Julia Rohrer (2018) and demonstrates that specification curves are both visually appealing and scientifically useful.

Putting together all combinations of plausible decisions, we arrive at a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results, which indicates where we need to focus our future research. One of the newest ways to do this is specification curve analysis, a close relative of p-curve analysis. These methods were originally introduced as ways to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. They have caught on in psychology recently (Rohrer 2018), and it is time for sociology to get behind the curve and move our discipline’s reliability forward.

References:

Janz, Nicole. 2015. “Leading Journal Verifies Articles before Publication – So Far, All Replications Failed.” Political Science Replication Blog. Retrieved July 22, 2019.

Muñoz, John and Cristobal Young. 2018. “We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness.” Sociological Methodology 48(1):1–33.

Rohrer, Julia M. 2018. “Run All the Models! Dealing With Data Analytic Flexibility.” APS Observer 31(3).