It's not just a ‘replication crisis’, it's a reproducibility crisis. Forget about falsification if one can't even reproduce the workflow of another. Efforts are underway to improve the transparency that makes reproduction possible, and that is good. But what if, even with the original data, researchers cannot reproduce numerical results simply because of where they are in time and space? What if different statistical programming languages, or different packages and versions of those languages, actually lead to different findings? In the Crowdsourced Replication Initiative we demonstrated that, even using the same data and models, only 80% of independent replicator teams could reproduce the numerical results, even with access to the original Stata code.
Based on my experiences in this area, I cannot help but have a strong hunch that software or package versions threaten the computational reproducibility of our research. The fact that Stata rounds halves away from zero while R rounds them to the nearest even digit already suggests that the same models might produce different results across software simply from rounding variations. Recently, I found another reason to believe this hunch.
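To make the rounding point concrete, here is R's documented behavior; R's round() follows the IEC 60559 standard ('go to the even digit'), whereas Stata's round() reportedly sends halves away from zero, so identical inputs can diverge at the very first rounding step.

```r
# R: halves round to the nearest even digit (IEC 60559, "banker's rounding")
round(0.5)   # 0
round(1.5)   # 2
round(2.5)   # 2  -- Stata's round(2.5) reportedly returns 3 (halves away from zero)
round(-1.5)  # -2
```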
I am a participant in SCORE, a massive collaboration to systematically investigate the reproducibility of social science research – led by the Center for Open Science. I agreed to attempt a computational reproduction of the study “Age, Inequality, and Reactions to Marketization in Post-Communist Central and Eastern Europe” by Horvat and Evans (2011).
Despite some potential differences between the data I received from the original authors and those reported in their tables, I was able to produce similar results. Similar, but not close enough. The particular coefficient I was interested in came out at -0.39 in my ordered probit (cumulative link) model, whereas their original was -0.22. On the probit scale (or translated into a predicted probability), this is potentially a large difference. I used R, but was not satisfied with my results. The original authors most likely used Stata, as their public data were a .dta file – Stata's native file storage format. Also, let's be honest, almost no social scientists were using R before 2010, when they likely ran their analyses. My curiosity and suspicion led me to run the model in Stata. Here I got a -0.21 coefficient. Almost identical to their original study, and not surprising given the minor variation between the case numbers they reported and those in the public data file. I was still not convinced that these coefficients were actually different, because the models were quite complex. So I plotted the predicted probabilities to be sure that this was not a statistical artifact of some model component (the intercept cut-points, for example).
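For the curious, here is a minimal sketch of the R side of that comparison. The file and variable names are hypothetical stand-ins for illustration, not the authors' actual specification.

```r
library(MASS)    # provides polr() for ordered (cumulative link) models
library(haven)   # reads Stata .dta files

# Hypothetical file and variable names, for illustration only
dat <- read_dta("horvat_evans_2011.dta")
dat$living_std <- factor(dat$living_std, ordered = TRUE)

# Ordered probit: the same model I later ran in Stata via -oprobit-
fit_r <- polr(living_std ~ age_60plus * factor(year),
              data = dat, method = "probit", Hess = TRUE)
summary(fit_r)   # coefficient of interest: -0.39 here vs. -0.21 in Stata
```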
But my hunch was supported. The same models run in R and Stata led to different predicted probabilities. The figure below shows that, in many former Communist Central and Eastern European countries, those aged 60+ were less optimistic about their living standards over the next 5 years. One of the original study's critical findings was that this gap increased from 1993 to 2007, in particular because the 60+ group became even less optimistic; although in fairness this is difficult to conclude because of all the other variables in the model. What is clear is that the groups were 0.45 points apart in 1993 and 0.65 apart in 2007 (see Table 5 in their original study). When I plot the predicted values, I find that the 1993-to-2007 change is similar but not identical, and that the probabilities themselves are quite different across software.
I used Stata v15 with the ‘oprobit’ command, and R's ‘MASS’ package with the ‘polr’ function. Despite identical data (and case numbers!), the ‘polr’ routine predicts a much higher probability than Stata that respondents will answer that their standard of living will fall or fall a great deal. Although the relative change between age groups is similar – with a slightly steeper negative slope in R – the lines are far apart, and even further apart for the 60+ age group. Without unpacking each routine and the exact estimation strategy taking place therein, I cannot yet say why. My own statistical and software abilities are by no means exceptional, but they are certainly above those of the average social scientist. Thus, it would be totally unrealistic to expect any social scientist to understand the entire routine taking place within polr or oprobit. If they did, they wouldn't need them and could just write their own cumulative link routine!
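For completeness, here is a sketch of how the predicted probabilities behind the figure can be generated in R, continuing the hypothetical names from the sketch above; the Stata analogue is predict with the pr option after oprobit.

```r
# Predicted probabilities for each outcome category, on a small grid of
# covariate profiles (hypothetical names, continuing the earlier sketch)
newdat <- expand.grid(age_60plus = c(0, 1), year = c(1993, 2007))
probs  <- predict(fit_r, newdata = newdat, type = "probs")
cbind(newdat, round(probs, 3))  # one row of category probabilities per profile
```

These are the values I plotted for each age group and year; the Stata and R lines came from the same profiles, yet sat visibly apart.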
The implication is that the same model estimated in different software can produce different results, a direct threat to computational reproducibility. As if we did not have enough to worry about in the reproducibility area already.