It’s not just a ‘replication crisis’, it’s a reproducibility crisis. Forget about falsification if one cannot even reproduce the workflow of another. Efforts are underway to improve the transparency that makes reproduction possible, and that is good. But what if, even with the original data, researchers cannot reproduce numerical results simply because of where they are in time and space? What if different statistical programming languages, or different packages and versions of those languages, actually lead to different findings? Results from the Crowdsourced Replication Initiative demonstrated that, even using the same data and models, only about 80% of independent replicator teams could reproduce the numerical results, even with access to the original Stata code.
Based on my experiences in this area, I cannot help but have a strong hunch that software and package versions threaten the computational reproducibility of our research. The fact that Stata rounds .5 up (away from zero) while R rounds half to even already suggests that the same models might produce different results across software simply from rounding variation. Recently, I found another reason to believe this hunch.
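As a quick illustration of the rounding point, here is a minimal sketch using base R’s round(); the Stata behavior noted in the comments is its documented ties-away-from-zero rule, and this is only meant as an indicative comparison:

```r
# R's round() follows IEC 60559: ties go to the nearest even digit.
round(0.5)  # 0
round(1.5)  # 2
round(2.5)  # 2  -- Stata's round(2.5) instead returns 3 (ties away from zero)
round(3.5)  # 4
```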
I am a participant in SCORE, a massive collaboration led by the Center for Open Science to systematically investigate the reproducibility of social science research. I agreed to attempt a computational reproduction of the study “Age, Inequality, and Reactions to Marketization in Post-Communist Central and Eastern Europe” by Horvat and Evans (2011).
Despite some potential differences between the data I received from the original authors and those reported in their tables, I was able to produce similar results. Similar, but not that close. The particular coefficient I was interested in came out at -0.39 in my ordered probit (cumulative link) model, whereas their original was -0.22. Translated into a predicted probability, this is potentially a large difference. I used R, but was not satisfied with my results. The original authors most likely used Stata, as their public data came as a .dta file, Stata’s native storage format. Also, let’s be honest, few social scientists were using R before 2010, when they likely ran their analyses. My curiosity and suspicion led me to run the model in Stata. There I got a coefficient of -0.21: almost identical to their original study, and not surprising given the minor variation between the case numbers they reported and those in the public data file. I was still not convinced that these coefficients were actually different, because the models were quite complex. So I plotted the predicted probabilities to be sure this was not a statistical artifact of some model component (the intercept cut-points, for example).
But my hunch was supported. These same models, run in R and Stata, led to different predicted probabilities. The figure below shows that across many former Communist Central and Eastern European countries, those aged 60+ were less optimistic about their living standards over the next 5 years. One of the study’s critical findings was that this gap increased from 1993 to 2007, in particular because the 60+ group became even less optimistic; although in fairness this is difficult to conclude because of all the other variables in the model. What is clear is that the groups were 0.45 points apart in 1993 and 0.65 apart in 2007 (see Table 5 in their original study). When I plot the predicted values, I find that the 1993-to-2007 change is similar but not identical, and that the probabilities are quite different across software.
I used Stata v15 and its ‘oprobit’ command, and R’s ‘MASS’ package with the ‘polr’ function. Despite identical data (and case numbers!), the polr routine predicts a much higher probability than Stata that respondents answer that their standard of living will fall or fall a great deal. Although the relative change between age groups is similar, with a slightly steeper negative slope in R, the lines are pretty far apart, and even further apart for the 60+ age group. Without unpacking each routine and the exact estimation strategy taking place therein, I cannot yet say why. My own statistical and software abilities are by no means exceptional, but certainly above those of the average social scientist. Thus, it would be totally unrealistic to expect any social scientist to understand the entire routine taking place within polr or oprobit. If they did, they wouldn’t need the packages and could just write their own cumulative link routine!
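For readers who want to try a comparison like this themselves, here is a minimal sketch of the R side; the data and variable names (optimism, age60plus, year) are hypothetical stand-ins, not the original study’s, and the Stata lines in the comments are only an approximate analogue:

```r
library(MASS)

# Simulated stand-in data; all names here are hypothetical.
set.seed(42)
n  <- 2000
df <- data.frame(
  age60plus = rbinom(n, 1, 0.3),
  year      = sample(c(1993, 2007), n, replace = TRUE)
)
latent <- -0.4 * df$age60plus + 0.001 * df$year + rnorm(n)
df$optimism <- cut(latent,
                   breaks = c(-Inf, 1, 2, 3, Inf),
                   labels = c("fall a great deal", "fall", "same", "rise"),
                   ordered_result = TRUE)

# Ordered probit (cumulative link) via MASS::polr.
fit_r <- polr(optimism ~ age60plus + year, data = df,
              method = "probit", Hess = TRUE)
summary(fit_r)

# Predicted probabilities by age group and year.
newdat <- expand.grid(age60plus = c(0, 1), year = c(1993, 2007))
predict(fit_r, newdata = newdat, type = "probs")

# Approximate Stata analogue (run in Stata, not R):
#   oprobit optimism age60plus year
#   margins, at(age60plus=(0 1) year=(1993 2007))
```

Comparing the predicted probabilities from predict(..., type = "probs") against the Stata margins output is the kind of check that revealed the gap described above.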
The implication is that using different software can, by itself, undermine reproducibility. As if we did not have enough to worry about in the reproducibility arena already.
Certain hypotheses are tested constantly in social science: the impact of income inequality on health, of racial bias on police brutality, and of public opinion on elections, just to name a few. At some point, more tests of the same hypothesis stop contributing to scientific knowledge, and may even harm it by introducing more ‘noise’ into the scientific discourse.
I study social policy preferences and the impact immigration has on them. In this area there have been sustained efforts to test the hypothesis that immigration has a negative impact on support for welfare state policies: those protecting against the risks of aging, unemployment and ill health. To justify this hypothesis, scholars construct theoretical variations of group dynamics arguments, often drawing on resource competition, nationalism and social identity. Despite claiming to test the same hypothesis, the formal models applied to data suggest any number of data-generating processes. They often have little in common other than some measure of immigration and some measure of policy preferences. The results of their tests go in all directions, i.e., a positive, negative or nil effect of immigration. It would appear that the topic is at a standstill, with new analyses of the same handful of cross-national surveys sinking into the mire. How to break through such a scientific impasse?
In designing the Crowdsourced Replication Initiative (CRI) with my co-PIs, Alexander Wuttke and Eike Mark Rinke, we asked researchers to do research; we gave them semi-structured tasks and observed them. Specifically, they were asked to come up with the best possible way to test the immigration hypothesis given the same International Social Survey Programme (ISSP) data source. Although we are currently meta-analyzing the hypothesis test results (see our virtual APSA poster) to determine which modeling decisions impact the outcomes, we also have a second goal in mind: to discover what is behind the specification curve.
Each research team had to design the best possible test. This is at once a statistical and a theoretical question. They needed to think carefully about the data-generating process and attempt to recover it in a model. We asked them to write down their research designs after doing this thought exercise, but before analyzing any data. From their choices we can identify where key consensus and disagreement exist about the data-generating model; this is evident not only in their designs but also in a structured deliberation and voting procedure. This process offers a major advantage over ‘normal’ theoretical discussion and debate among academics, because we have the results that go along with the different modeling choices; and, let’s be honest, when else do over 150 researchers get together and focus on a single hypothesis? By observing this process we can identify where data-generating theories differ and how important these differences are for the results. This will allow us to map where immigration and social policy scholars should focus their theoretical efforts in the future to reduce the most uncertainty, i.e., to achieve the largest gains in knowledge.
We have a sound piece of scientific research from Brady and Finnigan (2014) from which we draw the working hypothesis for the CRI crowdsourced researchers: that immigration undermines support for social policies. Brady and Finnigan found little or no support for this hypothesis, at least not in a generalizable macro-comparative sense. This was the launching point for the research of the 77 teams who have so far managed to submit replicable results (yes, there are still a few out there we hope will submit a final model or fix issues we identified in our replication of their models).
Although we are still in the process of analyzing the ocean of data generated by this project, a sneak preview offers exciting evidence of the possibility of meta-construction of theory.
Here are two glimpses of what’s to come. One is the summarized deliberation and voting results (Figure 1). The other is differences in definitions of ‘immigration’ (Table 1). We used Kialo, an online structured-deliberation platform, to allow participants to discuss the data-generating model after they had proposed their own ideas for how best to test the hypothesis. Readers can observe how this deliberation unfolded as we divided the participants into two groups: here and here. Later (after they had the chance to update their models based on the deliberation), they were given other teams’ models, or our own variations on those models, to vote on and rank in terms of their appropriateness for testing the hypothesis, without having seen the results of those models. Figure 1 combines the Kialo veracity scoring and the survey-based voting into one overall scale and then plots the average score of models by their features. Each color is a discrete set of model features, with zero on the y-axis set to the average support of models choosing an OLS estimator (among the least preferred).
Figure 1. Researcher Preferences for Recovering the Data-Generating Model. “Model” is the hypothesized general impact of immigration on support for social policy. Data and code are still being prepared for online sharing; stay tuned.
In Figure 1, the longest bars in each color category make clear that models incorporating all 5 waves of the ISSP data, including the countries of Eastern Europe, allowing heterogeneous error variation by country-year and year (as in a cross-classified model), and incorporating survey sampling weights are preferred over the others. Some of this runs counter to the state of the art. For example, most research follows the logic that major immigrant destination societies, the “Rich 13” and “Rich 17” advanced democracies, should be where “public opinion is likely most influential for the politics of social policy” (Brady and Finnigan 2014:24).
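For concreteness, a cross-classified specification with the features the crowd preferred might look roughly like the following sketch in R using lme4; the data and variable names are hypothetical stand-ins, and this is only one of many defensible versions, not the CRI’s canonical model:

```r
library(lme4)

# Simulated stand-in for pooled ISSP waves; all names are hypothetical.
set.seed(7)
n    <- 3000
issp <- data.frame(
  year        = sample(c(1985, 1990, 1996, 2006, 2016), n, replace = TRUE),
  country     = sample(paste0("C", 1:20), n, replace = TRUE),
  foreignborn = runif(n, 0, 30),        # % foreign-born, the test variable
  age         = sample(18:90, n, replace = TRUE),
  weight      = runif(n, 0.5, 2)        # survey sampling weights
)
issp$support      <- 3 - 0.01 * issp$foreignborn + rnorm(n)
issp$country_year <- interaction(issp$country, issp$year)
issp$year_f       <- factor(issp$year)

# Crossed random intercepts for country-year and year approximate the
# 'heterogeneous error variation' feature favored in the voting; the
# weights argument brings in the sampling weights.
fit <- lmer(support ~ foreignborn + age + (1 | country_year) + (1 | year_f),
            data = issp, weights = weight)
summary(fit)
```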
The deliberation and votes demonstrate the majority voice in the CRI: if immigration has an impact on social policy preferences, we should see it across all countries of the globe, not restrict our analysis to only very rich, strong welfare states.
Although Brady and Finnigan, and all other research in this area, come to no consensus on whether immigration has a negative impact on support for social policy, we should remain skeptical of results if we do not trust the data-generating model. In other words, if our tests do not match what most researchers see as the appropriate theoretical perspective, the results are inconclusive and thus uninformative. The deliberation and voting offer clues about where to focus theoretical effort, namely on specifying why more countries of the world should (or should not) show a causal effect of immigration on social policy preferences, and whether this effect should (or should not) appear across several decades or only at certain times. I am not aware of extensive theory that attempts to tackle these issues. Now is the time to write it!
Even more productive for the possibility of meta-construction of theory is the correspondence between the actual decisions made by the researchers and the subjective and objective outcomes of those decisions. Again, our results are in progress, but Table 1 offers a snapshot of the different ways the researchers chose to measure immigration as their main hypothesis-test variable (one out of dozens of model decisions to compare). In the first row, 67 out of 77 teams used a “Stock of Foreign-Born” measure in at least one of their models, and 27% of the models using this “Stock” variable supported immigration having a negative and statistically significant impact on support for social policy at p<0.05.
Table 1. Crowdsourced Researcher Decisions, Deliberations and Results. Five different measurement strategies for the immigration test variable.
In the column ‘Positive Test Result Rate’, we see that the ‘Difference’ between “Stock” models (referenced as [1] in Table 1) and those instead measuring immigration as “Flow” (referenced as [2]) is 3.6. In other words, “Stock” models support the hypothesis 3.6 percentage points more often than “Flow” models, all else equal. “Stock” models were not noticeably more or less popular than “Flow” models, with an average vote score of 0.43, on a scale from 0 (worst) to 1 (best equipped to test the hypothesis), versus 0.45 for “Flow”.
The values in bold indicate that “Change in Flow” models (those measuring derivatives of “Flow”) were among the most popular in the voting process. The rate of change of the flow of immigrants is thus seen as an important component in testing this hypothesis. Interestingly, these models were 4 percentage points more likely than “Stock” and “Flow” models to support the hypothesis. When measuring immigration as specific to certain outgroups (people from Muslim-majority countries, from non-Western countries, or refugees), the “Flow” of these various ‘Outgroups’ was far more popular than their “Stock”, but the results were over 10 percentage points less supportive of the hypothesis.
What can we learn from this? We argue that a full analysis of the massive range of modeling decisions will give us a guide for moving this entire research area forward. Other decisions included, for example, different social policy domains, whether ethnic fractionalization is the ‘real’ cause of the ‘immigration’ effect, the construction of latent social policy preference measures, and whether GDP and unemployment are part of the data-generating assumptions, just to name a few out of hundreds. We are only scratching the surface here, but it seems that by observing researchers making research decisions, deliberating over them, voting and making final choices, we will gain immense knowledge as to where better theory is necessary. As such, we see meta-construction of social theory as a promising avenue for social science. This would be the concept of theory-designed replication writ large.
Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether conscious, unconscious or simple mistakes, we take certain paths in our research processes. All possible paths constitute researcher degrees of freedom, and they can lead to different findings. In fact, we can intentionally exploit paths to reach the results we want. Replications can work against this by detecting errors or identifying questionable decisions. We have few replications in sociology, and those working in IPM are no exception. Thus, even one replication of any given published study should be a scientific improvement.
But is it?
Let’s imagine a secondary data study that a replicator wants to reproduce. We now know from a major social science journal that even when researchers provide their code, it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code. This requires writing brand new code. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator has the more complicated goal of testing generalizability or scrutinizing the original study. A replication is as prone to researcher degrees of freedom problems as the original study. This means replications might not be as useful as we hoped, and alone cannot alleviate the ‘crisis of science’.
To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero. Unless they are extremely complicated, a computer can do this for us very quickly. Given that researchers accidentally or intentionally report only models supporting their claims, it is a useful exercise to check how alternative specifications might affect the findings. This does not mean running all possible models. Doing so throws models into the results pool that are impossible, causally speaking. We need to carefully and logically select models based on plausible research decisions. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures.
This figure shows effect sizes (top) and model specifications (bottom) from analyses of the outcome variable “positive reciprocity” (a tendency to pay back favors). It was produced by Julia Rohrer (2018) and demonstrates that specification curves are both visually appealing and scientifically useful.
Putting together all combinations of plausible decisions, we arrive at a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results. This indicates where we need to focus our future research. One of the newest ways to do this is ‘specification curve’ analysis, a close relative of p-curve analysis. These methods were originally introduced as ways to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. They caught on in psychology recently (Rohrer 2018), and it is time for sociology to get behind the curve and move our discipline’s reliability ahead.
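To make the idea concrete, here is a minimal specification-curve sketch in R using simulated data; the variables, control sets and weighting decision are hypothetical stand-ins, not any particular published model:

```r
# Simulated stand-in data; all names are hypothetical.
set.seed(1)
n  <- 1000
df <- data.frame(
  y     = rnorm(n),           # outcome, e.g. a policy-preference score
  x     = rnorm(n),           # focal predictor, e.g. an immigration measure
  gdp   = rnorm(n),           # optional control
  unemp = rnorm(n),           # optional control
  w     = runif(n, 0.5, 2)    # survey weights
)

# Plausible researcher decisions: which controls to include, whether to weight.
controls  <- list(none = "y ~ x",
                  gdp  = "y ~ x + gdp",
                  full = "y ~ x + gdp + unemp")
weighting <- c(unweighted = FALSE, weighted = TRUE)

specs <- expand.grid(model = names(controls), weights = names(weighting),
                     stringsAsFactors = FALSE)
specs$estimate <- NA_real_

# Fit every combination of decisions and store the focal coefficient.
for (i in seq_len(nrow(specs))) {
  f   <- as.formula(controls[[specs$model[i]]])
  fit <- if (weighting[[specs$weights[i]]]) {
    lm(f, data = df, weights = w)
  } else {
    lm(f, data = df)
  }
  specs$estimate[i] <- coef(fit)["x"]
}

# The 'specification curve': the focal estimate across all plausible
# specifications, sorted from smallest to largest.
specs <- specs[order(specs$estimate), ]
plot(specs$estimate, type = "b",
     xlab = "Specification (sorted)", ylab = "Coefficient on x")
```

In a real application, the decision grid would of course be far larger (estimators, samples, measures, weights), but the logic of fitting every defensible combination and inspecting the sorted curve is the same.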
Crowdsourcing researchers: a new use for an old tactic. When Silberzahn and colleagues asked whether football referees are biased by skin color in their assignment of red cards, they brought a new ‘meta’ to social research. Rather than the typical one research team, one project, they brought together twenty-nine teams. All got the same research question and the same data. What can we learn?
Hold on. A single academic strapped with programming skills can run just about every possible statistical model configuration. Level up this academic and they can program machine learning routines to tell us almost anything we need to know. So why do we need a crowd of researchers to analyze the same data?
The answer: theory.
Computers do not do theory. Computers crunch data. They can’t tell us where the data came from. Running every possible model in an effort to ‘test robustness’ means testing for things that do not exist. Even worse, it means taking the results of tests for things that cannot possibly exist and using them to draw conclusions about the robustness of a test for something that could exist. Throwing all possible variables into a model, or ordering variables in every possible configuration, will maximize statistical prediction, yes. But what good is predicting something that cannot exist?
An example: policymakers decide they want to increase the number of females in society. A computer determines that bearing children is a good predictor of being biologically female. Now, imagine that in a statistical model, being pregnant or ever having had biological children explains 75% of the variance in the sex of a given population (it’s probably about 100% at this point in human history, but let’s allow room for error). The policymakers, thanks to the computer, conclude that if 1,000,000 more people were pregnant, a predicted 750,000 of them should be female and only 250,000 male, plus a margin of error. Thus, they conclude that getting 1,000,000 people pregnant is likely to increase the number of females by 250,000 in their society, assuming the society is currently 50% female.
Epic fail.
If we want to develop causal theories that provide useful knowledge for societies, human logic is necessary. Rather than having one computer report 8.8 million false positives after running 9 billion different regression models, humans can identify correct, or at least ‘better’, model specifications. In fact, we would not even know they are ‘false positives’ without a human to understand that getting someone pregnant does not turn them into a female, for example.
Sounds like a meta-problem. So what good is crowdsourcing if we cannot rely on the crowd?
Answer: social interaction.
Crowdsourcing, when done with careful planning and central organization, allows participants to comment on, if not deliberate, each other’s research choices. Suddenly meta-uncertainty turns into the power of meta-logic: not just one team and their narrow ideas, but a communal debate with diverse inputs. Both the Silberzahn et al. study and our CRI involved deliberation and voting on research designs. Combined with the growing area of specification curve analysis, crowdsourcing increases credibility for social research at the level of the population under study and at the meta-level of the researchers themselves.
Finally, the relevance of crowdsourcing for collaborative theory construction is an untapped but promising avenue for the future. Crowd research departs from the current system, which favors individualism by rewarding the novelty of individual researchers. Crowdsourcing is instead a system of consensus building and direct responsiveness to theoretical claims. It could resolve the perpetual problem of scholars, research areas and disciplines talking ‘past’ each other. If not consensus, it can at least identify critical unresolved questions to guide future research.
Crowdsourcing can move us toward open science in the Mertonian sense of a communalistic endeavor. To achieve this, all participants should be co-authors on the project, get to discuss each other’s models and theory, and get to update their own results during the process. We need machines to facilitate this kind of large-scale research, but they cannot produce communal, logical exchanges. For that we need to stick with the crowd.
Meta-science, social inequality, methods, open science.
At its inception, this blog catalogs the nerve-wracking process of organizing, administering and evaluating the Crowdsourced Replication Initiative (CRI) and the massive amount of data it generated.
A technical blog, looking at the ‘nitty-gritty’ of the methods necessary to carry out and present the findings of a project involving 88 research teams, almost 200 researchers, an online deliberation, four survey waves during the process, an experimental condition, a replication, an original research condition, and analysis and meta-analysis of the results.
A research question blog, asking bigger questions about meta-science, researcher reliability, reproducibility, social inequality and crowdsourcing.
In the future… this blog will address more.
Why blog? The future of science may involve a more interactive, hyperlinked and faster-disseminated format. Blogs can facilitate this. In the social sciences, the turnaround time from research findings to published papers is somewhere around 3 to 4 years (counting rejections and R&Rs). Moreover, blogs offer space for discussing things that simply don’t fit in our awfully restrictive 6,000 to 12,000 word journal articles.
Open science calls for blogging. It’s free, fast and owes no debts. It circumvents the institutional problems of science. As an Open Science Fellow in the Freies Wissen program of Wikimedia Germany, a Catalyst for the Berkeley Initiative for Transparency in the Social Sciences (BITSS) and a concerned sociologist, I offer this blog as a contribution to the open science movement.