Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether conscious, unconscious, or simple mistakes, the decisions we make send us down particular paths in our research process. All possible paths constitute researcher degrees of freedom, and different paths can lead to different findings. In fact, we can intentionally exploit paths to reach the results we want. Replications work against this by detecting errors or identifying questionable decisions. Yet we have few replications in sociology, and those of us working in IPM are no exception. Thus even a single replication of any given published study should be a scientific improvement.
But is it?
Let’s imagine a secondary data study that a replicator wants to reproduce. We now know from a major social science journal that even when researchers provide their code, it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code: this requires writing brand new code. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator with the more ambitious goals of testing generalizability or scrutinizing the original study. Such a replication is as prone to researcher-degrees-of-freedom problems as the original study. This means replications might not be as useful as we hope, and on their own they cannot alleviate the ‘crisis of science’.
To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero: unless the models are extremely complicated, a computer can fit them for us very quickly. Given that researchers accidentally or intentionally report only the models that support their claims, it is a useful exercise to check how alternative specifications affect the findings. This does not mean running all possible models; doing so would throw causally impossible models into the results pool. We need to select models carefully and logically, based on plausible research decisions. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures.
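To make this concrete, here is a minimal sketch in Python (using pandas and statsmodels) of what enumerating plausible specifications might look like: alternative poverty measures, control sets, and weighting choices are combined, and one model is fit per combination. All variable and column names are hypothetical placeholders for illustration, not the specification of any particular study.

```python
# Minimal sketch: enumerate plausible analytic decisions and fit one model
# per combination. Variable names (poverty measures, controls, weights)
# are hypothetical placeholders.
import itertools
import pandas as pd
import statsmodels.formula.api as smf

def run_specifications(df):
    """Fit every combination of plausible decisions and collect the
    coefficient of interest from each model."""
    poverty_measures = ["pov_half_median", "pov_sixty_pct_median"]  # alternative outcome codings
    control_sets = ["", " + age + female", " + age + female + education"]
    weight_options = [None, "survey_weight"]  # unweighted vs. survey-weighted

    rows = []
    for outcome, controls, weight in itertools.product(
            poverty_measures, control_sets, weight_options):
        formula = f"{outcome} ~ unemployment_rate{controls}"
        if weight is None:
            fit = smf.ols(formula, data=df).fit()
        else:
            fit = smf.wls(formula, data=df, weights=df[weight]).fit()
        rows.append({
            "outcome": outcome,
            "controls": controls.strip(" +") or "none",
            "weighted": weight is not None,
            "coef": fit.params["unemployment_rate"],
            "p_value": fit.pvalues["unemployment_rate"],
        })
    return pd.DataFrame(rows)
```

Twelve theoretically defensible models instead of one, at essentially no extra cost.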
Putting together all combinations of plausible decisions, we arrive at a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results, and this indicates where we need to focus future research. One of the newest ways to do this is specification curve analysis, a close relative of p-curve analysis. These methods were originally introduced to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. They have caught on in psychology recently (Rohrer 2018), and it is time for sociology to get behind the curve and move our discipline’s reliability forward.
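Continuing the hypothetical run_specifications() sketch above, a few more lines lay the estimates out as a curve and show which decisions actually move them; again, this illustrates the general logic, not the procedure of any specific paper.

```python
# Sketch of a simple specification-curve-style summary, using the
# hypothetical run_specifications() helper defined above.
spec_results = run_specifications(df)

# Sort the estimates to inspect (or plot) the curve of coefficients.
curve = spec_results.sort_values("coef").reset_index(drop=True)
print(curve[["coef", "p_value", "outcome", "controls", "weighted"]])

# Which analytic decisions move the estimate? Compare the distribution
# of coefficients within each decision dimension.
for decision in ["outcome", "controls", "weighted"]:
    print(spec_results.groupby(decision)["coef"].describe())
```

If the coefficient barely changes across, say, weighting choices but swings with the poverty measure, that tells us where future measurement work should concentrate.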
References:
Janz, Nicole. 2015. “Leading Journal Verifies Articles before Publication – So Far, All Replications Failed.” Political Science Replication Blog. Retrieved July 22, 2019.
Muñoz, John and Cristobal Young. 2018. “We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness.” Sociological Methodology 48(1):1–33.
Rohrer, Julia M. 2018. “Run All the Models! Dealing With Data Analytic Flexibility.” APS Observer 31(3).