
Overcoming replication fears

Fear of rejection, Part I

To replicate a study, you need information, and probably more information than is fully disclosed in a 6,000-12,000-word journal article. Recent trends toward openness aside, data and analytical procedures are usually not publicly available. This means you should contact the original author, and you must if the data cannot be retrieved from some other source. Be prepared for rejection. One study demonstrated that among the top sociology journals, fewer than 30% of replication materials were available (even though as many as 75% of papers claimed otherwise). Political science was only marginally better, at around 50% as of 2015. Professors are likely to ignore emails asking for their data and code: one group of sociology students contacted 53 different authors asking for replication materials, and only 15 (28%) provided them. Ten never responded to the requests at all, despite several follow-up emails. So don't take it personally; social scientists are not known for their forthcomingness in this area.

Verification is not affirmation

Imagine being a student who tries to verify the results of a prolific senior scholar and cannot. If it were me, I would be anxious that I had made a mistake. But the only real mistake would be to assume that my failure to verify is an indictment of my own skills. Of course, it's good to double-check everything: have a colleague look at your work if you are unsure, or a teacher or supervisor if you are a student. Unverifiable results are common, so there is no need for self-doubt. Mistakes such as reverse-coding biological sex (so that women appear less supportive of welfare state policies) or accidentally analyzing the value 88 (a missing-data code) as an actual value of coital frequency (leading to a surprising rate for older persons) are a normal part of social science.

When replicating a study, just assume there will be at least one mistake. Like a treasure hunt.

Verification comes down to the availability of the materials. If the data and code are not fully available, it really is a treasure hunt, because you cannot be sure what you will find or learn. On the other hand, if the data and code are available and in good order, then it is more like cooking than hunting. This is the difference between teaching replication (the recipe approach, where students following the exact same steps should come to the same results every time) and replication as a form of social research (the treasure hunt approach, where researchers, including students, may not have a coherent recipe from the original 'chef'). But make no mistake: even fully transparent studies often come with mistakes in the code or data.

Fear of mistakes

If I am not making mistakes, I am not doing research. You will make mistakes, and there is nothing to fear. There are all kinds of reasons that replication results will diverge, and not all of them are mistakes. Recently, a well-known and well-respected sociologist retracted his own paper after someone trying to replicate the study identified coding errors. One journal started checking that the data and code of accepted papers actually produced the reported results, and almost none were verifiable on the first attempt. In a crowdsourced replication, participants (mostly PhD students and postdocs, plus a few professors) arrived at an exact verification of the original study only 82% of the time, despite having the original code!

Fear of the unknown

Designing statistical models in software is like learning a new language. Student replications often involve methods unfamiliar to them. This is a great didactic tool: learning by doing. There is nothing to fear here. Professors' original studies often involve methods that they are not experts in. One extremely famous scholar and his colleague ran a regression with an interaction term and botched the interpretation of the effects; the results were basically the opposite of what they reported.

Science is a process of exploring the unknown. Replications take what is known and use it as a tool for finding what is unknown.

Fear of rejection, Part II

Students may be interested in publishing their replications. They should be, because how else will others put their knowledge into practical use? Be prepared, again, for rejection. Journals and reviewers across the social sciences are not very excited about replications. A pair of researchers studied the instructions and aims of 1,151 psychology journals in 2016 and discovered that only 3% explicitly accepted replications. One sociologist pointed out not so long ago that replication is simply not the norm in sociology, and another recently came to the same conclusion. The good news is that, at least in theory, we no longer need journals to make useful science. Students can immediately publish their results as preprints and share data and code in a public repository. If a student elects to use the Open Science Framework preprint servers, their work will be immediately discoverable in scholarly search engines.

Fear of ego

Scientists tend to overestimate the impact of a negative replication on their reputations. Ego alert. Assume a scientist worried about a replication is a professor. This is a person who is most likely tenured; the highly cited professors certainly are. This is also a person who "professes" knowledge on a topic, meaning that they should be an expert who teaches students, policymakers, the public, and anyone else interested in that topic. If any of this professor's results were shown to be unreliable or false, that would be a critical piece of information for someone whose goal is to actually profess knowledge on the topic. Unfortunately, professors regularly suffer from a kind of 'rock-star syndrome' or egomania, doing science as a means to gain recognition and fame. This leads them to react aggressively against anything that contradicts them, which is very bad for science. If a student replicator can help deflate a runaway professorial ego through replication, then that student is doing a great service to science.

Fear of not addressing fear

In a typical primary or secondary school chemistry class, students repeat basic experiments on chemical reactions that have been performed for hundreds of years. These students are learning through replication. They are gaining knowledge in a way that cannot simply be taught in a lecture or by reading a book. They are also affirming the act of science, thus developing a faith that science works. In social science especially, we face a reliability crisis, if not a public image crisis. Students should be reassured that there is a repetitive and reliable nature to doing social science, whether they continue as social scientists or (in the most likely case) not. Part of this reliability can be a lack of reliability: science is simply a process of trying to understand the unknown, and even to quantify this unknown. I fear that without more student replications, we are diminishing the value of social science and contributing to the perception that social science is unreliable.

Good social science should be reliably able to identify unreliability, and this is best taught through conducting replications.

Meta-constructing social theory

Certain hypotheses are constantly tested in social science: the impact of income inequality on health, of racial bias on police brutality, and of public opinion on elections, to name a few. At some point, more tests of the same hypothesis stop contributing to scientific knowledge, and may even harm it by introducing more 'noise' into the scientific discourse.

I study social policy preferences and the impact immigration has on them. In this area there have been sustained efforts to test the hypothesis that immigration has a negative impact on support for welfare state policies: those protecting against the risks of aging, unemployment, and ill health. To justify this hypothesis, scholars construct theoretical variations of group dynamics arguments, often drawing on resource competition, nationalism, and social identity. Despite claiming to test the same hypothesis, the formal models they apply to data suggest any number of data-generating processes, often having little in common other than some measure of immigration and some measure of policy preferences. The results of these tests go in all directions, i.e., a positive, negative, or nil effect of immigration. The topic appears to be at a standstill, as new analyses of the same handful of cross-national survey datasets sink into the mire. How can we break through such a scientific impasse?

In designing the Crowdsourced Replication Initiative (CRI) with co-PIs Alexander Wuttke and Eike Mark Rinke, we asked researchers to do research; we gave them semi-structured tasks and observed them. Specifically, they were asked to come up with the best possible way to test the immigration hypothesis given the same International Social Survey Programme (ISSP) data source. Although we are currently meta-analyzing the hypothesis test results (see our virtual APSA poster) to determine which modeling decisions impact the outcomes, we also have a second goal in mind: to discover what is behind the specification curve.

Each research team had to design the best possible test. This is at once a statistical question and a theoretical question: they needed to think carefully about the data-generating process and attempt to recover it in a model. We asked them to write down their research designs after doing this thought exercise, but before analyzing any data. From these researcher choices we can identify where key consensus and disagreement exist about the data-generating model; this is evident not only in their designs but also in a structured deliberation and voting procedure. This process offers a major advantage over 'normal' theoretical discussion and debate among academics, because we have the results that go along with the different modeling choices; and, let's be honest, when else do over 150 researchers get together and focus on a single hypothesis? By observing this process, we can identify where data-generating theories differ and how important these differences are for the results. This will allow us to map where immigration and social policy scholars should focus their theoretical efforts in the future to reduce the most uncertainty, i.e., where the largest gains in knowledge are possible.
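For readers unfamiliar with the idea, here is a minimal sketch of how a specification curve can be summarized in code. The data frame, its column names, and its values are invented for illustration; they are not CRI results.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Invented example: one row per submitted model specification, with the
# estimated immigration effect and two of the design choices behind it.
specs = pd.DataFrame({
    "effect":    [-0.08, -0.03, 0.00, 0.02, 0.05, -0.01],
    "estimator": ["OLS", "OLS", "multilevel", "multilevel", "multilevel", "OLS"],
    "sample":    ["rich-13", "all", "all", "all", "rich-13", "all"],
})

# A specification curve ranks the estimates so that clusters of negative,
# null, and positive results become visible at a glance.
curve = specs.sort_values("effect").reset_index(drop=True)
plt.scatter(curve.index, curve["effect"])
plt.axhline(0, linestyle="--", color="grey")
plt.xlabel("model specifications, ranked by estimate")
plt.ylabel("estimated immigration effect")
plt.show()
```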

We have a sound piece of scientific research, Brady and Finnigan (2014), from which we draw the working hypothesis for the CRI crowdsourced researchers: that immigration undermines support for social policies. Brady and Finnigan found little or no support for this hypothesis, at least not in a generalizable macro-comparative sense. This was the launching point for the research of the 77 teams that have so far managed to submit replicable results (yes, there are still a few out there we are hoping will submit a final model or fix issues we identified in our replication of their models).

Although we are still in the process of analyzing the ocean of data generated by this project, a sneak preview offers exciting evidence of the possibility for meta-construction of theory.

Here are two glimpses of what's to come. One is the summarized deliberation and voting results (Figure 1). The other is differences in definitions of 'immigration' (Table 1). We used Kialo, an online structured deliberation platform, to allow participants to discuss the data-generating model after they had proposed their own ideas for how best to test the hypothesis. Readers can observe how this deliberation unfolded as we divided the participants into two groups: here and here. Later, after they had the chance to update their models based on the deliberation, they were given other teams' models, or our own variations on those models, to vote on and rank in terms of their appropriateness for testing the hypothesis, without having seen the results of those models. Figure 1 combines the Kialo veracity scoring and the survey-based voting into one overall scale and then plots the average score of models by their features. Each color is a discrete set of model features, with zero (on the y-axis) set to the average support for models choosing an OLS estimator (among the least preferred).

Figure 1. Researcher Preferences for Recovering the Data-Generating Model
“Model” is the hypothesized general impact of immigration on support for social policy. Data and code are still being prepared for online sharing; stay tuned.
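As a rough illustration of how two scoring sources might be folded into one scale, the sketch below rescales each source to the 0-1 range, averages them, and centers the result on the OLS models, mirroring the zero line of Figure 1. All scores shown are invented placeholders, not the actual CRI data.

```python
import pandas as pd

# Invented placeholder scores: one row per set of model features.
scores = pd.DataFrame({
    "model": ["OLS", "multilevel", "cross-classified"],
    "kialo": [2.1, 3.4, 4.0],    # hypothetical Kialo veracity scores
    "vote":  [0.40, 0.55, 0.62], # hypothetical survey-based vote shares
})

# Rescale each source to [0, 1] so they are comparable, then average them.
for col in ("kialo", "vote"):
    lo, hi = scores[col].min(), scores[col].max()
    scores[f"{col}_01"] = (scores[col] - lo) / (hi - lo)
scores["overall"] = scores[["kialo_01", "vote_01"]].mean(axis=1)

# Center on the OLS average so OLS models sit at zero, as in Figure 1.
scores["overall"] -= scores.loc[scores["model"] == "OLS", "overall"].iloc[0]
print(scores[["model", "overall"]])
```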

In Figure 1, looking at the longest bars in each color category, it becomes clear that models incorporating all 5 waves of the ISSP data, including the countries of Eastern Europe, allowing heterogeneous error variation by country-year and year (as in a cross-classified model), and incorporating survey sampling weights are preferred over the others. Some of this runs counter to the state of the art. For example, most research follows the logic that major immigrant destination societies – the "Rich 13" and "Rich 17" advanced democracies – should be where "public opinion is likely most influential for the politics of social policy" (Brady and Finnigan 2014:24).

To summarize the motivation for looking across all possible countries, especially Eastern Europe, one crowdsourced researcher put it like this: "Either there is an effect of 'immigration stock (increase)' or not".

Another followed up on this point stating: “To test the general hypothesis we should use as many countries as available and account for variations in GDP and social welfare expenditures in the models.”

These comments demonstrate the majority voice in the CRI: if immigration has an impact on social policy preferences, we should see it across all countries of the globe, not only in analyses restricted to very rich, strong welfare states.

Brady and Finnigan, and all other research in this area, come to no consensus on whether immigration has a negative impact on support for social policy, and we should remain skeptical of any results if we do not trust the data-generating model. In other words, if our tests do not match what most researchers see as the appropriate theoretical perspective, the results are inconclusive and thus uninformative. The deliberation and voting offer us clues about where to focus theoretical effort: namely, specifying why more countries of the world should (or should not) show a causal effect of immigration on social policy preferences, and whether this effect should (or should not) appear across several decades or only at certain times. I am not aware of extensive theory that attempts to tackle these issues. Now is the time to write it!

Even more productive for the possibility of meta-construction of theory is the correspondence between the actual decisions made by the researchers and the subjective and objective outcomes of those decisions. Again, our results are in progress, but we offer a snapshot in Table 1 of the different ways the researchers chose to measure immigration as their main hypothesis test variable (one out of dozens of model decisions to compare). In the first row, 67 out of 77 teams used a "Stock of Foreign-Born" measure in at least one of their models, and 27% of their models using this "Stock" variable supported the hypothesis, i.e., showed immigration having a negative and statistically significant impact on support for social policy at p < 0.05.

Table 1. Crowdsourced Researcher Decisions, Deliberations and Results.
Five different measurement strategies for the immigration test variable.

In the column 'Positive Test Result Rate', we see that the 'Difference' between "Stock" models (referenced as [1] in Table 1) and those instead using "Flow" to measure immigration (referenced as [2]) is 3.6. In other words, "Stock" models arrive at support for the hypothesis 3.6 percentage points more often than "Flow" models, all else equal. "Stock" models were not more or less popular than "Flow" models, with an average vote score of 0.43, versus 0.45 for "Flow", on a scale from 0 (worst) to 1 (best equipped to test the hypothesis).

The values in bold indicate that "Change in Flow" models (those measuring derivatives of "Flow") were among the most popular in the voting process. So the rate of change of the flow of immigrants is seen as an important component in testing this hypothesis. Interestingly, these models were 4 percentage points more likely than "Stock" and "Flow" models to support the hypothesis. When immigration was measured as specific to certain outgroups (people from Muslim-majority countries, non-Western countries, or refugees), the "Flow" of these various 'Outgroups' was far more popular than their "Stock", but the results were over 10 percentage points less supportive of the hypothesis.
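The arithmetic behind the 'Positive Test Result Rate' column is simple enough to sketch in a few lines. The toy data below are invented; only the computation mirrors Table 1.

```python
import pandas as pd

# Toy data: one row per model, flagging whether it supported the hypothesis
# (a negative, statistically significant immigration effect at p < 0.05).
models = pd.DataFrame({
    "measure":  ["stock"] * 5 + ["flow"] * 5,
    "supports": [1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
})

# Positive test result rate per measurement strategy, in percent.
rates = models.groupby("measure")["supports"].mean() * 100
print(rates)
# Percentage-point difference between "Stock" and "Flow" models.
print(rates["stock"] - rates["flow"])
```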

What can we learn from this? We argue that a full analysis of the massive range of modeling decisions will give us a guide for moving this entire research area forward. Other decisions included, for example, different social policy domains, whether ethnic fractionalization is the 'real' cause of the 'immigration' effect, the construction of latent social policy preference measures, and whether or not GDP and unemployment are part of the data-generating assumptions, to name just a few out of hundreds. We are only scratching the surface here, but it seems that by observing researchers making research decisions, deliberating over them, voting, and making final choices, we will gain immense knowledge about where better theory is necessary. As such, we see the meta-construction of social theory as a promising avenue for social science. This would be the concept of theory-designed replication writ large.

Love is in the error term

A major segment of social science uses formal models and quantitative methods to explain social phenomena. That means researchers use mathematical symbols to define how they think the world works, and then find data to test it. For example, researchers want to explain social outcomes like committing a crime, changing jobs, moving, or having children. Why do some people do these things and not others? Researchers speculate that other things cause these outcomes: getting married, having or losing a job, how much money a person makes, how old they are, and all kinds of other stuff.

Next, researchers translate their theoretical ideas into a formal model. This means an equation. Before you stop reading, maybe pause to consider that an equation is just a theory expressed with symbols instead of standard language. For example,

Y = X

and this might represent that moving homes ("Y") is caused by getting married ("X"). Another way of saying this is that Y depends on X, or that moving depends on getting married.

But seriously, getting married does not always cause moving. Sometimes it does. The correct claim is that moving is a function of getting married: marriage probably increases the likelihood of moving by some amount. Therefore, researchers add a modifier (a coefficient) to X, like the letter b, so that if b were 0.3, getting married would increase the likelihood of moving by 30%. People also move when they don't get married, so researchers add a constant a to account for the likelihood of a person moving for any other reason. If a were 0.3, the average person would have a 30% chance of moving at any point.

Y = a + bX

Now, if a and bX could be used to perfectly predict whether a person moves, the researcher would have made a monumental achievement. Instead, what happens is that the researcher goes and observes moving and marriage behaviors, ideally with a random sample of the population of a society, and then applies the above equation to the sample data. Does the equation fit the data perfectly? No. Never. Researchers must admit that their theories come with uncertainty. Maybe moving depends on the type of home the person currently lives in; maybe it depends on whether a couple can afford to move, or whether it is even a couple rather than a single parent. There are nearly unlimited things that could cause moving. Moreover, the impossibility of a perfectly random sample, mistakes when observing or coding data, and so on all lead to uncertainty in the results. This means the formal equation has to have e, an error term.

Y = a + bX + e
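To see the whole equation at work, here is a minimal simulation sketch of the moving-and-marriage example, using the illustrative values of 0.3 from above. The variable names are mine, and np.polyfit stands in for the least-squares estimation routines researchers actually use.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

a, b = 0.3, 0.3                  # baseline moving chance; marriage effect
X = rng.integers(0, 2, size=n)   # 1 = got married, 0 = did not
e = rng.normal(0, 0.1, size=n)   # everything the model leaves out
Y = a + b * X + e                # propensity to move

# Ordinary least squares recovers the constant and coefficient from the data;
# np.polyfit returns the slope (b) first, then the intercept (a).
b_hat, a_hat = np.polyfit(X, Y, deg=1)
print(f"estimated a = {a_hat:.3f}, estimated b = {b_hat:.3f}")
```

With enough observations, the estimates land close to 0.3; the error term e is what keeps them from ever being exact.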

Researchers spend their time trying to reduce what is in the error term. They are basically tasked with the job of reducing uncertainty. If scientists want to fly a spaceship to Mars, they have to reduce all possible forms of uncertainty in the takeoff, flight path, landing, electronics, and so forth. But a social scientist who wants to predict how many people will move, where they will move, or which people will move cannot reduce uncertainty that much. Sure, they can explain moving with job changes, family members getting sick, weather, hobbies, wars, and many other observable things. But they will never fully explain moving. What I'm really saying is that social scientists will never be able to perfectly explain human behaviors. It's like this:

An Enlightened Physicist describes Sociology

Why is it so hard? If we could just measure all the things that people are doing and thinking, we could predict precisely what humans will do next, right? Maybe so, maybe not; but it is irrelevant, because we cannot measure everything about humans. Even if we could measure the precise actions of the network of neurons in one human brain, whose connections are often said to outnumber the stars in our galaxy, we probably couldn't measure love. Yet something that we might refer to as love seems to be a major force guiding human actions. Both the effect of love:

“Love affects more than our thinking and our behavior toward those we love. It transforms our entire life”

– Thomas Merton

And the lack of love:

“Intense spiritual and emotional lack in our lives is the perfect breeding ground for material greed and overconsumption.”

– bell hooks

(both quotes from hooks 2000, pp. 187 & 105)

These suggest love is a prime mover of human decisions and behaviors. But we don't know exactly what it is, and we have made no attempts that I am aware of to measure it in any large population survey. So,

When we add love as an X variable to our formal model, we cannot test anything, because we have not developed instruments to measure it. So it remains in the void of all that variance in Y that we simply cannot explain. Maybe we should start focusing some of our effort on that.

hooks, bell. 2000. All About Love: New Visions. New York, NY: William Morrow and Company. ISBN 0-688-16844-2.