
Media coverage and Q&A for ‘Ideological bias in the production of research findings’ by Borjas and Breznau

  1. Süddeutsche Zeitung (SZ) article by Sebastian Herrmann “Warum Forscher aus denselben Daten entgegengesetzte Schlüsse ziehen” (Why researchers draw opposite conclusions from the same data).
  2. An eloquently argued Substack post by Laurenz Gunther describing our work and digging deeper into bias among researchers (here, those working on the topic of immigration).
  3. A news report by Luca Rehse-Knauf for Deutschlandfunk (German radio) and their Forschung aktuell series “Migrationspolitik: Einstellungen können Forschungsergebnisse beeinflussen” (Migration policy: Attitudes can influence research results).
  4. A PsyPost report by Eric W. Dolan “158 scientists used the same data, but their politics predicted the results”.
  5. A podcast episode of The Last Show with David Cooper. Apple / YouTube.
  6. A podcast episode of Allegedly Does Not Replicate with the Institute for Replication (I4R), featuring Abel Brodeur and Juan Pablo Posada Aparicio.
  7. A Neue Zürcher Zeitung article about the original ‘Hidden Universe’ study (paywalled): ‘Das Experiment: Wer bekommt die rote Karte?’ (The experiment: who gets the red card?).

— How did you arrive at your research question, and what is the study about?

George emailed me with a few questions about our original study. At that time he was analyzing our data and had found a statistical association between the teams’ pre-existing preferences for more or less migration and their findings. I was very skeptical. I have now worked on replication and reproducibility themes for almost a decade, and I am acutely aware of what we often refer to as ‘researcher degrees of freedom’, also known as ‘the garden of forking paths’. This refers to choices that researchers can make during the research process that can lead to different outcomes. I assumed that the statistical association he found would not hold under different but equally plausible model specifications. I began testing many different models. Basically, they all showed the same result. Therefore, I became convinced that this was more than a fluke.

Actually, George had already run most of the same models. We present all of our models in a multiverse analysis in our paper. Out of 883 models, 88% showed a significant statistical effect, suggesting that we should reject the null hypothesis that ‘ideology has zero effect on the teams’ research findings’. If we adopt the assumption that we should only trust models that control for researchers’ educational experiences – something we believe impacts their results – then roughly 93% of the models show a significant statistical effect.
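To make the multiverse idea concrete, here is a minimal R sketch of the general procedure – the data frame ‘teams’ and all variable names are hypothetical placeholders, not our actual code or measures:

```r
# Minimal multiverse sketch (hypothetical data frame 'teams': one row per
# research team, with their reported effect 'finding', their immigration-policy
# preference 'ideology', and candidate control variables).
controls <- c("stats_training", "topic_expertise", "years_experience")

# Every subset of the controls defines one model specification
specs <- c(list(character(0)),
           unlist(lapply(1:length(controls), function(k)
             combn(controls, k, simplify = FALSE)), recursive = FALSE))

results <- t(sapply(specs, function(ctrl) {
  rhs <- paste(c("ideology", ctrl), collapse = " + ")
  fit <- lm(as.formula(paste("finding ~", rhs)), data = teams)
  summary(fit)$coefficients["ideology", c("Estimate", "Pr(>|t|)")]
}))

# Share of specifications in which ideology is significant at the 5% level
mean(results[, "Pr(>|t|)"] < 0.05)
```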

— Could you explain the experimental design and methodology?

This study is an exploratory secondary analysis of the data generated by the experiment that Eike Mark Rinke, Alexander Wuttke and I conducted. We gave 71 research teams the same data and the same hypothesis – that immigration reduces support for social welfare policies. We surveyed the teams on their backgrounds and research experience, and we asked them whether they believed the hypothesis was true and what they thought about immigration policy. In the original study – again, the one that I helped lead – we did not find any important impact of immigration preferences. We essentially found that the results went in all directions, and we could not easily explain the variation.

After working together, George and I agree that the statistical analysis in the original study was not a clean test of the effect of immigration preferences on the research teams’ results, because it controlled for the statistical model specifications that the teams made. In the original study we were searching for key decisions that might explain why results went in different directions, so this naturally made sense. But in hindsight, these decisions are the mechanism through which ideology gets transmitted into statistical results. Thus, the original study introduced what is known in statistics and causal analysis as an overcontrol problem: conditioning on a mediator. If someone has an ideological bias, they will choose statistical models that lead to more desirable results. The original study included both the test variable (ideology) and its mechanism (the statistical models) in the same regression, and in doing so it suppressed the impact of the test variable we were trying to observe.

This time, George and I conducted regression analyses in which the research findings were the dependent variable and ideology was the independent variable – without the model specifications as control variables, but with other controls.
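The overcontrol logic is easy to demonstrate with a toy simulation – the effect sizes below are invented purely for illustration, not estimates from our study:

```r
# Toy simulation: ideology affects findings only through model choices.
set.seed(1)
n <- 1000
ideology     <- rnorm(n)                      # pre-existing preference
model_choice <- 0.8 * ideology + rnorm(n)     # ideology shapes specification choices
finding      <- 0.5 * model_choice + rnorm(n) # choices transmit into results

coef(lm(finding ~ ideology))["ideology"]
# ~0.40: the total effect of ideology is visible

coef(lm(finding ~ ideology + model_choice))["ideology"]
# ~0.00: controlling for the mechanism suppresses the very effect being tested
```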

— What are the limitations?

A key limitation is that this study is exploratory, not confirmatory. It relies on secondary data from a study that was not specifically designed to test the impact of ideological bias. It cannot confirm that this bias exists; instead, it demonstrates robustly that a statistical association exists between ideology and researchers’ findings. We are not aware of any way to explain this association other than an ideological bias. But we can only state with confidence that it is prudent to reject the null hypothesis that, in this particular sample and study, there is no association between preexisting preferences for immigration policy and research findings. More specifically, the likelihood of observing the data in this experiment if the null hypothesis were true is very low.

Another limitation is that the size of the effect we found is unclear. It points in a positive direction – more pro-immigration policy stances associate with findings that immigration has a more positive effect on social policy preferences among the public, and vice versa for more anti-immigration policy stances and a more negative effect. But because of the great variation in results and the small sample size, the standard errors of the estimated statistical effects are very large. This means that the true effect might be anywhere from minuscule and near-zero, to moderate, to very large. We simply cannot say much about this here. More research is necessary, although this is an inherently difficult topic to study: if we inform researchers that we are studying their ideological bias, they might behave differently, and this would undermine the study’s ecological validity.

— According to the study, ideology influences model specifications. Could you provide a concrete example to illustrate how a single design decision (or a combination thereof) can have an impact?

I cannot, and this is another limitation of the study. If I could, it would be something we would have found in the original experiment that collected the data. We can only point at patterns here. There are certain model specifications – unique combinations of statistical modelling choices – that produce more negative results, and the teams with more anti-immigration ideologies were more likely to choose them. But there are far more possible model specifications than there are teams. This leads to a sparse data problem: there are many empty cells in the matrix of all possible specification combinations that teams could plausibly make. This makes it practically impossible to pinpoint the effects of exact specifications, and there are many different specifications that can lead to a positive or negative statistical effect. The point is that the only thing that differed between the teams asked to test the hypothesis with the same data was their modelling choices. Therefore, this is the only way they could arrive at different results. There was no cheating or result faking; we checked that their statistical code produced the results they reported to us.


— To what extent is this a problem, and to what extent is it normal that results, which rest on analytical decisions, depend on who you are and how you think?

This is not something our study answers, and it may not be fully possible to answer because we do not yet know the nature of consciousness. We also cannot measure what is happening inside a human neural network – a brain, in other words. But it is clear to me that experience, ideology and preferences shape results. A simple example is statistical training. Many researchers have limited statistical training, and they build only those statistical models that they learned about in their studies. This impacts results.

But more generally, idiosyncrasies of people, like ideology, shape which research questions people are willing to pursue and how, and they shape the reporting of the results. Some could look at our study and think that the estimated statistical impact is large and highly concerning. Others might look at it and think it is tiny and of no concern at all. Our study suggests that ideology can explain somewhere between 1 and 3% of the variance in the results. If scientific findings are on average 1 to 3% off from what they would be without bias, is that a big problem? I mean… what do you think?

— If I understand correctly, the experiment was originally intended to show how much the results diverged, not why. How did you arrive at ideology as a possible cause?

As I already mentioned, this is something that George noticed in our data. He already had this hypothesis in his mind, and I cannot blame him for thinking this. The Open Science Movement and metascience work reveal many so-called ‘Questionable Research Practices’. These include everything from faking data, to tampering with statistical models, to stopping the collection of data during an experiment to produce a desired result. These practices are designed to produce certain results in order to obtain a publication or support a pet hypothesis (confirmation bias). Obviously some of these studies were motivated by ideological goals.

— How could ideological bias be reduced? Is this even desirable, or should we simply be aware that it can exist?

The impact of ideology can be reduced by following some clear recommendations of the Open Science Movement. Studies should be preregistered, they should provide all code and materials, they should not be conducted in isolation or in hiding, and researchers should cooperate. Some of us are engaging in so-called ‘Adversarial Collaborations’, where researchers who do not agree – those with different priors about a given hypothesis, such as the impact of immigration – collaborate. They lay out all the aspects of a study in advance, and they agree on what evidence would count as support for either position. I highly recommend this form of science. It takes competition and turns it into collaboration, with the goal of knowledge seeking prioritized above all else.


— You are investigating ideological bias in science using scientific methods. How do you deal with this tension in meta-scientific questions, where you are essentially also your own subject of investigation?

Similar to my last answer: one cannot fully understand or deal with one’s own bias, and therefore needs to build checks into the process – things that would reduce this bias, like preregistration. I am currently working on a project that is an autoethnography of my own questionable research practices and the perverse incentives I encountered during my career in science. I hope that by doing this, and revealing my own behaviors, I can improve them. I also want to be a role model for others, to make it desirable to be highly critical of one’s own work. My goal is to get this study published in a high-quality journal and thereby prove that self-criticism – something researchers mostly try to avoid in order to protect their theories, findings and careers – can be used in a positive way in the scientific process.

— Can one’s own attitudes always play a role, even in this study?

Sure. Definitely. That is why it was important for me to take a so-called ‘multiverse’ approach to this study, and to many other studies I am working on recently. I want to ensure that I, or one of my colleagues, have not simply selected a statistical model that produces certain results. This practice, known as p-hacking, is prevalent in science, especially in secondary data analysis. I essentially learned to do this during my graduate studies: we would find a result we liked and then develop convincing logical arguments for why the model producing it must be the best model. The idea with multiverse analysis is to run all, or at least all plausible, alternative models. This helps reveal whether my model is an outlier – whether it represents something very unique or unusual in the distribution of model results. If it does, this is a cause for great concern. If not, it is evidence of a robust statistical association.

— There have also been critical reactions, for example in this online Bluesky thread, which raise concerns about George Borjas’s views and background. How do you assess this criticism?

I have read the discussion thread by Michael Clemens, an economist at George Mason University. One line of criticism appears to concern the fact that George Borjas recently conducted research for the executive branch of government.

Another criticism seems to focus on the observation that different model specifications yield different results. This is precisely what our study confirms, and it is also a well-established fact in the history of empirical social science.

Both points are orthogonal to our study. Even if we were to assume, hypothetically, that George held some form of ideological bias and that this bias influenced his analytical choices – which I cannot confirm and do not claim – this would not undermine our findings. The reason is that we adopted a deliberately robust research design. We conducted a multiverse analysis comprising 883 regression models. The consistency of results across this large set of plausible specifications makes it implausible that our conclusions are driven by a few highly selective model specifications – those that would be selected due to ideological bias.

It is also important to note that George and I do not share the same political views. Precisely for that reason, we had an additional motivation to adopt a highly robust analytical strategy. I explicitly advocated for the multiverse approach in order to minimize the influence of individual priors, including our own potential ideological orientations. I see this as a great strength of the study.

The purpose of our approach is to decouple empirical results from personal or ideological preferences as much as possible, and to estimate the robustness of our finding to any kind of bias, not just ideological bias. I would encourage critics to apply the same standards of robustness to their own work. To date, Mr. Clemens has not presented empirical evidence that contradicts our findings. Moreover, criticisms referring to modeling choices in studies conducted by George decades ago are not relevant to the validity of the present analysis.

— What does it mean when you say that only 3% of the variance can be explained by your results?

That has to do with the regression coefficient and the r-squared values. A coefficient of 0.03 means that a one-point higher (more positive) ideology mathematically predicts a change in results of 0.03. This sounds meaningless on its own, but we know that 0 would mean no change (zero percent) and that 1 would be one point on a standardized scale – one standard deviation in the distribution of the dependent variable. It would be possible to move the results by more than one standard deviation, but this would be quite preposterous; there is nothing in the complex nature of social science, which lacks laws, that would do that. So I set the upper bound of the largest possible effect at 1, meaning that 1 equals 100% of the distance in the distribution. Therefore, 0.03 is like 3 percent of that distance. At the same time, the r-squared, which tells us how much of the unexplained error is reduced by this particular variable, is around 0.03 or less depending on how we measure the variable. This suggests that fitting the observed ideology values to the observed results from the teams reduces the unexplained variance from 100% down to around 97%.
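For readers who want to see these two quantities computed side by side, here is an illustrative R sketch with simulated data and an assumed effect size – not our actual estimates:

```r
# Illustrative only: simulated data, assumed effect size.
set.seed(42)
n <- 160
ideology <- rnorm(n)                                      # preference scale
finding  <- as.numeric(scale(0.2 * ideology + rnorm(n)))  # standardized results

fit <- lm(finding ~ ideology)
coef(fit)["ideology"]   # how far results shift, in SD units, per 1-point ideology

# r-squared: the share of variance in findings that ideology accounts for;
# unexplained variance shrinks from 100% to (100% - r-squared)
summary(fit)$r.squared
```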

— Can you explain — very, very simply — what you did? 

In a study that I co-led starting in 2018 (Breznau, Rinke and Wuttke et al. 2022), we designed an experiment that allowed us to observe researchers doing research on the impact of immigration on social policy preferences. We gave them the same data and asked them to answer the same research question: whether immigration reduces support for social policies or not. We documented the researchers’ pre-existing methodological training, their experience with and expectations about the topic, and their personal preferences for looser or tighter immigration laws in their own countries. We shared all of the data and documentation of our work publicly. Because we shared our data, a few years later George J. Borjas was able to reanalyze it and find new evidence of a correlation between the researchers’ ideological positions on immigration and their findings. Those with more pro-immigration positions tended to find evidence that immigration had a positive impact on support for social policy, and those with more anti-immigration positions tended to find evidence that it had a negative impact. Here “social policy” means support for a more extensive welfare state providing social security via the government – many would call this support for social cohesion. I was skeptical of George’s initial work, and together we vetted his findings. We ran almost a thousand alternative statistical models, and of these 88% suggest that we should reject the null hypothesis. The null hypothesis is that ideology has no impact on researchers’ findings; if it were true, the probability of observing what we observed would be very low. But we did observe this association, and by rejecting the null hypothesis we have evidence that something more is going on, not just random luck in the data. Crucially, we should reject the idea that ideology has no impact on researchers’ results when they analyze the same data.

— What motivated you to do this study?

As I said, George found this association between researchers’ ideological positions on immigration and their research findings. This is an important scientific observation. I was a bit more skeptical, in particular because my initial analysis of the data as part of the original experiment did not show this association. Together we became very motivated to tackle this problem, and in the end, we have relatively strong evidence of something. The exact nature of this should be subject to further research.

— What are the most striking findings? 

The main finding is that researchers’ own preferences for tighter or looser immigration predict what they went on to find in their work. This appears striking, but when put into context it is maybe not so surprising. We know that there are many reasons that researchers may consciously or even unconsciously exert influence on their own findings. There are many cases of researchers engaging in questionable research practices to ‘fudge’ their data and results in ways that make them appear stronger than they really are. This has occurred frequently in biomedicine, for example in studies of products whose approval would net the researchers great personal profits. Consider also what we know from psychology, namely confirmation bias: people tend to seek out evidence of what they already believe is true. Researchers are people too. When presented with various forms of competing evidence, a researcher might gravitate toward that which supports their preexisting beliefs or preferences.

— What are the implications for the social sciences, which are already reeling from the replication crisis? 

The implications reinforce what we already know: we should focus on refining two areas of science. The first is scientific training. We need to make transparent and open workflows and data sharing the norm. This includes researchers stating in advance what they plan to do and what they expect to find. When researchers do not do this, they can run several experiments or analyze hundreds of datasets with millions of different statistical models and simply choose the one finding that looks exciting to them. Generations of researchers before us learned to do exactly this in order to get published. They learned to ‘sell’ a single selected finding as confirmatory evidence of something in the real world, when in fact it is a highly selective exploration of the data leading to a unique event; one that probably does not generalize and is not reproducible (a.k.a. luck).

I bring up getting published here because this is the second problem we must urgently address in science. A researcher’s worth or ranking as a scientist is judged almost entirely on their publication record, in particular publications in journals that are considered higher status. These higher-status journals should be publishing studies because the studies contain higher-quality science, but there are ways to game this system. There is a host of questionable research practices that make findings look more exciting than they actually are. These practices often lead to irreproducible findings – essentially fake science. Big publishing is a major profit industry, and this increases the pressure to publish: publishers engage in questionable, sometimes unethical practices to sell more journals, as opposed to solid science, which is often quite boring and tends to find that new drugs or treatments do not work and that our theories are wrong. Ideally we need to end big publishing’s control over science. Their role should simply be production and distribution, but they currently copyright much of the material and force universities to pay twice: once for the researchers’ salaries and again so that researchers can read what they publish. And this is often done with public money. It is really wrong, and it generates perverse incentives among researchers and profit-seeking publishers.

— It seems teams didn’t falsify data or cherry-pick numbers in any obvious way; instead, ideology appeared to influence judgement calls — is that a reasonable explanation or is it too charitable?

It is a reasonable explanation, but we must be very cautious with it. Ideology might explain about 3% of the variance in researchers’ findings. The rest has to do with other factors or random noise. Consider that many scientists have specific methodological training; through this training they simply do what they know how to do, and this can influence results. For example, someone who only knows how to use a hammer will treat everything as a hammering problem, even when it might be better solved with a different tool. This takes us back to better, broader methodological training, which scientists would have more time for if they were not under constant pressure to publish.

— Are there ways to guard against the ideological bias highlighted in the study? It seems that peer review can spot poorly defined studies — but does it work well enough in the real world? 

Peer review is a poor solution. There are studies showing that peer review is not reliable. Just as with giving the same researchers the same data, if you give different peer reviewers the same paper, they will come to a huge range of judgements about it. The publishing system is in some ways a lottery. One solution to this problem is what we call “adversarial collaboration”. This is a type of research where scientists who disagree about a topic work together. They design a study together and agree on all methods and all criteria with which to judge the outcomes. Then, after everything is agreed in advance, the study is conducted, ideally by a third party, and the results speak for themselves.

— What do you think the message here is to the public and to policy makers?

Everything we have in society that works is based on science: smartphones, the open-heart surgery that saved the life of a loved one, airbags, planes that don’t crash. We need science for every decision we make collectively. But the public and policymakers can be highly politicized, and this can influence science; we see this even in the scientists in our study, whose own politics seemingly played a role. To cut through this, we need policymakers who support science conducted by scholars who have different political ideologies, who do not agree. For example, George and I are somewhat different in our own assessments of the impact of immigration on society and the labor market. This made us a very strong team: it meant that we could focus on the scientific process and try to get to the best, most reliable answer. Crucially, we should never rely on single studies or single science teams. Before we declare evidence of anything, we need many studies. We should not rely on single papers or scholars; we need dozens or hundreds of studies on a topic conducted by inter-disciplinary and inter-ideological teams.

Meta-reproducibility crisis: Software edition

It’s not just a ‘replication crisis’; it’s a reproducibility crisis. Forget about falsification if one can’t even reproduce the workflow of another. Efforts are underway to improve the transparency that allows reproduction – good. But what if, even with the original data, researchers cannot reproduce numerical results simply because of where they are in time and space? What if different statistical programming languages, or different packages and versions of those languages, actually lead to different findings? Based on results of the Crowdsourced Replication Initiative, we demonstrated that even using the same data and models, only 80% of independent replicator teams could reproduce the numerical results, even when they had access to the original Stata code.

Based on my experiences in this area, I cannot help but have a strong hunch that software or package versions threaten the computational reproducibility of our research. The fact that Stata rounds .5 up (away from zero) while R rounds half to even already suggests that the same models might produce different results across software simply from rounding variations. Recently, I found another reason to believe this hunch.
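The rounding difference is easy to verify. In R (with Stata’s behavior shown in comments for comparison):

```r
# R's round() follows IEC 60559 'round half to even' (banker's rounding):
round(0.5)  # 0
round(1.5)  # 2
round(2.5)  # 2  (ties go to the nearest even digit)

# Stata's round() instead rounds ties away from zero:
# . display round(0.5)    -> 1
# . display round(2.5)    -> 3
```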

I am a participant in SCORE, a massive collaboration to systematically investigate the reproducibility of social science research – led by the Center for Open Science. I agreed to attempt a computational reproduction of the study “Age, Inequality, and Reactions to Marketization in Post-Communist Central and Eastern Europe” by Horvat and Evans (2011).

Despite some potential differences between the data I received from the original authors and those reported in their tables, I was able to produce similar results. Similar, but not all that close. The particular coefficient I was interested in came out at -0.39 in my ordered probit (cumulative link) model, whereas their original was -0.22. As a latent-scale coefficient (or translated into a percentage probability), this is potentially a large difference. I used R, but was not satisfied with my results. The original authors most likely used Stata, as their public data were a .dta file – Stata’s native file storage format. Also, let’s be honest, hardly any social scientist was using R before 2010, when they likely did their analyses. My curiosity and suspicion led me to run the model in Stata. Here I got a -0.21 coefficient: almost identical to their original study, and not surprising given the minor variation between the case numbers they reported and the public data file. I was still not convinced that these coefficients were actually different, because the models were quite complex. So I plotted the predicted probabilities to be sure that this was not a statistical artifact of some model component (intercept cut-points, for example).

But my hunch was supported: the same models run in R and Stata led to different predicted probabilities. The figure below shows that in many former Communist Eastern and Central-Eastern European countries, those aged 60+ were less optimistic about their living standards over the next 5 years. One of their critical findings was that this gap increased from 1993 to 2007, in particular because the 60+ group became even less optimistic; although in fairness this is difficult to conclude because of all the other variables in the model. What is clear is that the groups were 0.45 points apart in 1993 and 0.65 apart in 2007 (see Table 5 in their original study). When I plot the predicted values, I find that the results are similar, but not identical, in terms of the 1993-to-2007 change, and that the probabilities are quite different across software.

I used Stata v15 and the ‘oprobit’ command, and R’s ‘MASS’ package with the ‘polr’ function. Despite identical data (and case numbers!), the ‘polr’ routine predicts a much higher probability than Stata of respondents answering that their standard of living will fall or fall a great deal. Although the relative change between age groups is similar – with a slightly steeper negative slope in R – the lines are pretty far apart, and even further apart for the 60+ age group. Without unpacking each package and the exact estimation strategy taking place therein, I cannot as of yet say why. My own statistical and software abilities are by no means exceptional, but they are certainly above those of the average social scientist. Thus, it would be totally unrealistic to expect any social scientist to understand the entire routine taking place within polr or oprobit. If they did, they wouldn’t need the packages and could just write their own cumulative link routine!
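For anyone wanting to try the comparison themselves, here is a minimal sketch of the R side. The variable names are placeholders standing in for the Horvat and Evans measures, not their actual coding:

```r
library(MASS)

# polr() defaults to a logistic link; set method = "probit" explicitly
# to estimate a cumulative link model comparable to Stata's -oprobit-.
fit <- polr(optimism ~ age_group + year + education,
            data = cee, method = "probit", Hess = TRUE)

summary(fit)  # latent-scale coefficients and intercept cut-points

# Predicted probabilities per response category, as in the plots above
newdat <- expand.grid(age_group = c("18-59", "60+"),
                      year      = c("1993", "2007"),
                      education = "secondary")
cbind(newdat, predict(fit, newdata = newdat, type = "probs"))
```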

The implication is that using different software can lead to a lack of reproducibility. As if we did not have enough to worry about in the reproducibility area already.

Overcoming replication fears

Fear of rejection, part I

To replicate a study, you need information – probably information that is not fully disclosed in a 6-12,000 word journal article. Recent trends aside, information such as data and analytical procedures is not usually publicly available. This means you should – or must, in case the data are not retrievable from some other source – contact the original author. Be prepared for rejection. One study demonstrated that among the top sociology journals, less than 30% of replication materials were available (even though as many as 75% of articles claimed otherwise). Political science was only marginally better, at around 50% as of 2015. Professors are likely to ignore emails asking for their data and code. One group of sociology students contacted 53 different authors asking for replication materials and only 15 provided them (28%). Ten never responded to the requests at all, despite several follow-up emails. So don’t take it personally; social scientists are not known for their forthcomingness in this area.

Verification is not affirmation

Imagine being a student who tries to verify the results of a prolific, senior scholar and cannot. If it were me, I would be anxious that I had made a mistake. But the only real mistake would be to assume that my failure to verify is a refutation of my own skills. Of course, it’s good to double-check everything: have a colleague look at your work if you are unsure, or a teacher or supervisor if you are a student. Unverifiable results are common; there is no need for self-doubt. Things like reverse-coding biological sex so that women appear less supportive of welfare state policies, or accidentally analyzing values of 88 (a missing-data code) as a real value of coital frequency, leading to a surprising rate for older persons, are actually a normal part of social science.

When replicating a study, just assume there will be at least one mistake. Like a treasure hunt.

Verification comes down to the availability of the materials. If the data and code are not fully available, it really is a treasure hunt, because you will be unsure what you are going to find or learn. On the other hand, if the data and code are available and in good order, then it is more like cooking than hunting. This often comes down to the difference between teaching replication – the recipe approach, where students should come to the same results every time when following the exact same steps – and replication as a form of social research – the treasure hunt approach, where researchers (i.e., students) may not have a coherent recipe from the original ‘chef’. But make no mistake(!): even fully transparent studies often come with mistakes in the code or data.

Fear of mistakes

If I am not making mistakes, I am not doing research. You will make mistakes, and there is nothing to fear. There are all kinds of reasons that replication results will diverge, and not all of them are mistakes. Recently a well-known and well-respected sociologist retracted his own paper after someone trying to replicate the study identified coding errors. One journal started checking that the data and code of accepted papers actually produced the reported results, and almost none were verifiable on the first attempt. In a crowdsourced replication, replicators – mostly PhD students, postdocs and a few professors – came to an exact verification of the original study only 82% of the time, despite having the original code!

Fear of the unknown

Designing statistical models with software is like learning a new language. Student replications often involve methods unfamiliar to them. This is a great didactic tool – learning by doing. There is nothing to fear here. Professors’ original studies often involve methods that they are not experts in. One extremely famous scholar and his colleague ran a regression with an interaction term in it and botched the interpretation of the effects; the results were basically the opposite of what they reported.

Science is a process of exploring the unknown. Replications use what is known as a tool for finding what is unknown.

Fear of rejection, Part II

Students may be interested in publishing their replications – they should be, because how else will others put their knowledge into practical use? Again, be prepared for rejection. Journals and reviewers across the social sciences are not very excited about replications. A pair of researchers studied the instructions and aims of 1,151 psychology journals in 2016 and discovered that only 3% explicitly accepted replications. One sociologist pointed out not so long ago that replication is just not the norm in sociology, and another recently came to the same conclusion. The good news is that we don’t need journals anymore to make useful science, at least in theory. Students can immediately publish their results as preprints and share data and code in a public repository. If a student elects to use Open Science Framework preprint servers, their work will be immediately findable in scholarly search engines.

Fear of ego

Scientists tend to overestimate the impact of a negative replication on their reputations. Ego alert: assume a scientist worried about a replication is a professor. This is a person who is most likely tenured – certainly the highly cited professors are. This is also a person who “professes” knowledge on a topic, meaning that they should be an expert who teaches students, policymakers, the public and anyone else interested in the topic. If any of this professor’s results were shown to be unreliable or false, this would be a critical piece of information if that professor’s goal was to actually profess knowledge on that topic. Unfortunately, professors regularly suffer from some kind of ‘rock-star syndrome’ or ego-mania in which they do science as a means to gain recognition and fame. This leads them to react aggressively against anything that contradicts them, which is very bad for science. If a student replicator can help deflate a runaway professorial ego through replication, then that student is doing a great service to science.

Fear of not addressing fear

In a typical primary or secondary school chemistry class, students repeat the basic experiments of chemical reactions that have been done for hundreds of years. These students are learning through replication. They are gaining knowledge in a way that cannot be simply taught in a lecture or by reading a book. They are also affirming the act of science, thus developing a faith that science works. In social science especially, we face a reliability crisis if not a public image crisis. Students should be reassured that there is a repetitive and reliable nature to doing social science, whether they will continue as a social scientist or (in the most likely case) not. Part of this reliability can be a lack of reliability. Science is simply a process of trying to understand the unknown, and even quantify this unknown. I fear that without more student replications, we are diminishing the value of social science and contributing to the perception that social science is unreliable.

Good social science should be reliably able to identify unreliability, and this is best taught through conducting replications.

Behind the specification curve

Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether conscious, unconscious or simple mistakes, we take certain paths in our research processes. All possible paths constitute researcher degrees of freedom, and they can lead to different findings. In fact, we can intentionally exploit these paths to get the results we want. Replications can work against this by detecting errors or identifying questionable decisions. We have few replications in sociology, and those working in IPM are no exception. Thus, just one replication of any given published study should be a scientific improvement.

But is it?

Let’s imagine a secondary data study that a replicator wants to reproduce. We now know, from a major social science journal, that even when researchers provide their code it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code: this requires writing brand new code, as the sketch below illustrates. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator has the more complicated goal of testing generalizability or scrutinizing the original study. A replication is as prone to researcher-degrees-of-freedom problems as the original study. This means replications might not be as useful as we hoped, and alone they cannot alleviate the ‘crisis of science’.
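Even a one-line Stata model shows what this translation involves. Here is a hedged sketch with made-up variable names, the Stata original appearing as a comment above a rough R equivalent:

```r
# Stata original:  regress support immig age i.country [pweight = wt]
# A rough R equivalent needs the survey machinery declared explicitly:
library(survey)

des <- svydesign(ids = ~1, weights = ~wt, data = dat)  # declare sampling weights
fit <- svyglm(support ~ immig + age + factor(country), design = des)
summary(fit)  # coefficients with robust standard errors, as pweights give in Stata
```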

To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero. Unless they are super complicated, a computer can run them for us very quickly. Given that researchers accidentally or intentionally report only the models supporting their claims, it is a useful exercise to check how alternative specifications might affect the findings. This does not mean running all possible models; doing so throws models into the results pool that are causally impossible. We need to carefully and logically select models based on plausible research decisions. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures.
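In practice this means enumerating the plausible decisions and crossing them. A sketch of the idea in R, with the decision options and variable names invented for illustration:

```r
# Cross all plausible analytical decisions (options invented for illustration)
decisions <- expand.grid(
  weight   = c(TRUE, FALSE),                       # weight the survey data?
  poverty  = c("half_median", "deprivation"),      # poverty measure
  controls = c("minimal", "full"),
  stringsAsFactors = FALSE
)

# Fit one model per defensible combination; causally impossible
# combinations would be filtered out before this step.
fit_spec <- function(d, data) {
  y   <- if (d$poverty == "half_median") "pov_hm" else "pov_dep"
  rhs <- if (d$controls == "minimal") "treatment + age" else
           "treatment + age + gender + education"
  w   <- if (d$weight) data$survey_wt else NULL
  lm(as.formula(paste(y, "~", rhs)), data = data, weights = w)
}

fits <- lapply(seq_len(nrow(decisions)), function(i) fit_spec(decisions[i, ], dat))
```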

Figure: effect sizes (top) and model specifications (bottom) from analyses of the outcome variable “positive reciprocity” (a tendency to pay back favors). Produced by Julia Rohrer (2018), it demonstrates that specification curves are both visually appealing and scientifically useful.

Putting together all combinations of plausible decisions, we come to a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results. This gives an indication of where we need to focus our future research. One of the newest ways to do this is specification curve analysis, a close relative of p-curve analysis. These methods were originally introduced as ways to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. These methods caught on in psychology recently (Rohrer 2018), and it is time for sociology to get behind the curve and advance our discipline’s reliability.

References:

Janz, Nicole. 2015. “Leading Journal Verifies Articles before Publication – So Far, All Replications Failed.” Political Science Replication Blog. Retrieved July 22, 2019.

Muñoz, John and Cristobal Young. 2018. “We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness.” Sociological Methodology 48(1):1–33.

Rohrer, Julia M. 2018. “Run All the Models! Dealing With Data Analytic Flexibility.” APS Observer 31(3).