
Sociologists tend to be left politically. Is this a problem?

My thoughts: If it does represent an epistemological problem, I have no idea how to solve it. I can’t imagine trying to push right-leaning persons into sociology. At the same time, we should consider that the definition of what science is and should be varies, especially across disciplines. In sociology, for example, there is a strong sense of communality. Robert Merton wrote about scientific norms that would be necessary to support the scientific method (test, re-test), because the scientific method alone does not guarantee the most efficient and effective knowledge production, and can even be used to maintain power over others. One of these norms he called “communism”, which we today label “communality” because he did not mean a system of governance, centralized economic planning, or the Communist Party. What he meant was that everyone should have equal access to participate in science. If everyone does not have equal access to participate in society, then by default they do not have equal access to science. Why should everyone have access? Because this maximizes efficiency in scientific gains and breakthroughs: everyone can contribute according to their abilities.

Therefore, many sociologists would define science in a way that demands social and political change in order to support scientific progress (which also means economic productivity and abundance to provide for everyone’s needs). But trying to change societies in ways that maximize science opens the door for political agendas within science, which can increase bias in findings. For that reason, I am not sure this is an epistemological problem. It may be that epistemologies correlate with ideologies, but the epistemological approaches of sociology, political science, and related disciplines (ethnography, participant observation, econometrics and psychometrics, survey research, interviews, ethnomethodology, etc.), among the broadest in science, are nonetheless beneficial to collective knowledge.

Regarding the photo above: In the Tuskegee Experiment, the scientific method was followed carefully. But the scientific method alone does not guarantee that humans are safe from science, or that they have equal access to participate in it. The participants were deceived, and treatment for their disease was withheld. Unethical and abusive, yet scientific if we take the narrowest possible epistemology of the scientific method.

Image source (public domain): https://www.flickr.com/photos/pingnews/441531333

Behind the specification curve

Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether through conscious choices, unconscious ones, or simple mistakes, we take certain paths in our research processes. All possible paths constitute researcher degrees of freedom, and different paths can lead to different findings. In fact, we can intentionally exploit these paths to reach the results we want. Replications can work against this by detecting errors or identifying questionable decisions. Yet we have few replications in sociology, and those working in IPM are no exception. Thus even one replication of any given published study should be a scientific improvement.

But is it?

Let’s imagine a secondary data study that a replicator wants to reproduce. We now know from a major social science journal that even when researchers provide their code, it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code: this requires writing brand-new code. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator with the more complicated goals of testing generalizability or scrutinizing the original study. A replication is as prone to researcher-degrees-of-freedom problems as the original study. This means replications might not be as useful as we hoped, and alone they cannot alleviate the ‘crisis of science’.

To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero. Unless they are super-complicated, a computer can run them for us very quickly. Given that researchers accidentally or intentionally report only models supporting their claims, it is a useful exercise to check how alternative specifications affect the findings. This does not mean running all possible models: doing so throws models into the results pool that are causally impossible. We need to carefully and logically select models based on plausible research decisions. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures.
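To make this concrete, here is a minimal sketch in Python of how a few plausible decisions multiply into a pool of defensible models. The variables (income, age, education, a survey weight) and the data are simulated for illustration; they are hypothetical, not drawn from any study discussed here.

```python
# Minimal sketch: enumerate plausible analytic decisions and fit every
# combination. All variable names and data are hypothetical/simulated.
import itertools

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "income": rng.lognormal(mean=10, sigma=0.5, size=n),
    "age":    rng.integers(18, 80, size=n),
    "edu":    rng.integers(8, 20, size=n),    # years of education
    "weight": rng.uniform(0.5, 2.0, size=n),  # survey weights
})
df["outcome"] = 0.3 * df["edu"] + rng.normal(0, 5, size=n)

# Each decision point gets only theoretically defensible options:
# how to draw the poverty line, which covariates, whether to weight.
poverty_cutoffs = {"half_median": 0.5, "sixty_pct_median": 0.6}
covariate_sets  = [("age",), ("age", "edu")]
weighting       = [True, False]

results = []
for (cut_name, cut), covs, use_w in itertools.product(
        poverty_cutoffs.items(), covariate_sets, weighting):
    df["poor"] = (df["income"] < cut * df["income"].median()).astype(int)
    X = sm.add_constant(df[["poor", *covs]])
    w = df["weight"] if use_w else 1.0   # WLS with weights=1.0 is plain OLS
    fit = sm.WLS(df["outcome"], X, weights=w).fit()
    results.append({"cutoff": cut_name, "covariates": "+".join(covs),
                    "weighted": use_w,
                    "estimate": fit.params["poor"],
                    "p": fit.pvalues["poor"]})

print(pd.DataFrame(results).sort_values("estimate"))
```

Three decision points with two defensible options each already yield eight models; a realistic analysis with a dozen such decisions quickly reaches thousands, which is exactly why a computer should run them all.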

This figure shows effect sizes (top) and model specifications (bottom) for analyses of the outcome variable “positive reciprocity” (a tendency to pay back favors). Produced by Julia Rohrer (2018), it demonstrates that specification curves are both visually appealing and scientifically useful.

Putting together all combinations of plausible decisions, we arrive at a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results, which tells us where to focus our future research. One of the newest ways to do this is specification curve analysis, a close relative of p-curve analysis. These methods were originally introduced as ways to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. They recently caught on in psychology (Rohrer 2018), and it is time for sociology to get behind the curve and advance our discipline’s reliability.
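To give a rough sense of how such a figure is built (a sketch only, not a reproduction of Rohrer’s analysis), the following continues the simulated example above, reusing its `results` list: sort the estimates for the top panel, then mark in the bottom panel which decision options produced each specification.

```python
# Rough illustration of a specification curve, continuing the `results`
# list from the simulated sketch above; not any published figure.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

spec = pd.DataFrame(results).sort_values("estimate").reset_index(drop=True)

fig, (top, bottom) = plt.subplots(2, 1, sharex=True, figsize=(7, 5))

# Top panel: sorted effect sizes, with significant ones highlighted.
colors = np.where(spec["p"] < 0.05, "tab:red", "grey")
top.scatter(spec.index, spec["estimate"], c=colors)
top.axhline(0, color="black", linewidth=0.5)
top.set_ylabel("estimate for 'poor'")

# Bottom panel: one row per decision option; tick marks show which
# options went into each specification.
options = [(col, val) for col in ("cutoff", "covariates", "weighted")
           for val in spec[col].unique()]
for row, (col, val) in enumerate(options):
    hits = spec.index[spec[col] == val]
    bottom.scatter(hits, [row] * len(hits), marker="|", color="tab:blue")
bottom.set_yticks(range(len(options)))
bottom.set_yticklabels([f"{col} = {val}" for col, val in options], fontsize=8)
bottom.set_xlabel("specification, sorted by estimate")

fig.tight_layout()
plt.show()
```

Reading such a plot is straightforward: if the estimates cluster on one side of zero across the whole curve, the finding is robust to the plausible decisions; if significance appears only under particular options in the bottom panel, those decisions deserve scrutiny.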

References:

Janz, Nicole. 2015. “Leading Journal Verifies Articles before Publication – So Far, All Replications Failed.” Political Science Replication Blog. Retrieved July 22, 2019.

Muñoz, John and Cristobal Young. 2018. “We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness.” Sociological Methodology 48(1):1–33.

Rohrer, Julia M. 2018. “Run All the Models! Dealing With Data Analytic Flexibility.” APS Observer 31(3).