
Behind the specification curve

Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether conscious, unconscious, or simply mistaken, we take certain paths in our research processes. All possible paths constitute researcher degrees of freedom, and different paths can lead to different findings. In fact, we can intentionally exploit paths to reach the results we want. Replications can work against this by detecting errors or identifying questionable decisions. We have few replications in sociology, and those working in IPM are no exception. Thus even a single replication of any given published study should be a scientific improvement.

But is it?

Let’s imagine a secondary data study that a replicator wants to reproduce. We now know from a major social science journal that even when researchers provide their code, it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code: this requires writing brand-new code. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator with the more complicated goals of testing generalizability or scrutinizing the original study. A replication is as prone to researcher-degrees-of-freedom problems as the original study. This means replications might not be as useful as we hoped, and alone they cannot alleviate the ‘crisis of science’.

To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero: unless the models are super-complicated, a computer can do this for us very quickly. Given that researchers accidentally or intentionally report only models supporting their claims, it is a useful exercise to check how alternative specifications might affect the findings. This does not mean running all possible models; doing so throws models into the results pool that are impossible, causally speaking. We need to carefully and logically select models based on plausible research decisions, as in the sketch below. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures.
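To make this concrete, here is a minimal sketch in Python of how one might enumerate only the defensible combinations of decisions. The decision names and options (weighting, poverty measures, control sets) are invented for illustration, not taken from any particular study:

```python
from itertools import product

# Hypothetical analytic decisions, each justified on substantive grounds.
# We only cross decisions that are defensible, not every model that could run.
decisions = {
    "weight": ["survey_weight", None],                        # why (not) weight the data?
    "poverty": ["half_median_income", "deprivation_index"],   # alternative measures
    "controls": [["age", "sex"], ["age", "sex", "education"]],
}

# Every combination of plausible decisions defines one model specification.
specifications = [
    dict(zip(decisions, combo)) for combo in product(*decisions.values())
]

for spec in specifications:
    print(spec)  # 2 x 2 x 2 = 8 defensible models, not billions of arbitrary ones
```

The point of the sketch is the discipline it imposes: each option has to be written down and defended before anything is estimated.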

This figure, produced by Julia Rohrer (2018), shows effect sizes (top) and model specifications (bottom) from analyses of the outcome variable “positive reciprocity” (a tendency to pay back favors). It demonstrates that specification curves are both visually appealing and scientifically useful.

Putting together all combinations of plausible decisions, we arrive at a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results, which indicates where we need to focus our future research. One of the newest ways to do this is “specification curve” analysis, a close relative of p-curve analysis. These methods were originally introduced to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. These methods have caught on in psychology recently (Rohrer 2018), and it is time for sociology to get behind the curve and move our discipline’s reliability forward.
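A bare-bones version of the idea, sketched in Python with simulated data (the variables, effect sizes, and trimming decision are all made up), is to store the estimate of interest from every defensible model and sort the estimates; plotted with markers for the decisions behind each one, this yields a figure like Rohrer’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated survey data: x is the predictor of interest, z an optional control.
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.3 * x + 0.2 * z + rng.normal(size=n)

def slope_on_x(outcome, predictors):
    """OLS coefficient on x, with x as the first predictor after the constant."""
    X = np.column_stack([np.ones(len(outcome))] + predictors)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

keep = np.abs(x) < 2  # one plausible decision: trim extreme values of x

# Three defensible specifications and the estimate each produces.
estimates = {
    "bivariate": slope_on_x(y, [x]),
    "with control": slope_on_x(y, [x, z]),
    "trimmed sample": slope_on_x(y[keep], [x[keep]]),
}

# The specification curve is simply the estimates sorted from smallest to largest.
for name, b in sorted(estimates.items(), key=lambda kv: kv[1]):
    print(f"{name:>14}: b = {b:.3f}")
```

With dozens or hundreds of specifications, the same loop plus a plot of the sorted estimates reproduces the top panel of a figure like the one above.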

References:

Janz, Nicole. 2015. “Leading Journal Verifies Articles before Publication – So Far, All Replications Failed.” Political Science Replication Blog. Retrieved July 22, 2019.

Muñoz, John and Cristobal Young. 2018. “We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness.” Sociological Methodology 48(1):1–33.

Rohrer, Julia M. 2018. “Run All the Models! Dealing With Data Analytic Flexibility.” APS Observer 31(3).

Love is in the error term

A major segment of social science uses formal models and quantitative methods to explain social phenomena. That means researchers use mathematical symbols to define how they think the world works, and then find data to test those ideas. For example, researchers want to explain social outcomes like committing a crime, changing jobs, moving, or having children. Why do some people do these things and not others? Researchers then speculate that other things cause these outcomes: getting married, having a job or losing a job, how much money a person makes, how old they are, and all kinds of other stuff.

Next, researchers translate their theoretical ideas into a formal model. This means an equation. Before you stop reading, maybe pause to consider that an equation is just a theory expressed with symbols instead of standard language. For example,

Y = X

and this might represent that moving homes (Y) is caused by getting married (X). Another way of saying this is that Y depends on X, or that moving depends on getting married.

But seriously, getting married does not always cause moving; sometimes it does. The more careful claim is that moving is a function of getting married. Function means that marriage probably increases the likelihood of moving by some amount. Therefore, researchers add a modifier to X, like the letter b, so that if b were 0.3, getting married would increase the likelihood of moving by 30 percentage points. People also move when they don’t get married, so researchers add a constant a to account for the likelihood of a person moving for any other reason. If a were 0.3, then the average person would have a 30% chance of moving regardless of marriage.

Y = a + bX
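Plugging in the illustrative numbers from above (a = 0.3, b = 0.3), a few lines of Python make the prediction explicit:

```python
a, b = 0.3, 0.3          # baseline chance of moving, and the effect of marriage

for married in (0, 1):   # X = 0 (not married) or X = 1 (married)
    y = a + b * married
    print(f"married = {married}: predicted chance of moving = {y:.0%}")
# married = 0: 30%, married = 1: 60%
```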

Now, if a and bX could be used to perfectly predict whether a person moves or not, the researcher would have made a monumental achievement. Instead, what happens is that the researcher goes and observes moving and marriage behaviors, tries to do this with a random sample of the population of a society, and then applies the above equation to the sample data. Does the equation fit the data perfectly? No. Never. Researchers must admit that their theories come with uncertainty. Maybe moving depends on the type of home the person currently lives in; maybe it depends on whether a couple can afford to move, or whether it is even a couple rather than a single parent. There is a nearly unlimited number of things that could cause moving. On top of that, failing to draw a perfectly random sample, making mistakes when observing or coding data, and so on all add uncertainty to the results. This means the formal equation has to include e, an error term.

Y = a + bX + e
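To see why e is unavoidable, consider a small simulation (all numbers invented): even when we generate the data ourselves from this very equation, a fitted model recovers a and b only approximately, and individual predictions still miss:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

x = rng.integers(0, 2, size=n)        # 1 = got married, 0 = did not
e = rng.normal(0, 0.4, size=n)        # everything we did not measure
y = 0.3 + 0.3 * x + e                 # the 'true' process: a = 0.3, b = 0.3

# Fit Y = a + bX by least squares.
X = np.column_stack([np.ones(n), x])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - (a_hat + b_hat * x)
print(f"estimated a = {a_hat:.2f}, b = {b_hat:.2f}")   # close to 0.3 and 0.3
print(f"residual std = {residuals.std():.2f}")         # the uncertainty left in e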

Researchers spend their time trying to reduce what is in the error term, so they are basically tasked with the job of reducing uncertainty. If scientists want to fly a spaceship to Mars, they have to reduce all possible forms of uncertainty in the take-off, flight path, landing, electronics, and so forth. But a social scientist who wants to predict how many people will move, where they will move, or which people will move cannot reduce uncertainty that much. Sure, they can explain moving with job changes, family members getting sick, weather, hobbies, wars, and many other observable things. But they will never fully explain moving. What I’m really saying here is that social scientists will never be able to perfectly explain human behaviors. It’s like this:

An Enlightened Physicist describes Sociology

Why is it so hard? If we could just measure all the things that people are doing and thinking, we could predict precisely what humans will do next, right? Maybe so, maybe not; but it is irrelevant, because we cannot measure everything about humans. Even if we could measure the precise activity of the tens of billions of neurons in a single human brain, with more connections between them than there are stars in the Milky Way, we probably couldn’t measure love. Yet something that we might refer to as love seems to be a major force guiding human actions. Both the effect of love:

“Love affects more than our thinking and our behavior toward those we love. It transforms our entire life”

– Thomas Merton

And the lack of love:

“Intense spiritual and emotional lack in our lives is the perfect breeding ground for material greed and overconsumption.”

– bell hooks

(both quotes from hooks 2000, pp. 187 & 105)

These suggest love is a prime mover of human decisions and behaviors. But we don’t know exactly what it is, and as far as I am aware, no large population survey has attempted to measure it. So,

Y = a + bX + c(love) + e

but when we try to add love as an X variable to our formal model, we cannot test anything, because we have not developed instruments to measure it. It remains in the void of all that variance in Y that we simply cannot explain. Maybe we should start focusing some of our effort on that.
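As a purely illustrative simulation, with ‘love’ standing in for any unmeasured driver of behavior, we can watch exactly where it ends up: in the residual variance of a model that cannot include it:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000

x = rng.normal(size=n)        # something we can measure, e.g. income
love = rng.normal(size=n)     # a real driver of behavior we cannot measure
y = 0.3 * x + 0.6 * love + rng.normal(0, 0.5, size=n)

# The model we can actually estimate omits love entirely.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# love is uncorrelated with x here, so the estimate of b is fine,
# but love's whole contribution sits in the error term.
print(f"R^2 without love: {1 - residuals.var() / y.var():.2f}")
print(f"corr(residuals, love): {np.corrcoef(residuals, love)[0, 1]:.2f}")
```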

hooks, bell. 2000. All About Love: New Visions. New York, NY: William Morrow and Company.