
Help. We can’t know the data-generating model.

Lack of data-generating models. A problem plaguing macro-comparative research efforts. We got strong data-generating theories. Institutions, politics, materials, procedures, conflicts and symbols help explain macro-level events, like the introduction of social security or the onset of war. But we can’t model these theories as causes of the data we observe. There is far more theoretical complexity than there are countries.

There just aren’t enough countries to provide reliable measures of central tendency given theoretical complexities. Even if our theory were as simple as ‘Y will be significantly different from some value (e.g., zero) given X’, we should have at least 10 countries. If we were to actually conduct a power analysis, it would tell us we need more like 200 countries, depending on how big a difference we expect in Y given X. I’m just gonna leave aside that countries are not a population in the typical sense (what is a country again, exactly? Exactly.).
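For a rough sense of that arithmetic, here is a minimal power sketch in base R. The effect sizes are assumptions for illustration only, not estimates from any macro-comparative study, and a two-sample t-test stands in for whatever estimator we would actually use:

    # Rough power sketch (illustrative effect sizes, not empirical estimates).
    # How many countries would a simple two-group comparison of Y need?

    # A moderate standardized difference (0.4 standard deviations):
    power.t.test(delta = 0.4, sd = 1, sig.level = 0.05, power = 0.80)
    # roughly 99 countries per group, i.e. around 200 countries in total

    # Even a large difference (1 standard deviation) still needs about 17 per group:
    power.t.test(delta = 1, sd = 1, sig.level = 0.05, power = 0.80)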

What should we do?

  • Option A: stick to qualitative case studies.
  • Option B: continue running various country-level regressions and ignore the problem.
  • Option C: wait until data from 180+ countries covering a period of 50 years becomes available.

Maybe we just ask a better question. Why do we need to do macro-comparative analysis if we already got good theory?

Answer: Reliability and utility.

Without systematic evidence we don’t actually have theories, only ideas or conjectures. Logic and in-depth case study help define data-generating theories for one country in one time period. So we want to test if this might apply to most countries in most time periods. In testing this we get Pomeranian.

Even with the same variables, regressions on country data over time go all over the place given only tiny changes to the estimation procedure. That was a finding of the CRI. The effect of immigration on social policy preferences could be anything.

Figure: Mr. Summerbottom showcases the size and direction of effects from macro-comparative regressions using the same data. Outliers cropped.

Even if we run all possible model configurations, and even if we consider Bayesian posterior distributions, we still don’t know whether we have a correct model specification, because we cannot truly test it. We can only make sweeping statements like, ‘in all possible model combinations, variable X was not significant, so it probably does not have an effect’. Too bad we might have some kind of suppression from an unobserved variable!
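To make the suppression worry concrete, here is a minimal simulated sketch in R. The data and effect sizes are invented for illustration (nothing here comes from the CRI): X genuinely affects Y, yet every model that cannot include the unmeasured Z reports X as null.

    # Suppression sketch with invented data: X truly affects Y, but the
    # unobserved Z cancels that association in any model that omits it.
    set.seed(42)
    n <- 200                            # pretend we even had 200 countries
    x <- rnorm(n)
    z <- x + rnorm(n)                   # suppressor: correlated with X, never measured
    y <- 1.0 * x - 1.0 * z + rnorm(n)   # true direct effect of X on Y is 1

    round(coef(summary(lm(y ~ x))), 2)      # X looks null: marginal estimate near 0
    round(coef(summary(lm(y ~ x + z))), 2)  # with Z observed, the true effect reappears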

Don’t drink the water. There is no method that can substitute for human logic. Zombie modeling, where the researcher lumbers aimlessly through thousands of models looking for blood, doesn’t work. The answer is therefore Answer D: none of the above.

We need new school meta. We need to logically analyze researchers’ decisions. We can’t do much with the limited data that are out there for macro-comparative research. Thus, we need to get down with specifications. By comparing specification curves and researcher decision trees, we can identify which decisions are critical and which are benign. Some programmers offer us R packages for this (thanks Joachim Gassen for rdfanalysis!).

Some models come to the same results regardless of whether the researchers apply weighting, use latent variables or correct for autocorrelation; others do not. When we identify the decisions a researcher made that carry the potential to influence the results to the greatest degree, we simultaneously identify where to focus our theoretical work. These decisions cannot be improved by running more models; instead they require, as Andrew Abbott once put it, ‘more sitting in our offices and staring at the wall’. We need to dig in and use logic, reflection and mental energy to improve our theories in small-N macro-comparative research.
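Here is a minimal specification-comparison sketch in base R with simulated data and two made-up analyst decisions (it is not the CRI design or the rdfanalysis API, just the general idea): enumerate the defensible combinations, estimate each one, and see which decision actually moves the result.

    # Toy specification curve with invented data and decisions.
    set.seed(1)
    d <- data.frame(x = rnorm(60), w = runif(60, 0.5, 2))
    d$y <- 0.3 * d$x + rnorm(60)
    d$y[1:3] <- d$y[1:3] + 4            # a few influential cases

    grid <- expand.grid(weighted = c(FALSE, TRUE),
                        drop_influential = c(FALSE, TRUE))
    grid$estimate <- NA_real_

    for (i in seq_len(nrow(grid))) {
      dd <- if (grid$drop_influential[i]) d[-(1:3), ] else d
      m <- if (grid$weighted[i]) lm(y ~ x, data = dd, weights = w) else lm(y ~ x, data = dd)
      grid$estimate[i] <- coef(m)["x"]
    }

    grid[order(grid$estimate), ]  # sorted estimates: which decision is critical, which benign?

Whichever decision spreads the estimates furthest apart is the one that deserves the theoretical ‘staring at the wall’.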

Better than a computer. A crowd of researchers.

Crowdsourcing researchers. A new use for an old tactic. When Silberzahn and colleagues asked whether football referees are skin-color biased in their assignment of red cards, they brought a new meta to social research. Rather than the typical one research team, one project setup, they brought together twenty-nine teams. All got the same research question and the same data. What can we learn?

Hold on. A single academic strapped with programming skills can run just about every possible statistical model configuration. Level up this academic and they can program machine learning routines to tell us almost anything we need to know. So why do we need a crowd of researchers to analyze the same data?

The answer: theory.

Computers do not do theory. Computers crunch data. They can’t tell us where the data came from. Running every possible model in an effort to ‘test robustness’ means testing for things that do not exist. Even worse, it means taking the results of tests for things that cannot possibly exist and using them to draw conclusions about the robustness of a test for something that could exist. Throwing all possible variables into a model or ordering variables in every possible configuration will maximize statistical predictions, yes. But what good is predicting something that cannot exist?

An example: Policymakers decide they want to increase the number of females in society. A computer determines that bearing children is a good predictor of being biologically female. Now, imagine that in a statistical model, being pregnant or ever having had biological children explains 75% of the variance in the sex of a given population (it’s probably about 100% at this point in human history, but let’s allow room for error). The policymakers, thanks to the computer, conclude that if 1,000,000 more people were pregnant, a predicted 750,000 of them should be female and only 250,000 male, plus a margin of error. Thus, they conclude that getting 1,000,000 people pregnant is likely to increase the number of females by 250,000 in their society, assuming the society is currently 50% female.

Epic fail.
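A minimal simulation of that epic fail in R, with invented numbers rather than the figures from the example above: pregnancy predicts sex nearly perfectly, yet ‘intervening’ on pregnancy leaves the share of females exactly where it was.

    # Toy illustration: a near-perfect predictor is not a lever you can pull.
    set.seed(7)
    pop <- data.frame(female = rep(c(1L, 0L), each = 5e5), pregnant = 0L)
    pop$pregnant[pop$female == 1L] <- rbinom(5e5, 1, 0.3)   # only females can be pregnant

    mean(pop$female[pop$pregnant == 1])   # 1: all pregnant people are female
    mean(pop$female)                      # 0.5: current share of females

    # The policymakers' 'intervention': make 100,000 more people pregnant.
    pop$pregnant[sample(which(pop$pregnant == 0L), 1e5)] <- 1L
    mean(pop$female)                      # still 0.5: no new females created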

If we want to develop causal theories that provide useful knowledge for societies, human logic is necessary. Rather than having one computer report 8.8 million false positives after running 9 billion different regression models, humans can identify correct, or at least ‘better’, model specifications. In fact, we would not even know they are ‘false positives’ without a human to understand that getting someone pregnant does not turn them into a female, for example.

But relying on the logic of one human, or even one team of humans, is risky. There are researcher degrees of freedom that make their research unreliable. Their prior beliefs, knowledge, experience and context lead to variation in results. These priors lead them down different forking paths through the garden of research. With the same research question and even the same data, researchers often come to different results, as demonstrated by the Silberzahn study and our Crowdsourced Replication Initiative (CRI).

Sounds like a meta-problem. So what good is crowdsourcing if we cannot rely on the crowd?

Answer: social interaction.

Crowdsourcing, when done with careful planning and central organization, allows participants to comment on, if not deliberate over, each other’s research choices. Suddenly meta-uncertainty turns into the power of meta-logic. Not just one team and their narrow ideas, but a communal debate with diverse inputs. Both the Silberzahn et al. study and our CRI involved deliberation and voting on research designs. Combined with the growing area of specification curve analysis, crowdsourcing increases credibility for social research at the level of the population under study and at the meta-level of the researchers themselves.

Finally, the relevance of crowdsourcing for collaborative theory construction is an untapped but promising avenue for the future. Crowd research departs from the current system, which favors individualism by rewarding the novelty of individual researchers. Crowdsourcing is instead a system of consensus building and direct responsiveness to theoretical claims. It could resolve the perpetual problem of scholars, areas and disciplines talking ‘past’ each other. If not consensus, it can at least identify critical unresolved questions to guide future research.

Crowdsourcing can move us toward open science in the Mertonian sense of a communalistic endeavor. To achieve this, all participants should be co-authors on the project, get to discuss each other’s models and theory, and get to update their own results during the process. We need machines to facilitate this kind of large-scale research, but they cannot produce communal, logical exchanges. For that we need to stick with the crowd.

It’s Crowdid

Meta-science, social inequality, methods, open science.

At inception, this blog catalogs the wracking process of organizing, administering and evaluating the Crowdsourced Replication Initiative (CRI) and the massive amount of data it generated.

A technical blog, looking at the ‘nitty-gritty’ of the methods necessary to carry out and present the findings of a project involving 88 research teams, almost 200 researchers, an online deliberation, four survey waves during the process, an experimental condition, a replication, an original research condition, and analysis and meta-analysis of the results.

A research question blog, asking bigger questions about meta-science, researcher reliability, reproducibility, social inequality and crowdsourcing.

In the future… this blog will address more.

Why blog? The future of science may involve a more interactive, hyperlinked and faster-disseminated format. Blogs can facilitate this. In the social sciences, the turnaround time from research findings to published papers is somewhere around 3 to 4 years (counting rejections and R&Rs). Moreover, blogs offer space for discussing things that simply don’t fit in our awfully restrictive 6,000-12,000 word journal articles.

Open science calls for blogging. It’s free, fast and owes no debts. It circumvents the institutional problems of science. As an Open Science Fellow in the Freies Wissen program of Wikimedia Germany, a Catalyst for the Berkeley Initiative for Transparency in the Social Sciences (BITSS) and a concerned sociologist, I offer this blog as a contribution to the open science movement.