Crowdsourcing researchers. A new use for an old tactic. When Silberzahn and colleagues asked whether football referees are biased by skin color in their assignment of red cards, they brought a new meta to social research. Rather than the typical one research team, one project, they brought together twenty-nine teams. All got the same research question and the same data. What can we learn?
Hold on. A single academic armed with programming skills can run just about every possible statistical model configuration. Level up this academic and they can program machine learning routines to tell us almost anything we need to know. So why do we need a crowd of researchers to analyze the same data?
The answer: theory.
Computers do not do theory. Computers crunch data. They can’t tell us where the data came from. Running every possible model in an effort to ‘test robustness’ means testing for things that do not exist. Even worse, it means taking the results of tests for things that cannot possibly exist and using them to draw conclusions about the robustness of a test for something that could exist. Throwing all possible variables into a model, or ordering variables in every possible configuration, will maximize statistical prediction, yes. But what good is predicting something that cannot exist?
An example: Policymakers decide they want to increase the number of females in society. A computer determines that bearing children is a good predictor of being biologically female. Now, imagine that in a statistical model, being pregnant or ever having had biological children explains 75% of the variance in the sex of a given population (it’s probably about 100% at this point in human history, but let’s allow room for error). The policymakers, thanks to the computer, conclude that if 1,000,000 more people were pregnant, a predicted 750,000 of them should be female and only 250,000 male, plus a margin of error. Thus, they conclude that getting 1,000,000 people pregnant is likely to increase the number of females in their society by 250,000, assuming the society is currently 50% female.
Epic fail.
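To see the fallacy in miniature, here is a toy simulation, with entirely made-up numbers, in which pregnancy predicts being female almost perfectly yet ‘intervening’ on pregnancy changes nobody’s sex:

```python
# A toy simulation of the policymakers' fallacy (all numbers are made up).
import random

random.seed(42)

N = 1_000_000
population = []  # list of (is_female, is_pregnant) pairs
for _ in range(N):
    is_female = random.random() < 0.5                    # society is ~50% female
    is_pregnant = is_female and random.random() < 0.10   # only females can be pregnant
    population.append((is_female, is_pregnant))

# Prediction works: among the pregnant, essentially everyone is female.
pregnant = [pair for pair in population if pair[1]]
print(sum(f for f, _ in pregnant) / len(pregnant))       # ~1.0

# 'Intervention' fails: flagging people as pregnant changes nobody's sex.
intervened = [(is_female, True) for is_female, _ in population]
print(sum(f for f, _ in intervened) / len(intervened))   # still ~0.5
```

The prediction is excellent; the policy conclusion is nonsense.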
If we want to develop causal theories that provide useful knowledge for societies, human logic is necessary. Rather than having one computer report 8.8 million false positives after running 9 billion different regression models, humans can identify correct, or at least ‘better’, model specifications. In fact, we would not even know they are ‘false positives’ without a human to understand that getting someone pregnant does not turn them into a female, for example.
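A rough sketch of where those false positives come from (the setup here is hypothetical and purely illustrative): regress pure noise on pure noise a few thousand times and a computer will dutifully report a pile of ‘significant’ findings.

```python
# Hypothetical sketch: many regressions of pure noise on pure noise,
# counting how often the slope comes out 'significant' at alpha = .05.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_models, n_obs = 5_000, 200

false_positives = 0
for _ in range(n_models):
    y = rng.normal(size=n_obs)                 # outcome: pure noise
    X = sm.add_constant(rng.normal(size=n_obs))  # predictor: pure noise
    if sm.OLS(y, X).fit().pvalues[1] < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_models} noise-only models look 'significant' "
      f"(about {false_positives / n_models:.0%})")
```

Scale that up to billions of specifications and the pile grows into the millions, and the machine has no way of knowing which of those findings describe something that could actually exist.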
But relying on the logic of one human, or even one team of humans, is risky. There are researcher degrees of freedom that make their research unreliable. Their prior beliefs, knowledge, experience and context lead to variation in results. These priors lead them along different paths through the garden of research. With the same research question, and even the same data, researchers often come to different results, as demonstrated by the Silberzahn study and our Crowdsourced Replication Initiative (CRI).
Sounds like a meta-problem. So what good is crowdsourcing if we cannot rely on the crowd?
Answer: social interaction.
Crowdsourcing, when done with careful planning and central organization, allows participants to comment on, if not deliberate, each other’s research choices. Suddenly meta-uncertainty turns into the power of meta-logic. Not just one team and their narrow ideas, but a communal debate with diverse inputs. Both the Silberzahn et al. study and our CRI involved deliberation and voting on research designs. Combined with the growing area of specification curve analysis, crowdsourcing increases credibility for social research at the level of the population under study and at the meta-level of the researchers themselves.
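For readers unfamiliar with specification curve analysis, here is a minimal sketch on synthetic data, with hypothetical variable names: fit every defensible combination of control variables and line up the focal estimates to see how much they swing across specifications.

```python
# Minimal specification-curve sketch on synthetic data (names are hypothetical).
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "x": rng.normal(size=n),    # focal predictor
    "c1": rng.normal(size=n),   # candidate controls
    "c2": rng.normal(size=n),
    "c3": rng.normal(size=n),
})
df["y"] = 0.3 * df["x"] + 0.5 * df["c1"] + rng.normal(size=n)

controls = ["c1", "c2", "c3"]
rows = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        formula = "y ~ x" + "".join(f" + {c}" for c in subset)
        fit = smf.ols(formula, data=df).fit()
        rows.append({"specification": formula,
                     "beta_x": fit.params["x"],
                     "p_value": fit.pvalues["x"]})

# Sorting the focal estimates across all specifications gives the 'curve'.
curve = pd.DataFrame(rows).sort_values("beta_x").reset_index(drop=True)
print(curve)
```

What the curve alone cannot supply is a deliberated judgment about which of those specifications are theoretically defensible in the first place; that is what the crowd adds.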
Finally, the relevance of crowdsourcing for collaborative theory construction is an untapped but promising avenue for the future. Crowd research departs from the current system, which favors individualism by rewarding the novelty of individual researchers. Crowdsourcing is instead a system of consensus building and direct responsiveness to theoretical claims. It could resolve the perpetual problem of scholars, areas and disciplines talking ‘past’ each other. If not consensus, it can at least identify critical unresolved questions to guide future research.
Crowdsourcing can move us toward open science in the Mertonian sense of a communalistic endeavor. To achieve this, all participants should be co-authors on the project, get to discuss each other’s models and theory, and get to update their own results during the process. We need machines to facilitate this kind of large-scale research, but they cannot produce communal, logical exchanges. For that we need to stick with the crowd.