
Overcoming replication fears

Fear of rejection, Part I

To replicate a study, you need information. Probably more information than is fully disclosed in a 6,000–12,000-word journal article. Apart from a recent trend toward sharing, information such as data and analytical procedures is not going to be publicly available. This means you should, or must if the data are not retrievable from some other source, contact the original author. Be prepared for rejection. One study demonstrated that among the top sociology journals, less than 30% of replication materials were available (even though as many as 75% claimed otherwise). Political science was only marginally better, at around 50% as of 2015. Professors are likely to ignore emails asking for their data and code. One group of sociology students contacted 53 different authors asking for replication materials, and only 15 (28%) provided them. Ten never responded to the requests at all, despite several follow-up emails. So don’t take it personally; social scientists are not known for their forthcomingness in this area.

Verification is not affirmation

Imagine being a student who tries to verify the results of a prolific, senior scholar and cannot. If it were me, I would be anxious that I had made a mistake. But the only real mistake would be to assume that my failure to verify the results says something about my own skills. Of course, it’s good to double-check everything. Have a colleague look at your work if you are unsure, or a teacher or supervisor if you are a student. Unverifiable results are common; there is no need for self-doubt. Things like reverse-coding biological sex so that women appear less supportive of welfare state policies, or accidentally analyzing values of 88 (a missing-data code) as a real value of coital frequency and thereby producing a surprising rate for older persons, are actually a normal part of social science.

When replicating a study, just assume there will be at least one mistake. Like a treasure hunt.

Verification comes down to the availability of the materials. If the data and code are not fully available, it really is a treasure hunt, because you cannot be sure what you are going to find or learn. If, on the other hand, the data and code are available and in good order, then it is more like cooking than hunting. This is often the difference between teaching replication – the recipe approach, where students should come to the same results every time when following the exact same steps – and replication as a form of social research – the treasure hunt approach, where researchers (i.e., students) may not have a coherent recipe from the original ‘chef’. But make no mistake: even fully transparent studies often come with mistakes in the code or data.

Fear of mistakes

If I am not making mistakes, I am not doing research. You will make mistakes, and there is nothing to fear. There are all kinds of reasons that replication results will diverge, and not all of them are mistakes. Recently a well-known and well-respected sociologist retracted his own paper after someone trying to replicate the study identified coding errors. When one journal started checking that the data and code of accepted papers actually produced the published results, almost none were verifiable on the first attempt. In a crowdsourced replication, mostly PhD students, postdocs and a few professors arrived at an exact verification of the original study only 82% of the time, despite having the original code!

Fear of the unknown

Designing statistical models in software is like learning a new language. Student replications often involve methods unfamiliar to them. This is a great didactic tool – learning by doing. There is nothing to fear here. Professors’ original studies often involve methods in which they are not experts. One extremely famous scholar and his colleague ran a regression with an interaction term and botched the interpretation of the effects; the results were basically the opposite of what they reported.
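To see how easily interaction terms are misread, here is a minimal sketch with simulated data (not the scholars’ actual model): in a model y = b0 + b1·x + b2·z + b3·x·z, the effect of x is b1 + b3·z, not b1 alone.

```python
# Minimal sketch with simulated data (not the scholars' actual study):
# the coefficient on x alone is NOT "the effect of x" once x*z is in the model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({"x": rng.normal(size=n), "z": rng.normal(size=n)})
df["y"] = 0.5 * df.x - 0.8 * df.z + 1.2 * df.x * df.z + rng.normal(size=n)

fit = smf.ols("y ~ x * z", data=df).fit()
b = fit.params
for z_val in (-1, 0, 1):
    # marginal effect of x depends on the level of z: b1 + b3*z
    print(f"dy/dx at z={z_val:+}: {b['x'] + b['x:z'] * z_val:.2f}")
```

Reporting only the coefficient on x would describe the effect of x solely when z = 0, which can even have the opposite sign of the effect at other values of z.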

Science is a process of exploring the unknown. Replications use what is known as a tool for finding what is unknown.

Fear of rejection, Part II

Students may be interested in publishing their replications, and they should be, because how else will others put their knowledge into practical use? Be prepared for rejection once again. Journals and reviewers across the social sciences are not very excited about replications. A pair of researchers studied the instructions and aims of 1,151 psychology journals in 2016 and discovered that only 3% explicitly accepted replications. One sociologist pointed out not so long ago that replication is simply not the norm in sociology, and another recently came to the same conclusion. The good news is that, at least in theory, we no longer need journals to make useful science. Students can immediately publish their results as preprints and share data and code in a public repository. If a student elects to use the Open Science Framework preprint servers, their work will immediately be findable in scholarly search engines.

Fear of ego

Scientists tend to overestimate the impact of a negative replication on their reputations. Ego alert. Assume the scientist worried about a replication is a professor. This is a person who is most likely tenured; the highly cited professors certainly are. This is also a person who “professes” knowledge on a topic, meaning that they should be an expert who teaches students, policymakers, the public and anyone else interested in that topic. If any of this professor’s results were shown to be unreliable or false, that would be a critical piece of information if the professor’s goal were actually to profess knowledge on the topic. Unfortunately, professors regularly suffer from a kind of ‘rock-star syndrome’ or egomania, doing science as a means to gain recognition and fame. This leads them to react aggressively against anything that contradicts them. This is very bad for science. If a student replicator can help deflate a runaway professorial ego through replication, then that student is doing a great service to science.

Fear of not addressing fear

In a typical primary or secondary school chemistry class, students repeat basic experiments on chemical reactions that have been done for hundreds of years. These students are learning through replication. They are gaining knowledge in a way that cannot simply be taught in a lecture or by reading a book. They are also affirming the act of science, and thus developing a faith that science works. In social science especially, we face a reliability crisis, if not a public image crisis. Students should be reassured that there is a repetitive and reliable nature to doing social science, whether they continue as social scientists or (most likely) not. Part of this reliability can be a lack of reliability. Science is simply a process of trying to understand the unknown, and even to quantify this unknown. I fear that without more student replications, we are diminishing the value of social science and contributing to the perception that social science is unreliable.

Good social science should be reliably able to identify unreliability, and this is best taught through conducting replications.

Open science in sociology. What, why and now.

WHAT

By now you’ve heard the term “open science”. Although it has no global definition, its advocates tend toward certain agreements. Most definitions focus on the practical aspects of accessibility.

“…the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.”


FORSTER, open science teaching resource

Some definitions enter the realm of ethics, feminism and social justice.

“…to imagine and design inclusive infrastructures, practices, and workflows for scientific practice that intentionally enable meaningful participation and redress (these new) forms of exclusion.”


Denisse Albornoz, OCSDNet

Others focus on the communicative interplay between scientists and the public.

“Openness in Open Science also means opening up science to society… The democratic ideal of Open Science argues for equal two-way communication with the public: one should not solely focus on the question of how to foster the uptake of science in society, but also on how to foster the uptake of societal insights in science.”


Anne-Floor Scholvinck, ZBW Mediatalk

Whatever the ontology, open science is inevitably something that challenges the status quo in science. Usage of the term implies that something about the current practice of science is undesirable; otherwise advocates would simply advocate “science”.

The “open” part of the concept refers to any number of things depending on whom you ask. Commonly it means:

Open access – making the results of scientific techniques, research and theory accessible to everyone, as opposed to available only in paywalled journals.

Transparency <open process> – making all methods, code, data and any biases or conflicts of interest known before and after the research is conducted. So long as doing this does not harm human subjects or violate any laws.

Open source – on the technology side of science, all programs, apps, algorithms, tools and scripts should be transparent and usable by others. This means that when a scientist develops a new technology, anyone else’s technologies can interact and interface with it. Moreover, anyone can modify the technology to better suit their own needs.

Open academia <open communication/democracy/feminism> – allowing anyone to participate in academia. That academia has the goal of eliminating from its own ranks the inequalities, prejudice and domination that take place in the social world. That academia embraces feminism and critical race theory in its methods and institutional practices. That everyone has the same place in scientific discussions, and no science is conducted by pressuring others or taking advantage of existing power structures. That no science takes place in secret, except for research that requires obfuscation for its completion.

Again, the definitions can cover a broad range. The above are just a snippet, although they strike me as the most common usages; the exception is ‘open academia’, which is reserved for certain justice-motivated scholars.

WHY

Although I do not proclaim to be the arbiter or knower of right or wrong in academia (and life in general), the following facts seem wrong to me.

Double-work and the co-opting of journals

Scientists provide their work as editors and reviewers because the peer review and publication process is the centerpiece of all of science. Peer reviewers and editors are the only consistent form of quality control in science. The academic journal was a functional response to previous forms of knowledge transmission, which required direct scientist- or practitioner-to-student interaction that was geographically limited and reached a very narrow audience.

The journal made it possible to transmit knowledge across the globe. Moreover, the journal reduced the simultaneous-discovery and re-discovery problems of science: before it, no one could prove they had discovered something first, and others unknowingly worked on problems that were already solved. It represents one of the first ‘open science’ movements, because it was driven by the idea that science was at an impasse and could only move forward through a transparent and open exchange of ideas, arbitrated by becoming part of the public record through publishing.

Ironically, the journal format came full circle and began to undermine science. After over two centuries of journals run by non-profit academic associations, for-profit publishing houses began ‘offering’ their services to meet the growing global demand for journals and their content and the rising costs of editing and distribution. In many cases, these publishing houses were able to purchase the journals by offering the academic societies the exclusive right to determine what went into them. Within just 30 years, five conglomerates owned the titles, content or certain features of over 50% of all journal articles published globally.

The content, as always, is still the product of scientists and the voluntary work of editors and peer reviewers. The publishing houses make large profits but pay nothing to these workers. The editors and peer reviewers earn their income mostly from universities – the very universities that pay high fees to purchase the right to provide the journals in their libraries. This is a double tax on the universities: paying the producers of content to produce, and then paying the distributors of that content in order to consume it. The content does not change at any point between these two forms of payment; in other words, the publishers do not add any scientific value to it.

Matters got even worse with the publishing houses over the past decades. As creative and deceitful profit seekers, some publishing houses realized they could generate even more profit by collaborating with the private sector. For example, pharmaceutical companies’ profits were directly determined by the findings of studies published in journals. Pharmaceutical companies, or any companies whose profits were determined by the outcomes of scientific experiments, would be willing to invest in shaping those outcomes if they could. Enter a novel concept pioneered by Elsevier: selling journals or journal space to private companies to boost their profits. A win-win for them. Elsevier also pioneered the monetization of open science by purchasing SSRN, engaging in massive lawsuits designed to stop the free sharing of (their) copyrighted knowledge, and trying to copyright intellectual activities such as peer review.

Other ventures create journals that prey on scholars who do not know better or who seek easy publications to add to their CVs. These publishers are often labeled “predatory publishers”; they “publish work without proper peer review and … charge scholars sometimes huge fees to submit”, and they “should not be allowed to share space with legitimate journals and publishers, whether open access or not” (predatoryjournals.com). They also sometimes mimic reputable journals by copying their styles and names and soliciting content from scholars, a practice known as “hijacking“.

Publish-or-perish begets questionable research practices

Thanks to the advent of the scientific journal, knowledge could be evaluated, used and further transmitted across space and time. The journal, and other forms of academic publication such as books, proved so effective that they became the primary basis on which others evaluate the importance of scientists and their work. This gave rise to the norm we are all familiar with: publish-or-perish.

In a survey of psychologists, John et al. (2012) found that 50% claimed they had selectively reported studies that supported their hypothesis (as in, selectively excluding those that didn’t). Moreover, 35% admitted to reporting unexpected findings as having been predicted from the start. Nearly 2% outright admitted to faking data.

Publish-or-perish and questionable research practices have a causal relationship. Except for the occasional sociopathic or psychotic individual, there is no reason for a scientist to engage in questionable research practices. No reason, except that a scientist’s very existence as a scientist may depend on it. In reality, many studies lead to results that go in all directions, support the null or (most importantly) do not provide groundbreaking new results.

Through the peer review and editorial process, journals select studies that are path-breaking. Studies that will move knowledge forward and be of the greatest interest to readers. When faced with the prospect of not getting tenure, not getting grant funding and being forced out of academia, a human’s (scientist’s) rational calculations change. Suddenly, rounding that p-value from 0.054 to < 0.05, or even adding some cases to the data, becomes a cognitively defensible decision.

Like any profession, science is competitive. Those who publish more, or get more citations to their publications, tend to get ahead. Those who don’t, don’t. Professional athletes use incredible tactics to gain competitive advantage. Steroids are well known, of course, but other tactics are much harder to detect. For example, endurance athletes often use blood transfusions to boost recovery and performance. This is what it means to be human, scientist or not.

One of the most radical events in the social and behavioral sciences was Diederik Stapel’s career of faking data and results, published in at least 54 articles and consuming millions of euros in funding. It took almost two decades for critics and whistleblowers to finally out him. Psychology is not alone. In political science, LaCour and Green published a study in Science claiming that attitudes toward gay marriage could be changed if heterosexual people listened to a homosexual person’s story, but it turned out, as Broockman uncovered, that LaCour had fabricated the results of a follow-up survey that never took place. In economics, Reinhart and Rogoff published numerous studies identifying a negative impact of high debt rates on national economic growth, when in fact several points in their dataset were conspicuously missing; when these values were added, as Herndon, Ash and Pollin showed, their claim was no longer supported.

I suspect that most questionable research practices are not intentional. The sociopathic (~psychotic) Stapels of the world are rare. The pressure to find a job after doctoral studies and then to get tenure forces a trade-off between conducting science in its ideal form – learning as much as possible about the existing literature on a subject, mastering the necessary methods, executing the research, possibly with several iterations, and facing the prospect of null results – and science in a form that will lead to publication as fast as possible.

This ‘fast as possible’ leads to amateur science. For example, in the rush to get my first publication I attempted to use “multiple imputation”, but lacked the time to properly learn the method. Instead I simply generated several imputed datasets, averaged them into one, and re-ran the analysis on that single dataset. This was not an intentional misuse of a method; it was a questionable research practice born of context. Think about matrix algebra. It is the basis of many advanced statistical techniques regularly used by social scientists. How many of us have a strong grasp of matrix mathematics? I don’t. And yet I’ve published several studies using structural equation modeling.
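For the curious, here is a minimal sketch of Rubin’s rules, the standard way to pool results across multiple imputed datasets, rather than averaging the datasets into one before analysis. The numbers are made up for illustration.

```python
# Sketch of Rubin's rules for pooling m imputed analyses (illustrative numbers only).
# The correct workflow: fit the same model on each imputed dataset, then pool.
import numpy as np

estimates = np.array([0.42, 0.47, 0.39, 0.45, 0.44])      # coefficient from each imputed dataset
variances = np.array([0.010, 0.011, 0.009, 0.012, 0.010]) # its squared standard error
m = len(estimates)

pooled_est = estimates.mean()                       # pooled point estimate
within_var = variances.mean()                       # average within-imputation variance
between_var = estimates.var(ddof=1)                 # between-imputation variance
total_var = within_var + (1 + 1 / m) * between_var  # Rubin's total variance

print(f"pooled estimate = {pooled_est:.3f}, pooled SE = {np.sqrt(total_var):.3f}")
```

Averaging the imputed datasets first discards the between-imputation variance entirely, which is exactly the uncertainty multiple imputation is supposed to capture.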

WHAT & WHY in SOCIOLOGY

I am aware of nothing about sociology that suggests it needs a special adaptation of open science. Most research cannot be strictly delineated as sociology or not-sociology anyway. The boundaries of a discipline, especially within the social sciences, exist mostly in the institutional structure of universities. Eliason suggested that sociology is unique because it overemphasizes quantitative techniques, has needlessly long articles, lacks writing for the popular press and emphasizes research at the expense of teaching. In my experience the previous sentence perfectly describes all social and behavioral science disciplines at once. Even article length, something I thought might be peculiar to sociology, is not special. Political science and management research have very long articles. Consider that ASR and ESR, for example, limit articles to 9,000 and 8,000 words or less – relatively average, if not short, for social science.

Actually, I would argue that the most distinctive things about sociology at the moment relate to open science. Two points in particular: (A) sociology has not had the same incredible scandals as other disciplines, and (B) sociology lags behind other social sciences in promoting open science.

A lack of scandals, not scandalousness

Could sociologists be more scientific and ethical in their research behaviors than those in other disciplines? Given identical institutional and career structures that favor productivity and innovation over replicating or checking each other’s work, I doubt it. Sociology journals and their editors, for example, rarely retract articles despite evidence of serious methodological mistakes. Carina Mood once accurately pointed out mistakes in the interpretation of odds ratios in some American Sociological Review articles, but the editors refused to publish her comments, much less consider retractions. She shared her exchange with ASR in an email to me and discusses some of it in a working paper. An exceptional recent event was the retraction of one of Legewie’s sociological studies, but this required that he himself initiate the retraction after someone pointed out errors in his work. Until 2020, the Retraction Watch database (www.retractiondatabase.org) listed no retractions from the top sociology journals, and only two among the well-known ones: one in Sociology and another in Social Indicators Research.

This year, something new happened. Five articles published in Social Problems, Criminology, and Law & Society Review were retracted. These articles had a common co-author, Eric Stewart. It turns out that the data he provided were faked. There is no other logical conclusion after exceptionally rigorous work by Pickett (a co-author of Stewart) provided evidence that the Stewart studies had consistently incorrect means and standard deviations, unverifiable surveys (sources, methods, original materials), magically changing case numbers despite identical statistical results, duplicate cases (sometimes making up half the data) and impossible clustering structures in the data.

As an aside, one of Pickett’s findings was that the data had non-uniform terminal-digit distributions. This means that the right-most digits in the reported statistics differ markedly from a uniform distribution. In particular, at the third digit, the numbers 0–9 should each appear roughly 10% of the time. In one of the papers, zeros appear less than 2% of the time. If you are considering faking data, keep in mind that it is roughly impossible to do it in a way that cannot be detected by careful investigation. Any algorithm used to generate results (even copying and pasting) leaves its statistical marks.
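As an illustration of this kind of check, here is a minimal sketch of a terminal-digit test using a chi-square goodness-of-fit test. The digits are fabricated for the example; they are not Stewart’s data.

```python
# Sketch of a terminal-digit check (fabricated example digits, not the actual data):
# under honest reporting, the last digits of reported statistics should be roughly uniform.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(7)
# pretend these are terminal digits harvested from a paper's tables,
# generated here so that zeros are badly underrepresented
digits = rng.choice(10, size=500, p=[0.02] + [0.98 / 9] * 9)

counts = np.bincount(digits, minlength=10)
stat, p = chisquare(counts)  # null hypothesis: all ten digits equally likely
print(counts)
print(f"chi-square = {stat:.1f}, p = {p:.2g}")
```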

Perhaps we sociologists should be partly relieved, as this is just confirmation that we are as much a part of social science, and its problems, as any other discipline. However, the Stewart retractions, which should have been breaking news for sociology, went mostly unnoticed. The results of the investigation leading to the retractions are not published in a flagship sociology journal, where they belong. Instead they appear in Econ Journal Watch – an outlet unlikely to be read by any sociologist. Moreover, the retraction notices from the original journals do not cite outright fraud. Stewart continues to promote his work in print, claiming the main findings still hold, and several other of his studies with similar irregularities have not been retracted.

Another extremely important event was a case of ethnomethodological research conducted by Lindsay, Boghossian, and Pluckrose in the mid-2010s. This is sociological self-examination at its best, although their backgrounds are mostly outside the discipline of sociology. They wrote a series of 20 papers presenting fake results and making arguably unethical claims. They invented the papers to mimic the style of articles published in journals well known for sociological research on topics of identity, hegemony and marginalization. Seven of their papers were published or had revise-and-resubmit recommendations before whistleblowing forced them to cancel the project. Some highlights: one paper contained sections from Hitler’s Mein Kampf. Another suggested men should be trained like dogs to prevent rape, and a third that white men should be forced to sit in chains on the floors of university classrooms instead of at normal desks. I am not commenting on the merit of these ideas, only noting that they all contained faked data, non-existent methods or conclusions not supported by the data. That these studies easily flew under the radar of a number of high-impact journals shows how easy it is to publish without doing the necessary research work.

Lagging behind closed doors

October 6th, 2020. I entered the search terms “open science” (in quotation marks to search the exact phrase) and “sociology” (in quotation marks to return only results containing the word) into Google Scholar. Six pages of results without a single sociology journal. On page 7, Merton’s “Priorities in scientific discovery: a chapter in the sociology of science” appears. Publication date: 1957.

In 1973, Wilson, Smoke and Martin found that 80% of studies published in the top three sociology journals of that time rejected the null hypothesis; in other words, they had p-values below a threshold. This suggests publication bias, if not p-hacking. Sahner (Table 5) analyzed all article submissions to the Zeitschrift für Soziologie from 1972 to 1980. Of those that contained significance tests, 70% were significant at p < 0.05, suggesting that authors prefer to submit significant results. More recently, Gerber and Malhotra (2008) reviewed articles published in the American Journal of Sociology, American Sociological Review and The Sociological Quarterly, looking specifically at the boundary of t = 1.96 (i.e., p < 0.05), and found that as many as four out of five studies were ‘significant’. This suggests publication bias as well. Sociology has yet to see a systematic review of p-hacking that compares p-values within ‘significant’ results. Meanwhile, psychology and political science, for example, are teeming with papers on “p-hacking” and “publication bias”.
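A simple version of a check in the spirit of Gerber and Malhotra is a caliper test: count how many reported test statistics fall just above versus just below the 1.96 threshold. Here is a sketch with hypothetical t-values; the caliper width and data are assumptions for illustration only.

```python
# Sketch of a caliper test around t = 1.96 (hypothetical t-values for illustration):
# with no publication bias, values just above and just below 1.96 should split roughly 50/50.
import numpy as np
from scipy.stats import binomtest

t_values = np.array([1.90, 1.97, 2.01, 2.03, 1.99, 2.05, 1.93, 2.10, 2.02, 1.98])
caliper = 0.10
in_caliper = (np.abs(t_values) > 1.96 - caliper) & (np.abs(t_values) < 1.96 + caliper)
above = int((np.abs(t_values[in_caliper]) > 1.96).sum())
n = int(in_caliper.sum())

result = binomtest(above, n=n, p=0.5)  # requires scipy >= 1.7
print(f"{above} of {n} t-values in the caliper are 'significant'; p = {result.pvalue:.2f}")
```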

Sociology is rather intransparent. An estimated 78% of the major sociology journals have long-standing transparency policies. Unfortunately, these policies are mostly artifacts on paper, with little enforcement. For example, only 37% of sociology articles published in the mainstream journals between 2012 and 2014 include shared data and/or materials. In 2015, a small group of sociologists tried to obtain materials from the authors of 53 prominent sociological studies. They obtained them from just 19%, and only 20% of all the authors they contacted bothered to respond, despite several requests. This suggests sociologists are free to hide the data and materials that led to their findings without recourse, despite such guidelines.

Other disciplines have embraced the Transparency and Openness Promotion (TOP) Guidelines. The TOP Guidelines, with the help of the Center for Open Science, support journals in improving science. Journals can become signatories of TOP, and in doing so they either adopt and enforce new transparency guidelines or certify that they already meet certain transparency standards. Most of the top psychology journals and several political science journals have signed on. Other major journals, such as the Journal of Applied Econometrics and later the American Economic Review, adopted their own enforced transparency guidelines.

Until 2017, the only higher-ranking sociology journals that had signed TOP were Sociological Methods and Research and the American Journal of Cultural Sociology. In 2017, Elsevier dictated that all its journals adopt the guidelines, which added Social Science Research to the list. At the time of writing, the flagship journals American Journal of Sociology and American Sociological Review have neither signed TOP nor enforce their own guidelines. Of the top German sociology journals, the Kölner Zeitschrift für Soziologie und Sozialpsychologie is the only signatory.

If intransparency is pervasive in sociology, then research cannot be (a) checked for errors, (b) reproduced or (c) simply critiqued. Even when exact reproducibility is not the goal, as is often the case with context-specific interpretive research, most research methods remain shrouded in mystery. This requires readers to take a giant leap to trust what others report. Part of the problem is that sociologists express little interest in reproducing or checking others’ work. There are few replications in the history of sociology, and if anything, they decreased over time until recently. For example, searching the articles in the American Journal of Sociology and American Sociological Review reveals 22 replication studies from 1950 to 1980 and only 8 from 1981 to 2010.

Something telling about the lack of willingness to open up sociology comes from sociology’s most ‘powerful’ society, the American Sociological Association. In 2019, it collectively petitioned the US government not to make data transparency a requirement attached to grant funding.

NOW

What to do about it? Here are some simple steps to consider, especially for sociologists. They are similar to steps advocated by many others, whether for graduate students, for academic institutions, or for all of us.

Transparency

Make all the materials – research design, methodological steps, data (when legally and ethically possible), analyses, conflicts of interest and any software code – available online. The practical reason is that others can follow your work and expand on it in the future. Doubly practical is that you do not need to respond to email requests for your materials. So long as you are not a deceitful sociopath, you want others to be interested in your work and to replicate it. Even if a study seems to ‘prove you wrong’, the fact that it replicated your work is evidence of how important your work and its topic are. You are a piece of a much larger community of knowledge construction. Constructive exchange can lead to collaboration with critics to generate better future research without personal conflicts.

The immediate value of transparency is that being transparent forces you to be careful. Knowing everything will be public information increases the value of attention to detail. Put in its converse: not sharing your workflow publicly can indirectly foster lower quality standards, in addition to creating possibilities for misconduct. All this enables rather than hinders knowledge, and increases inter-researcher trust.

Transparency should not be much extra work. During the research process you should take high-quality notes for yourself. You will often return to your data and research in the future and thus need those notes. This is a best practice with or without sharing your work. When you engage in this best practice, you have a deep familiarity with your data, can draw meaningful conclusions and, in the case of qualitative research, can easily redact identifying characteristics in your data. If you cannot share data, you can still reveal the design and expectations, or allow controlled access to the data. Human subjects must be protected at all costs, and yes, this often means data sharing is not possible.

The ‘transparency work’ of the qualitative research process can be reduced by software platforms that provide semi-automated annotation and coding. Even if you do not share data, you can build an open workflow from the beginning that allows others to understand every step of the data-generating process. However, this work can also be extremely tedious, and the incentives are not immediately clear. More fruitful discussion, if not research-assistant funding, is needed in this area moving forward.

If you are using quantitative methods, immediately stop hiding your work. If you ran 100 models and 99 did not support your hypothesis, then this is your finding. If a journal does not want to publish this, point the editors and reviewers to the importance of null results and the problems of publication bias. If they still refuse, consider boycotting this journal and sharing your negative experience in public.

Preregistration

Preregistration can drastically reduce bias and hacking prior to collecting data. When you clearly outline your plans, including how you will analyze the data, before conducting the research, there is little room for hacking so long as you stick to the plan. Moreover, preregistration can be done directly with a journal, although sociology journals are laggards here because they generally do not offer this option. In a preregistration, even if you just put a pre-analysis plan or a research design and goals online, you must think much harder about factors such as meaning, causality, inter-subjectivity and ‘how the world probably works’. You cannot hide behind results in this process, and therefore you must anticipate counterarguments and explore counterfactual logic. This improves the clarity of theory and research, creating an immense gain in efficiency and effectiveness.

Regardless of the methods you use, there are many opportunities to take advantage of preregistration. Some forms of qualitative research, for example those involving grounded theory and interpretivist methods, require decisions during the research process that cannot be foreseen. This uncertainty can be outlined in a preregistration that states explicitly when flexibility is and is not admissible. Moreover, simply putting a qualitative research plan online prior to conducting the research is equivalent to a pre-analysis plan. This research design need not compromise your data collection work, because you can register the plan on a platform like the Open Science Framework and then embargo it, so that it is preserved but not made public until after the research concludes. Some scholars using quantitative methods might assume that preregistration is not possible because they work with secondary survey data. But the regularity and release of these survey data are known in advance, and these scholars can preregister their studies before the next round of data is collected, knowing which questions and countries will be available.

Decommodify science

The central functions of the scientific publishing industry are printing and disseminating knowledge, which historically solved the problem of how to share knowledge across universities and countries. The business functions of publishing, however, come with harmful byproducts. Publishing firms extract profits from scientists twice. First, scientists provide free labor in the form of editing and peer reviewing, in addition to producing the results for the articles to be printed. Next, researchers, or their employers, must purchase the product of their own labor – labor not paid for by the publishers. The journal article as a product comes at a high cost, and often only in packages of journals, meaning that universities have to pay for extra material their scholars do not use.

Sometimes publishing houses neglect science in favor of profits, and Elsevier has been particularly problematic. It sponsored weapons fairs, created and sold ‘fake’ journals for pharmaceutical companies to publish ‘results’ supporting their drugs, purchased the Social Science Research Network and then created paywalls or removed legally shared working versions of articles, charges fees for open access articles, and actively lobbied against open access legislation (for a concise summary with links, see Tal Yarkoni’s blog entry). This brought massive counter-movements against Elsevier in the scientific community (for example, The Cost of Knowledge). You can take action and refuse to review for or publish with unethical publishers if you feel it is justified. To do so, you should inform yourself about the publishers. Your libraries are a good source of information, because they deal with the business side of publishing.

If you are in Europe, check whether your institution is a signatory of ProjektDEAL. A consortium of universities is collectively bargaining with publishers via ProjektDEAL, demanding that publishers reduce fees and eliminate the double payment by universities. The primary objective is for publishers to sign country-wide subscription agreements that enable access for all universities at once. Wiley agreed to such a model, and this marks a paradigm change; it indicates how the publishing industry could look in the future, so long as the Open Science Movement proceeds. If you are not in Europe, consider starting a similar initiative: for example, the entire University of California system – 10 universities, 5 medical centers and several research institutions that collectively produce roughly 10% of the world’s academic publications – recently followed ProjektDEAL’s lead and boycotted Elsevier.

You can work around the publishing business. Prior to submitting an article, or after it is published, you have the right to share a preprint – a draft of the paper that you share publicly, so long as it is not published elsewhere or sold for profit. Posting preprints reduces the power that publishing firms have over science, in addition to giving others immediate access to your work. But simply posting preprints on your academic website is not open enough. Use a preprint service, for example through the Open Science Framework, to ensure that your preprints appear in search engines such as Google Scholar. SocArXiv, for example, is the go-to location for sociology. This enables scholars to find and directly access research results based on the words they contain, uninhibited by paywalls – a crucial aspect of practicing sociology in the Global South. Preprint services are free and open access.

Meta-constructing social theory

Certain hypotheses are constantly tested in social science. The impact of income inequality on health, racial bias on police brutality and public opinion on elections, just to name a few. At some point more tests of the same hypothesis stop contributing to scientific knowledge, and may even harm it by introducing more ‘noise’ into the scientific discourse.

I study social policy preferences and the impact immigration has on them. In this area there have been sustained efforts to test the hypothesis that immigration has a negative impact on support for policies of the welfare state – those protecting against the risks of aging, unemployment and ill health. To justify this hypothesis, scholars construct theoretical variations of group-dynamics arguments, often drawing on resource competition, nationalism and social identity. Despite claiming to test the hypothesis, the formal models applied to data suggest any number of data-generating processes. They often have little in common other than some measure of immigration and some measure of policy preferences. The results of their tests go in all directions, i.e., a positive, negative or nil effect of immigration. The topic appears to be at a standstill, with new analyses of the same handful of cross-national surveys sinking into the mire. How to break through such a scientific impasse?

In designing the Crowdsourced Replication Initiative (CRI) with co-PIs Alexander Wuttke and Eike Mark Rinke, we asked researchers to do research; we gave them semi-structured tasks and observed them. Specifically, they were supposed to come up with the best possible way to test the immigration hypothesis given the same International Social Survey Programme data source. Although we are currently meta-analyzing the hypothesis test results (see our virtual APSA poster) to determine which modelling decisions impact the outcomes, we also have a second goal in mind: to discover what is behind the specification curve.

Each research team had to design the best possible test. This is at once a statistical question and a theoretical question. They needed to think carefully about the data-generating process and attempt to recover it in a model. We asked them to write down their research designs after doing this thought exercise, but before analyzing any data. From these researcher choices we can identify where key consensus and disagreement exist about the data-generating model; this is evident not only in their designs but also in a structured deliberation and voting procedure. This process offers a major advantage over ‘normal’ theoretical discussion and debate among academics, because we have the results that go along with the different modeling choices; and, let’s be honest, when else do over 150 researchers get together and focus on a single hypothesis? By observing this process we can identify where data-generating theories differ and how important these differences are for the results. This will allow us to map where immigration and social policy scholars should focus their theoretical efforts in the future to reduce the most uncertainty, i.e., to achieve the largest gains in knowledge.

We have a sound piece of scientific research from Brady and Finnigan (2014) from which we drew the working hypothesis for the CRI crowdsourced researchers: that immigration undermines support for social policies. Brady and Finnigan found little or no support for this hypothesis, at least not in a generalizable macro-comparative sense. This was the launching point for the research of the 77 teams who have by now managed to submit replicable results (yes, there are still a few out there we hope will submit a final model or fix issues we identified in our replication of their models).

Although we are still in the process of analyzing the ocean of data generated by this project, a sneak preview offers exciting evidence of the possibilities of meta-constructing theory.

Here are two glimpses of what’s to come. One is the deliberation and voting results, summarized in Figure 1. The other is the differences in definitions of ‘immigration’ (Table 1). We used Kialo, an online structured deliberation platform, to allow participants to discuss the data-generating model after they had proposed their own ideas for how best to test the hypothesis. Readers can observe how this deliberation unfolded, as we divided the participants into two groups: here and here. Later (after they had the opportunity to update their models based on the deliberation), they were given other teams’ models, or our own variations on those models, to vote on and rank in terms of their appropriateness for testing the hypothesis, without having seen the results of those models. Figure 1 combines the Kialo veracity scoring and the survey-based voting into one overall scale and then plots the average score of models by their features. Each color is a discrete set of model features, with the zero point (y-axis) set to the average support for models choosing an OLS estimator (among the least preferred).

Figure 1. Researcher Preferences for Recovering the Data-Generating Model
“Model” is the hypothesized general impact of immigration on support for social policy. Data and code are still being prepared for online sharing; stay tuned.

In Figure 1, looking at the longest bars in each color category, it becomes clear that models incorporating all 5 waves of the ISSP data, including the countries of Eastern Europe, allowing heterogeneous error variation by country-year and year (as in a cross-classified model), and incorporating survey sampling weights are preferred over the others. Some of this runs counter to the state of the art. For example, most research follows the logic that major immigrant-destination societies – the “Rich 13” and “Rich 17” advanced democracies – should be where “public opinion is likely most influential for the politics of social policy” (Brady and Finnigan 2014:24).

To summarize the motivation for looking across all possible countries, especially Eastern Europe, one crowdsourced researcher put it like this: “Either there is an effect of ‘immigration stock (increase)’ or not“.

Another followed up on this point stating: “To test the general hypothesis we should use as many countries as available and account for variations in GDP and social welfare expenditures in the models.”

These comments reflect the majority voice in the CRI: if immigration has an impact on social policy preferences, we should see it across all countries of the globe, not only in very rich, strong welfare states.

Although Brady and Finnigan and all the other research in this area come to no consensus on whether immigration has a negative impact on support for social policy, we should remain skeptical of results if we do not trust the data-generating model. In other words, if our tests do not match what most researchers see as the appropriate theoretical perspective, results are inconclusive and thus uninformative. The deliberation and voting offer us clues about where to focus theoretical effort, namely on specifying why more countries of the world should (or should not) show a causal effect of immigration on social policy preferences, and whether this effect should (or should not) appear across several decades or only at certain times. I am not aware of extensive theory that attempts to tackle these issues. Now is the time to write it!

Even more productive for the possibility of meta-constructing theory is the correspondence between the actual decisions made by the researchers and the subjective and objective outcomes of those decisions. Again, our results are in progress, but Table 1 offers a snapshot of the different ways the researchers chose to measure immigration as their main hypothesis test variable (one out of dozens of model decisions to compare). In the first row, 67 out of 77 teams used a “Stock of Foreign-Born” measure in at least one of their models, and 27% of their models using the “Stock” variable showed support for immigration having a negative and statistically significant impact on support for social policy at p < 0.05.

Table 1. Crowdsourced Researcher Decisions, Deliberations and Results.
Five different measurement strategies for the immigration test variable.

In the column ‘Positive Test Result Rate’, we see that the ‘Difference’ between “Stock” models (referenced as [1] in Table 1) and those instead using “Flow” to measure immigration (referenced as [2]) is 3.6. In other words, “Stock” models arrive at support for the hypothesis 3.6 percentage points more often than “Flow” models, all else equal. “Stock” models were not more or less popular than “Flow” models, with an average vote score of 0.43, on a scale from 0 (worst) to 1 (best equipped to test the hypothesis), versus 0.45 for “Flow”.

The values in bold indicate that “Change in Flow” models (those measuring derivatives of “Flow”) were among the most popular in the voting process. So the rate of change of the flow of immigrants is seen as an important component in testing this hypothesis. Interestingly, these models were 4 percentage points more likely than “Stock” and “Flow” models to support the hypothesis. When immigration was measured as specific to certain outgroups (people from Muslim-majority countries, non-Western countries or refugees), the “Flow” of these various ‘Outgroups’ was more popular than the “Stock” of ‘Outgroups’ by a large margin, but the results were over 10 percentage points less supportive of the hypothesis.
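For readers wondering how such a ‘positive test result rate’ is tabulated, here is a hypothetical sketch; the column names and values are assumptions for illustration, not the actual CRI data.

```python
# Hypothetical sketch of tabulating support rates by measurement choice
# (column names and values are invented; the real CRI data are still being prepared).
import pandas as pd

models = pd.DataFrame({
    "team": [1, 1, 2, 3, 3, 4],
    "immigration_measure": ["Stock", "Flow", "Stock", "Change in Flow", "Stock", "Flow"],
    "supports_hypothesis": [True, False, False, True, True, False],  # negative effect at p < .05
})

rates = (models.groupby("immigration_measure")["supports_hypothesis"]
               .mean()
               .mul(100)
               .round(1)
               .rename("positive test result rate (%)"))
print(rates)
```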

What can we learn from this? We argue that a full analysis of the massive range of modeling decisions will give us a guide for moving this entire research area forward. Other decisions included, for example, different social policy domains, whether ethnic fractionalization is the ‘real’ cause of the ‘immigration’ effect, the construction of latent social policy preference measures, and whether GDP and unemployment are part of the data-generating assumptions, just to name a few out of hundreds. We are only scratching the surface here, but it seems that by observing researchers making research decisions, deliberating on them, voting and making final choices, we will gain immense knowledge about where better theory is necessary. As such, we see the meta-construction of social theory as a promising avenue for social science. This would be the concept of theory-designed replication writ large.

P-hacking. Religion and science aren’t that different

Science and religion parted ways long ago. This is a historical struggle over power. If science disproves claims that the earth is the center of the universe, or shows that evolution undermines creation, it might falsify religious doctrine, said to be the word of a God or Gods and thus the ultimate Truth. Religions rely on their claims to this Truth to convert people to submit to their institutions. If science undermines this Truth, it undermines religious power. And power is something that changes human behavior: people might lie, cheat, steal and kill to get or preserve it.

Wikimedia Commons: Thinker; Passion

But science and religion followers are not that different. Actually, they are the same. They are human.

Power is another way of describing status and prestige. In science, we know all about status. Scientific status comes from recognition. From making scientific discoveries and claims that garner attention. In particular, attention in the form of citations.

The absence of market prices results in prestige becoming the main reward and high prestige becoming the measure of exceptional ability. Rent seeking in academia, therefore, produces ego-maniacs and much destructive behavior

Sørensen (1996, p. 1358)

The seeking of status, which economists and Sørensen label a form of ‘rent-seeking’, is presumably the reason scientists p-hack and engage in other forms of malpractice. In some cases they ‘must’ p-hack in order to meet the demands of reviewers. Mostly, statistical research requires significance stars to attain publication. This is changing with the Open Science Movement in recent times, but only at the margins. Research using qualitative methods also has its own form of ‘p-hacking’. To be published, a paper must extract novel ideas from observational data; whether these ideas reflect the actual data, or are even based on actual data at all, seems to be irrelevant as long as the story looks good to reviewers – just like the significance stars that look all sparkly and comforting to reviewers of quantitative research.

So humans (scientists) cheat to attain status, intentionally or even unintentionally – without malicious intent, because they are conditioned to play with their data until the stars appear. Therefore it should be no surprise to humans (scientists) that other humans (religious followers) also cheat.

If p-hacking in science is playing around with models so that they represent the data in a way that matches the researcher’s desire for status, rather than portraying the results of scientific tests, then p-hacking in religion must be interpreting the dictates of God (or Gods) to fit one’s own, or one’s group’s, status goals.

The conflict between science and religion is like p-hacking. It is a power struggle. Who has the power to make claims about the way the world is and the way it should be? Religious followers attribute this authority to God, and then to themselves as seekers and messengers of God. Science followers attribute it to factual knowledge about the world, and then to themselves as the testers and reducers of uncertainty who ‘uncover’ those facts. In both cases the process is corrupted by status seeking, a fundamental fallibility of humans. Whether acting as scientists or spiritual seekers, we are fundamentally still primates, and as such tend toward hierarchy, with many of us human-primates willing to cause harm to others in order to attain higher and higher positions.

For religious followers to gain status through p-hacking, they would have to adjust the ‘word of God’, or the ultimate Truth, so that it (a) is no longer a religious or spiritual truth and (b) serves their own ends. Do we have evidence of this practice? Wars fought in the name of religion do not really fit the criteria, as wars can be justified as right, as God’s (or the Gods’) will: for example, the Christian New Testament’s Revelation 19:11 describes the righteous warring against the (presumably) non-righteous (i.e., ‘evil’); the Christian/Jewish Old Testament’s Deuteronomy 20 calls the Israelites to war against cities that do not accept their terms; in Islam, Qur’an 22:39 advocates war in self-defense, and possibly 4:74 fighting in the name of God; and Buddhism takes the stance that war might be necessary in defense but is not justified for an aggressor.

The point is that it is difficult to find direct evidence of p-hacking by religious followers seeking to gain status for themselves or their group. The same problem lies with detecting p-hacking in scientists. Given that all sides in all wars tend to claim righteousness under God (or Gods), it seems obvious that some (or all) are misinterpreting what should be God’s will for their own gain. Given that so many p-values lie below 0.05 in published research, we can assume that not all of them are derived from a clean research design, method and presentation of results.

Openness is not needed because we are untrustworthy; it is needed because we are human

(Nosek, Spies and Motyl 2012, p. 626)

It is not only religious and scientific institutions that are antagonistic in their seeking of power. Political institutions, which wield a monopoly on force in a modern world divided into sovereign nation-states, also do not always get along with religious and scientific institutions, as they have their own ‘p-values’ to guard.

The ‘Novel Coronavirus’ Pandemic and the Limits of Open Science

German translation of “Novel coronavirus pandemic and the limits of open science” (April 8, 2020).

On January 30, 2020, the WHO declared a Global Health Emergency based on evidence of a rapidly spreading virus: a coronavirus of the SARS family (SARS-CoV-2, causing the disease Covid-19). The evidence on which the WHO based this declaration came almost exclusively from Chinese data.

The Chinese data showed an alarming rate of spread in January, as shown in Figure 1. Without the Chinese data, the WHO would have had little cause for concern, as barely 90 cases, and not a single death, were known in all other countries combined.

Figure 1. The spread of Covid-19 that led to the WHO emergency declaration on January 30. Johns Hopkins data.

In early January, the Chinese government took measures to block news and data related to the virus. Nevertheless, Chinese scientists managed to follow open science practices, including sharing partial genetic sequence data with the world. This enabled the WHO to take appropriate measures and enabled scientists in Germany to develop tests to identify the novel coronavirus. The German team published its methods on the WHO website on January 13. Technology and global communication have developed to a point where governments can slow, but not stop, the free flow of information.

Sharing all data and findings is the best form of science, but it is not always practiced. The Open Science Movement aims to change this. If everyone in the world has equal access to the theory, methods, data and results of all other scientific research, quality and efficiency increase exponentially. This is evident in the open science practices behind the global fight against Covid-19, which are saving, and will save, lives – possibly millions of lives.

Figure 2 is a simulation predicting how many people in a given country would die from the virus as a result of the timing of government intervention. Intervention means the point at which a government follows the procedures recommended by the WHO, such as stay-at-home orders, comprehensive testing, and quarantine for those who test positive for the virus and those they have been in contact with. “Day 0” in Figure 2 is the moment when at least 3 symptomatic cases per million people occur, usually about two months after the first case in a country, but of course much sooner when several cases appear simultaneously.

Figure 2. The impact of government intervention on reducing deaths from Covid-19. Source: Gabriel Goh, and the author’s own calculations (* predicted deaths)

The reader should bear in mind that Figure 2 is a simplified simulation. Reality is extremely complex. In particular, governments do not go from normal operations to a complete shutdown of society in a single day; this usually happens gradually. Nevertheless, this simulation is based on the best-known models of predictive epidemiology, and it shows how even one day of indecision can cost thousands of lives.
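To give a flavor of how such simulations work, here is a minimal SIRD-type sketch. It is not Gabriel Goh’s actual model, and all parameters are invented for illustration; it only shows the qualitative point that the timing of an intervention which cuts the transmission rate strongly affects the death toll.

```python
# Minimal SIRD sketch (not the actual simulator behind Figure 2; parameters are invented):
# an intervention on a given day cuts the transmission rate, and total deaths depend
# heavily on how early that day comes.
import numpy as np

def sird_deaths(intervention_day, days=300, N=10_000_000,
                beta0=0.35, beta_locked=0.10, gamma=0.1, cfr=0.01):
    """Euler-step SIRD model; beta drops to beta_locked on the intervention day."""
    S, I, R, D = N - 100.0, 100.0, 0.0, 0.0  # start with 100 infectious people
    for t in range(days):
        beta = beta0 if t < intervention_day else beta_locked
        new_inf = beta * S * I / N
        new_rec = gamma * I * (1 - cfr)
        new_dead = gamma * I * cfr
        S, I = S - new_inf, I + new_inf - new_rec - new_dead
        R, D = R + new_rec, D + new_dead
    return D

for day in (20, 30, 40):
    print(f"intervene on day {day}: ~{sird_deaths(day):,.0f} deaths")
```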

Als Reaktion auf diesen Ausbruch in China und das rasche Auftreten von Covid-19 weltweit folgte Südkorea den standardisierten „Emergency Operating Procedures“ der WHO. Das heißt: möglichst viele Personen testen, alle Fälle isolieren, Reisen und Versammlungen beschränken, nicht notwendige Geschäfte schließen. Das Virus war eingedämmt und nur 200 Menschen starben. Natürlich haben frühere Virusausbrüche in Südkorea die Bereitschaft verbessert. Ebenso war Deutschland gut vorbereitet, weil es schnell Tests entwickelt hatte  und weil es aus den Erfahrungen Italiens als Europas „Ground Zero“ gelernt hatte.

Grob gesagt hat Italien um den 15. Februar herum die Schwelle für „Tag 0“ in Abbildung 2 überschritten. Als Land war es am wenigsten vorbereitet, weil es das erste in Europa war und ein Ort ist, zu dem Menschen aus der ganzen Welt als Touristen, wenn nicht als Fußballfans, strömen. Somit ist der Fall Italiens keine Geschichte eines großen Versagens der Regierung, auch da es Gründe gab, dem chinesischen Fall misstrauisch gegenüberzustehen.

Die Schwelle für “Tag 0” lag in Deutschland um den 2. März herum, und “Tag 0” war um den 8. März herum in New York, zumindest auf dem Papier. New York begann jedoch erst am 1. März mit dem erfolgreichen Testen von Personen, da die Anfang Februar veröffentlichten CDC-eigenen Testkits fehlschlugen. ‘Tag 0’ in New York war wahrscheinlich Mitte Februar oder früher. Dennoch hätte New York aus „pandemischer Sicht“ noch viel Zeit gehabt, Maßnahmen zu ergreifen. Der Rest der Welt hatte seit Ende Januar, dank des offenen Datenaustauschs auf der WHO-Website, genaue Tests durchgeführt. Dies geschah jedoch weder in New York noch in den USA als Ganzes. So wurde New York völlig unvorbereitet getroffen, aber nicht, weil das Virus überraschend aufgetaucht  war.

In Kombination mit den Daten aus China, Südkorea und mehreren anderen Ländern erklärte die WHO am 12. März, dass der globale Notfall nun eine „Global Pandemic“ sei. New York hatte den Ausnahmezustand verhängt, aber erst ab dem 20. März Anordnungen für den Aufenthalt zu Hause erteilt. Erst eine Woche später wurden die meisten Schulen geschlossen und die Polizei autorisiert, diese Anweisungen durchzusetzen (der blaue Pfeil um „Tag 31“ in Abbildung 2). Trotz massiver offener wissenschaftlicher Bemühungen, die durch die WHO kanalisiert wurden, haben New York und ein Großteil der USA offensichtliche wissenschaftliche Beweise und Vorhersagen einfach nicht beachtet. Dies ist umso schockierender, als Seattle und nicht New York in den USA „Ground Zero“ war. Der gesamte Bundesstaat Washington hatte frühzeitig und erfolgreich Sofortmaßnahmen ergriffen.

Die Überprüfung des Versagens von Ländern, Staaten oder Städten, vor oder am 30. Januar (globaler Notfall) oder 12. März (Pandemie) sofort drastische Notfallmaßnahmen zu ergreifen, ist nicht Gegenstand dieses Blogposts. Dank Open Science Praktiken, der WHO und mehrerer Partnerorganisationen und Websites hatte die Welt Zugang zu denselben Daten und Kenntnissen darüber, wie man auf das Virus testet.

Die Botschaft, die ich vermitteln möchte, ist, dass Open Science nicht ausreicht. Ihre Grenzen liegen in den Regierungen. In vielen Ländern hat die Wissenschaft wenig Platz in der Entscheidungsfindung der Regierung. Dies ist vielleicht in einem dysfunktionalen autoritären Regime verständlich, in dem fast alle politischen Entscheidungen getroffen werden, um die Macht aufrechtzuerhalten und zu konzentrieren. Dies ist sicherlich ein Grund dafür, dass die schlimmsten Schrecken des Virus in Afrika südlich der Sahara und in Zentralasien noch bevorstehen. Aber es ist schockierend in Demokratien, in denen es eine Schar von Wissenschaftler*innen und Agenturen gibt, die die Regierung dabei beaufsichtigen und beraten sollen, was zu tun ist, um ihre Bevölkerung zu schützen.

Die Vereinigten Staaten hatten reichlich Informationen darüber, dass sich Covid-19 in den USA befand und sich schnell verbreitete, wie man wirksame Tests entwirft und was genau zu tun ist, um die Ausbreitung des Virus und die Zahl der Todesopfer zu verringern, Monate vor dem Ergreifen größerer Maßnahmen – dieselben Informationen, die der Staat Washington zur Eindämmung der Ausbreitung nutzte. Aber diese wissenschaftlichen Informationen, die in einem Umfang und einer Geschwindigkeit geteilt wurden, die in der Weltgeschichte noch nie zuvor gesehen wurden, reichten einfach nicht aus.

Die Open Science Movement hat ethische Grundsätze, die ihrem offenen Zugang, den offenen Daten, den offenen Methoden und Empfehlungen zum Austausch von zugrunde liegen. Es ist nicht nur so, dass offene wissenschaftliche Praktiken die Wissenschaft zuverlässiger und effektiver machen. Sie fördern soziale Gerechtigkeit oder wissenschaftliche Gerechtigkeit, wenn Sie so wollen. Wenn jede*r Wissenschaftler*in auf der Welt auf alle Informationen zugreifen kann, über die jede*r andere Wissenschaftler*in auf der Welt verfügt, besteht wissenschaftliche Gleichheit. Während reiche Universitäten Elsevier boykottieren, können sich ärmere Universitäten nicht einmal ein Abonnement leisten. Open Access würde also der Welt eine globale Nord-Süd und eine dotierte vs. nicht dotierte Universitätsgleichheit bringen. Aber es kann denjenigen, die potenzielle Virusopfer sind, keine Gerechtigkeit bringen.

Im Fall der Covid-19-Pandemie schien die offene Wissenschaft zunächst das Gezänke und die Tiraden der Regierungen zu untergraben, konnte aber nur an der Tür klingeln. Einige Regierungen weigerten sich einfach, die Tür zu öffnen und Maßnahmen zu ergreifen. Dies wirft die Frage auf, ob die Open Science Bewegung politische Handlungsprinzipien verabschieden muss, die über Maßnahmen zur Förderung von Transparenz und Reproduzierbarkeit hinausgehen. Muss die Open Science Bewegung die Regierungen dazu drängen, administrative, wenn nicht verfassungsrechtliche Verfahren einzuführen, die die Regierungen bei einer Naturkatastrophe oder einem Notfall wie einem Hurrikan oder einer Pandemie den Wissenschaftler*innen gegenüber rechenschaftspflichtig machen?

Ich sage ja aus ethischer Sicht. Aber es ist nicht so einfach. Sobald wir anfangen, Dinge wie Verfahrensreformen voranzutreiben, werden tiefsitzende Sonderinteressen einbezogen und es wird hässlich. Als Wissenschaftler*innen sind wir wahrscheinlich nicht für Schlammschlachten und politisches Manövrieren geeignet. Ganz zu schweigen davon, dass wir umso weniger Zeit für die Wissenschaft haben, je mehr Zeit wir für Lobbying aufwenden. Einige von uns haben die Fähigkeit, die Bewegung zu führen und Regierungen zu beeinflussen, aber die meisten von uns sind schlecht gerüstet, um die Mächte zu bekämpfen, die hinter der Politik stehen.

Das wirft die Frage nach dem Endspiel auf: Reicht es aus, den Regierungen die richtigen Antworten zu geben, auch wenn sie sie ignorieren? Haben wir unsere Pflicht als Wissenschaftler*innen erfüllt, wenn wir nur vor der Haustür auftauchen und Regierungsbeamte entscheiden lassen, ob wir eintreten dürfen?

1 Der ursprüngliche Nachrichtenartikel wurde von der Website der chinesischen Nachrichtenagenturen gelöscht, kann aber im Internet Archive gefunden werden.

2 Quelle: Goh, Gabriel. „COVID Epidemic Calculator“. Tag 0 ist mindestens 3 symptomatische Fälle pro Million Menschen, was bedeutet, dass aufgrund der Inkubationszeit möglicherweise Hunderte infiziert sind. Für Vorhersagen verwendete Parameter: 106 mio. Bevölkerung, ein einziger Erstfall, Ansteckungsgefahr pro Person von 2,2, Übertragungsrate 0,73, Inkubationszeit 5,2 Tage und Sterblichkeitsrate 2%.

Ein Hinweis von mir: Ich habe versucht, die empirischen Beweise und den historischen Zeitplan so genau wie möglich zu erfassen, aber alle Fehler in diesem Blog-Beitrag sind meine eigenen. Ich bin dankbar für die Kommentare von Lisa Heukamp.

Talking inequality to your politically mixed US American family

My family is a mixture of Democrats, Republicans and swing voters. This can make for interesting emails, calls or reunions. It seems clear to me that partisan discussions, especially those involving blame, are a no-go. In fact, family in-fighting is pretty much like public in-fighting: it distracts us from our common problems, like the unbridled increase in income and wealth inequality in the United States since the late 1970s, especially among the top 1%. The question is how to find common ground when one or both sides are past their breaking points.

I suggest three things that would benefit almost any American except the ultra-rich (top 1%) or the relatively rich (top 10%). These are things that bear on our freedom without touching on more polarizing topics.

1. End the winner-take-all electoral system. Why should the winner take all, when the winners are the rich? Low- and middle-class workers have not had a real wage increase since the 1970s on average, while Wall Street has generated huge profits.

There is nothing wrong with profits, but shouldn’t everyone working at profitable companies share in them? A similar story unfolds in politics. The upper classes control both parties: Republicans favor the ultra-rich, and Democrats, in general, favor the quite-rich. We do not have parties that represent all voters’ interests. If you are lower or lower-middle class, these parties are both largely against you. If you support the Tea Party, Libertarians, Social Democrats, the Green Party or any other party, you have no chance of representation in government at the national level. We need a representative democracy where parties get power equal to their vote share: if Republicans get 55% of the vote, they should get 55% of the legislative and executive positions. That is real democracy, where the government reflects the preferences of the people. When parties get proportional vote shares, they are forced to work together to solve problems. In a recent Gallup poll, more than half of those who identify as Republicans or Democrats favored having a third party.

The two-party system is now so deeply divided that democracy itself is faltering. The abuse of power the Constitution hoped to prevent is now rampant, with executive dominance, a Supreme Court of partisan judges, and voter districts deeply segregated by gerrymandering. Adding more parties would quickly restore coalitions and cooperation.


2. End Super PACs. This is how both parties came to be dominated by rich people’s interests.

Citizens United and Speechnow.org made it possible for parties and candidates to receive unlimited, undisclosed funding. This raised the stakes so high that the only way to get elected is to have over 1 billion dollars, or to accept donations totaling 1 billion dollars (that is one thousand times one million dollars – a $hit ton!). The only way to get such big donations is to promise the rich things that benefit them. In plainer English, this is known as corruption, by definition. These are simply ‘legal’ bribes paid to politicians, made legal by Citizens United.

“for the first time since at least the 1960s, the majority of Americans were not in the middle class”

– Pew Research Center

3. Overturn the 1987 repeal of the FCC Fairness Doctrine. Political conflict is an opportunity to create economic conflict.

Until 1987, news companies were legally bound to report news in a balanced manner, providing different sides of each story. Today they are allowed to say anything they want and claim it as ‘fact’ without repercussion. All of our political and factual beliefs have been shaped by distortions of reality. For example, try this on for size at the next mixed-political gathering: Obama, whom many political news outlets called the ‘socialist-Muslim’, was actually a relatively hawkish military president. He was the first president in history to have an American citizen assassinated without trial. He also gave the military a large pay and benefits raise and moved aggressively to contain China militarily and economically. These facts might perk up even the most ‘libtard’-hating family members. But that is not my main point here. Media outlets are mostly owned by corporations with a special interest in keeping you and me fighting over politics, so that corporations can keep paying low wages. The news in the US sows seeds of hate and misinformation so that working-class people end up in constant conflict. This conflict keeps the focus away from the ultra-wealthy making decisions that harm those same working people, such as paying them miserable wages with few benefits. Why not end fake news?

In case you are not sure what not to bring up: abortion, gun control, racism, blame of any sort. Not gonna go over well in mixed political company. If you are not ultra-wealthy, we are on the same side in the things that matter most – like getting fair wages for fair work.

Notes to the reader:

As news media companies tend to be on one side or the other, I tried to use neutral sources as much as possible in this post.

Full disclosure. I have never voted Republican. But I am no fan of the Democrats either. Having to choose the lesser of two evils is not an exciting political reality to face. Especially while watching the rich get richer and the working class continually get the shaft.

Novel coronavirus pandemic and the limits of open science

German version available.

On January 30th, 2020, the WHO declared a global health emergency based on scientific evidence of a rapidly spreading coronavirus from the SARS family (Sars-CoV-2, and the disease Covid-19). The evidence the WHO used to declare this emergency came almost entirely from Chinese data.

The Chinese data demonstrated an alarming spread rate in January, as shown in Figure 1. Without the Chinese data, there would have been little cause for alarm as all other countries combined had barely 90 known cases in that period, and not a single death.

Figure 1. The spread of Covid-19 leading to the WHO emergency declaration of January 30th. Johns Hopkins data.

In early January, the Chinese government took measures to block news and data1 related to the virus; however, Chinese scientists still managed to follow open science practices (updated news on this here), including sharing partial gene-sequence data with the world. This allowed the WHO to take appropriate measures and enabled scientists in Germany to develop tests to identify the novel coronavirus. The German team shared its methods publicly on the WHO website on January 13th. Technology and global communications have evolved to the point where governments can slow but not stop the free flow of information.

Sharing all data and findings is the best form of science, but it is not always practiced. The Open Science Movement has the goal of changing this. If everyone in the world has equal access to the theory, methods, data and results of all other scientific research, quality and efficiency increase exponentially. This is evident in the open science practices behind the global fight against Covid-19, which saved and will save lives, potentially millions of them.

Figure 2 is a simulation predicting how many people would die of the virus in any given country depending on when its government follows the WHO-recommended operating procedures: issue stay-at-home orders, engage in widespread testing, and quarantine both individuals with the virus and those they were in contact with. ‘Day 0’ in Figure 2 is the moment when there are at least 3 symptomatic cases per million people, usually about 2 months after the first case in a country, but much sooner if several cases arrive at once.

Figure 2. The impact of government intervention in reducing deaths from Covid-19. Source: Gabriel Goh & author’s calculations
(*indicates a predicted death toll)

The reader should keep in mind that Figure 2 is a simplified simulation. The reality of the situation is extremely complex. In particular, governments do not go from normal operations to full lockdown of society in one day; this usually proceeds in stages. Nonetheless, the simulation builds on well-known predictive epidemiology models and helps demonstrate how even one day of indecision can cost thousands of lives.
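
To make the logic of Figure 2 concrete, here is a minimal discrete-time SEIR sketch in R. It is not a reproduction of Goh’s calculator; it only borrows the parameters listed in footnote 2, and it assumes an infectious period of about 3 days and reads the ‘0.73’ as the reproduction number after intervention – both of these are my own assumptions. The point is simply that moving the intervention day moves the predicted death toll.

# A toy SEIR model (not the Goh calculator): how intervention timing changes deaths.
seir_deaths <- function(intervention_day, days = 365,
                        N = 106e6,          # population (footnote 2)
                        R0 = 2.2,           # contagiousness per person (footnote 2)
                        Rt_after = 0.73,    # assumed post-intervention reproduction number
                        incubation = 5.2,   # incubation period in days (footnote 2)
                        infectious = 3,     # assumed infectious period in days
                        cfr = 0.02) {       # mortality rate (footnote 2)
  S <- N - 1; E <- 0; I <- 1; R <- 0; deaths <- 0
  for (t in 1:days) {
    Rt      <- if (t < intervention_day) R0 else Rt_after  # intervention cuts transmission
    beta    <- Rt / infectious
    new_inf <- beta * I * S / N   # S -> E
    new_sym <- E / incubation     # E -> I
    new_rec <- I / infectious     # I -> R (recovered or died)
    S <- S - new_inf
    E <- E + new_inf - new_sym
    I <- I + new_sym - new_rec
    R <- R + new_rec
    deaths <- deaths + cfr * new_rec
  }
  round(deaths)
}
seir_deaths(intervention_day = 31)  # late intervention
seir_deaths(intervention_day = 17)  # acting two weeks earlier yields far fewer deaths in this toy model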

In response to this outbreak in China and the rapid appearance of Covid-19 globally, South Korea followed the WHO’s standard emergency operating procedures: test everyone possible, isolate all cases, restrict travel and gatherings, close non-essential businesses. The virus was contained and only 200 people died. Of course, previous virus outbreaks had heightened South Korea’s preparedness. Germany was also well prepared, given its rapid development of tests and because it learned from the experience of Italy as Europe’s ‘ground zero’.

Roughly speaking, Italy crossed the ‘Day 0’ threshold in Figure 2 around February 15th. It was the least prepared country because it was the first in Europe, and a place where people from around the globe flock as tourists, if not football fans. Thus, Italy’s case is not a story of major government failure, also given that there were reasons to be suspicious of the Chinese case.

The ‘Day 0’ threshold came around March 2nd in Germany, and around March 8th in New York, at least on paper. But New York only started testing people successfully around March 1st, because the CDC’s own test kits released in early February had failed. This still left plenty of time, in ‘pandemic terms’, to source the accurate tests deployed across the rest of the world since January. That did not happen in New York or in the US as a whole. Thus, New York was caught completely unprepared, but not because the virus was a surprise arrival.

When the data from China, South Korea and several other countries were combined, the WHO upgraded the global emergency to a global pandemic on March 12th. New York had issued a state of emergency but only gave stay-at-home orders as of March 20th. It was not until a week later that most schools were closed and police were authorized to enforce these orders (the blue arrow around ‘Day 31’ in Figure 2). Despite massive open science efforts channeled through the WHO, New York and much of the US simply failed to heed obvious scientific evidence and predictions. This is even more shocking because Seattle, not New York, was ‘ground zero’ in the US, and Washington State as a whole implemented early and successful emergency measures.

Reviewing the failures of countries, states or cities to take drastic emergency measures immediately, on or before January 30th (global emergency) or March 12th (pandemic), is not the subject of this blog post. Thanks to open science practices, the WHO and several partner organizations and websites, the world had access to all the same data and the same knowledge of how to test for the virus.

The message I want to convey is that open science is not enough. Its limits are found in governments. In many countries, science has little place in government decision making. This is perhaps understandable in a dysfunctional authoritarian regime where nearly all political decisions are made to maintain and concentrate power. It is certainly one reason that the worst horrors of the virus are yet to come in sub-Saharan Africa and Central Asia. But it is shocking in democracies, where there are throngs of scientists and agencies tasked with monitoring and advising the government on what to do to protect its people.

The United States had ample information that Covid-19 was in the US and spreading rapidly, how to design effective tests, and exactly what to do to reduce the spread of the virus and its death toll, months before any major actions were taken – the same information Washington State used to stem the spread. But this scientific information, shared at a scale and speed never before seen in world history, was simply not enough.

The Open Science Movement has ethical principles underlying its recommendations on open access, open data, open methods and sharing. It is not just that open science practices make science more reliable and effective; they promote social justice, or scientific justice if you will. When every scientist in the world can access all the information that every other scientist in the world has, there is scientific equality. While rich universities boycott Elsevier, poorer universities cannot even afford a subscription. Thus, open access would bring a global North-South and an endowed vs. not-endowed university equality to the world. But it cannot bring justice to those who are potential virus victims.

In the case of the Covid-19 pandemic, open science looked at first as if it would cut through the bickering and buffoonery of governments, but it could only ring the doorbell. Some governments simply refused to open the door and take action. This raises the question of whether the Open Science Movement needs to adopt principles of political action that extend beyond policies promoting transparency and reproducibility. Does the Open Science Movement need to push governments to adopt administrative, if not constitutional, procedures that make governments accountable to scientists in a natural disaster or emergency such as a hurricane or pandemic?

I say yes from an ethical standpoint. But it’s not so simple. As soon as we start pushing things like procedural reform, deep-pocketed special interests get involved and it gets ugly. As scientists, we are probably not suited to mudslinging and political maneuvering. Not to mention that the more time we spend on lobbying, the less time we have for science. Some of us have the ability to lead the Movement and influence governments, but most of us are ill-equipped to combat the powers-that-be behind politics.

That brings up the end game question: Is it enough to give the right answers to governments even if they ignore them? Have we done our duty as scientists if we just ‘show up at the doorstep’ and let government officials decide if we get to come in?

1 The original news article was deleted from the Chinese News Agencies’ website but can be found in the Internet Archive.

2 Source: Goh, Gabriel. “COVID Epidemic Calculator”. Day 0 is at least 3 symptomatic cases per million people, meaning there are potentially hundreds infected given the incubation period. Parameters used for predictions: a population of 106 million, a single initial case, contagiousness per person of 2.2, a rate of transmission of 0.73, an incubation period of 5.2 days and a mortality rate of 2%.

A note from me: I have sought to capture the empirical evidence and historical timeline as accurately as possible, but any errors in this blog post are my own.


Behind the specification curve

Analytical errors are a normal part of social science, as David Brady pointed out in the December 2019 IPM Newsletter. Whether conscious, unconscious or simply mistakes, we take certain paths in our research processes. All possible paths constitute researcher degrees of freedom, and they can lead to different findings. In fact, we can intentionally exploit paths to reach the results we want. Replications can work against this by detecting errors or identifying questionable decisions. We have few replications in sociology, and those working in IPM are no exception. Thus, even one replication of any given published study should be a scientific improvement.

But is it?

Let’s imagine a secondary data study that a replicator wants to reproduce. We now know, from a major social science journal, that even when researchers provide their code it rarely runs ‘right out of the box’; replicators often need additional information or materials from the authors (Janz 2015). Moreover, consider an R user trying to replicate Stata code: this requires writing brand new code, as in the sketch below. Replications with the simple goal of reproduction are not as straightforward as we think! Now let’s imagine a replicator has more complicated goals of testing generalizability or scrutinizing the original study. A replication is as prone to researcher degrees of freedom problems as the original study. This means replications might not be as useful as we hoped, and alone they cannot alleviate the ‘crisis of science’.
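
To make the Stata-to-R point concrete, here is a hypothetical sketch; the file and variable names are invented for illustration, and it assumes the sandwich and lmtest packages are installed. The original authors might share something like the Stata commands in the comments; the replicator has to rewrite them in R, including matching Stata’s ‘robust’ standard errors.

# Hypothetical replication snippet; 'cri_data.csv', 'policy_support' and
# 'immig_pct' are invented names, not any original study's materials.
# The shared Stata code might read:
#   use cri_data, clear
#   regress policy_support immig_pct, robust
library(sandwich)   # heteroskedasticity-consistent covariance estimators
library(lmtest)     # coeftest() to report the adjusted standard errors
d   <- read.csv("cri_data.csv")
fit <- lm(policy_support ~ immig_pct, data = d)
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))  # HC1 corresponds to Stata's ', robust'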

To be more effective, original studies and replications alike need to focus on model specification. As Muñoz and Young (2018) demonstrate, the cost of running billions of models is roughly zero: unless they are super-complicated, a computer can run them for us very quickly. Given that researchers accidentally or intentionally report only models supporting their claims, it is a useful exercise to check how alternative specifications might affect the findings. This does not mean running all possible models; doing so throws models into the results pool that are causally impossible. We need to carefully and logically select models based on plausible research decisions. For example, if a researcher does not weight their survey data, we should ask, ‘why not?’. If a researcher measures poverty as half the median income, we should explore alternative measures.

This figure demonstrates effect sizes (top) and model specifications (bottom) of analyses of the outcome variable “positive reciprocity” (a tendency to pay back favors). The figure was produced by Julia Rohrer (2018) and demonstrates that specification curves are both visually appealing and scientifically useful.

Putting together all combinations of plausible decisions, we arrive at a set of models that are theoretically defensible. A major gain is that we can see which decisions have an impact on the results, which indicates where we need to focus our future research. One of the newest ways to do this is ‘specification curve’ modeling, a close relative of p-curve analysis. These methods were originally introduced to detect p-hacking and publication bias, but we can extend them to standard research and replications, leading to gains in theory. They caught on in psychology recently (Rohrer 2018), and it is time for sociology to get behind the curve and move our discipline’s reliability ahead.
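
As a minimal sketch of what this looks like in practice, the R code below simulates survey-style data, enumerates a small grid of plausible analytical decisions (poverty threshold, weighting, an age control), estimates the same effect under every combination, and plots the sorted estimates. All variable names and decisions are invented for illustration.

# A toy specification curve: every row of 'grid' is one plausible analysis path.
set.seed(1)
n <- 2000
d <- data.frame(
  income = rlnorm(n, meanlog = 10, sdlog = 0.8),
  female = rbinom(n, 1, 0.5),
  age    = sample(18:80, n, replace = TRUE),
  weight = runif(n, 0.5, 2)
)
d$support <- 0.4 * (d$income < 0.5 * median(d$income)) + 0.3 * d$female + rnorm(n)

grid <- expand.grid(
  poverty_cut = c(0.5, 0.6),     # poverty as 50% or 60% of median income
  use_weights = c(TRUE, FALSE),  # apply survey weights or not
  control_age = c(TRUE, FALSE)   # include age as a control or not
)

estimates <- sapply(seq_len(nrow(grid)), function(i) {
  g  <- grid[i, ]
  dd <- d
  dd$poor <- as.numeric(dd$income < g$poverty_cut * median(dd$income))
  f <- if (g$control_age) support ~ poor + female + age else support ~ poor + female
  m <- if (g$use_weights) lm(f, data = dd, weights = weight) else lm(f, data = dd)
  coef(m)["poor"]
})

# The specification curve: sorted estimates across all plausible paths
plot(sort(estimates), pch = 19,
     xlab = "Specification (sorted)", ylab = "Estimated effect of poverty on support")
abline(h = 0, lty = 2)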

References:

Janz, Nicole. 2015. “Leading Journal Verifies Articles before Publication – So Far, All Replications Failed.” Political Science Replication Blog. Retrieved July 22, 2019.

Muñoz, John and Cristobal Young. 2018. “We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness.” Sociological Methodology 48(1):1–33.

Rohrer, Julia M. 2018. “Run All the Models! Dealing With Data Analytic Flexibility.” APS Observer 31(3).

Love is in the error term

A major segment of social science uses formal models and quantitative methods to explain social phenomena. That means researchers use mathematical symbols to define how they think the world works, and then find data to test it. For example, researchers want to explain social outcomes, like committing a crime, changing jobs, moving or having children. Why do some people do these things and not others? Researchers then speculate that other things cause these outcomes, like getting married, having a job or losing a job, how much money a person makes, how old they are, and all kinds of other stuff.

Next, researchers translate their theoretical ideas into a formal model. This means an equation. Before you stop reading, consider that an equation is just a theory expressed with symbols instead of standard language. For example,

Y = X

and this might represent that moving homes (“Y“) is caused by getting married (“X“). Another way of saying this is that Y depends on X, or moving depends on getting married.

But seriously, getting married does not always cause moving; sometimes it does. The better claim is that moving is a function of getting married. ‘Function’ means that marriage probably increases the likelihood of moving by some amount. Therefore, researchers add a modifier to X, like the letter b; if b were 0.3, getting married would increase the likelihood of moving by 30%. People also move when they don’t get married, so researchers add a constant a to account for the likelihood of a person moving for any other reason. If a were 0.3, then the average person would have a 30% chance of moving at any point.

Y = a + bX

Now, if a and bX could be used to perfectly predict whether a person moves or not, the researcher would have made a monumental achievement. Instead, what happens is that the researcher goes and observes moving and marriage behaviors, ideally in a random sample of the population of a society, and then applies the above equation to the sample data. Does the equation fit the data perfectly? No. Never. Researchers must admit that their theories come with uncertainty. Maybe moving depends on the type of home the person lives in, maybe it depends on whether a couple can afford to move, or whether it is even a couple rather than a single parent. There are nearly unlimited things that could cause moving. In addition, imperfect random samples, mistakes when observing or coding data, and so on lead to uncertainty in the results. This means the formal equation has to include e, an error term.

Y = a + bX + e
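
As a toy illustration of what ends up in e, here is a short R simulation using invented numbers that match the a = 0.3 and b = 0.3 from the text: even when marriage really does raise the chance of moving, a regression recovers a and b but leaves most of the variation in Y unexplained.

# Toy simulation: moving depends on marriage plus everything in the error term.
set.seed(42)
n       <- 5000
married <- rbinom(n, 1, 0.1)               # X: did the person get married?
a       <- 0.3                             # baseline chance of moving
b       <- 0.3                             # added chance of moving if married
moved   <- rbinom(n, 1, a + b * married)   # Y: did the person move?
fit     <- lm(moved ~ married)             # linear probability model
coef(fit)                                  # roughly 0.3 and 0.3
summary(fit)$r.squared                     # far below 1: the rest lives in e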

Researchers spend their time trying to reduce what is in the error term, so they are basically tasked with reducing uncertainty. If scientists want to fly a spaceship to Mars, they have to reduce all possible forms of uncertainty in the take-off, flight path, landing, electronics and so forth. But a social scientist who wants to predict how many people will move, where they will move or which people will move cannot reduce uncertainty that much. Sure, they can explain moving with job changes, family members getting sick, weather, hobbies, wars and many other observable things. But they will never fully explain moving. What I’m really saying here is that social scientists will never be able to perfectly explain human behaviors. It’s like this:

An Enlightened Physicist describes Sociology

Why is it so hard? If we could just measure all the things that people are doing and thinking, we could predict precisely what humans will do next, right? Maybe so, maybe not; but it is irrelevant, because we cannot measure everything about humans. Even if we could measure the precise actions of the network of neurons in one human brain numbering greater than all the stars in the universe, we probably couldn’t measure love. But something that we might refer to as love seems to be a major force guiding human actions. Both the effect of love:

“Love affects more than our thinking and our behavior toward those we love. It transforms our entire life”

– Thomas Merton

And the lack of love:

“Intense spiritual and emotional lack in our lives is the perfect breeding ground for material greed and overconsumption.”

– bell hooks

(both quotes from hooks 2000, pp. 187 & 105)

These suggest love is a prime mover of human decisions and behaviors. But we don’t know exactly what it is, and we have made no attempts that I am aware of to measure it in any large population survey. So,

when we add love as an X variable to our formal model, we cannot test anything, because we have not developed instruments to measure it. It remains in the void of all that variance in Y we simply cannot explain. Maybe we should start focusing some of our effort on that.

hooks, bell. 2000. All About Love: New Visions. William Morrow and Company, New York, NY. ISBN 0-688-16844-2.

Help. We can’t know the data-generating model.

Lack of data-generating models: a problem ransacking macro-comparative research efforts. We have strong data-generating theories. Institutions, politics, materials, procedures, conflicts and symbols help explain macro-level events, like the introduction of social security or the onset of war. But we can’t model these theories as causes of the data we observe; there is far more theoretical complexity than there are countries.

There just aren’t enough countries to provide reliable measures of central tendency, given these theoretical complexities. Even if our theory were as simple as ‘Y will be significantly different from some value (e.g., zero) given X’, we would need at least 10 countries. If we actually conducted a power analysis, it would tell us we need more like 200 countries, depending on how big a difference in Y we expect given X. I’m just gonna leave aside that countries are not a population in the typical sense (what is a country again, exactly? exactly).
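
A quick back-of-the-envelope power calculation in base R illustrates the problem; the effect sizes here are my own illustrative choices, treating countries as independent cases and testing whether Y differs from zero.

# How many countries would we need to detect a standardized effect of X on Y?
# One-sample t-test against zero, 5% significance, 80% power.
power.t.test(delta = 0.8, sd = 1, sig.level = 0.05,
             power = 0.80, type = "one.sample")  # large effect: n is roughly 15
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05,
             power = 0.80, type = "one.sample")  # small effect: n is roughly 200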

What should we do?

  • Option A: stick to qualitative case studies.
  • Option B: continue running various country-level regressions and ignore the problem.
  • Option C: wait until data from 180+ countries covering a period of 50 years becomes available.

Maybe we should just ask a better question: why do we need macro-comparative analysis if we already have good theory?

Answer: Reliability and utility.

Without systematic evidence we don’t actually have theories, only ideas or conjectures. Logic and in-depth case studies help define data-generating theories for one country in one time period. So we want to test whether these might apply to most countries in most time periods. In testing this we get Pomeranian.

Even with the same variables, regressions on country data over time go all over the place given only tiny changes to the estimation procedure. That was a finding of the CRI: the effect of immigration on social policy preferences could be anything.

To the right: Mr. Summerbottom showcases the size and direction of effects from macro-comparative regressions using the same data. Outliers cropped.

Even if we run all possible model configurations, and even if we consider Bayesian posterior distributions, we still don’t know whether we have a correct model specification, because we cannot truly test it. We can only make sweeping statements like, ‘in all possible model combinations, variable X was not significant, so it probably does not have an effect’. Too bad we might have some kind of suppression from an unobserved variable!

Don’t drink the water. There is no method that can substitute for human logic. Zombie modeling, where the researcher lumbers aimlessly through thousands of models looking for blood, doesn’t work. The answer is therefore D: none of the above.

We need new-school meta. We need to logically analyze researchers’ decisions. We can’t do much with the limited data out there for macro-comparative research, so we need to get down with specifications. By comparing specification curves and researcher decision trees, we can identify which decisions are critical and which are benign. Some programmers offer us R packages for this (thanks, Joachim Gassen, for rdfanalysis!).

Some models come to the same results regardless of whether the researchers apply weighting, use latent variables or correct for autocorrelation; others do not. When we identify the decisions a researcher made that carry the greatest potential to influence the results, we simultaneously identify where to focus our theoretical work, as in the sketch below. These decisions cannot be improved by running more models; instead they require, as Andrew Abbott once put it, ‘more sitting in our offices and staring at the wall’. We need to dig in and use logic, reflection and mental energy to improve our theories in small-N macro-comparative research.
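
Here is a sketch of what separating critical from benign decisions could look like in R; the decisions and effect sizes below are invented, not CRI results. For each decision, compare the typical estimate across its options; a large gap flags a decision that deserves theoretical attention.

# Invented table of specifications: three binary decisions and the estimate
# each full specification produced.
specs <- expand.grid(weights = c("yes", "no"),
                     latent  = c("yes", "no"),
                     ar1     = c("yes", "no"))
set.seed(7)
specs$estimate <- rnorm(nrow(specs), mean = 0.10, sd = 0.02) +
                  ifelse(specs$weights == "yes", 0.15, 0)  # weighting moves results here

# Gap between the median estimates under each option of each decision:
sapply(c("weights", "latent", "ar1"), function(dec) {
  meds <- tapply(specs$estimate, specs[[dec]], median)
  diff(range(meds))  # big gap = 'critical' decision; small gap = 'benign'
})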

Better than a computer. A crowd of researchers.

Crowdsourcing researchers: a new use for an old tactic. When Silberzahn and colleagues asked whether football referees are skin-color biased in their assignment of red cards, they brought a new meta to social research. Rather than the typical one research team, one project, they brought together twenty-nine teams. All got the same research question and the same data. What can we learn?

Hold on. A single academic strapped with programming skills can run just about every possible statistical model configuration. Level up this academic and they can program machine learning routines to tell us almost anything we need to know. So why do we need a crowd of researchers to analyze the same data?

The answer: theory.

Computers do not do theory. Computers crunch data. They can’t tell us where the data came from. Running every possible model in an effort to ‘test robustness’ means testing for things that do not exist. Even worse, it means taking the results of tests for things that cannot possibly exist and using them to draw conclusions about the robustness of a test for something that could exist. Throwing all possible variables into a model, or ordering variables in every possible configuration, will maximize statistical prediction, yes. But what good is predicting something that cannot exist?

An example: policymakers decide they want to increase the number of females in society. A computer determines that bearing children is a good predictor of being biologically female. Now, imagine that in a statistical model, being pregnant or ever having had biological children explains 75% of the variance in the sex of a given population (it’s probably about 100% at this point in human history, but let’s allow room for error). The policymakers, thanks to the computer, conclude that if 1,000,000 more people were pregnant, a predicted 750,000 of them should be female and only 250,000 male, plus a margin of error. Thus, they conclude that getting 1,000,000 people pregnant is likely to increase the number of females in their society by 250,000, assuming the society is currently 50% female.

Epic fail.

If we want to develop causal theories that provide useful knowledge for societies, human logic is necessary. Rather than having one computer report 8.8 million false positives after running 9 billion different regression models, humans can identify correct, or at least ‘better’, model specifications. In fact, we would not even know they are ‘false positives’ without a human to understand that getting someone pregnant does not turn them into a female, for example.

But relying on the logic of one human, or even one team of humans, is risky. Researcher degrees of freedom make their research unreliable. Their prior beliefs, knowledge, experience and context lead to variation in results. These priors lead them along different paths through the garden of research. With the same research question, and even the same data, researchers often come to different results, as demonstrated by the Silberzahn study and our Crowdsourced Replication Initiative (CRI).

Sounds like a meta-problem. So what good is crowdsourcing if we cannot rely on the crowd?

Answer: social interaction.

Crowdsourcing, when done with careful planning and central organization, allows participants to comment on, if not deliberate over, each other’s research choices. Suddenly meta-uncertainty turns into the power of meta-logic: not just one team and their narrow ideas, but a communal debate with diverse inputs. Both the Silberzahn et al. study and our CRI involved deliberation and voting on research designs. Combined with the growing area of specification curve analysis, crowdsourcing increases the credibility of social research at the level of the population under study and at the meta-level of the researchers themselves.

Finally, the relevance of crowdsourcing for collaborative theory construction is an untapped but promising avenue for the future. Crowd research departs from the current system, which favors individualism by rewarding the novelty of individual researchers. Crowdsourcing is instead a system of consensus building and direct responsiveness to theoretical claims. It could resolve the perpetual problem of scholars, areas and disciplines talking ‘past’ each other. If not consensus, it can at least identify critical unresolved questions to guide future research.

Crowdsourcing can move us toward open science in the Mertonian sense of a communalistic endeavor. To achieve this, all participants should be co-authors on the project, get to discuss each other’s models and theory, and get to update their own results during the process. We need machines to facilitate this kind of large-scale research, but they cannot produce communal, logical exchanges. For that we need to stick with the crowd.

It’s Crowdid

Meta-science, social inequality, methods, open science.

At its inception, this blog catalogs the nerve-wracking process of organizing, administering and evaluating the Crowdsourced Replication Initiative (CRI) and the massive amount of data the project generated.

A technical blog, looking at the ‘nitty-gritty’ of the methods necessary to carry out and present the findings of a project involving 88 research teams, almost 200 researchers, an online deliberation, four survey waves during the process, an experimental condition, a replication, an original research condition, and analysis and meta-analysis of the results.

A research question blog, asking bigger questions about meta-science, researcher reliability, reproducibility, social inequality and crowdsourcing.

In the future, this blog will address more.

Why blog? The future of science may involve a more interactive, hyperlinked and rapidly disseminated format. Blogs can facilitate this. In the social sciences, the turnaround time from research findings to published papers is somewhere around 3 to 4 years (counting rejections and R&Rs). Moreover, blogs offer space for discussing things that simply don’t fit in our awfully restrictive 6-12,000 word journal articles.

Open science calls for blogging. It’s free, fast and owes no debts. It circumvents the institutional problems of science. As an Open Science Fellow in the Freies Wissen program of Wikimedia Germany, a Catalyst for the Berkeley Initiative for Transparency in the Social Sciences (BITSS) and a concerned sociologist, I offer this blog as a contribution to the open science movement.