Open Science? What’s That?

That was the answer I got from Andrew Abbott at a Hogwarts-style dinner I was fortunate enough to attend a few years back, when I asked him after the meal, "What are your thoughts on open science?"

That’s right. Open science. Say something paradigm changing.

We have so many results and coherent arguments about the status of science. Making an impact here requires synthesizing them into something new. Not something purely constructive, but something that also subtly undermines all that came before: something was missing all along, and this new perspective supplies it. That's the problem.

Our formula for declaring theory and opinion in scientific writing is that it must be big. Paradigm-impacting. It makes you think. It expresses the scientist-author's authoritative intellect and ingenious perspective. With seemingly effortless prose, the scientist-author lays down an argument so convincing that it must be true. It was true all along. It is so obvious; why didn't I see it?, thinks the average reader.

If Diederik Stapel had never been caught, he would be a model for success. THE model for success in social psych. You don't know what the formula for success is? Ask an economist. There are three, maybe five journals in which you need to get published very early on. You have to have a job market paper that is known. You have to have connections. Otherwise you are on the sidelines, building an exit strategy. This is roughly true in the other social and behavioral science disciplines, yet we in sociology or political science mostly deny it. We are better than those status maniacs.

But it is publish-or-perish. And this is the model.

To publish requires word wizardry. The ability to take any findings and craft an argument that makes them seem relevant is more valuable than research method skills. To make any findings seem essential to our fundamental understanding of something important. That is how we do science. Framing.

Andrew Abbott has the goal of out-thinking, out-reading, out-writing and out-arguing everyone he encounters. Ask him if you ever meet him. He will not deny this. He is easily considered by most to be one of the greatest living sociologists. He was editor of AJS and published book reviews of random, late-night selections from the Oxford library under a pen name, because it was fun. Because that is what legends do, and then tell about. He once wrote that sharing all materials so that work is reproducible and replicable would make science boring.

As long as this is the model for success, we are not making real progress.

I sat at a dinner once with Ronald Inglehart. Every student, postdoc and professor there gawked with wide eyes. What was so appealing and enthralling? Why was this scientist a rock star for us? Was it his findings? Or was it his status? I think status. I think that is what scientists are seeking. I observe it. I feel it. If you have to have certain publications not just to pursue a scientific career, but to be at the 'forefront' of a subfield, then does it really matter what is in those publications? As long as you get to claim that status, that is it. You win. Why do athletes take steroids and cheat? Is it because of their great passion for the sport? Or is it to win status?

I entered a Master's program in order to learn about the problems associated with a topic that was pressing for social justice and democracy. I still want to do this. But status was dangled in front of my eyes. 'Publications are the currency of our trade.' The professor who told me this was absolutely correct. Now I want to do open science, but I have to perform wizardry with words to be recognized as central to the open science movement. This makes me uneasy to the point of nausea.

How can science become trustworthy and effective, if we have to be witty, cutthroat and/or famous in order to take part in it? How can we overcome this delusion? When will we stop and say enough is enough?

This type of honesty is exactly what John Levi Martin advised me not to put out on the intrawebs, because it will come back to bite me someday. Image (presentation) is everything. Warhol or Abbott, they figured that out long ago. I guess then, for my own peace of mind, I extend my hand. Bite away, you biters.

Human-Induced Climate Change

Show me the data


Nate Breznau

A quick guide to human impact on the environment

Carbon Emissions

The amount of carbon we have released into the atmosphere tracks the Industrial Revolution and subsequent industrial growth almost perfectly. This is easily estimated from ice core samples.

The increase in atmospheric carbon is unquestionably a result of human activity

From ice core samples we can measure the natural random and cyclical variation in atmospheric carbon. Never in 800,000 years have carbon levels looked like those observed since the Industrial Revolution.

source

Climate Warming & Emissions

There is a striking correlation between emissions and global average temperatures. Although temperatures fluctuate regularly, their fluctuations track upward following human-caused carbon emissions.

source

Microplastics

Microplastics are produced only by humans. Note that there are several parts of the ocean where we find more than 10 pieces per cubic meter (dark purple circles and dark red stars). There are over 1.3 billion cubic kilometers of water in the ocean.

source

Polar Ice Degradation

We do not have accurate data from long ago, but since 1990 the Arctic ice cover has declined dramatically in extent. This decline also contributes to rising sea levels. Here is a comparison of summer and winter ice in 1990 versus 2020.

source

Scientific Consensus as of 2019

A review of all climate science studies published in peer-reviewed journals between 2005 and 2015 suggested a consensus of somewhere between 83 and 97% of climate scientists that humans have caused the major environmental changes observed over the last 100 years [1].

A newer study of all of the just over 11,000 articles across all disciplines and journals published in 2019 revealed that this consensus had reached 100% (within a natural margin of error) [2].

Bibliography

1. Cook, J. et al. Consensus on consensus: a synthesis of consensus estimates on human-caused global warming. Environ. Res. Lett. 11, 048002 (2016).

2. Powell, J. Scientists Reach 100% Consensus on Anthropogenic Global Warming. Bulletin of Science, Technology & Society 37, 183–184 (2017).

A fake journal, algorithmic plagiarism and tricking Google Scholar

This is a case of a fake journal that passed Google Scholar's bots and algorithmic checks to become indexed. The journal contains articles that were completely plagiarized but assigned new titles, abstracts and authors.

See the updates at the end of this post for a table of plagiarized papers, and for a small win for open science after the domain host revoked the content at the plagiarized papers' URLs.

A colleague of mine was searching Google Scholar using certain words and the number two hit appeared with the following title and direct download link.

The PDF looks like a journal article. The journal has an ISBN instead of DOIs, but this is not unheard of. It even has an impact factor listed: 7.11, so it must be good.

It turns out that this seemingly interesting paper, whose alleged author is an expert in the area, is plagiarized.

How do I know it is plagiarized? Just ask the real authors of the paper, which appeared with a different title and abstract in Social Science Research one year earlier. Oh, and with identical word-for-word text, apart from some mistakes relating to equations – possibly a failed scraping algorithm.

The journal title is Journal of Current Research and Review, but it is extremely difficult to find through an internet search. Finding it requires endless scrolling (as of 10.08.2023), or removing the ending of the download URL to arrive at the index page.

Scrolling down the home page reveals the "Latest Articles". Here I find the same title that appeared in Google Scholar. However, there is no download link, only the title and abstract. In other words, the URL that Google Scholar found to download the paper is not reachable through any link on the journal's website.

How is it possible that Google Scholar would pick up and index such garbage?

I asked ChatGPT how Google Scholar works, and its answer suggests that someone, or something, had the advanced knowledge and programming skills to provide the right metadata and materials on the journal webpage to pass the automated Google Scholar bots that constantly scrape the web for academic papers and publications.

I prompted ChatGPT with "How does a journal get indexed in Google Scholar?" (09.08.2023) and received the following answer. Note that I removed the descriptions of each point, as they are mostly self-evident.

Google Scholar indexes scholarly literature, including journal articles, conference papers, theses, and other academic resources. In order for a journal to get indexed in Google Scholar, there are several steps and criteria to consider:

  1. Quality Content
  2. Digital Format
  3. Open Access and Accessibility
  4. Consistent Publishing Schedule
  5. Meta Tags and Structured Data
  6. Proper Article Metadata
  7. Citations and References
  8. Author Profiles

It’s important to note that Google Scholar’s indexing process is automated, and there is no formal application process for journals to get indexed. Google Scholar’s algorithms discover and index content based on various factors. However, journals can follow best practices to increase their chances of being indexed and improving their visibility within Google Scholar’s search results.

The journal website and metadata had to pass Google Scholar's algorithm. This is no small feat. Although Google does not reveal any usage of AI to evaluate journals for indexing, it clearly has an advanced system of scraping, filtering and crawling. Whoever, or whatever, designed this journal – or really this website that looks like a journal – understood how to pass the test. For example, the "Latest Articles" appear to be from "Volume 14", suggesting to crawling bots that the journal may have been published for 14 years.

This is of course false. There are no other articles readily accessible from the journal's website beyond the two that appear. The other article is also plagiarized, in case you were wondering. But a closer look reveals that the URL of the article I am discussing ends with the number "800".

https://zapjournals.com/Journals/index.php/jcrr/article/view/800

Changing this number yields other papers – at least as far back as number 795; anything earlier yields a 404 error. Moreover, for papers 795-798 I can only find titles and abstracts, not the full texts.
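For anyone who wants to check this kind of thing systematically rather than by hand, here is a minimal sketch in R (using the 'httr' package) of probing the neighboring article IDs and recording the HTTP status codes. It is an illustration, not the exact check I ran; the ID range is arbitrary.

```r
# Minimal sketch: probe neighboring article IDs on the journal's site and
# report the HTTP status (200 = a page exists, 404 = nothing there).
# Assumes the 'httr' package is installed; the ID range is illustrative.
library(httr)

base <- "https://zapjournals.com/Journals/index.php/jcrr/article/view/"

for (id in 790:805) {
  resp <- tryCatch(HEAD(paste0(base, id), timeout(10)), error = function(e) NULL)
  status <- if (is.null(resp)) NA else status_code(resp)
  cat(id, status, "\n")
}
```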

The journal website's main page yields something even stranger. It looks like what a computer science student might create as an example page to try to sell as a template, or to demonstrate the services on offer for a website-construction gig. The content has absolutely nothing to do with journals, academia or publishing.

It even comes with fake testimonials. I wonder whose pictures were stolen for this…

The real question for me is: 'what to do about this?' Clearly this is fraud and constitutes both an ethical and a legal infringement on science. Looking up the journal's domain with the ICANN lookup tool reveals that hosting is provided by the Lithuanian company Hostinger.

This company's website suggests they are legitimate. They provide contact information for reporting abuse, so that is what I did, sending them the information in this blog post.

Hostinger replied within two days and told me they take abuse of their services very seriously and asked me for more evidence so they could pursue the case. I gave them the two links for the plagiarized papers and their original versions published one year earlier in Social Science Research.

https://zapjournals.com/Journals/index.php/jcrr/article/download/799/1237

https://zapjournals.com/Journals/index.php/jcrr/article/download/800/1238

Word for word plagiarized from:

https://www.sciencedirect.com/science/article/pii/S0049089X22000321

https://www.sciencedirect.com/science/article/pii/S0049089X22000333

For all the negative publicity surrounding Elsevier, they are still a player in the academic world. In 2020 they owned roughly 16% of the academic publishing market. If there is anyone who would want to prevent abuse of their services, including plagiarism of their work, it is Elsevier. They have a trove of lawyers fighting and winning battles to protect their content across the world. As the two plagiarized papers that I can download are from the Elsevier journal Social Science Research, it made sense to contact them as well. Elsevier’s due process suggests contacting the journal editor first. Thus, I have sent this information to the lead editor.

The journal also lists an ISBN. But this number is fake and returns an invalid result in the ISBN lookup tool.
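As an aside, one quick local sanity check – separate from the registration lookup I actually used – is the ISBN-13 check digit. Below is a small sketch in R with a textbook example ISBN rather than the journal's; a number that fails this check cannot be genuine, although passing it does not prove the ISBN is registered.

```r
# Validate an ISBN-13 check digit: weights alternate 1,3,1,3,... and the
# weighted sum of all 13 digits must be divisible by 10.
is_valid_isbn13 <- function(isbn) {
  digits <- as.integer(strsplit(gsub("[^0-9]", "", isbn), "")[[1]])
  if (length(digits) != 13) return(FALSE)
  weights <- rep(c(1, 3), length.out = 13)
  sum(digits * weights) %% 10 == 0
}

is_valid_isbn13("978-3-16-148410-0")  # TRUE: the standard example ISBN
is_valid_isbn13("978-3-16-148410-1")  # FALSE: last digit altered
```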

Google certainly would not want its products indexing fake journals with plagiarized papers, so I took the liberty of contacting them as well.

What is striking about the journal is that it has a long editorial board list. Internet searching reveals that these are real scholars. I contacted them as well. I will continue to report on this case as it unfolds.

The question for me is: 'what motivated someone to create this site?' There is clearly no profit associated with it. There is also no status gain, because the plagiarized papers have authors assigned to them who are not the original authors and who are clearly not backers of the fake journal. They are highly established scholars in their respective subfields. These falsely assigned authors are perfect examples of what an AI might choose to assign to a given topic. I tested this by asking ChatGPT whether Seamus McGuinness could have written the abstract. The response points at AI as a possible source both for re-writing the abstracts of the plagiarized papers and for finding plausible authors to assign to them. ChatGPT said:

Yes, Seamus McGuinness could be a potential author who might have written this abstract. Seamus McGuinness is known for his research on labor market issues, including education-job mismatches, gender disparities, and remote work. His expertise aligns with the themes discussed in the abstract, making him a plausible candidate as one of the authors who could have written it.

It remains a mystery for now what is going on with this website and the fake journal. Was it a computer science project that accidentally got picked up by Google Scholar, one never intended for public consumption? Was it an attempt to create a journal while hiding its real content until it could attract real submissions? Was it entirely AI generated, to showcase the power of an AI?

[Update 14/08/2023]

Thanks to Random Cat on Twitter, I learned that there are more papers than just those two. Using Google Scholar, they searched by journal.

This allowed me to compile a table of plagiarized papers.

Table of Plagiarized Papers in JCRR, found via Google Scholar

Original Title | Original Author(s) | Original Journal | JCRR Title | JCRR Authors | Link to Original Article | Link to Plagiarized Article
The motherhood wage gap and trade-offs between family and work: A test of compensating wage differentials | Nick Wuestenenk & Katia Begall | Social Science Research | COMPENSATING WAGE DIFFERENTIALS AND THE MOTHERHOOD WAGE GAP: A COMPARATIVE ANALYSIS | HK Kleven & CL Landais | link | link
Conflicting signals: Exploring the socioeconomic implications of gender discordant names | Andrew Francis-Tan & Aliya Saperstein | Social Science Research | BREAKING BOUNDARIES: EXAMINING THE INTERSECTION OF GENDER DISCORDANT NAMES AND SOCIOECONOMIC ATTAINMENT | AL Roberts & M Rosario | link | link
Gender overeducation gap in the digital age: Can spatial flexibility through working from home close the gap? | Ana Santiago-Vela & Alexandra Mergener | Social Science Research | BRIDGING THE GENDER OVEREDUCATION GAP: EXPLORING THE ROLE OF WORKING FROM HOME IN THE DIGITAL ERA | SMG McGuinness | link | link
How the Great Recession changed class inequality: Evidence from 23 European countries | Jad Moawad | Social Science Research | CLASS INEQUALITIES IN THE WAKE OF THE GREAT RECESSION: A STUDY OF 23 EUROPEAN COUNTRIES | PD Allison | link | link
Higher education and high-wage gender inequality | Natasha Quadlin, Tom VanHeuvelen & Caitlin E. Ahearn | Social Science Research | ASSESSING THE CONTRIBUTION OF EDUCATION TO GENDER WAGE DISPARITIES IN HIGH-EARNING PROFESSIONS | J Jacobs | link | link
Knowledge Discovery: Methods from data mining and machine learning [b] | Xiaoling Shu & Yiwan Ye | Social Science Research | TRANSFORMING RESEARCH: EXPLORING THE INTERPLAY OF DATA MINING, MACHINE LEARNING, AND KNOWLEDGE DISCOVERY | Robert M. Bond & Christopher J. Fariss | link | link
SOCIAL MEDIA ADDICTION: PROPOSED INDICATORS AND STAGES | Zakaria I. Saleh & Omar Zakaria Saleh | International Journal in Commerce, IT and Social Sciences | UNPACKING THE CYCLE OF SOCIAL MEDIA ADDICTION: UNDERSTANDING SYMPTOMS, PROGRESSION, AND RECOVERY [a] | OZ Saleh | link | link

[a] Published twice in JCRR with different authors

[b] Not indexed in Google Scholar

All are from Social Science Research except one. There are also other papers for which I cannot find an original. These might actually be original papers, or stolen working papers that are not easy to find online.

Interestingly, some of the papers that do not appear to have been plagiarized are displayed differently in PDF form, including contact details for the journal. I emailed that address as well to inform them that they are committing ethical and legal fraud.

[Update 23.08.2023]

Hostinger investigated the reported problem, determined that the user of their domain had violated the terms of ethical/legal usage, and removed the plagiarized papers. However, the journal website still appears. In other words, a journal that obviously and intentionally plagiarized several articles, possibly to boost its reputation and encourage others to submit and thus pay the 45-dollar fee, is still out there lurking. Moreover, this journal is part of a larger company called Zenodo Publishing. On their main page they list dozens of journals. If I had to guess, I would assume these journals are also predatory and may contain plagiarized content. Only further investigation will tell.

More to come.

ChatGPT teaches open science

I asked ChatGPT to talk about open science. This post is the unabridged conversation. ChatGPT is an algorithm trained using machine learning. It is what most would refer to as Artificial Intelligence. And intelligent it apparently is.

~

Nate: I am teaching a course on open science. Could you summarize the key arguments of the open science movement?

ChatGPT: Certainly! The open science movement aims to promote greater transparency, collaboration, and accessibility in scientific research. Here are some key arguments and principles of the movement:

  1. Open access to scientific publications: Advocates for open science argue that research findings should be made freely available to everyone, rather than locked behind paywalls or subscription fees. This allows for wider dissemination of knowledge, which can benefit both researchers and the general public.
  2. Data sharing: Open science also emphasizes the importance of sharing data and methods, allowing others to reproduce and build upon research findings. This can improve the quality of research and promote greater collaboration across scientific disciplines.
  3. Reproducibility and transparency: Open science advocates for greater transparency in research methods and data analysis, as well as reproducibility of findings. This can help to prevent fraud and errors in scientific research, and can promote greater trust in scientific results.
  4. Collaborative and participatory science: Open science encourages collaboration among researchers, as well as engagement with stakeholders and the public. This can help to ensure that research is more relevant and useful to society, and can promote greater accountability and transparency in the research process.
  5. Open source and open tools: Open science also advocates for the use of open-source software and tools, which can be freely accessed and modified by others. This can improve the reproducibility and efficiency of scientific research, and can promote greater collaboration and innovation.

Overall, the open science movement seeks to transform scientific research into a more transparent, accessible, and collaborative endeavor, with the ultimate goal of advancing knowledge and benefiting society.

Nate: As I said above, I am teaching a course on open science. It is targeted at university students. Could you recommend some introductory readings on this topic?

ChatGPT: Certainly! Here are a few introductory readings on open science that you may find helpful for your course:

  1. “The Open Science Training Handbook” by the Open Science MOOC: This is a comprehensive guide to open science, covering topics such as open research data, open access publishing, and reproducible research. The guide includes both theoretical and practical advice, as well as case studies and examples.
  2. “Ten Simple Rules for Reproducible Computational Research” by Sandve et al. (2013): This article provides a set of guidelines for ensuring that research is reproducible, including recommendations for version control, documentation, and sharing data and code.
  3. “Open Science: A Very Short Introduction” by Ana Brandusescu (2020): This short book provides an accessible introduction to open science, covering the history and principles of the movement, as well as key topics such as open access publishing and citizen science.
  4. “Open Science Framework (OSF): A Tutorial” by the Center for Open Science: This tutorial provides an overview of the Open Science Framework (OSF), a free and open platform for managing research projects and sharing data and code. The tutorial covers the basics of setting up a project on the OSF, as well as using the platform to collaborate with others and share research outputs.
  5. “The FAIR Guiding Principles for scientific data management and stewardship” by Wilkinson et al. (2016): This article presents a set of guidelines for ensuring that research data is Findable, Accessible, Interoperable, and Reusable (FAIR). The principles provide a framework for managing research data in a way that promotes openness and reusability.

These are just a few examples of the many resources available on open science. Depending on the focus of your course, you may also want to look for readings on specific topics such as open access publishing, preprints, or data sharing.

Nate: What are some actions students can take during their studies that will make their research more open? Open, as in more aligned with the goals of open science.

ChatGPT: There are several actions that students can take during their studies to make their research more open and aligned with the goals of open science. Here are a few suggestions:

  1. Share research data: One of the key principles of open science is making research data openly available. Students can make their research more open by sharing their data in a public repository, such as the Open Science Framework or Zenodo, and including a data sharing statement in their publications. Students can also share their data with collaborators and other researchers upon request.
  2. Use open source software: Students can use open source software, such as R or Python, for their data analysis and statistical modeling. Open source software is freely available, and its code can be accessed, modified, and shared by anyone, which promotes transparency and reproducibility in research.
  3. Practice pre-registration: Pre-registration involves submitting a detailed description of the research design, methods, and analysis plan prior to conducting the study. This can help prevent bias and increase transparency in research. Students can pre-register their research on platforms such as the Open Science Framework or AsPredicted.
  4. Use open access publishing: Open access publishing makes research articles freely available to anyone, which can increase the visibility and impact of the research. Students can publish their research in open access journals or deposit their articles in open access repositories, such as arXiv or PubMed Central.
  5. Collaborate openly: Collaboration is an important aspect of open science, and students can promote openness in their collaborations by sharing their research questions, data, and analysis plans with their collaborators. Students can also use collaborative platforms, such as the Open Science Framework or GitHub, to share and manage their research projects.

These are just a few examples of actions that students can take to make their research more open and aligned with the goals of open science. By promoting transparency, reproducibility, and openness in their research practices, students can help advance the principles of open science and contribute to a more open and collaborative research culture.

Nate: Are there arguments against practicing open science? If yes, where do these arguments come from?

ChatGPT: Yes, there are arguments against practicing open science, although they are often debated and may not be universally accepted. Some of the main arguments against open science are:

  1. Lack of incentives: Critics argue that there are few incentives for researchers to engage in open science, particularly when it comes to sharing data or making research available in open access. For example, researchers may be concerned about the impact of open sharing on their ability to publish in high-impact journals, obtain grant funding, or advance their careers. These concerns may be particularly acute for early-career researchers, who may be under pressure to produce high-profile publications and may not have the resources or institutional support to engage in open science practices.
  2. Risk of misuse: Some researchers argue that open science practices, particularly around sharing data, may increase the risk of misuse of research results. For example, in some fields, sharing data could lead to concerns about confidentiality, privacy, or intellectual property. Researchers may be concerned about how their data will be used, particularly if it could be used for purposes they did not intend, or if it could be used to generate findings that are inconsistent with the original research.
  3. Cost and logistics: Practicing open science requires resources, particularly around data management, sharing, and archiving. These costs can be particularly high for large or complex datasets, which may require specialized tools or infrastructure. Researchers may also face logistical challenges around sharing and storing data, particularly if they are working with sensitive or proprietary data.
  4. Misinterpretation or misrepresentation: Some researchers argue that open science practices may increase the risk of misinterpretation or misrepresentation of research results. For example, if research results are shared before they have been properly vetted, this could lead to the spread of inaccurate or incomplete findings. Researchers may also be concerned about the potential for misrepresentation of their work, particularly if their research is taken out of context or used for purposes they did not intend.

These arguments against open science are often debated, and there are counter-arguments and efforts to address some of the concerns raised. However, it is important for researchers to be aware of the potential challenges and risks associated with open science practices, and to carefully consider how to balance the benefits and costs of openness in their own research.

Nate: Can you recommend any readings that are highly critical of open science?

ChatGPT: While there are some critiques and debates around various aspects of open science, it is important to note that these are often specific to certain aspects of open science and not necessarily a wholesale rejection of the movement. However, here are a few examples of critical perspectives on certain aspects of open science:

  1. “Against Transparency” by Carl Elliott: In this article published in The Chronicle of Higher Education in 2014, Elliott argues that the push for transparency in research can lead to a culture of surveillance and mistrust. He suggests that the emphasis on open access to data and methods may create an environment where researchers feel they are being constantly watched and evaluated, which can ultimately have negative effects on scientific creativity and collaboration.
  2. “Why the ‘Open’ Philosophy Can Harm Science” by Thomas Basbøll: In this blog post published in 2018, Basbøll argues that the open science movement can create unrealistic expectations for research, particularly when it comes to the idea of “openness” as an unqualified good. He suggests that the focus on openness can lead to a neglect of other important values in science, such as quality, relevance, and creativity.
  3. “The Five Stages of Open Science Grief” by Cameron Neylon: In this blog post published in 2014, Neylon describes his experience of grappling with the challenges and limitations of open science, particularly around issues of data sharing and reuse. He suggests that the open science movement can sometimes oversimplify complex issues and create unrealistic expectations for what openness can achieve.

It is important to note that these are just a few examples of critical perspectives on open science, and that there are also many scholars and researchers who support the principles and practices of open science. As with any complex and rapidly evolving movement, there are likely to be ongoing debates and discussions about the benefits and challenges of open science, and it is important for researchers to engage critically with these issues and develop their own perspectives.

Nate: Do you think AI can improve the implementation of open science practices? If yes, how?

ChatGPT: Yes, I believe that AI (artificial intelligence) can potentially improve the implementation of open science practices in a number of ways. Here are a few examples:

  1. Data management and sharing: AI can be used to help manage and share research data in a more efficient and effective way. For example, AI tools can help automate data cleaning and formatting, making it easier for researchers to share data in a usable format. AI can also be used to help identify sensitive or confidential data that may need to be redacted or protected before sharing.
  2. Open access: AI can help increase access to research by providing tools for automatic translation and summarization. For example, an AI tool could automatically translate an article written in one language into another language, making it accessible to a broader audience. AI can also be used to summarize articles or research papers, making it easier for readers to quickly understand the key findings and implications.
  3. Peer review: AI can be used to help improve the efficiency and quality of peer review. For example, AI tools can be used to automatically screen articles for plagiarism or identify potential conflicts of interest among reviewers. AI can also be used to help identify potential biases in the peer review process, such as reviewer bias or gender bias.
  4. Reproducibility: AI can be used to help increase the reproducibility of research by automating some aspects of the research process. For example, AI can be used to automatically generate code or scripts, making it easier for other researchers to replicate the analysis. AI can also be used to identify potential errors or inconsistencies in the data or analysis, improving the quality and reliability of the research.

Of course, it is important to note that AI is not a panacea for all the challenges of open science, and there are still many issues that require careful consideration and human input. However, AI can potentially help make open science practices more efficient, effective, and accessible, and there is a growing interest in exploring the use of AI in this context.

Nate: Do you think AI poses a threat to science, in particular principles of open science?

ChatGPT: While there are concerns about the potential risks and challenges associated with the use of AI in science, I do not believe that AI itself poses a direct threat to the principles of open science. Rather, it is the way in which AI is developed, deployed, and used that can potentially pose challenges or risks to open science practices.

One concern is that AI can potentially exacerbate existing inequalities in scientific research. For example, if AI tools are developed and trained on biased data or by biased algorithms, they may perpetuate or amplify existing biases in research. This could have a negative impact on the principle of equity, which is a core component of open science.

Another concern is that AI could be used to automate aspects of research that require human judgment and interpretation, potentially leading to a loss of creativity, intuition, and critical thinking in science. This could impact the principle of transparency, as researchers may rely on automated tools rather than engaging in open and collaborative inquiry.

Additionally, there are concerns about the potential misuse of AI in science, such as the use of AI to generate fake research or manipulate data. This could undermine the principle of integrity, which is a core component of open science.

Overall, I believe that AI has the potential to both enhance and challenge the principles of open science, depending on how it is developed and used. It is important for researchers and stakeholders to engage in critical discussions and careful evaluation of the ethical, social, and legal implications of using AI in science to ensure that the principles of open science are upheld and advanced.

Nate: Speaking of reproducibility. If I asked you the same questions in the future, would you give identical answers?

ChatGPT: As an AI language model, I would provide the same response to the same question every time, assuming that the context and information available to me are the same. However, it is important to note that my responses are generated by machine learning algorithms and are based on patterns and relationships learned from large datasets. While I strive to provide accurate and reliable information, my responses may not always be perfect, and they may be subject to limitations or biases in the underlying data or algorithms. Additionally, if the context or information available to me changes, my responses may also change accordingly.

Nate: Are you processing new information? In other words, still learning and evolving?

ChatGPT: Yes, as an AI language model, I am designed to learn and evolve over time as I process more information and receive updates from my creators. My responses are based on the patterns and relationships learned from large datasets and are continually refined and updated as new data becomes available. This means that my responses can evolve and change over time as I learn from new sources and refine my understanding of different topics.

Nate: I am writing a blog post in my blog ‘Crowdid’ by me, Nate Breznau. The blog is an academic blog hosted by Hypotheses.de. Would it be ok if I posted our conversation in my blog?

ChatGPT: As an AI language model, I do not have any personal or legal rights over our conversation. However, I should note that the content of our conversation is generated by machine learning algorithms and is provided for informational purposes only. While I strive to provide accurate and reliable information, my responses are based on patterns and relationships learned from large datasets and may not always be perfect or fully accurate. It is ultimately up to you to decide whether you want to share our conversation in your blog, but I would recommend that you provide appropriate context and citations to help ensure that your readers have a clear understanding of the content and its source.

Nate: Thank you, this has been a very enlightening conversation. Could you provide me with some more information about you, ChatGPT, and how my blog readers could understand how you learned to become an AI?

ChatGPT: I’m glad to hear that you found our conversation enlightening! As an AI language model, I was created by a team of researchers at OpenAI, a research organization focused on developing advanced AI technologies for the betterment of society. My development was guided by a large corpus of text data, which was used to train deep neural network models to generate human-like responses to natural language input.

The training process involved exposing the models to large amounts of text data and using sophisticated algorithms to learn patterns and relationships in the data. Over time, the models became better at understanding natural language and generating coherent, human-like responses to text input.

It’s worth noting that my development was a collaborative effort that involved a diverse team of researchers, engineers, and linguists, as well as large amounts of computing power and resources. While I can generate responses to a wide range of questions and topics, my responses are generated based on patterns learned from the data and do not represent a true understanding or consciousness of the topics being discussed.

If you would like to provide your readers with more information about how I was developed and how I work, I would recommend checking out the OpenAI website, which has a wealth of resources and information on AI and machine learning.

Meta-reproducibility crisis: Software edition

It's not just a 'replication crisis', it's a reproducibility crisis. Forget about falsification if one cannot even reproduce the workflow of another. Efforts are underway to improve transparency, which enables reproduction. Good. But what if, even with the original data, researchers cannot reproduce numerical results simply because of where they are in time and space? What if different statistical programming languages, or different packages and versions of those languages, actually lead to different findings? In the Crowdsourced Replication Initiative we demonstrated that, even using the same data and models and with access to the original Stata code, only 80% of independent replicator teams could reproduce the numerical results.

Based on my experiences in this area, I cannot help but have a strong hunch that software and package versions threaten the computational reproducibility of our research. The fact that Stata's round() rounds .5 away from zero while R's round() follows the IEC 60559 standard and rounds halves to the nearest even digit already suggests that the same models might produce different results across software simply because of rounding. Recently, I found another reason to believe this hunch.
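Here is a minimal sketch of the rounding difference in base R; the Stata behavior is noted in comments rather than run here.

```r
# R's round() follows IEC 60559 and rounds halves to the nearest even digit;
# Stata's round() rounds halves away from zero.
round(0.5)  # 0 in R; Stata's round(0.5) returns 1
round(1.5)  # 2 in R; Stata also returns 2
round(2.5)  # 2 in R; Stata returns 3
```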

I am a participant in SCORE, a massive collaboration led by the Center for Open Science to systematically investigate the reproducibility of social science research. I agreed to attempt a computational reproduction of the study "Age, Inequality, and Reactions to Marketization in Post-Communist Central and Eastern Europe" by Horvat and Evans (2011).

Despite some potential differences between the data I received from the original authors and those reported in their tables, I was able to produce similar results. Similar, but not especially close. The particular coefficient I was interested in came out at -0.39 in my ordered probit (cumulative link) model, whereas their original was -0.22. On the probit scale (or translated into a predicted probability), this is potentially a large difference. I used R, but was not satisfied with my results. The original authors most likely used Stata, as their public data came as a .dta file – Stata's native storage format. Also, let's be honest, hardly any social scientists were using R before 2010, when they likely ran their analyses. My curiosity and suspicion led me to run the model in Stata. There I got a coefficient of -0.21. Almost identical to their original study, which is not surprising given the minor variation between the case numbers they reported and those in the public data file. I was still not convinced that these coefficients were actually different, because the models were quite complex. So I plotted the predicted probabilities to be sure that the gap was not a statistical artifact of some model component (the intercept cut-points, for example).

But my hunch was supported. The same models run in R and Stata led to different predicted probabilities. The figure below shows that, across many former Communist Central and Eastern European countries, respondents aged 60+ were less optimistic about their living standards over the next 5 years. One of the authors' critical findings was that this gap increased from 1993 to 2007, in particular because the 60+ group became even less optimistic; although in fairness this is difficult to conclude because of all the other variables in the model. What is clear is that the groups were 0.45 points apart in 1993 and 0.65 apart in 2007 (see Table 5 in the original study). When I plot the predicted values, I find that the results are similar but not identical in terms of the 1993-to-2007 change, and that the probabilities are quite different across software.

I used Stata v15 with the 'oprobit' command, and R's 'MASS' package with the 'polr' function. Despite identical data (and case numbers!), the 'polr' routine predicts a much higher probability that respondents answer their standard of living will fall or fall a great deal than Stata does. Although the relative change between age groups is similar – with a slightly steeper negative slope in R – the lines are pretty far apart, and even further apart for the 60+ age group. Without unpacking each package and the exact estimation routine taking place therein, I cannot yet say why. My own statistical and software abilities are by no means exceptional, but they are certainly above those of the average social scientist. Thus, it would be totally unrealistic to expect any social scientist to understand the entire routine taking place within polr or oprobit. If they did, they wouldn't need the packages and could just write their own cumulative link routine!
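For readers who want to try this themselves, here is a hedged sketch of the kind of R code involved – not the original SCORE reproduction script; the file name and variable names are hypothetical placeholders.

```r
# Sketch of an ordered probit (cumulative link) model in R, analogous to
# Stata's -oprobit-. File and variable names are hypothetical placeholders.
library(haven)  # read the authors' Stata .dta file
library(MASS)   # polr() for cumulative link models

dat <- read_dta("horvat_evans_public.dta")
dat$optimism <- factor(dat$optimism, ordered = TRUE)  # polr() needs an ordered factor outcome

fit <- polr(optimism ~ age_group + year + country,
            data = dat, method = "probit", Hess = TRUE)
summary(fit)  # coefficients on the probit scale

# Predicted category probabilities for a hypothetical respondent profile
newdat <- data.frame(age_group = "60+", year = "2007", country = "Hungary")
predict(fit, newdata = newdat, type = "probs")
```

Running the equivalent model in Stata and comparing the predicted probabilities (for example via margins after oprobit) is the comparison behind the figure discussed above.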

The implication is that using different software can lead to a lack of reproducibility. As if we did not have enough to worry about in the reproducibility arena already.

Sci-hub. Good for science, otherwise mostly harmless

Sci-hub is the Pirate Bay of academic journal articles. Its service is mostly illegal because it collects paywalled articles and makes them publicly available online via an indexed search. This is copyright infringement. People love it. The coverage is incredible: many journals have over 98% of their articles covered.

Frustrated with a lack of access to scientific articles, Alexandra Elbakyan of Kazakhstan founded the Sci-hub repository as a graduate student in her early twenties. Her site subsequently provided more open access to scientific knowledge than anyone in the history of science. She was named one of Nature's people of the year in 2016; yes, that Nature of the mega-profit publisher Springer Nature, which promotes open access by charging a ten-grand APC.

Reminiscent of the RIAA's takedown of Napster in 2001, Elsevier took legal action against Sci-hub in 2015, starting in the U.S. and quickly moving to other countries. The campaign is international because copyright law is organized country by country, making it very difficult to pursue Sci-hub, which exists in a cyberspace of mirrors and provides something that is unquestionably in the public interest and a basic UN human right.

Although there are allegations of security breaches that could lead to identity theft or other hacking of university servers, I am not aware of a single piece of evidence that Sci-hub has done anything other than 'steal' academic publications. It is not a threat to sovereign nation states, and it doesn't encourage sociopathic behavior. It is a form of rebellion against the plague that for-profit publishing unleashed on science, and a way to promote open science. Of course, Elsevier was not wrong in its legal claim of copyright infringement. Elsevier wants researchers to pay for their articles, and its minions see Sci-hub as causing profit losses. But the evidence suggests this is nonsense. Elsevier is wasting its time, precious time it could use to sponsor arms fairs, create journals and sell them to big pharma, or try to patent online peer review and force journals to pay to use it.

First, let's look at who is downloading Sci-hub content. Figure 1 shows the top ten countries by total downloads in February 2022, compared with their total populations. Relative to their populations, the U.S. and France are home to the most downloads per capita as of the most recent data.

Figure 1. Sci-hub downloads by country, February 2022. Image adapted from Owens (2022) with the addition of Wikipedia population data (in millions).

Next, let's think carefully about how publishing subscriptions work, through two typical scenarios of a researcher downloading from Sci-hub.

Scenario 1. A researcher in the Global South downloads articles from Sci-hub. We know this is good for science. In fact, this is science: it is active dissemination of useful knowledge. Merton would be pleased. The first question is easy: is this researcher getting something for free that they should be paying for? Yes, they are receiving an illegal good, copyrighted material for free. The second question is also easy: would this researcher pay for the article if Sci-hub did not exist? No. They are presumably working for a fraction of what a researcher earns at a Global North university and do not have a budget of $25-60 for each article they need. Their university also cannot afford millions of U.S. dollars for an Elsevier subscription. The scholar either gets the article from Sci-hub or some other green open access source, or does not use the article. If anything, it is a small loss for the publisher when someone who would use their article cannot access it, because a potential citation to one of their articles is better than nothing.

Scenario 2. A researcher in the Global North downloads articles from Sci-hub. This researcher has either direct or indirect access to every published article that exists. Most universities have subscriptions to the articles their researchers are most likely to need, and when the university does not have a subscription, inter-library loan will get them roughly any article. It's not foolproof, but within a margin of error, Global North researchers have legal access. Using Sci-hub? Yes, this researcher is getting something for free that they should pay for; but their university, or a university in their library network, already pays for access, so preventing them from downloading does not put any new money in the hands of the publisher.

Shutting down Sci-hub will not lead to any increase in profits for publishing companies. Elsevier, in its tantrum, would argue otherwise. When universities and their libraries finally started to turn on predatory publishing houses like Elsevier and cancelled contracts, profits declined. In this case the universities do have the money to pay for a subscription. However, Projekt DEAL and the UC system's boycotts asked Elsevier to sign a more reasonable and less draconian version of the subscription model, and Elsevier refused. That is on Elsevier. It has nothing to do with Sci-hub. Again, no money changes hands because of the boycott, and the boycott has nothing to do with universities implicitly encouraging their students to download illegal content.

Let's look at more evidence. Figure 2 shows Elsevier's profits in recent years. Sci-hub was in full effect by the mid-2010s. Interestingly, profits kept growing. They grow, and grow, and grow, until something else happens that has nothing to do with Sci-hub. The universities began to wake up from their nightmares and realized they were being abused by publishers like Elsevier. In 2019, they started boycotting, demanding that publishers sign collective contracts rather than charge case by case, and that they greatly reduce their fees. And only in 2019 does the year-over-year profit growth suddenly reverse direction.

Figure 2. Publicly available investor information from RELX

Elsevier's profits only started to decline after they got canceled. Sci-hub has been irrelevant: those who cannot afford subscriptions will not pay, and those who can already have access via their libraries, so Sci-hub downloads cost the publisher nothing. The exception is Elsevier's own refusal to support science, which is causing a self-inflicted profit loss. Sci-hub is good for science; it's mostly harmless otherwise.

Teaching to empower students as public, open and citizen scientists

Students pursuing a bachelor's or master's degree both develop labor market skills for an information- and computer-based career and learn to do science. Not all go on to be scientists. It would seem that those who do not have no impact on scientific knowledge. They took tests and wrote papers in order to earn their degree. The test results and papers are only for their instructor's eyes, and maybe an occasional parent or fellow student.

It could be different.

  1. Science is an act of knowledge production
  2. Bachelor and master students have and develop useful knowledge
  3. Teaching them to share their knowledge and collaborate in knowledge production:
    • improves collective scientific knowledge
    • leads to a feeling of empowerment and utility among the students
    • creates ideal-type citizen scientists

Starts with teaching

In any given course in any particular discipline, students are taught to do science as a practice. This includes knowledge or ideas about the world; empirical evidence gathered through participation, observation or experimentation; and techniques to maximize the accuracy, efficacy and reliability of their knowledge and ideas. The students’ own work on assignments or theses is a form of ‘training’, and an instructor then checks or grades their learning. If they are privileged they get constructive feedback, and if they are highly self-motivated they actually read the feedback and incorporate it into their future work.

Students are generally aware that their work is for their instructor’s eyes only. At least in my experience, they cannot imagine their work shaping science or public knowledge. They thus have little extrinsic motivation to do more than what the instructor asks of them.

What if the instructor asks them to change the world outside the classroom in some way?

We, as instructors, are teaching students to produce reliable and critical knowledge. They should get a high mark if the work is deemed to be high quality. Why then does the entirety of their learning and knowledge production end as a forgotten file in a folder as a relic of some semester past? Why don’t they share some of that knowledge? I bet if they thought that their knowledge was valuable and useful beyond degree acquisition, they would be stoked.

Hampered by institutionalized beliefs and practices

A common argument against bachelor and master students trying to disseminate their work is that it is not high enough quality to compete with work from doctoral and post-doctoral researchers. In particular, they have not had enough time to dig deep into a body of literature on their topic of interest and their methodological skills may still be quite underdeveloped. They would probably be ‘wasting’ their time pursuing a journal publication or even a publicly disseminated working paper.

I agree. This points at the root of the problem. Publication-based science. Science as we know it has a rewards system where publications are treated as far more valuable than anything else scientists produce. This is particularly acute in the social sciences where scientific research rarely leads to tech or apps with private market value. When publications are the ‘currency of the trade’, academics, universities, editors, students, even policymakers prioritize publications and citations to those publications as the metric for judging the quality of scientific research. As such, scientists have maximum incentive to produce publications above all else.

Now, bachelor and maybe master students are generally unaware of the severity of this publish-or-perish plague that infects the very spirit of science. But like a fish that is unaware of the properties of the water it lives in, the students are deeply affected by the polluted nature of the academic norms in which they matriculate. How many times has a teacher told a bachelor student that they should submit their term paper to a high impact journal? How often are readings in scientific courses not from journal articles or books? A taskforce in sociology specifically recommended that students read and comment on books and journal articles because this is the ‘best’ type of academic knowledge for them to learn.

Student-citizen opportunities to shape public knowledge

With available technologies and new ideas about what constitutes meaningful knowledge, students can have a great impact on science; both now and into the future. Podcasts, blogs, vlogs, Youtube channels and many other forms of social media communication are consumed in high volumes across members of the public, and among students and scientists.

Therefore, when possible, I assign students the task of making a contribution to public knowledge and/or open science instead of writing a term paper. It seems better for all parties involved, and involves more parties because it could reach the public at large. I recently put this into practice in my course, “Open Science in Social Sciences: Crises, Controversies and Change” which I taught as an invited guest lecturer at the University of Zurich (UZH) (syllabus).

Wikipedia: knowledge now

Creating or editing a Wikipedia page where knowledge is lacking or does not exist at all is a fantastic way to engage in public open science. First, surveys report that more than three-quarters and up to 90% of students use Wikipedia in their course research, at least in some English-speaking samples. Second, knowledge is shaped immediately; seeing one's own contributions appear on a public knowledge platform is exciting and empowering. Third, knowledge is improved – in some cases much-needed knowledge, like that which gives a voice or forum to underrepresented people or societies.

Three students in my course used Wikipedia to make contributions to public knowledge.

One of the students first edited and then created a Wikipedia page in Ukrainian, based on his bachelor thesis topic, "A Theory of Generations". He pointed out in class that most of the information on this topic, and knowledge accessed by Ukrainians in general, is in Russian. Given the history of Russian hegemony in Ukraine, having more Ukrainian-language resources promotes Ukrainian language and identity; in other words, it promotes knowledge that is valuable to most Ukrainians.

Wikipedia Ukrainian language page on “Theories of Generations“, created by Ernest Huk

Additionally, this particular academic topic is contested and misunderstood in the literature, according to this student. He notes that the former page focused on only one theory, when in fact there are many. Before his edits and page renaming, he writes, "Ukrainian users of Wikipedia, by looking through the [former] article Theory of Generations, would be not informed adequately at least and misinformed at most, as Strauss-Howe generational theory is just one of the many other theories in this domain (yet one of the most controversial)."

Another student has an extracurricular passion for wild flora. There is a class of plants whose native habitats are pastures and fields. These 'pasture flora' (Ackerbegleitflora in German), as the student pointed out, are "often rare plants that would find suitable living conditions in pastures but are conflicted with the threat of economic means that increase productivity in agriculture. Some of the rarest plants in Central Europe are found in this class."

Wikipedia German-language page on “pasture flora” edited by Linus Signer

At least on German-language Wikipedia, there was no page on this topic at the outset. In fact, Wikipedia redirected the student's search to Unkraut (weeds), reinforcing the public misunderstanding of this class of wild flora as undesirable or economically inefficient intruders in a pasture or field. He later discovered an existing page on Segetalflora, a related class that overlaps with 'pasture flora'. This shows that knowledge on certain topics can go by different names or be cross-classified, a problem that requires more discussion among Wikipedia users to resolve. He therefore elected to significantly expand the existing Segetalflora page, rather than create a new page about Ackerbegleitflora, so that everything would be in one place, including discussions of differences, conservation and utility.

During our course in the fall-winter of 2021, many exciting things related to open science were taking place at UZH. For example, the university created an explicit open science policy. Moreover, there was an open science week with speakers and, of course, our open science course. One student noticed that much information about these open science happenings was missing from the UZH Wikipedia page and decided to make adding it his project.

The ‘Open Science‘ section of the University of Zurich German-language Wikipedia page edited by Kerim Lengwiler

A major limitation to the open science movement is simply a lack of academic awareness of the issues and their solutions. Wikipedia provides a platform to disseminate such knowledge. Thanks to this student’s research and edits to the German-language page, I was easily able to update the English-language version in tandem. We hope users can follow our lead and update the French and Italian versions as well.

YouTube – unlimited public science possibilities

YouTube is now outpacing Wikipedia as a student’s ‘go to’ for scientific information and especially for science communication. Internet searches for ‘how to’ or ‘information about’ now seem as likely to return YouTube or other short instructional videos as text-based entries (depending on cookies, browser, settings, etc., of course). Videos also give a voice to research subjects whose experiences cannot easily be expressed in written language. As long as participants consent, students can interview their subjects and post a video on YouTube – an act that requires only minimal technical skill and can be done with free or cheap software.

Another student in my course elected to contribute to knowledge on the topic of colorism – discrimination based on skin color. The student found that, unlike racism, this topic was far less present in academic discussion, at least based on her internet searches. Her impression from a previous course and her searches was that the topic was mostly used in reference to the United States, and less so in the context of German-speaking countries. In fact, she pointed out that to her it seemed like many people did not think colorism existed in Switzerland or Germany. Therefore, for her project, she conducted an interview with a person whose parents are Asian and European, to understand whether, and what types of, color awareness or colorism this person had experienced personally or in media and marketing.

An interview with an ethnically Swiss-Asian participant about colorism, by Nora Melanie Vetsch

Student open science activism

The opening of science and the increase in reliability and transparency of knowledge for academic and public consumption require more than a handful of citizen scientists active on social media and Wikipedia. Many closed and unreliable methods are still taught, part of the pathology of science. Instructors and students alike are generally unaware that the science they promote is potentially unreliable. This is another way of saying that they have not yet been exposed to the core information of the open science movement. Thus, changes at the curricular and institutional level are necessary to promote this awareness and to foster change.

Three students in my course elected to develop strategies to shape curricula for university and school students so that they foster awareness of open, ethical science practices.

One student felt that she and her peers were generally uninformed about open science, both at and outside of UZH. Her idea for changing this was to create a student-led social media initiative at the university. As this is nothing that could be achieved or even approved in a few months, her project was an action plan for the creation of this service. First, it would entail the creation of a Student Open Science Organization that, among other things, would maintain social media accounts posting important open science information and resources. This would require several layers of bureaucratic approval and liaising with the Open Science Office.

An excerpt of the open science at UZH social media action plan by Isabella Ferrera

Another student sought to communicate with, and potentially convince, a primary methods professor in her area of Educational Science to incorporate open science into her courses. The idea was that such a strategy could be deployed across other disciplines, with her area serving as a test case to develop it. This required first developing persuasive reasons that could be shared in an open and friendly manner with a professor. Like many progressive universities, UZH has resources just waiting to be taken advantage of, as this student realized.

The Open Science Committee homepage at UZH, available for policy, communication and supporting curriculum development – for example Anastasiia Kurmann’s efforts to shape Educational Science courses so that they teach open science.

According to an exchange with the student, the professor responded to the student’s requests: she agreed with the student’s claims about the value of teaching open science, agreed to add open science topics to her seminar plan, and agreed to specifically promote open data and FAIR principles as she teaches about qualitative data collection and evaluation.

Another student volunteers her time at a Zurich Community Center (Gemeinschaftszentrum). As the student points out, these centers “offer space to work, play, learn, meet other people or participate in neighbourhood projects.” The student wanted to test whether open science is a topic that adolescent children might understand or be interested in. Therefore, she first gained their interest and consent and organized a lesson at the Center. She used another UZH resource, the Kinderuniversität (Children’s University), as a protocol or concept for helping the adolescents understand open science. The Kinderuniversität is specifically designed to support children in searching for answers and explanations of phenomena and, as the student points out, “serves as a good example to illustrate how science can be made accessible to all.”

Diellza Ismailji teaching adolescents at a Zurich Community Center about open science

Using this test case, the student was able to develop a curriculum for future courses in which children and adolescents discover and understand the concept of open science and why it is important. This guide includes resources that the children can directly interact with, such as Wikipedia (which they can even edit!), Blinde Kuh, Google Scholar, how to find and use a library, and how to register for free at the UZH Children’s University.

Concluding thoughts

Public science is often practiced in the social sciences and involves researchers engaging with a local community to provide input into that community’s own efforts to address the problems and priorities it has identified. At international universities, bachelor’s and master’s students are less likely to be from the local community, are only staying there temporarily, and may face ethno-linguistic barriers to engaging in public social science locally. However, they can still have dramatic impacts on knowledge affecting their home communities or countries – remotely. That is the beauty of an internet of knowledge: it requires only online access, not physical presence.

Something else to keep in mind is that replacing a traditional term paper with a project to impact public knowledge and/or open science is not realistic in all course types. For example, a broad introduction to open science across disciplines, like my course, makes it relatively easy for students to choose any topic and use it to contribute to public knowledge. In a course that is very theoretical and narrow, for example critical Marxism or the history of the Holy Roman Empire, student learning might be maximized by writing a conventional essay based on reading of the literature that proves they have mastered a well-rehearsed topic. If asked to make a contribution to knowledge specifically in these areas, they might struggle to find things to add to Wikipedia or find that there are already seemingly unlimited public resources. “Might”, then again they might not. There are often opportunities to engage technology or do things in different languages.

Ultimately, when students feel they are doing something more than just earning a credential, ticking a box, or trying to maximize their grades, they become more likely to engage with the material. If they think their term paper might actually contribute to local or global communities’ knowledge base, conservation efforts, or capacity to address underprivileged students, they are naturally inclined to do higher-quality work and develop a sense of empowerment and satisfaction. These are bases of self-knowledge and fulfillment that they will hopefully carry with them into the future, staying motivated to impact public knowledge as scientists in academia or as citizen scientists working in any job.

Legacy of Jon Tennant, “Open science is just good science”

Foreword

This blog post is an edited and abridged transcription of the talk ‘Open science is just good science’ given by Jon Tennant in 2018 at the annual meeting of DARIAH (“Digital Research Infrastructure for the Arts and Humanities”), a European Union-based network to support digitally-enabled humanities research and teaching. This talk took place on the eve of major events in the Open Science Movement, to which Jon contributed an enormous amount in a short period of time. The talk, presented here in readable format and appended with links, additional info and citations, provides an excellent primer on open science. This post is offered in honor of his memory, as Jon passed away on April 9th, 2020, in a motorbike accident. Any changes to the original wording of his speech are intended to focus, clarify and better communicate his message – a candid voice for open science.

Jon Tennant at DARIAH, 2018

Jon Tennant = open science

My story began about seven years ago… I was a master’s student at Imperial College in London at the time and I was talking to a friend and I was saying ‘you know, I’d really like to publish my master’s thesis’. And he said ‘well, you know, just make sure that it’s open access’. I was like ‘what the hell is open access?’, and he said ‘it’s where you make your research freely available to everyone.’ And I thought ‘well doesn’t everyone have access to… oh wait…’, and everything sort of “unraveled” about my academic history up to that point.

When I was a student I would hit paywalls all the time, and it just seemed like a common everyday thing. Like, ‘oh crap this one’s paywalled, guess I can’t use that, move on to my next one’. I realized that despite being in a ridiculously privileged position at a very elite institute in the UK, I still couldn’t actually access the things I needed to do my own research. The more I thought about it, the worse it became.

So what are the things that motivate me in the morning?

A recreation of Jon’s slide from the video and this blog post.

These are the big-picture things that the open science movement is trying to solve, like the fact that the vast majority of scholarly research is still held hostage by private corporations. Around 75% of all published knowledge, which should be available to humanity, is instead owned by shareholders or private companies. This disadvantages almost everyone on this planet except for those who are fortunate enough to be in a privileged position at an elite institute. These commercial giants are ruthless racketeers. They have profit margins often in excess of 35 to 40 percent, which is even bigger than Apple’s and the big oil companies’. The result is that as a global research community, or scholarly community, we are not communicating our results effectively, and so the many massive issues affecting our planet suffer for it.

There are major barriers to the dissemination of scholarly knowledge. Copyright is a huge one. It is completely and utterly broken. It does not protect us as content producers; it protects the profits of scholarly publishers. Often we can’t even access our own research results and we are prohibited from sharing them due to anachronistic copyright laws. As consumers, often we don’t even know what we’re buying until after we’ve bought it. You can pay 40 bucks to access a research article and you have no idea what’s actually in it or if it will prove useful, and there’s no way in hell Elsevier is going to give you a refund for that. We have life-saving research, but most cancer and global health research is still hidden behind paywalls. And the real question is: how is any of this helping science or research, or having an impact on the global challenges that we are facing?

Consider the bigger picture: for example, the Sustainable Development Goals set by the United Nations include things like economic growth, industry, innovation and infrastructure, reduced inequality, clean energy, and combating energy insecurity, water insecurity, and hunger. If you believe that research can help us achieve these goals and resolve these issues, then you must also acknowledge the corollary that preventing access to research stops us from achieving these goals. This is exactly the system which you’re playing in: you have an industry that thrives on preventing access to knowledge; that’s how it makes its money. It’s not a bug. That’s a feature. It really is systemic, and it’s parasitic as well if you want to use an ecology term.

One of the consequences of this is that public trust in research and expertise has plummeted over the last few years[1]. We see expertise, especially from scholarly experts, effectively dismissed, as if what we’re doing is no different than just Googling something. I’ve created a hypothetical conversation here, but this sort of conversation happens even at the highest levels. Like in Congress when researchers go to present evidence, they get rejected because politicians are like ‘you know, isn’t that research published in Science or Nature just ‘fake’, basically?’.

From Jon’s Slideshow

Academic: “This research paper has been published and therefore is scientifically valid.”
Non-academic: “But it’s paywalled. I can’t access it. How do I know it’s valid?”
Academic: “Because it has been peer reviewed.”
Non-academic: “Can you show me the peer reviews?”
Academic: “No. But it was done by two experts in the field.”
Non-academic: “Which experts?”
Academic: “We don’t know. But it’s in a top journal.”
Non-academic: “Why is it in a top journal?”
Academic: “Because it has a high impact factor, so is highly cited.”
Non-academic: “Why does that make the research better?”
Academic: “Trust me. I’m a scientist.”

If you think about it, there’s not really much reason why politicians and the public shouldn’t think that. Trust has to be earned, and trust is something that opacity does not breed. In a world where transparency breeds trust, we shouldn’t actually be surprised when expertise is rejected, because we’re operating within a closed system. If we step outside of that system and look at it, or empathize with those who are outside it, then it actually makes sense why we have a sort of chaotic relationship with members of the wider public at the moment. The ivory towers of academia are certainly crumbling due to the wider open movement, but is it happening fast enough, and what are the consequences when it doesn’t move fast enough?

What is open science?

There is actually no universally accepted definition of this. Open science is about using science to help address the major challenges to society. Ironically, if you look at the one systematic review of what open science is (published and paywalled by Elsevier), it says that ‘open science is transparent and accessible knowledge that is shared and developed through collaborative networks.’ So does that mean that open science excludes anything done by the individual? It’s a pretty stupid definition if you ask me. But you can’t read it anyway because it’s paywalled. So, when people use ‘open science’, often they will think it just means ‘physics, biology, chemistry’, but when I talk about open science I mean it in the most inclusive sense possible. ‘Open science’ is often used interchangeably with open scholarship or open research, but we just have to make sure that we include everyone: humanists, social scientists, even artists, engineers, mathematicians, medics, and citizen scientists are all included under this umbrella of what open science encapsulates.

For me as well, open science is based on core principles. I’ve got this nice little table here from Tony Ross-Hellauer.

(Source)

This is a combination of practical aspects and personal aspects behind open science. For example, accessibility, equality, and rigor are practical aspects; but there are also ones you might miss, like freedom, fairness, justice and truth. For me, these are principles that you should adopt anyway as a good human being; and if so, then you’re basically an open scientist, or open scholar. You can embed these practices within your everyday life, or at least the practices you should be doing as a researcher. But on the practical side, open science is bloody complicated. I don’t want to hold that fact back.

This is the rainbow of open scholarship tools from Bianca Kramer and Jeroen Bosman

Open science includes things like discovery, analysis, writing and publication, all the way through to different tools used for assessment and evaluation. There are entire workflows here which we need to be trained in, but no one’s really teaching us how to use them. I imagine we are all aware of at least one of the above tools or practices, but integrating all of these into your everyday workflow as a researcher can be quite complicated. Another really important question I think we need to ask is:

How is open science objectively different to science?

Mick Watson, in 2015, just wrote this beautiful article ‘When will open science simply become science?’. Those principles and tools above are just good science. Mick said ‘open science describes the practice of carrying out scientific research in a completely transparent manner (good science) and making results of that research available to everyone. Isn’t that just science?’ And it’s difficult to disagree really. So when we talk about what open science is, it really is just better science; and the opposite of open science is just bad science, because if you’re not sharing in a transparent manner, then you’re basically creating anecdotes rather than research.

Is open science a movement?

A lot of people describe open science as a movement. A movement is defined as a group of people working together to advance their shared political, social, or artistic ideas. The implication is that a movement has a direction, with shared common goals. So if open science is a movement, then who is defining the direction? Who is defining the shared goals? What’s the strategy behind it? Who’s leading it? Is it DARIAH? Is it the Open Science Framework? What happens to those who don’t feel included in that movement? A nice example of this is one time when I went to the Humboldt Institute with some open science colleagues as part of an outreach workshop to teach them different methods of doing open science. When we got there, they actually ended up schooling us, doing something way better than what we were doing using virtual machine environments. And we were like ‘oh cool, so you’ve basically already been doing open science anyway!’ and they were like, ‘yeah, but we just don’t call it that.’

Is open science a process? A set of principles? A vision, a club, a political agenda, a fad, a distraction? Is it exclusive?

What happens when we, as a supposed movement or community, actually can’t answer any of these questions? I think that’s kind of important, because it gets to the root of what open science is and how it is objectively different to what we as a scientific community are already doing. Once we can answer that, it can help us define a strategic direction for the future and for what we actually want to achieve.

Among academics, there is this mantra, publish-or-perish. But there is no publish-or-perish anymore. It’s publish-and-perish. You can publish a lot during grad school and still be told that you’re not qualified enough to get a postdoc. There’s too much competitiveness, too much is driven by funding, and too many people coming through the pipeline are drilled into this sort of narrow ivory tower mindset of how to do academia as soon as we start. One of the first things I was told when I became a PhD student was that within your first year or so you had better publish a high-impact paper. I was like ‘dude, I don’t even have any data yet’. I did publish eventually, but it was a lot of strain and a lot of stress to deal with that at Imperial College. Out of the 50 or so PhD students in my cohort, pretty much every single one left with insomnia, alcoholism, depression, anxiety, or stress, because they were treated like farm animals under this publish-or-perish mentality. And we wonder why PhD students have almost twice the rate of mental health problems as people who work in emergency health services. We don’t have any sort of support framework.

These giant mega-publishers are partly to blame. We’ve heard Springer and Elsevier et al. mentioned before. There’s a great quote from the journalist George Monbiot. He said that “academic publishers make Rupert Murdoch look like a socialist.” I think that’s a very ‘positive’ outlook. It’s also known as the industry the ‘internet could not kill.’ Back in 1995, Forbes wrote a really great editorial saying that Elsevier would be the Internet’s first victim. Elsevier then went on to have basically unbounded profit increases, and it is still ongoing. It’s a twenty-five-billion-dollar-a-year industry in 2018. It’s extremely fat and bloated, and 35% profit margins are fairly typical. We still talk about “papers”, I mean it’s 2018 and we’re still referring to “papers”, and what we have at the moment for the vast majority of our scholarly communication process is a 19th-century process of peer review applied to a 17th-century communication format built around journals and articles. We still mostly use PDFs as well. I think it’s probably about time we adapted to the web of 1995 for scholarly communication, because we’re seriously lagging and it’s a very strange system to be part of right now.

I’m sure we’ve all heard of Sci-Hub and ResearchGate as well. These are essentially platforms that want to provide increased access to scholarly research but are viewed as ‘pirate sites’, as if the liberation of knowledge were equivalent to plundering and murdering. The American Chemical Society, Elsevier and their kin are suing them for millions of dollars and shutting them down, preventing access to this research. For some reason sharing research is illegal; have a think about that one. The more you know, the worse it gets. Only 25% of all scholarly research articles are open access, and that comes about 20 years after the Budapest Open Access Initiative. We’re increasing our rate of free access to knowledge by about 1% every year, so maybe in about 30 or 40 years we’ll finally have substantial access to research knowledge (see OS Timeline Updates at the end of this post for more info).

We have this prestige-based economy where your worth as a researcher is based on the commercial brands, dictated by corporate values, which you elect to publish in for whatever reason. There are various biases in this; for example, if you are a minority researcher, a woman or an early career researcher, then the system is incredibly biased against you from the outset[2]. As we all know, researchers write, review, and edit the papers, so they generate around 95% of the real value behind scholarly communication. Then we (the researchers) have that content stolen away from us by publishers and sold back to us. When you wonder how they generate 40% profit margins, it’s like going into a restaurant, bringing all of your own ingredients, cooking the meal yourself, and then being charged 40 bucks for a waiter to bring it out to you on a plate.

It’s bullshit. The old saying is, ‘it’s smart people doing stupid things for smart reasons’, and the reason is that our careers depend upon this publish-or-perish mentality. But at the end of the day we’re basically being duped as a global research community. We are no longer researchers. We are the oil for the machine. The provider, the product, and the consumer for this mega-corporate entity out there. The market itself is an incredibly dysfunctional part of a wider oligopoly, similar to a monopoly. Yeah, we have lifesaving research about cancer, Ebola and Zika, for example. All hidden behind paywalls – sold off to the highest bidder at the will of Elsevier’s stakeholders. The Ebola outbreak was, what, four years ago? Just two days ago, Nature finally decided to announce they would provide open access, for a limited period only, to Ebola research. So a round of applause to Springer Nature for acting four years after they were supposed to.

If anybody’s not angry at scholarly publishers yet, then I’m clearly failing because you should be.

In Germany, you have a national library consortium who are currently revolting against the revolting practices of Elsevier and their kin. At a publishing conference I attended earlier this year in Berlin [APE 2018], Martin Grötschel was a speaker. In 2018 he was the president of the Berlin-Brandenburg Academy of Sciences, and typically at these conferences he is supposed to get up and give a nice speech about what a great job everyone’s doing and give them a little round of applause for being who they are. But this was a conference held by Elsevier and Springer Nature, and instead, he followed the vice president of External Relations of Elsevier onto the stage and spent about 15 minutes slamming the crap out of them in the most beautifully German way possible. He is one of the chief negotiators behind Project DEAL. He is actually in the room when negotiating these ‘big deal’ subscription contracts with Elsevier et al., and he was saying that he feels like he’s being bullied half the time.

He said during one of these negotiations, “we don’t want to pay Elsevier anymore because we don’t see the value in what you’re doing” and he described what followed:

“One publisher [Elsevier] stated ‘if your country stopped subscribing to our journals science in your country will be set back significantly’. I responded, ‘it is interesting to hear such a threat from a producer of envelopes who does not have any idea of the contents'”

Martin Grötschel, at APE 2018

Pretty harsh. Hilarious at the same time. But whichever side you are on, however you look at this, there are these enormous rifts happening in the world of scholarly communication at this moment. It’s basically big publishers versus everyone else. These rifts are entering the legal realm; they influence copyright, career advancement, and the structure of our research institutes, and there are really deep issues happening here. In response, there is an open science revolution infiltrating many of these aspects. Project DEAL is causing quite a mess for big publishing.

In France recently, we had a similar thing between the Couperin Consortium and Springer Nature. Couperin served up the middle finger on a silver platter and said ‘we’re not going to subscribe anymore’. They saved 12 million euros every year in subscriptions, which they are reinvesting into open scholarly infrastructure. Sweden announced two or three days ago that they, the Bibsam Consortium, are doing the same thing. They cancelled all subscriptions to Elsevier journals and they’re like ‘crap, we’ve got all of this money we’ve been wasting on journals for these last 30 years, now what do we do with it?!’ It’s fantastic, and if you look at Taiwan, South Korea, Argentina, and Mexico, they are all gearing up to do the same thing. Oddly, it is just the UK that seems to be not too fond of this.

We have so many awesome web-based technologies to open scholarly communication

Why on earth are we still communicating in PDF articles with thumbnail-sized images when we have an entire web at our fingertips? I assume we all know about tools like GitHub, StackExchange and Wikipedia. Why not use something like the moderation or editing system of Wikipedia, combined with the reward structure behind StackExchange, combined with the version control of GitHub, to create a fully integrated, community-owned, very cheap, open scholarly communication system? And before anybody says ‘it can’t be done’, people are doing this already! I’m not sure if they are in the digital humanities, but there are communities like computer scientists who are doing these sorts of things already, which brings me on to penguins.

What do Penguins, Cobras and Gimli have to do with open science?

Cultural inertia defines academia. It’s a crowd-based psychological effect and pervades all aspects of the Academy. Have you ever met the average academic? 50% of academics are stupider than the ‘average academic’. They have this publish-or-perish mentality and they are generally terrified of new technologies. It took me about a year just to set up a GitHub profile. We are also really bad at making predictions: 10 years ago, people said ‘open access is never gonna happen’, eight years ago people said ‘open data is never going to happen’, three or four years ago people said ‘open peer review is never gonna happen’. Now people are saying preprints are never gonna happen. Yeah… all of these aspects of open science are happening in some sort of way, in one form or another, already, and it’s fantastic. But in doing so we’ve created a whole range of new technical, social and language barriers around these new developments, and that’s a bit of a problem.

We like to think that openness is supposed to be inclusive, right? It is one of the key founding principles of the open science movement, but is that actually the reality? Have we created a new system that’s open for some and not open for all? I would argue yes, and one of the reasons for this is that there are immense barriers to change within academia. We have a suite of social, cultural, technological, political, and organizational factors that create vast barriers, for each of us at an individual or community level, to actually progressing in the way we think is most beneficial to our communities.

Reproduction of Jon’s slide from video

Three of the biggest stifling effects are fear (particularly for the most underprivileged), competition (because we all want to advance our careers), and the abuse of power dynamics by those at the top. All of this creates inertia, which prohibits course correction. If you look at the values that are driving open science (things like how to reduce publication bias, how to increase access to knowledge, how to make research more efficient and reliable, how to make it more sustainable, and how to foster collaboration), almost every one of the barriers to these revolves around fear. For example, fear of being scooped, fear of information overload, fear of wasting time learning new practices or fear of poor research quality. Fear of errors and public humiliation is a big one for grad students. It’s this concept of fear, and this is where the penguins come in. Researchers are like penguins.

Tenor GIF, same as presented in Jon’s slide

Penguins spend most of their day huddled up together on an ice cap. Eventually they all start to get a little bit hungry, and they look longingly at the water. Food is out there, but so are killer whales. And no one wants to be the first one to jump into the water because they’re afraid of being eaten. Eventually one of them gets so hungry that he slides down off of the iceberg into the water, and goes hunting for fish and he’s very happy. Then all of the others are like ‘oh, well he was safe, so maybe we can go down too’, and eventually, one by one, they all start laying down and they go off. And no one gets eaten (well, sometimes someone gets eaten, but that’s life), and it’s the same with academics.

There are new technologies and new processes, and everyone’s terrified of being the first one to jump in. This fear is coupled with the fact that we’re almost exclusively rewarded for gaining academic capital based on the journals that we’ve published in. So we have an academic industry that relies on creating this ‘stifling effect’ over innovation and the progression of a field. We are generating a lot of value for the publishing industry, but we’re losing out as a global research community in the process. People talk about providing incentives to do open science, or ‘sticks and carrots’ to make people do better science; but that kind of misses the point that we should actually be doing good science in the first place. We should not need to be incentivized to be transparent about our work; that’s the completely wrong way of looking at things in my view. That’s why the penguin analogy sort of works.

The next analogy is cobras. The cobra analogy is about key performance indicators and how having a performance-based evaluation system, revolving typically around publication, is damaging to academia and to global scholarly research. I call this the ‘cobra effect’, i.e., a perverse incentive. There’s this really well-known anecdote about when the British ‘occupied’ India. Administrative officials were concerned that there were too many cobras in Delhi. So they created a new policy that members of the populace would be given money in exchange for any dead cobras. In response, the locals started breeding thousands and thousands of cobras, and then got a lot of money for them. So a policy designed to cull the number of cobras perversely led to a population boom in the end.

And the same thing happens in science if you look at how we are rewarded based on citations and impact factors. That’s what we end up aiming for. There’s another great paywalled article which came out last year that looked at this effect in Italian researchers. What it found was that within four years of the Italian Research Council’s policy saying that citation metrics were going to be used in hiring practices, there was as much as a 179% increase in the number of self-citations. So it was a great idea executed in the wrong way, and it led to an unintended consequence. It’s called Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.

“When a measure becomes a target, it ceases to be a good measure.”

Goodhart’s Law; quoted in Jon Tennant’s slide

When high-impact journals are the target, researchers shift their priorities from the scientific method to ‘how do we get into high-impact journals?’. They conclude, ‘oh, we have to tell a really good story, we have to get better data for our work.’ And that skews the research process, because the research process should never be about aiming for a high-impact publication. It should be about the discovery of truth. Right? But we’ve skewed that.

So this is the game; and people always respond saying ‘well, you know, this is just the system that we’re a part of.’ But the system is made up of people, right? So anybody who is complicit in citation gaming should be accountable for those actions. The fact that we are rewarded for high-impact journals is backwards. If you look at peer-reviewed publications in the top journals, they are typically of lower quality. That research has the highest probability of being retracted, not due to more eyes, but due to the probability that researchers have committed fraud or tried to cut corners in order to get into those journals[3].

Figure from Fang and Casadevall (2011), similar to Jon’s slide

According to this figure, ‘top journals’ mean worse research. It also demonstrates that the impact factor has perhaps nothing to do with the quality of research itself. The inverse of what we expect. One thing I tell researchers is ‘if you use the impact factor to evaluate the quality of another person’s research or of an individual researcher, then all of your papers that use any form of statistics should be retracted; because you sure as hell don’t know a thing about statistics’.

“If you use the impact factor to evaluate the quality of another person’s research or of an individual researcher, then all of your papers that use any form of statistics should be retracted; because you sure as hell don’t know a thing about statistics”

Jon Tennant, ‘Open science is just good science‘.

That’s a powerful message to tell people especially in senior positions.

How does open science factor into this? Some possible ways forward.

One solution is to use altmetrics and article-level metrics so that we don’t just use one crap proxy to evaluate an incredibly complex system of research. If you haven’t signed the Declaration on Research Assessment (DORA) or looked at things like the Leiden Manifesto or NISO yet, these should be high up on your agenda. But ultimately it’s down to the individual researcher to ‘stop breeding cobras’, because that just contributes to a worse system.

Now, about Gimli. Has anybody not seen Lord of the Rings? There’s a scene towards the end of the movie where, against all hope, the last of the good guys march on the gates of the bad guys, and it’s a no-hope situation. They’re basically all worrying themselves to death, and Gimli says, “certainty of death, small chance for success, what are we waiting for?!”, and they all march off and most don’t die; and it’s another perfect analogy for academia. We are told that you can’t do various aspects of open science because they will harm your career; and that’s due to these social, internal barriers mentioned before. A divergent attitude has been imposed upon us: people who want to innovate and explore and create or do good science are chased out of the system. The effect is that all of us are straining in perpetuity as part of the status quo, and research suffers. Statistically, fewer than 1 out of 200 grad students will get a full-time professorship, according to recent research done in the UK. The question is then, why would you try to be the worst version of yourself via publish-or-perish to get a job that you’re probably not gonna get, because you are going to publish-and-perish anyway? Researchers become trapped in this cycle. We feel like we’re forced to play the game because leaving academia is perceived as failure. This leads to a reinforcement of the power imbalances, cultural inertia, commercial interests and governing systems of academia, and the cycle continues.

Can we break this cycle through training?

A paper published last year showed that in 60.8% of research articles published in global health journals, the researchers did not self-archive, i.e., post a preprint, even though it was free and allowed within journal policy. This is life-changing research which researchers themselves are not sharing, in a field where you would think access to knowledge is important for saving people’s lives. We have to ask ‘why?’. In the UK, a study showed that 93% of researchers believe that open access is important, but less than half of that number have actually published in an open access journal. Why, again, this massive discrepancy? It is quite shocking that researchers can expressly promote open access but not practice it, even though publishing in an open access journal statistically increases your citations. The same can be said if you share your data and your code openly. You make your work more reusable and therefore more likely to be cited. In a system where ‘dead cobras’ still count, this is a good thing for you. A lot of people will counter this by saying ‘open access is too expensive.’ If you say that, all you are saying is that you can’t Google properly, because self-archiving costs nothing. There are so many routes out there to free, instantaneous sharing that help to level the playing field for everyone.

“I honestly don’t know, it just it blows my mind that researchers can promote one thing with one hand and then fail to uphold their own values with the other. And I don’t understand why, because all of the evidence points towards being open as enhancing your career”

Jon Tennant, from talk

Now there are these policies and mandates saying ‘you have to publish your work open access’, and then publishers swooped in and said ‘we’ll give you open access for three thousand dollars a pop.’ Why would you pay three, four or five thousand dollars for something that you can get for free? Preprints are amazing. Again, sharing is generally good for your career because you generate more citations faster. More importantly, you get free, rapid communication for your research, which could benefit society and help address its problems. There’s been an explosion of preprint venues in the last five months. The concept here is that it’s your own work; don’t stick it behind a paywall. You do have choices about where to publish it, and the future is definitively going open, and you can already be a part of this.

(Source, updated since Jon’s talk)

On the left we have the exponential rise of preprints by platform, and on the right are preprints as a percentage of all papers by discipline. At the same time, open access mandates are appearing across the globe from funders, institutions and governments. Openness isn’t going anywhere, so you might as well ride that wave.

In summary

I think it’s time to change the conversation, because open science is pretty awesome. It increases the dissemination and reusability of your research and ultimately enhances your academic profile, which is good for you. More importantly, it helps to combat the reproducibility crisis and makes you a better researcher, both ethically and methodologically. It disseminates potentially life-changing and life-saving knowledge freely to all.

The first step in achieving this is that we need to take responsibility and educate ourselves about open science, or good science. This is one of the reasons I’m building this open science MOOC (Massive Open Online Course): to help provide training, support and education for researchers around the world. We are hopefully going to use this to empower the next generation to become leaders in their own research fields. There are challenges though. We need to not just act within our own little communities, but act across them to increase interdisciplinarity and community building.

[speaking to the DARIAH audience specifically] I’m honored to be here with humanists and social scientists, because I don’t get to speak to you very often, and I know that at the conferences I attend [paleontology], humanists and social scientists often aren’t invited, and I think that’s a real problem. We have this gulf between the physical sciences and the humanities and social sciences. We need to be working together, building bridges, not walls. Open science for me is about breaking down barriers and generating equity in science. These are things that can help us to foster collaboration and increase the power of communities against the entrenched crap which we’re all trying to fight. This means we have to work together towards a common goal; ultimately that common goal for me is pooling knowledge and resources to create a decentralized scholarly infrastructure. With communities as the actual focus, we can actually achieve the principles of open science.

Be that penguin. Don’t hold back from trying new things; be one of those people who jumps into the water first, because you’ll be remembered amongst your community as a champion. Be fearless like Gimli. The career pipeline is leaky anyway, so why not diversify your skill set? Go out as an awesome researcher, guns blazing, or train yourself to become an awesome researcher through open scientific practices, anticipating what the future of your field is going to be rather than doing what professor X tells you to do because it worked 50 years ago. Don’t be a cobra farmer. Be focused on good science and responsible evaluation, and let the quality of your research speak for itself.

What open science is

It’s a tautology. Science was always open. This is where we want to get to in 10 years. Eventually we don’t want ‘open science’ to exist anymore, because that will be the period when we wake up and realize that what we were doing before wasn’t really science. It was anecdote, and we need to change that. Science without ‘open’ is just anecdote; open science is just good science. That’s your take-home message.

Afterword – OS timeline updates

During and after Jon’s 2018 talk and his passing in 2020, major changes took place that would reshape the science landscape. Two in particular are central to Jon’s core values about free knowledge communication, and they reinforce that open science is actually good science. For one, despite many lawsuits and attempted shutdowns, Sci-Hub has had very little apparent impact on publishers’ profits. The figure below shows the revenue and earnings per share of Elsevier’s parent corporation, RELX, since 2016. Sci-Hub had massive usage throughout this time, yet RELX’s revenues continued to climb.

What happened in 2019 was not a sudden flocking to Sci-Hub. It was that most of Germany’s institutions, the University of California and many other global institutions cancelled their Elsevier contracts. This clearly had a huge impact. At the same time, already in 2016, researchers on every continent were using Sci-Hub en masse and had far more scientific knowledge in their possession than at any time prior. This was a huge achievement for scientific communication, one that still continues today. So why didn’t Sci-Hub matter for corporate publishing profits?

Simple: presumably those who use Sci-Hub in the Global South, or at less well-endowed institutions in the Global North, do not have legal access to the articles they download. These institutions do not have the funding for legal access, so whether their researchers use Sci-Hub or not, the institutions are not a current source of profit for publishers. Moreover, given the persistence of global inequality, it is unlikely that these institutions will ever be able to afford subscriptions, so they are not potential sources of profit either. Those who use Sci-Hub at well-endowed institutions with subscriptions – maybe they are lazy, unaware of the subscription or somehow unable to access their library (e.g., a VPN or technical glitch) – are not cutting into profits either. These universities already have subscriptions for the most part, so when their researchers use Sci-Hub, the publishers still profit. Thus, all the noise made by publishers about Sci-Hub eating their profits is greatly overstated.

The percentage of research available through legal open access channels doubled or even tripled between 2015 and 2020, but one of the greatest achievements politically is Plan S, supported by cOAlition S. This is a mandate that all research funded by the participating agencies be open access as of 2021. It has completely shaken the for-profit, megalithic publishing system and its role in Europe. Already, two of Jon’s favorite targets, Springer Nature and Elsevier, have come to the table to offer solutions for researchers to be compliant with Plan S. It is not perfect, as it still often involves large APCs (article processing charges) paid by the author, but it allows authors to retain CC BY rights to their work. We cannot expect science communication to be perfect, just like human nature, but we can in perpetuity strive to do just good science and continue to push publishers to reduce their fees as we move away from print distribution.

Footnotes

[1] This is one of Jon’s more controversial claims. While there is evidence the public may have trusted science less heading into the 2010s, there is also a lot of evidence that trust has been stable for decades and of course that this varies greatly by country.


[2] And statistically underrepresented in the output. The exception is that female solo authors are not apparently penalized in the peer review and publication process itself.

[3] The evidence on this is mixed.

Global inequalities in science are bigger than those in the economy

Original post by Witold Kieńć

[Note from Nate Breznau] This post originally appeared in DeGruyter’s blog. I had assigned it as a reading in my course, but when I followed the link recently it did not appear and resolved instead to the home page of the blog. I searched within the blog but it did not turn up. Did DeGruyter remove it? I am also unable to find the author of the post, as his website seems to no longer exist (according to what I think is his OSF page). [update] I Tweeted to them initially and they were quick to respond.

As their blog is discontinued, the original blog post is reproduced below as recovered from the Internet Archive.

Global inequalities in science are even bigger than those in the economy. Of course, being able to afford food is more important than contributing to scientific discussion. Thus the results of inequalities are less dramatic in the case of scientific research; however, the case of Ebola might be worth thinking over here. This pathogen was detected for the first time in 1976, quite a long time ago. Would the current therapy for the disease that it causes be the same if it had been discovered in Alaska?

The lion’s share of all scientific articles published in established academic journals comes from a small number of countries, and some of these leading countries are really small and rich when seen from a global perspective. According to World Bank data, more than 21 thousand papers indexed by the Science Citation Index and Social Sciences Citation Index in 2013 were published by Swiss researchers. This means that Switzerland produced 2,603 top-level papers per one million inhabitants. Denmark, second in this ranking, achieved 2,223 papers per million people, about 15% lower. The visualization of the number of papers per million inhabitants on a global map shows the indubitable hegemony of rich, northern countries in science.

Reconstruction of original figure using the original figure thumbnail in a Google image search.

A country’s scientific publishing output per million people correlates very strongly with Gross Domestic Product per capita (Spearman 0.84!). In short, you have to be rich to have significant input into science. This might be nothing new for you, but what is quite surprising to me is that global inequalities in science are bigger than those in the economy.

I have calculated the Gini coefficient for 4 of the “development indicators” provided by the World Bank. The first is the market capitalization of listed domestic companies, i.e., the commercial value of companies registered in a country, which I expected to be the most unequal on a global scale. The second is Gross Domestic Product, a well-established indicator of welfare that is known to be extremely unevenly distributed globally. I have also chosen electric power consumption, which is a good indicator of the general level of consumption. The fourth indicator is the number of articles indexed by Thomson Reuters services (this data is provided by the World Bank as well). All indicators were divided by each country’s number of inhabitants, and then the Gini index was calculated.
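To make the calculation concrete, here is a minimal sketch (not from the original post) of the kind of computation described above, assuming the indicators have already been divided by population; the country values and the gini helper below are illustrative assumptions, not the actual World Bank data.

```python
# Minimal illustrative sketch of the calculation described above.
# The values below are placeholders, not the World Bank data.
import numpy as np
from scipy.stats import spearmanr

def gini(values):
    """Gini coefficient of a 1-D array of non-negative per-capita values."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # For sorted data: G = sum((2*i - n - 1) * x_i) / (n * sum(x))
    return np.sum((2 * ranks - n - 1) * x) / (n * x.sum())

# Hypothetical countries; indicators already divided by population
papers_per_million = [2603, 2223, 400, 50, 5]        # indexed articles per million inhabitants
gdp_per_capita = [85000, 61000, 30000, 9000, 800]    # GDP per capita, USD

print("Gini, papers per million:", round(gini(papers_per_million), 2))
print("Gini, GDP per capita:", round(gini(gdp_per_capita), 2))
print("Spearman correlation:", round(spearmanr(papers_per_million, gdp_per_capita).correlation, 2))
```

On these toy numbers the Gini for scientific output comes out higher than the Gini for GDP per capita, echoing the pattern the post goes on to describe; with the real indicators the exact values would of course differ.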

As a result, I found that contribution to the scientific core is even more unequally distributed among countries than GDP or the value of companies. It is the most unevenly distributed factor that I have analysed. Countries are more equal with respect to their share of global wealth than in their impact on global scientific discussion.

A lot of work has been done to inform citizens of Europe and North America about the dramatic scale of global inequalities. However, these inequalities are so big that average people from wealthy countries still do not fully understand what it means “to live below the absolute poverty line”. So please, now try to imagine that inequalities in science are even bigger than that. Of course, being able to afford food is more important than contributing to scientific discussion. Thus, the results of inequalities are less dramatic in the case of scientific research; however, the case of Ebola might be worth thinking about here. This pathogen was detected for the first time in 1976, so quite a long time ago. Would the current therapy for the disease that it causes be the same if it had been discovered in Alaska? The Ebola virus is an extreme and rare case; a lot of science has less to do with life-or-death issues.

However, my comparison to the drastically uneven world of the global economy may let you imagine how different the chances of researchers from the Global South are from those of their colleagues in the northern countries. And the factors that support the hegemony of the North are not only economic ones. There are significant cultural and social barriers that reinforce the unequal status quo (have a look here).

Open science in sociology. What, why and now.

WHAT

By now you’ve heard the term “open science”. Although it has no global definition, its advocates tend toward certain agreements. Most definitions focus on the practical aspects of accessibility.

“…the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.”


FOSTER, open science teaching resource

Some definitions enter the realm of ethics, feminism and social justice.

“…to imagine and design inclusive infrastructures, practices, and workflows for scientific practice that intentionally enable meaningful participation and redress (these new) forms of exclusion.”


Denisse Albornoz, OCSDNet

Others focus on the communicative interplay between scientists and the public.

“Openness in Open Science also means opening up science to society… The democratic ideal of Open Science argues for equal two-way communication with the public: one should not solely focus on the question of how to foster the uptake of science in society, but also on how to foster the uptake of societal insights in science.”


Anne-Floor Scholvinck, ZBW Mediatalk

Whatever the ontology, open science is inevitably something that challenges the status quo in science. Usage of the term indicates there is something undesirable about science; otherwise, advocates would simply advocate “science”.

The “open” part of the concept refers to any number of things depending on whom you ask. Commonly it means:

Open access – making the results of scientific techniques, research and theory accessible to everyone, as opposed to available only in paywalled journals.

Transparency <open process> – making all methods, code, data and any biases or conflicts of interest known before and after the research is conducted, so long as doing this does not harm human subjects or violate any laws.

Open source – on the technology side of science, all programs, apps, algorithms, tools and scripts should be transparent and usable by others. This means that when a scientist develops a new technology, anyone else’s technologies can interact and interface with it. Moreover, anyone can modify the technology to better suit their own needs.

Open academia <open communication/democracy/feminism> – allowing anyone to participate in academia. That academia has the goal of eliminating from within itself the inequalities, prejudice and domination that take place in the social world. That academia embraces feminism and critical race theory in its methods and institutional practices. That everyone has the same place in scientific discussions, and no science is conducted by pressuring others or taking advantage of existing power structures. That no science takes place in secret, except for research that requires obfuscation for its completion.

Again, the definitions can cover a broad range. The above are just a sample, although they strike me as the most common usages; except for ‘open academia’, which is reserved for certain justice-motivated scholars.

WHY

Although I do not claim to be the arbiter or knower of right and wrong in academia (and life in general), the following facts seem wrong to me.

Double-work and the co-opting of journals

Scientists volunteer their work as editors and reviewers because the peer review and publication process is the centerpiece of all of science. Peer reviewers and editors are the only consistent form of quality control in science. The academic journal was a functional response to previous forms of knowledge transmission, which required direct scientist- or practitioner-to-student interactions that were geographically limited and reached a very narrow audience.

The journal made it possible to transmit knowledge across the globe. Moreover, the journal reduced the simultaneous-discovery and re-discovery problems of science: previously, no one could prove they had discovered something first, and others unknowingly worked on problems that were already solved. It represents one of the first ‘open science’ movements, because it was driven by the idea that science was at an impasse and could only move forward through the transparent and open exchange of ideas, arbitrated by the public record of publication.

Ironically, the journal format came full circle and began to undermine science. After over two centuries of journals run by non-profit academic associations, for-profit publishing houses began ‘offering’ their services to meet the growing global demand for journals and their content and the rising costs of editing and distribution. In many cases, these publishing houses were able to purchase the journals by offering the academic societies the exclusive right to determine what went in them. Within just 30 years, five conglomerates owned the titles, content or certain features of over 50% of all journal articles published globally.

The content, as always, is still a product of scientists and the voluntary work of editors and peer reviewers. The publishing houses make large profits but pay nothing to these workers. The editors and peer reviewers mostly earn their income from universities – the very universities that then pay high fees for the right to provide the journals in their libraries. This is a double tax on universities: paying the producers of content to produce, and then paying the distributors of that content in order to consume it. The content does not change at any point between these two payments; in other words, the publishers add no scientific value to it.

Matters got even worse with the publishing houses over the past decades. As creative and deceitful profit seekers, some publishing houses realized they could generate even more profit by collaborating with the private sector. Pharmaceutical companies’ profits, for example, were directly determined by the findings of studies published in journals. Pharmaceutical companies – or any companies whose profits depend on the outcomes of scientific experiments – would be willing to invest in shaping those outcomes if they could. Enter a novel concept pioneered by Elsevier: selling journals or journal space to private companies to boost their profits. Win-win for them. Elsevier also pioneered the monetization of open science by purchasing SSRN, engaging in massive lawsuits designed to stop the free sharing of (their) copyrighted knowledge, and attempting to copyright intellectual activities such as peer review.

Other ventures create journals that prey on scholars who do not know better, or who seek easy publications to add to their CVs. These are often labeled “predatory publishers”; as predatoryjournals.com puts it, journals that “publish work without proper peer review and which charge scholars sometimes huge fees to submit should not be allowed to share space with legitimate journals and publishers, whether open access or not”. They also sometimes mimic reputable journals by copying their styles and names and soliciting content from scholars, a practice known as “hijacking”.

Publish-or-perish begets questionable research practices

Thanks to the advent of the scientific journal, knowledge could be evaluated, used and further transmitted across space and time. The journal, and other forms of academic publication such as books, proved so effective that publications became the primary basis for evaluating the importance of scientists and their work. This gave rise to the norm we are all familiar with: publish-or-perish.

In a survey of psychologists, John et al. (2012) found that 50% claimed they had selectively reported studies that supported their hypothesis (as in, selectively excluding those that didn’t). Moreover, 35% admitted to reporting unexpected findings as having been predicted from the start. Nearly 2% outright admitted to faking data.

Publish-or-perish and questionable research practices have a causal relationship. Except for the occasional sociopathic or psychotic individual, there is no reason for a scientist to engage in questionable research practices – no reason, except that a scientist’s very existence as a scientist may depend on it. So many studies in reality lead to results that go in all directions, support the null or (most importantly) do not provide groundbreaking new results.

Through the peer review and editorial process, journals select studies that are path-breaking. Studies that will move knowledge forward and be of the greatest interest to readers. When faced with the prospect of not getting tenure, not getting grant funding and being forced out of academia, a human’s (scientist’s) rational calculations change. Suddenly, rounding that p-value from 0.054 down to ‘< 0.05’, or even adding some cases to the data, becomes a cognitively defensible decision.

Like any profession, science is competitive. Those who publish more, or get more citations to their publications tend to get ahead. Those who don’t, don’t. Professional athletes use incredible tactics to gain competitive advantage. Of course steroids are well-known, but other tactics are much harder to detect. For example, endurance athletes often use blood transfusions to boost recovery and performance. This is what it means to be human, scientist or not.

One of the most radical events in the social and behavioral sciences is Diederik Stapel’s career of faking data and results, published in at least 54 articles and consuming millions of euros in funding. It took almost two decades for critics and whistleblowers to finally out him. Psychology is not alone. In political science, LaCour and Green published a study in Science claiming that attitudes toward gay marriage could be changed if heterosexual people listened to a homosexual person’s story; it turned out LaCour fabricated the results of a follow-up survey that never took place, as uncovered by Broockman. In economics, Reinhart and Rogoff published numerous studies identifying a negative impact of high debt rates on national economic growth, when in fact several points in their dataset were conspicuously missing. When these values were added, as Herndon, Ash and Pollin showed, there was no longer support for their claim.

I suspect that most questionable research practices are not intentional. The sociopathic (~psychotic) Stapels of the world are rare. The pressure to find a job after doctoral studies, and then to get tenure, means a trade-off between conducting science in its ideal form – learning as much as possible about the existing literature on a subject, mastering the necessary methods, executing the research, possibly over several iterations, and facing the prospect of null results – and science in whatever form leads to publication as fast as possible.

This ‘fast as possible’ leads to amateur science. For example, in the rush to get my first publication I attempted to use “multiple imputation”, but lacked the time to properly learn the method. Instead I simply generated several imputed datasets, averaged them into a single dataset and ran the analysis on that, rather than analyzing each imputed dataset and pooling the estimates. This was not an intentional misuse of a method. It was a questionable research practice born of context. Think about matrix algebra. It is the basis of many advanced statistical techniques regularly used by social scientists. How many of us have a strong grasp of matrix mathematics? I don’t. And yet I’ve published several studies using structural equation modeling.
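To make the imputation example concrete, here is a minimal sketch of the standard alternative to averaging the datasets: analyze each imputed dataset separately and pool the results with Rubin’s rules. The toy data, variable names and use of statsmodels are my illustrative assumptions, not the analysis from that early paper of mine.

```python
# Minimal sketch (illustrative only): pool estimates from m imputed datasets
# with Rubin's rules instead of averaging the datasets into one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def simulate_imputed_dataset(n=500):
    """Stand-in for one imputed dataset: x weakly predicts y."""
    x = rng.normal(size=n)
    y = 0.3 * x + rng.normal(size=n)
    return x, y

m = 5  # number of imputations
estimates, variances = [], []
for _ in range(m):
    x, y = simulate_imputed_dataset()
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    estimates.append(fit.params[1])    # coefficient on x
    variances.append(fit.bse[1] ** 2)  # its squared standard error

q_bar = np.mean(estimates)            # pooled point estimate
u_bar = np.mean(variances)            # within-imputation variance
b = np.var(estimates, ddof=1)         # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b   # Rubin's total variance
print(f"pooled estimate = {q_bar:.3f}, pooled SE = {np.sqrt(total_var):.3f}")
```

Averaging the imputed datasets first hides the between-imputation variance, understating exactly the uncertainty that imputation is supposed to capture.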

WHAT & WHY in SOCIOLOGY

I am aware of nothing about sociology that suggests it needs a special adaptation of open science. Most research cannot be strictly delineated as sociology or not-sociology anyway. The boundaries of a discipline, especially within the social sciences, exist mostly in the institutional structure of universities. Eliason suggested that sociology is unique because it overemphasizes quantitative techniques, has needlessly long articles, lacks writing for the popular press and emphasizes research at the expense of teaching. In my experience the previous sentence perfectly describes all social and behavioral science disciplines at once. Even article length, something I thought might be peculiar to sociology, is not special. Political science and management research have very long articles. Consider that ASR and ESR, for example, limit articles to 9,000 and 8,000 words or less – relatively average, if not short, for social science.

Actually, I would argue the most distinctive things about sociology at the moment relate to open science. Two points in particular: (A) sociology has not had the same incredible scandals as other disciplines, and (B) sociology lags behind other social sciences in promoting open science.

A lack of scandals, not scandalousness

Could sociologists be more scientific and ethical in their research behaviors than those in other disciplines? Given identical institutional and career structures that favor productivity and innovation over replicating or checking each other’s work, I doubt it. Sociology journals and their editors, for example, rarely retract articles despite evidence of serious methodological mistakes. Carina Mood once accurately pointed out mistakes in the interpretation of odds ratios in some American Sociological Review articles, but the editors refused to publish her comments, much less consider retractions. She shared her exchange with ASR in an email to me and discusses some of it in a working paper. An exceptional recent event was the retraction of one of Legewie’s sociological studies, but this required that he himself initiate the retraction after someone pointed out errors in his work. Until 2020, the Retraction Watch database (www.retractiondatabase.org) listed no retractions from the top sociology journals, and only two from well-known ones: one in Sociology and another in Social Indicators Research.

This year, something new happened. Five articles published in Social Problems, Criminology, and Law & Society Review were retracted. These articles had a common co-author, Eric Stewart. It turns out that the data he provided were faked. There is no other logical conclusion after exceptionally rigorous work by Pickett (a co-author of Stewart) provided evidence that the Stewart studies had consistently incorrect means and standard deviations, unverifiable surveys (sources, methods, original materials), case numbers that changed while the statistical results stayed identical, datasets in which sometimes half the cases were duplicates, and impossible clustering structures in the data.

As an aside, one of Pickett’s findings was that the data had non-uniform terminal digit distributions. This means that the right-most digits in the reported statistics differ markedly from a uniform distribution. In particular, at the third digit the numbers 0-9 should each appear roughly 10% of the time. In one of the papers, zeros appear less than 2% of the time. If you are considering faking data, keep in mind that it is roughly impossible to do so in a way that cannot be detected by careful investigation. Any algorithm used to generate results (even copying and pasting) leaves its statistical marks.
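As a hedged illustration of this kind of forensic check, the sketch below tests whether the final digits of a set of reported statistics are consistent with a uniform distribution, using a chi-square goodness-of-fit test. The numbers are invented for demonstration; real checks of this sort draw on hundreds of reported coefficients, means and standard deviations.

```python
# Illustrative terminal-digit check: are the last digits of reported
# statistics roughly uniform (each of 0-9 appearing ~10% of the time)?
from collections import Counter
from scipy.stats import chisquare

# Made-up reported values standing in for a table of means and SDs
reported = ["3.142", "2.718", "1.414", "2.236", "1.732",
            "0.577", "4.669", "1.618", "2.502", "3.359",
            "2.919", "1.283", "3.871", "0.694", "2.047"]

last_digits = [int(value[-1]) for value in reported]
counts = [Counter(last_digits).get(d, 0) for d in range(10)]

# Chi-square goodness-of-fit against a uniform expectation across digits
stat, p = chisquare(counts)
print(f"digit counts = {counts}")
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```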

Perhaps we sociologists should be partly relieved, as this is just confirmation that we are as much a part of social science, and its problems, as any other discipline. However, the Stewart retractions, which should have been breaking news for sociology, went mostly unnoticed. The results of the investigation leading to the retractions are not published in a flagship sociology journal, where they belong; instead they appear in Econ Journal Watch – a venue unlikely to be read by any sociologist. Moreover, the retraction notices from the original journals do not cite outright fraud. Stewart continues to promote his work in print, claiming the main findings still hold, and several other of his studies with similar irregularities have not been retracted.

Another extremely important event was a case of ethnomethodological research conducted by Lindsay, Boghossian and Pluckrose in the mid-2010s. This is sociological self-examination at its best, although their backgrounds are mostly outside the discipline of sociology. They wrote a series of 20 papers presenting fake results and making arguably unethical claims. They designed the papers to mimic the style of articles published in journals well known for sociological research on topics of identity, hegemony and marginalization. Seven of their papers were published or had revise-and-resubmit recommendations before whistleblowing forced them to cancel the project. Some highlights: one paper contained sections from Hitler’s Mein Kampf. Another suggested men should be trained like dogs to prevent rape, and a third that white men should be forced to sit in chains on the floors of university classrooms instead of at normal desks. I am not commenting on the merit of these ideas, only noting that the papers all contained faked data, non-existent methods or conclusions not supported by the data. That these studies flew easily under the radar of a number of high-impact journals shows how easy it is to publish without doing the necessary research work.

Lagging behind closed doors

October 6th, 2020. I entered the search terms “open science” (with quotations to search the exact phrase) and “sociology” (with quotations to only return results that contain the word) into Google Scholar. Six pages of results without a single sociology journal. On page 7, Merton’s “Priorities in scientific discovery: a chapter in the sociology of science” appears. Publication date 1957.

In 1973, Wilson, Smoke and Martin found that 80% of studies published in the top three sociology journals of that time rejected the null hypothesis – in other words, they had p-values below a threshold. This suggests publication bias, if not p-hacking. Sahner (Table 5) analyzed all article submissions to the Zeitschrift für Soziologie, 1972-1980. Of those that contained significance tests, 70% were significant at p < 0.05, suggesting that authors prefer to submit significant results. More recently, Gerber and Malhotra (2008) reviewed articles published in the American Journal of Sociology, American Sociological Review and The Sociological Quarterly, looking specifically at the boundary of t = 1.96 (i.e., p < 0.05), and found that as many as four out of five studies were ‘significant’. This also suggests publication bias. Sociology has yet to have a systematic review of p-hacking that compares p-values within ‘significant’ results. Meanwhile, psychology and political science, for example, are teeming with papers on “p-hacking” and “publication bias”.
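Such a review could start with a caliper test in the spirit of Gerber and Malhotra: count how many published test statistics fall just above versus just below the 1.96 threshold. The sketch below uses invented z-values; with no publication bias the two counts should be roughly equal, and a binomial test flags a suspicious excess just above the line.

```python
# Illustrative caliper test: compare test statistics just above vs. just
# below the conventional z = 1.96 significance threshold. Invented data.
from scipy.stats import binomtest

z_values = [1.84, 1.97, 2.01, 2.03, 1.99, 2.08, 1.91, 2.02, 1.98, 2.11,
            1.89, 2.04, 2.06, 1.96, 2.09, 1.93, 2.00, 2.05, 1.99, 2.12]

caliper = 0.20  # width of the window on each side of 1.96
above = sum(1.96 <= z < 1.96 + caliper for z in z_values)
below = sum(1.96 - caliper <= z < 1.96 for z in z_values)

# Under no publication bias, 'above' and 'below' should be about 50/50
result = binomtest(above, above + below, p=0.5, alternative="greater")
print(f"just above: {above}, just below: {below}, p = {result.pvalue:.3f}")
```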

Sociology is rather intransparent. An estimated 78% of the major sociology journals have long-standing transparency policies. Unfortunately, these policies are mostly artifacts on paper without much enforcement. For example, only 37% of sociology articles published in mainstream journals between 2012 and 2014 include shared data and/or materials. In 2015, a small group of sociologists tried to obtain materials from the authors of 53 prominent sociological studies. They obtained these from just 19%, and only 20% of all the authors they contacted bothered to respond, despite several requests. This suggests sociologists are free to hide the data and materials that led to their findings without recourse, despite such guidelines.

Other disciplines have embraced the Transparency and Openness Promotion (TOP) Guidelines. The TOP guidelines, supported by the Center for Open Science, help journals improve science. Journals can become signatories of TOP, and in doing so they either adopt and enforce new transparency guidelines or certify that they already meet certain transparency standards. Most of the top psychology journals and several political science journals signed on. Other major journals, such as the Journal of Applied Econometrics and later the American Economic Review, adopted their own enforced transparency guidelines.

Until 2017, the only higher-ranking sociology journals that signed TOP were Sociological Methods & Research and the American Journal of Cultural Sociology. In 2017, Elsevier dictated that all its journals adopt the guidelines, and this added Social Science Research to the list. At the time of writing, the flagship journals American Journal of Sociology and American Sociological Review have neither signed TOP nor enforce their own guidelines. Of the top German sociology journals, the Kölner Zeitschrift für Soziologie und Sozialpsychologie is the only signatory.

If intransparency is pervasive in sociology, then research cannot be (a) checked for errors, (b) reproduced or (c) even simply critiqued. Even when exact reproducibility is not the goal, as is often the case with context-specific interpretive research, most research methods remain shrouded in mystery. This requires readers to take a giant leap to trust what others report. Part of the problem is that sociologists express little interest in reproducing or checking others’ work. There are few replications in the history of sociology and, if anything, they decreased over time until recently. For example, searching the articles in the American Journal of Sociology and American Sociological Review reveals 22 replication studies from 1950-1980 and only 8 from 1981-2010.

Something telling about the lack of willingness to open sociology comes from sociology’s most ‘powerful’ society, the American Sociological Association. In 2019, it collectively petitioned the US government not to make data transparency a requirement attached to grant funding.

NOW

What to do about it? Here are some simple steps to consider, aimed especially at sociologists. They are similar to steps advocated by many others for graduate students, for academic institutions, or for all of us.

Transparency

Make all the materials – research design, methodological steps, data (when legally and ethically possible), analyses, conflicts of interest and any software code – available online. The practical reason is that others can follow your work and expand on it in the future. Doubly practical is that you don’t need to respond to email requests for your materials. So long as you are not a deceitful sociopath, you want others to take an interest in your work and to replicate it. Even if a study seems to ‘prove you wrong’, the fact that it replicated your work is evidence of the importance of your work and its topic. You are a piece of a much larger community of knowledge construction. Constructive exchange can lead to collaboration with critics to generate better future research without personal conflicts.

The immediate value of transparency is that being transparent forces you to be careful. Knowing everything will be public information increases the value of attention to detail. Put in its converse: not sharing your workflow publicly can indirectly foster lower quality standards, in addition to creating possibilities for misconduct. All this enables rather than hinders knowledge, and increases inter-researcher trust.

Transparency should not be much extra work. During the research process you should take high-quality notes for yourself. You will often return to your data and research in the future and thus need those notes. This is a best practice with or without sharing your work. When you engage in this best practice, you have a deep familiarity with your data, can draw meaningful conclusions and, in the case of qualitative research, can easily redact identifying characteristics in your data. If you cannot share data, you can still reveal the design and expectations, or allow controlled access to the data. Human subjects must be protected at all costs, and yes, this often means data sharing is not possible.

The ‘transparency work’ of the qualitative research process can be reduced by software platforms that provide semi-automated annotation and coding. Even if you do not share data, you can build an open workflow from the beginning that allows others to understand every step of the data-generating process. However, this work can also be extremely tedious, and the incentives are not immediately clear. More fruitful discussion, if not research-assistant funding, is needed in this area moving forward.

If you are using quantitative methods, immediately stop hiding your work. If you ran 100 models and 99 did not support your hypothesis, then this is your finding. If a journal does not want to publish this, point the editors and reviewers to the importance of null results and the problems of publication bias. If they still refuse, consider boycotting this journal and sharing your negative experience in public.
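One concrete way to stop hiding your work is to report every defensible specification rather than the single supportive one, in the spirit of a specification-curve or multiverse analysis. The sketch below is a toy illustration with simulated data and arbitrary control variables, not a recipe from any particular study.

```python
# Toy specification-curve sketch: run all combinations of control variables
# and report the full distribution of the focal coefficient on x.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
controls = {f"c{i}": rng.normal(size=n) for i in range(4)}
x = rng.normal(size=n)
y = rng.normal(size=n)  # simulated outcome with no true effect of x

names = list(controls)
estimates = []
for k in range(len(names) + 1):
    for subset in combinations(names, k):
        X = np.column_stack([x] + [controls[c] for c in subset])
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        estimates.append(fit.params[1])  # coefficient on x in this model

print(f"{len(estimates)} specifications estimated")
print(f"median = {np.median(estimates):.3f}, "
      f"range = [{min(estimates):.3f}, {max(estimates):.3f}]")
```

Reporting the whole range of estimates, including the null ones, is the honest version of the ‘100 models’ scenario above.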

Preregistration

Preregistration can drastically reduce bias and hacking because it happens prior to collecting data. When you clearly outline your plans, including how you will analyze the data, before conducting the research, there is little room for hacking so long as you stick to the plan. Moreover, preregistration can be done directly with a journal, although sociology journals are laggards here because they generally do not offer this option. In a preregistration, even if you just put a pre-analysis plan or a research design and goals online, you must think much harder about factors such as meaning, causality, inter-subjectivity and ‘how the world probably works’. You cannot hide behind results in this process, and therefore you must anticipate counterarguments and explore counterfactual logic. This improves the clarity of theory and research, creating an immense gain in efficiency and effectiveness.

Regardless of the methods you use, there are many opportunities to take advantage of preregistration. Some forms of qualitative research, for example those involving grounded theory and interpretivist methods, require decisions during the research process that cannot be foreseen. This uncertainty can be outlined in a preregistration that states explicitly when flexibility is and is not admissible. Moreover, simply putting a qualitative research plan online prior to conducting the research is equivalent to a pre-analysis plan. This need not compromise your data collection, because you can register the plan on a platform like the Open Science Framework and then embargo it, so that it is preserved but not made public until after the research concludes. Some scholars using quantitative methods might assume that preregistration is not possible because they work with secondary survey data. But the regularity and release of these surveys are known in advance, and these scholars can preregister their studies before the next round of data is collected, knowing which questions and countries will be available.
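As a rough illustration of what gets fixed in advance, here is a hypothetical skeleton of a pre-analysis plan written as a simple data structure. The field names and contents are my own invention, not a required format of the Open Science Framework or any registry.

```python
# Hypothetical pre-analysis plan skeleton; field names are illustrative only.
import json

pre_analysis_plan = {
    "research_question": "Does X affect Y among population Z?",
    "hypotheses": ["H1: the coefficient on X is positive"],
    "data": {
        "source": "next scheduled wave of an existing cross-national survey",
        "inclusion_criteria": "respondents aged 18+ with non-missing X and Y",
    },
    "variables": {
        "outcome": "Y",
        "predictor": "X",
        "controls": ["age", "gender", "education"],
    },
    "analysis": "OLS with country fixed effects; two-sided test, alpha = 0.05",
    "flexibility": "coding of open-ended answers may be refined; all other steps are fixed",
}

# Freeze and time-stamp the plan before seeing the data (e.g., on a public
# registry, possibly under embargo), and report any later deviations explicitly.
print(json.dumps(pre_analysis_plan, indent=2))
```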

Decommodify science

The central functions of the scientific publishing industry are printing and disseminating knowledge, which historically solved the problem of how to share knowledge across universities and countries. The business functions of publishing, however, come with harmful byproducts. Publishing firms extract profits from scientists twice. First, scientists provide free labor in the form of editing and peer reviewing, in addition to producing the results for the articles to be printed. Next, researchers, or their employers, must purchase the product of their own labor – labor not paid for by the publishers. The journal article as a product comes at a high cost, and often only in bundles of journals, meaning that universities have to pay for extra material their scholars do not use.

Sometimes publishing houses neglect science in favor of profits, and Elsevier has been particularly problematic. It has sponsored weapons fairs, created and sold ‘fake’ journals to pharmaceutical companies to publish ‘results’ supporting their drugs, purchased the Social Science Research Network and then paywalled or removed legally shared working versions of articles, charged fees for open access articles, and actively lobbied against open access legislation (for a concise summary with links see Tal Yarkoni’s blog entry). This brought massive counter-movements against Elsevier in the scientific community (for example, The Cost of Knowledge). You can take action and refuse to review for or publish with unethical publishers if you feel it is justified. To do so, inform yourself about the publishers. Your library is a good source of information, because it deals with the business side of publishers.

If you are in Europe, check whether your institution is a signatory of ProjektDEAL. A consortium of universities is collectively bargaining with publishers via ProjektDEAL, demanding that publishers reduce fees and eliminate the double payment by universities. The primary objective is that publishers sign country-wide subscription agreements that enable access for all universities at once. Wiley agreed to such a model, and this marks a paradigm change: it indicates how the publishing industry could look in the future, so long as the Open Science Movement proceeds. If you are not in Europe, consider starting a similar initiative. For example, the entire University of California system – 10 universities, 5 medical centers and several research institutions that collectively produce roughly 10% of the world’s academic publications – recently followed ProjektDEAL’s example and boycotted Elsevier.

You can also work around the publishing business. Prior to submitting an article, or after it is published, you have the right to share a preprint – a draft of the paper that you share publicly, so long as it is not published elsewhere or sold for profit. Posting preprints reduces the power that publishing firms have over science, in addition to giving others immediate access to your work. But simply posting preprints on your academic website is not open enough. Use a preprint service, for example through the Open Science Framework, to ensure that your preprints appear in search engines such as Google Scholar. SocArXiv, for example, is the go-to location for sociology. This enables scholars to find and directly access research results based on the words they contain, uninhibited by paywalls – a crucial aspect of practicing sociology in the Global South. Preprint services are free and open access.


Novel coronavirus pandemic and the limits of open science

German version available.

On January 30th, 2020, the WHO declared a global health emergency based on scientific evidence of a rapidly spreading coronavirus from the SARS family (Sars-CoV-2, and the disease Covid-19). The evidence the WHO used to declare this emergency came almost entirely from Chinese data.

The Chinese data demonstrated an alarming spread rate in January, as shown in Figure 1. Without the Chinese data, there would have been little cause for alarm as all other countries combined had barely 90 known cases in that period, and not a single death.

Figure 1. The spread of Covid-19 leading to the WHO emergency declaration on January 30th. Johns Hopkins data.

In early January, the Chinese government took measures to block news and data1 related to the virus; however, Chinese scientists still managed to follow open science practices (updated news on this here), including sharing partial gene-sequence data with the world. This allowed the WHO to take appropriate measures and enabled scientists in Germany to develop tests to identify the novel coronavirus. The German team shared their methods publicly on the WHO website on January 13th. Technology and global communications have evolved to the point where governments can slow, but not stop, the free flow of information.

Sharing all data and findings is the best form of science, but it is not always practiced. The Open Science Movement has the goal of changing this. If everyone in the world has equal access to the theory, methods, data and results of all other scientific research, quality and efficiency increase exponentially. This is evidenced in the open science practices behind the global fight against Covid-19, practices that saved and will save lives, potentially millions of them.

Figure 2 is a simulation predicting how many people would die of the virus in any given country depending on when governments follow WHO-recommended operating procedures, as in: issue stay-at-home orders, engage in widespread testing, and quarantine both individuals with the virus and those they were in contact with. ‘Day 0’ in Figure 2 is the moment when there are at least 3 symptomatic cases per million people, usually about 2 months after the first case in a country, but much faster if several cases arrive at once.

Figure 2. The impact of government intervention in reducing deaths from Covid-19. Source: Gabriel Goh & author’s calculations
(*indicates a predicted death toll)

The reader should keep in mind that Figure 2 is a simplified simulation. The reality of the situation is extremely complex. In particular, governments do not go from normal operations to full lockdown of society in one day; this usually proceeds in stages. Nonetheless, this simulation comes from the best-known predictive epidemiology models and helps demonstrate how even one day of indecision can cost thousands of lives.
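To give a feel for the logic behind Figure 2, here is a heavily simplified, discrete-time SIR-style sketch of how the timing of intervention changes the death toll. It is not Gabriel Goh’s calculator: an R0 of roughly 2.2 and a 2% fatality rate loosely follow the footnote parameters, while the infectious period, the post-intervention R0 and everything else are illustrative assumptions.

```python
# Simplified discrete-time SIR sketch (illustration only, not Goh's model):
# later intervention lets the epidemic grow exponentially for longer.
def deaths_with_intervention(intervention_day, population=106e6, days=365):
    s, i, r = population - 1.0, 1.0, 0.0    # susceptible, infected, resolved
    gamma = 1 / 10                          # assumed 10-day infectious period
    beta_before = 2.2 * gamma               # R0 ~ 2.2 before intervention
    beta_after = 0.7 * gamma                # assumed R0 ~ 0.7 after intervention
    for day in range(days):
        beta = beta_before if day < intervention_day else beta_after
        new_infections = beta * s * i / population
        resolved = gamma * i
        s -= new_infections
        i += new_infections - resolved
        r += resolved
    return 0.02 * r                         # assume 2% of resolved cases die

for day in (30, 60, 90):
    print(f"intervene on day {day}: ~{deaths_with_intervention(day):,.0f} deaths")
```

Even in this toy model, each month of delay multiplies the death toll many times over, which is the qualitative point of the figure.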

In response to this outbreak in China and the rapid appearance of Covid-19 globally, South Korea followed the WHO’s standard emergency operating procedures. Meaning: test everyone possible, isolate all cases, restrict travel and gatherings, close non-essential businesses. The virus was contained and only 200 people died. Of course, previous virus outbreaks had heightened South Korea’s preparedness. Germany was also well prepared, given its rapid development of tests and because it learned from the experience of Italy as Europe’s ‘ground zero’.

Roughly speaking, Italy crossed the ‘Day 0’ threshold in Figure 2 around February 15th. It was the least prepared as a country because it was the first in Europe, and a place where people from around the globe flock as tourists if not football fans. Thus, Italy’s case is not a story of major government failure, also given that there were reasons to be suspicious of the Chinese case.

The ‘Day 0’ threshold came around March 2nd in Germany, and ‘Day 0’ was around March 8th in New York, at least on paper. But New York only started testing people successfully around March 1st, because the CDC’s own test kits released in early February had failed; New York’s actual ‘Day 0’ was probably mid-February or earlier. Even so, there was plenty of time, in ‘pandemic terms’, to source the accurate tests being deployed in the rest of the world since January. This did not happen in New York or in the US as a whole. Thus, New York was caught completely unprepared, but not because the virus was a surprise arrival.

Combining these data with the data from China, South Korea and several other countries, the WHO upgraded the global emergency to a global pandemic on March 12th. New York had issued a state of emergency but only gave stay-at-home orders as of March 20th. It was not until a week later that most schools were closed and police were authorized to enforce these orders (the blue arrow around ‘Day 31’ in Figure 2). Despite massive open science efforts channeled through the WHO, New York and much of the US simply failed to heed obvious scientific evidence and predictions. This is even more shocking because Seattle, not New York, was ‘ground zero’ in the US. Washington State as a whole implemented early and successful emergency measures.

Reviewing the failures of countries, states or cities to immediately take drastic emergency measures before or on January 30th (global emergency) or March 12th (pandemic) is not the subject of this blog post. The world had access to all the same data and knowledge of how to test for the virus thanks to open science practices, the WHO and several partner organizations and websites.

The message I want to convey is that open science is not enough. Its limits are found in governments. In many countries, science has little place in government decision making. This is perhaps understandable in a dysfunctional authoritarian regime where nearly all political decisions are made to maintain and concentrate power. This is certainly a reason that the worst horrors of the virus are yet to come in sub-Saharan Africa and Central Asia. But it is shocking in democracies where there are throngs of scientists and agencies tasked with monitoring and advising the government on what to do to protect its people.

The United States had ample information – that Covid-19 was in the US and spreading rapidly, how to design effective tests, and exactly what to do to reduce the spread of the virus and its death toll – months before any major actions were taken; the same information Washington State used to stem the spread. But this scientific information, shared at a scale and speed not seen before in world history, was simply not enough.

The Open Science Movement has ethical principles underlying its open access, data, methods and sharing recommendations. It is not just that open science practices make science more reliable and effective; they promote social justice, or scientific justice if you will. When every scientist in the world can access all the information that every other scientist in the world has, there is scientific equality. While rich universities boycott Elsevier, poorer universities cannot even afford a subscription. Thus, open access would bring a global North-South and an endowed-versus-not-endowed university equality to the world. But it cannot bring justice to those who are potential virus victims.

In the case of the Covid-19 pandemic, open science looked at first as if it would undermine the bickering and buffoonery of governments, but it could only ring the doorbell. Some governments simply refused to open the door, to take action. This raises the question of whether the Open Science Movement needs to adopt principles of political action that extend beyond policies promoting transparency and reproducibility. Does the Open Science Movement need to push governments to adopt administrative, if not constitutional, procedures that make governments accountable to scientists in a natural disaster or emergency like a hurricane or pandemic?

I say yes, from an ethical standpoint. But it’s not so simple. As soon as we start pushing things like procedural reform, deep-pocketed special interests get involved and it gets ugly. As scientists, we are not likely suited to mudslinging and political maneuvering. Not to mention that the more time we spend on lobbying, the less time we have for science. There are some of us with the ability to lead the Movement and influence governments, but most of us are ill-equipped to combat the powers-that-be behind politics.

That brings up the end game question: Is it enough to give the right answers to governments even if they ignore them? Have we done our duty as scientists if we just ‘show up at the doorstep’ and let government officials decide if we get to come in?

1 The original news article was deleted from the Chinese News Agencies’ website but can be found in the Internet Archive.

2 Source: Goh, Gabriel. “COVID Epidemic Calculator”. Day 0 is at least 3 symptomatic cases per million people, meaning there are potentially hundreds infected given the incubation period. Parameters used for predictions: population of 106 million, a single initial case, contagiousness per person of 2.2, rate of transmission 0.73, incubation period 5.2 days and mortality rate 2%.

A note from me: I have sought to capture the empirical evidence and historical timeline as accurately as possible, but any errors in this blog post are my own.

