Installment V. If Only Science Had Superheroes
Installment IV. Science Mart
Installment III. Dystopian Tenure
Installment II. In Science We Trust
Installment I. Impact Factor Fetish
Was the answer I got from Andrew Abbott at a Hogwarts-style dinner I was fortunate enough to attend a few years back, when I asked him after the meal, “What are your thoughts on open science?”
That’s right. Open science. Say something paradigm changing.
We have so many results and coherent arguments about the status of science. Making an impact here requires synthesizing them into something new. Not purely constructive in nature, but also subtly undermining all that came before. Something was missing all along, and this new perspective is it. That’s the problem.
Our formula for declaring theory and opinion in scientific writing is that it must be big. Paradigm impacting. Makes you think. It expresses the scientist-author’s authoritative intellect and ingenious perspective. With seemingly effortless written words, the scientist-author lays down an argument so convincing that it must be true. It was true all along. It is so obvious; why didn’t I see it? thinks the average reader.
If Diederik Stapel had never been caught, he would be a model for success. THE model for success in social psych. You don’t know what the formula for success is? Ask an economist. There are three, maybe five journals in which you need to publish very early on. You have to have a job market paper that is known. You have to have connections. Otherwise you are on the sidelines, building an exit strategy. This is roughly true in the other social and behavioral science disciplines, yet we in sociology or political science will mostly deny it. We are better than those status maniacs.
But it is publish-or-perish. And this is the model.
To publish requires word wizardry. The ability to take any findings and craft an argument that makes them seem relevant is more valuable than research method skills. To make any findings seem essential to our fundamental understanding of something important. That is how we do science. Framing.
Andrew Abbott has the goal of out-thinking, out-reading, out-writing and out-arguing everyone he encounters. Ask him if you ever meet him; he will not deny it. He is easily considered by most to be one of the greatest living sociologists. He was editor of AJS, and he published book reviews of random, late-night selections from the Oxford library under a pen name, because it was fun. Because that is what legends do, and then tell about. He once wrote that sharing all material to be reproducible and replicable would make science boring.
As long as this is the model for success, we are not making real progress.
I sat at a dinner once with Ronald Inglehart. Every student, postdoc and professor there gawked with wide eyes. What was so appealing and enthralling? Why was this scientist a rock star for us? Was it his findings? Or was it his status? I think status. I think that is what scientists are seeking. I observe it. I feel it. If you have to have certain publications not just to pursue a scientific career, but to be at the ‘forefront’ of a subfield, then does it really matter what is in those publications? As long as you get to claim that status, that is it. You win. Why do athletes take steroids and cheat? Is it because of their great passion for the sport? Or is it to win status?
I entered a Master’s program in order to learn about the problems associated with a topic that was pressing for social justice and democracy. I still want to do this. But status was dangled in front of my eyes. ‘Publications are the currency of our trade.’ The professor who told me this was absolutely correct. Now I want to do open science, but I have to perform wizardry with words to be recognized as central to the open science movement. This makes me uneasy to the point of nausea.
How can science become trustworthy and effective, if we have to be witty, cutthroat and/or famous in order to take part in it? How can we overcome this delusion? When will we stop and say enough is enough?
This type of honesty is exactly what John Levi Martin advised me not to put out on the intrawebs, because it will come back to bite me someday. Image (presentation) is everything. Warhol or Abbott, they figured that out long ago. I guess then, for my own peace of mind, I extend my hand. Bite away, you biters.
Show me the data
A quick guide to human impact on the environment
The amount of carbon we’ve released into the atmosphere tracks the Industrial Revolution and subsequent industrial growth almost perfectly. This is easily estimated from ice core samples.
Ice core samples let us measure the natural random and cyclical variation of carbon in the atmosphere going back 800,000 years. Never in that span have we seen carbon levels like those since the Industrial Revolution.
A striking correlation between emissions and global average temperatures. Although temperatures fluctuate regularly, their fluctuations track upward following human-based carbon emissions.
These particles are produced only by human activity. Note that there are several parts of the ocean where we find more than 10 pieces per cubic meter (dark purple circles and dark red stars). There are over 1.3 billion cubic kilometers of water in the ocean.
We do not have accurate data from long ago, but since 1990 the Arctic ice shelves have declined dramatically in size. This also contributes to rising sea levels. Here is a comparison of ice in summer and winter in 1990 versus 2020.
A review of all climate science studies published in peer-reviewed journals between 2005 and 2015 suggested that somewhere between 83 and 97% of climate scientists agree that humans caused the major changes we have observed in our environment over the last 100 years [1].
A newer study of the just over 11 thousand articles published across all disciplines and journals in 2019 revealed that this consensus is now 100% (within a natural margin of error) [2].
A case of a fake journal that passed Google Scholar’s bots and algorithmic checks to become indexed. The journal contained articles that were completely plagiarized but assigned new titles, abstracts and authors.
See the updates at the end of this post for a table of plagiarized papers, and for a small win for open science after the domain host revoked the content at the plagiarized papers’ URLs.
A colleague of mine was searching Google Scholar using certain words and the number two hit appeared with the following title and direct download link.
The PDF looks like a journal article. The journal has an ISBN instead of DOIs, but this is not unheard of. It even has an impact factor listed: 7.11. It must be good.
It turns out that this seemingly interesting paper, with an alleged author who is an expert in this area, is plagiarized.
How do I know it is plagiarized? Just ask the real authors of the paper, which appeared with a different title and abstract in Social Science Research one year earlier. Oh, and with identical word-for-word text, aside from some mistakes relating to equations – possibly a failed scraping algorithm.
The journal title is Journal of Current Research and Review, but it is extremely difficult to find through an internet search. Finding it requires endless scrolling (as of 10.08.2023), or removing the ending of the download URL to arrive at the index page.
Scrolling down the home page reveals the “Latest Articles”. Here I find the same title that appeared in Google Scholar. However, there is no download link, only the title and abstract. In other words, the URL that Google Scholar found to download the paper is not accessible through any link on the journal’s website.
How is it possible that Google Scholar would pick up and index such garbage?
I asked ChatGPT how Google Scholar works. The answers suggest that someone, or something, with advanced knowledge and programming skills provided the right meta-data and materials on the journal webpage to pass the automated Google Scholar bots that constantly scrape the web for academic papers and publications.
I prompted ChatGPT with “How does a journal get indexed in Google Scholar?” (09.08.2023) and received the following answer. Note that I removed the descriptions of each point, as they are mostly self-evident.
Google Scholar indexes scholarly literature, including journal articles, conference papers, theses, and other academic resources. In order for a journal to get indexed in Google Scholar, there are several steps and criteria to consider:
It’s important to note that Google Scholar’s indexing process is automated, and there is no formal application process for journals to get indexed. Google Scholar’s algorithms discover and index content based on various factors. However, journals can follow best practices to increase their chances of being indexed and improving their visibility within Google Scholar’s search results.
The journal website and meta-data had to pass Google Scholar’s algorithm. This is no small feat. Although Google does not reveal any usage of AI to evaluate journals for indexing, it clearly has an advanced system of scraping, filtering and crawling. Whoever, or whatever, designed this journal – really, this website that looks like a journal – understood how to pass the test. For example, the “Latest Articles” appear to be from “Volume 14”, suggesting to potential crawling bots that the journal may have been published for 14 years now.
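One way to see what a crawler sees is to inspect the meta tags a journal page exposes. Google Scholar’s public inclusion guidelines mention bibliographic meta tags such as citation_title, citation_author and citation_pdf_url, so a site built to be indexed would serve these. Below is a minimal sketch in R, assuming the rvest package and using the article URL discussed below; it simply lists whatever meta tags the page declares, nothing more.

```r
# Minimal sketch: list the <meta> tags a journal article page exposes.
# Google Scholar's inclusion guidelines look for bibliographic tags such
# as citation_title, citation_author and citation_pdf_url.
library(rvest)

page <- read_html("https://zapjournals.com/Journals/index.php/jcrr/article/view/800")
meta <- html_elements(page, "meta")
data.frame(name = html_attr(meta, "name"),
           content = html_attr(meta, "content"))
```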
This is of course false. There are no other articles readily accessible from the journal’s website other than the two that appear. The other article is also plagiarized, in case you were wondering. But, a closer look at the URL reveals that the article I am discussing has the number “800” at the end of its URL.
https://zapjournals.com/Journals/index.php/jcrr/article/view/800
Changing this number yields other papers, at least as far back as number 795; IDs prior to that yield 404 errors. Moreover, I cannot find the full papers for 795-798, only their titles and abstracts.
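For anyone who wants to repeat this check, here is a small sketch of the probing loop, assuming the httr package (the ID range is simply the one from my manual search):

```r
# Probe consecutive article IDs and print the HTTP status each returns;
# 404s mark IDs with nothing behind them.
library(httr)

base_url <- "https://zapjournals.com/Journals/index.php/jcrr/article/view/"
for (id in 790:805) {
  code <- status_code(HEAD(paste0(base_url, id)))
  cat(id, "->", code, "\n")
}
```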
The journal website’s main page yields something even stranger. It looks like something a computer science student might create as an example page, to sell as a template or to demonstrate the services on offer for a website-construction gig. The content has absolutely nothing to do with journals, academia or publishing.
It even comes with fake testimonials. I wonder whose pictures were stolen for this…
The real question for me is: what to do about this? Clearly this is fraud and constitutes both ethical and legal infringements on science. Looking up the journal’s domain host using ICANN reveals that it is the Lithuanian company Hostinger.
This company’s website suggests they are legitimate. They provide contact information to report abuse. So that is what I did, by sending them the information in this blog post.
Hostinger replied within two days and told me they take abuse of their services very seriously and asked me for more evidence so they could pursue the case. I gave them the two links for the plagiarized papers and their original versions published one year earlier in Social Science Research.
https://zapjournals.com/Journals/index.php/jcrr/article/download/799/1237
https://zapjournals.com/Journals/index.php/jcrr/article/download/800/1238
Word for word plagiarized from:
https://www.sciencedirect.com/science/article/pii/S0049089X22000321
https://www.sciencedirect.com/science/article/pii/S0049089X22000333
For all the negative publicity surrounding Elsevier, they are still a player in the academic world. In 2020 they owned roughly 16% of the academic publishing market. If there is anyone who would want to prevent abuse of their services, including plagiarism of their work, it is Elsevier. They have a trove of lawyers fighting and winning battles to protect their content across the world. As the two plagiarized papers that I can download are from the Elsevier journal Social Science Research, it made sense to contact them as well. Elsevier’s due process suggests contacting the journal editor first. Thus, I have sent this information to the lead editor.
The journal also lists an ISBN. But this number is fake, and returns an invalid search with the ISBN lookup tool.
Google certainly would not want its products indexing fake journals with plagiarized papers, so I took the liberty of contacting them as well.
What is striking about the journal is that they have a long editorial board list. Internet searching reveals that these are real scholars. I also contacted them. I will continue to report on this case as it unfolds.
The question for me is: what motivated someone to create this site? There is clearly no profit associated with it. There is also no status gain, because the plagiarized papers have authors assigned to them who are not the original authors and clearly not players supporting the fake journal; they are highly established scholars in their respective sub-fields. These falsely assigned authors are perfect examples of what an AI might choose to assign to a certain topic. I tested this by asking ChatGPT if Seamus McGuinness could have written the abstract. The response points at AI as a source for potentially re-writing the abstracts of the plagiarized papers and for finding suitable authors to assign to them. ChatGPT said:
Yes, Seamus McGuinness could be a potential author who might have written this abstract. Seamus McGuinness is known for his research on labor market issues, including education-job mismatches, gender disparities, and remote work. His expertise aligns with the themes discussed in the abstract, making him a plausible candidate as one of the authors who could have written it.
It remains a mystery for now what is up with this website and the fake journal. Was it a computer science project that accidentally got picked up by Scholar, one that was never intended for public consumption? Was it an attempt to create a journal, but try and hide the real content of the journal until it could pick up real submissions? Was it entirely AI generated, to showcase the power of an AI?
[Update 14.08.2023]
Thanks to Random Cat on Twitter, I learned that there are more papers than just two. Using Google Scholar they searched by journal.
This allowed me to compile a table of plagiarized papers.
Table of Plagiarized Papers in JCRR, found via Google Scholar
| Original Title | Original Author(s) | Original Journal | JCRR Title | JCRR Authors | Link to Original Article | Link to Plagiarized Article |
|---|---|---|---|---|---|---|
| The motherhood wage gap and trade-offs between family and work: A test of compensating wage differentials | Nick Wuestenenk & Katia Begall | Social Science Research | COMPENSATING WAGE DIFFERENTIALS AND THE MOTHERHOOD WAGE GAP: A COMPARATIVE ANALYSIS | HK Kleven & CL Landais | link | link |
| Conflicting signals: Exploring the socioeconomic implications of gender discordant names | Andrew Francis-Tan & Aliya Saperstein | Social Science Research | BREAKING BOUNDARIES: EXAMINING THE INTERSECTION OF GENDER DISCORDANT NAMES AND SOCIOECONOMIC ATTAINMENT | AL Roberts & M Rosario | link | link |
| Gender overeducation gap in the digital age: Can spatial flexibility through working from home close the gap? | Ana Santiago-Vela & Alexandra Mergener | Social Science Research | BRIDGING THE GENDER OVEREDUCATION GAP: EXPLORING THE ROLE OF WORKING FROM HOME IN THE DIGITAL ERA | SMG McGuinness | link | link |
| How the Great Recession changed class inequality: Evidence from 23 European countries | Jad Moawad | Social Science Research | CLASS INEQUALITIES IN THE WAKE OF THE GREAT RECESSION: A STUDY OF 23 EUROPEAN COUNTRIES | PD Allison | link | link |
| Higher education and high-wage gender inequality | Natasha Quadlin, Tom VanHeuvelen & Caitlin E. Ahearn | Social Science Research | ASSESSING THE CONTRIBUTION OF EDUCATION TO GENDER WAGE DISPARITIES IN HIGH-EARNING PROFESSIONS | J Jacobs | link | link |
| Knowledge Discovery: Methods from data mining and machine learning | Xiaoling Shu & Yiwan Ye | Social Science Research | TRANSFORMING RESEARCH: EXPLORING THE INTERPLAY OF DATA MINING, MACHINE LEARNING, AND KNOWLEDGE DISCOVER [b] | Robert M. Bond & Christopher J. Fariss | link | link |
| SOCIAL MEDIA ADDICTION: PROPOSED INDICATORS AND STAGES | Zakaria I. Saleh & Omar Zakaria Saleh | International Journal in Commerce, IT and Social Sciences | UNPACKING THE CYCLE OF SOCIAL MEDIA ADDICTION: UNDERSTANDING SYMPTOMS, PROGRESSION, AND RECOVERY [a] | OZ Saleh | link | link |
[a] Published twice in JCRR with different authors
[b] Not indexed in Google Scholar
All are from Social Science Research except one. There are also other papers for which I cannot find an original. These might actually be original papers, or stolen working papers that are not easy to find online.
Interestingly, some papers that appear not to have been plagiarized display differently in PDF form, including contact details for the journal. I thus emailed the journal as well, to inform them that they are committing ethical and legal fraud.
[Update 23.08.2023]
Hostinger investigated the reported problem, determined that the user of their domain had violated the terms of ethical and legal usage, and removed the plagiarized papers. However, the journal website still appears. In other words, a journal that obviously and intentionally plagiarized several articles, possibly to boost its reputation and encourage others to submit and thus pay the 45-dollar fee, is still out there lurking. Moreover, this journal is part of a larger company called Zenodo Publishing. On their main page they list dozens of journals. If I had to guess, I would assume these journals are also predatory and may contain plagiarized content. Only further investigation will prove if this is true.
More to come.
I asked ChatGPT to talk about open science. This post is the unabridged conversation. ChatGPT is a trained algorithm using machine learning. It is what most would refer to as Artificial Intelligence. And intelligent it apparently is.
~
Nate: I am teaching a course on open science. Could you summarize the key arguments of the open science movement?
ChatGPT: Certainly! The open science movement aims to promote greater transparency, collaboration, and accessibility in scientific research. Here are some key arguments and principles of the movement:
Overall, the open science movement seeks to transform scientific research into a more transparent, accessible, and collaborative endeavor, with the ultimate goal of advancing knowledge and benefiting society.
Nate: As I said above, I am teaching a course on open science. It is targeted at university students. Could you recommend some introductory readings on this topic?
ChatGPT: Certainly! Here are a few introductory readings on open science that you may find helpful for your course:
These are just a few examples of the many resources available on open science. Depending on the focus of your course, you may also want to look for readings on specific topics such as open access publishing, preprints, or data sharing.
Nate: What are some actions students can take during their studies that will make their research more open? Open, as in more aligned with the goals of open science.
ChatGPT: There are several actions that students can take during their studies to make their research more open and aligned with the goals of open science. Here are a few suggestions:
These are just a few examples of actions that students can take to make their research more open and aligned with the goals of open science. By promoting transparency, reproducibility, and openness in their research practices, students can help advance the principles of open science and contribute to a more open and collaborative research culture.
Nate: Are there arguments against practicing open science? If yes, where do these arguments come from?
ChatGPT: Yes, there are arguments against practicing open science, although they are often debated and may not be universally accepted. Some of the main arguments against open science are:
These arguments against open science are often debated, and there are counter-arguments and efforts to address some of the concerns raised. However, it is important for researchers to be aware of the potential challenges and risks associated with open science practices, and to carefully consider how to balance the benefits and costs of openness in their own research.
Nate: Can you recommend any readings that are highly critical of open science?
ChatGPT: While there are some critiques and debates around various aspects of open science, it is important to note that these are often specific to certain aspects of open science and not necessarily a wholesale rejection of the movement. However, here are a few examples of critical perspectives on certain aspects of open science:
It is important to note that these are just a few examples of critical perspectives on open science, and that there are also many scholars and researchers who support the principles and practices of open science. As with any complex and rapidly evolving movement, there are likely to be ongoing debates and discussions about the benefits and challenges of open science, and it is important for researchers to engage critically with these issues and develop their own perspectives.
Nate: Do you think AI can improve the implementation of open science practices? If yes, how?
ChatGPT: Yes, I believe that AI (artificial intelligence) can potentially improve the implementation of open science practices in a number of ways. Here are a few examples:
Of course, it is important to note that AI is not a panacea for all the challenges of open science, and there are still many issues that require careful consideration and human input. However, AI can potentially help make open science practices more efficient, effective, and accessible, and there is a growing interest in exploring the use of AI in this context.
Nate: Do you think AI poses a threat to science, in particular principles of open science?
ChatGPT: While there are concerns about the potential risks and challenges associated with the use of AI in science, I do not believe that AI itself poses a direct threat to the principles of open science. Rather, it is the way in which AI is developed, deployed, and used that can potentially pose challenges or risks to open science practices.
One concern is that AI can potentially exacerbate existing inequalities in scientific research. For example, if AI tools are developed and trained on biased data or by biased algorithms, they may perpetuate or amplify existing biases in research. This could have a negative impact on the principle of equity, which is a core component of open science.
Another concern is that AI could be used to automate aspects of research that require human judgment and interpretation, potentially leading to a loss of creativity, intuition, and critical thinking in science. This could impact the principle of transparency, as researchers may rely on automated tools rather than engaging in open and collaborative inquiry.
Additionally, there are concerns about the potential misuse of AI in science, such as the use of AI to generate fake research or manipulate data. This could undermine the principle of integrity, which is a core component of open science.
Overall, I believe that AI has the potential to both enhance and challenge the principles of open science, depending on how it is developed and used. It is important for researchers and stakeholders to engage in critical discussions and careful evaluation of the ethical, social, and legal implications of using AI in science to ensure that the principles of open science are upheld and advanced.
Nate: Speaking of reproducibility. If I asked you the same questions in the future, would you give identical answers?
ChatGPT: As an AI language model, I would provide the same response to the same question every time, assuming that the context and information available to me are the same. However, it is important to note that my responses are generated by machine learning algorithms and are based on patterns and relationships learned from large datasets. While I strive to provide accurate and reliable information, my responses may not always be perfect, and they may be subject to limitations or biases in the underlying data or algorithms. Additionally, if the context or information available to me changes, my responses may also change accordingly.
Nate: Are you processing new information? In other words, still learning and evolving?
ChatGPT: Yes, as an AI language model, I am designed to learn and evolve over time as I process more information and receive updates from my creators. My responses are based on the patterns and relationships learned from large datasets and are continually refined and updated as new data becomes available. This means that my responses can evolve and change over time as I learn from new sources and refine my understanding of different topics.
Nate: I am writing a blog post in my blog ‘Crowdid’ by me, Nate Breznau. The blog is an academic blog hosted by Hypotheses.de. Would it be ok if I posted our conversation in my blog?
ChatGPT: As an AI language model, I do not have any personal or legal rights over our conversation. However, I should note that the content of our conversation is generated by machine learning algorithms and is provided for informational purposes only. While I strive to provide accurate and reliable information, my responses are based on patterns and relationships learned from large datasets and may not always be perfect or fully accurate. It is ultimately up to you to decide whether you want to share our conversation in your blog, but I would recommend that you provide appropriate context and citations to help ensure that your readers have a clear understanding of the content and its source.
Nate: Thank you, this has been a very enlightening conversation. Could you provide me with some more information about you, ChatGPT, and how my blog readers could understand how you learned to become an AI?
ChatGPT: I’m glad to hear that you found our conversation enlightening! As an AI language model, I was created by a team of researchers at OpenAI, a research organization focused on developing advanced AI technologies for the betterment of society. My development was guided by a large corpus of text data, which was used to train deep neural network models to generate human-like responses to natural language input.
The training process involved exposing the models to large amounts of text data and using sophisticated algorithms to learn patterns and relationships in the data. Over time, the models became better at understanding natural language and generating coherent, human-like responses to text input.
It’s worth noting that my development was a collaborative effort that involved a diverse team of researchers, engineers, and linguists, as well as large amounts of computing power and resources. While I can generate responses to a wide range of questions and topics, my responses are generated based on patterns learned from the data and do not represent a true understanding or consciousness of the topics being discussed.
If you would like to provide your readers with more information about how I was developed and how I work, I would recommend checking out the OpenAI website, which has a wealth of resources and information on AI and machine learning.
I was recently asked to give a speech at a wedding, and I have had several requests for a copy of that speech, so I provide it here. The speech is written so that those getting married remain anonymous; I will refer to them as A and B. It is also my best effort at reconstruction, as the speech was not written down, so it likely varies slightly from the original, except for the quotes. The speech started with some personal comments and a joke or two, and then it transitioned into this:
Families can be complicated. As the Dalai Lama pointed out:
If you think you are enlightened, go spend a week with your family.
Families aside, we live in a world that sometimes feels cruel, cold and in conflict. That’s why it is so powerful when two people come together and build something. A and B are building something. A sanctuary. A place where they can be safe, where they can express their true feelings, where they can expose their true selves.
What am I saying?…. I’m saying A and B are in love!
Now I don’t actually know what that is, but Dr. Seuss described it once as:
You know you are in love when you can’t fall asleep because reality is finally better than your dreams.
In another book that I admit I haven’t read much of, but which is quite popular, we have John telling us that,
God is love.
The point, I think, is that love is not just a state of being. It is a choice, it is an action, it is a power that exists as a synergy. Something greater than the sum of the people who make up its parts. But at a basic level, love is commitment. Commitment is a choice. A choice to honor the idea of love.
Because a relationship will not always be pink fluffy dinosaurs, rainbows and moonbeams. It’s like a river: sometimes surging after rainfall, other times flowing mildly, and still other times dried up in drought. Therefore, true love is the commitment to work through the drought. To admit that those loving feelings are not there, but to honor and cherish the partner anyway. To have faith that it will return.
It’s maybe like one of my favorite authors, bell hooks, describes it:
Love is an act of will, both an intention and an action…the will to extend one’s self for the purpose of nurturing one’s own or another’s spiritual growth.
Therefore, it seems like what A and B have experienced in their 8 years together, and what I’ve experienced in my 13 years with my wonderful partner, is the transformative power of love – as in the commitment to nurture love through thick and thin.
Eleanor Roosevelt said:
It takes courage to love, but pain through love is the purifying fire which those who love generously know.
A and B sit here with confidence because they have experienced this purifying fire, and are ready to experience it again and again. I see and honor their commitment tonight.
I see them create what is described by Rabbi Jonathan Sacks:
For life to have personal meaning there must be people who matter to us, and for whom we matter, unconditionally and non-substitutably.
And their efforts are not just for them. This act of commitment to love is also a commitment to contribute something good to the world. Something that makes it just that much better for the rest of us. That nurtures humanity. As Rabbi Amanda Greene pointed out:
We must build this world from love.
All of that! All through a commitment to love. All through love as a commitment. A commitment to love unconditionally, to fall short of this perfect ideal over and over, and to keep trying.
For this beautiful act I salute you and I love you both.
It’s not just a ‘replication crisis‘, it’s a reproducibility crisis. Forget about falsification if one can’t even reproduce the workflow of another. Efforts are underway to improve the transparency that allows reproduction – good. But what if, even with the original data, researchers cannot reproduce numerical results simply because of where they are in time and space? What if different statistical programming languages, or different packages and versions of those languages, actually lead to different findings? In the Crowdsourced Replication Initiative we demonstrated that, using the same data and models, only 80% of independent replicator teams could reproduce the numerical results, even when they had access to the original Stata code.
Based on my experiences in this area, I cannot help but have a strong hunch that software and package versions threaten the computational reproducibility of our research. The fact that Stata rounds .5 up while R rounds half to even already suggests that the same models might produce different results across software simply from rounding variations. Recently, I found another reason to believe this hunch.
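To illustrate the rounding point: R’s round() documents the IEC 60559 ‘round half to even’ rule, while Stata’s round(), to my knowledge, rounds ties away from zero, so the two disagree on exactly the half-way cases:

```r
# R rounds ties to the nearest even digit (IEC 60559):
round(0.5)  # 0
round(1.5)  # 2
round(2.5)  # 2, not 3
# Stata's round(2.5) returns 3 (ties rounded away from zero), so identical
# intermediate values can diverge at the last reported digit.
```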
I am a participant in SCORE, a massive collaboration to systematically investigate the reproducibility of social science research – led by the Center for Open Science. I agreed to attempt a computational reproduction of the study “Age, Inequality, and Reactions to Marketization in Post-Communist Central and Eastern Europe” by Horvat and Evans (2011).
Despite some potential differences between the data I received from the original authors and those reported in their tables, I was able to produce similar results. Similar, but not exact. The particular coefficient I was interested in came out at -0.39 in my ordered probit (cumulative link) model, whereas their original was -0.22. On the probit scale (translated into a predicted probability), this is potentially a large difference. I used R, but was not satisfied with my results. The original authors most likely used Stata, as their public data were a .dta file – Stata’s native file storage format. Also, let’s be honest, hardly any social scientist was using R before 2010, when they likely did their analyses. My curiosity and suspicion led me to run the model in Stata. Here I got a -0.21 coefficient. Almost identical to their original study, and not surprising given the minor variation between the case numbers they reported and the public data file. I was still not convinced that these coefficients were actually different, because the models were quite complex. So I plotted the predicted probabilities to be sure that this was not a statistical artifact of some model component (intercept cut-points, for example).
But my hunch was supported. These same models, run in R and Stata, led to different predicted probabilities. The figure below shows that across many former Communist Eastern and Central-Eastern European countries, those aged 60+ were less optimistic about their living standards over the next 5 years. One of the original study’s critical findings was that this gap increased from 1993 to 2007, in particular because the 60+ group became even less optimistic; although in fairness this is difficult to conclude because of all the other variables in the model. What is clear is that the groups were 0.45 points apart in 1993 and 0.65 apart in 2007 (see Table 5 in their original study). When I plot the predicted values, I find similar but not identical results for the 1993-to-2007 change, but the probabilities are quite different across software.
I used Stata v15 and the ‘oprobit’ command, and R’s ‘MASS’ package with the ‘polr‘ function. Despite identical data (and case numbers!), the ‘polr’ routine predicts a much higher probability than Stata of respondents answering that their standard of living will fall or fall a great deal. Although the relative change between age groups is similar – with a slightly steeper negative slope in R – the lines are pretty far apart, and even further apart for the age group 60+. Without unpacking each package and the exact estimation strategy taking place therein, I cannot as of yet say why. My own statistical and software abilities are by no means exceptional, but certainly above the average social scientist’s. Thus, it would be totally unrealistic to expect any social scientist to understand the entire routine taking place within polr or oprobit. If they did, they wouldn’t need the packages and could just write their own cumulative link routine!
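For readers who want to try the comparison themselves, here is a minimal sketch of the R side. The data frame `dat` and the variable names (`optimism`, `age_group`, `year`) are hypothetical stand-ins for the Horvat and Evans specification, not their actual code:

```r
# Ordered probit (cumulative link) model in R via MASS::polr,
# the counterpart of Stata's `oprobit`.
library(MASS)

# Assumes a data frame `dat`; the outcome must be an ordered factor.
dat$optimism <- factor(dat$optimism, ordered = TRUE)

fit <- polr(optimism ~ age_group + year, data = dat,
            method = "probit", Hess = TRUE)
summary(fit)

# Predicted category probabilities, comparable to Stata's
# `predict p*, pr` after `oprobit`.
head(predict(fit, type = "probs"))
```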
The implication is that using different software can undermine reproducibility. As if we did not have enough to worry about in the reproducibility area already.
Sci-hub is the Pirate Bay of academic journal articles. Its service is mostly illegal, because it collects paywalled articles and makes them publicly available online via an indexed search. This is copyright infringement. People love it. The coverage is incredible; many journals have over 98% of their articles covered.
Frustrated with a lack of access to scientific articles, Alexandra Elbakyan of Kazakhstan founded the Sci-hub repository as a 20-year-old graduate student. Her site subsequently provided more open access to scientific knowledge than anyone in the history of science. She was named a person of the year in 2016 by Nature; yes, that Nature of the mega-profit publisher Springer Nature, which promotes open access by charging a 10-grand APC.
Reminiscent of the RIAA’s takedown of Napster in 2001, Elsevier took legal action against Sci-hub in 2015, starting in the U.S. and quickly moving to other countries. The campaign is international because copyright law is organized by country, making it very difficult to pursue Sci-hub, which exists in a cyberspace of mirrors and provides something that is unquestionably in the public interest and a basic UN human right.
Although there are allegations of security breaches that could lead to identity theft or the hacking of university servers, I am not aware of a single piece of evidence that Sci-hub has done anything other than ‘steal’ academic publications. It is not a threat to sovereign nation-states, and it does not encourage sociopathic behavior. It is a form of rebellion against the plague that for-profit publishing unleashed on science, and a way to promote open science. Of course, Elsevier was not wrong in its legal claim of copyright infringement. Elsevier wants researchers to pay for its articles, and its minions see Sci-hub as causing profit losses. But the evidence suggests this is nonsense. Elsevier is wasting its time – precious time it could use to sponsor arms fairs, create journals and sell them to big pharma, or try to patent online peer review and force journals to pay to use it.
First, let’s look at who is downloading Sci-hub content. Figure 1 shows the top ten countries by total downloads in February 2022, compared with their total populations. Relative to their populations, the U.S. and France are home to the most downloads per capita as of the most recent data.
Next, let’s think carefully about how publishing subscriptions work, through two typical scenarios of a researcher downloading from Sci-hub.
Scenario 1. A researcher in the Global South downloads articles from Sci-hub. We know this is good for science. In fact, this is science: it is active dissemination of useful knowledge. Merton would be pleased. The first question is easy: is this researcher getting something for free that they should be paying for? Yes, they are receiving an illegal good, getting copyrighted material for free. The second question is also easy: would this researcher pay for this article if Sci-hub did not exist? No. They are presumably working for a fraction of what a researcher earns at a Global North university and do not have a budget of $25-60 for each article they need. The university also cannot afford millions of U.S. dollars for an Elsevier subscription. The scholar either gets the article from Sci-hub or some other green open access source, or does not use the article. It is even a small loss for the publisher when someone who would use their article cannot access it, because a potential citation to one of their articles is better than nothing.
Scenario 2. A researcher in the Global North downloads articles from Sci-hub. This researcher has either direct or indirect access to every published article that exists. Most universities subscribe to the journals their researchers are most likely to need, and when the university does not have a subscription, inter-library loan will get them roughly any article. It’s not foolproof, but within a margin of error, Global North researchers have legal access. Using Sci-hub? Yes, this researcher is getting something for free that they should pay for. But their university, or a university in their library network, already pays for access, so preventing them from downloading does not put any new money in the hands of the publisher.
Shutting down Sci-hub will not lead to any increase in profits for publishing companies. Elsevier, in its tantrum, would argue otherwise. When universities and their libraries finally started to turn on predatory publishing houses like Elsevier and cancelled contracts, profits declined. In this case the universities do have the money to pay for a subscription. However, Projekt DEAL and the University of California system’s boycotts asked Elsevier to sign a more reasonable, less draconian version of a subscription, and Elsevier refused. That is on Elsevier; it has nothing to do with Sci-hub. Again, no money changes hands because of the boycott, and the boycott has nothing to do with universities implicitly encouraging their students to download illegal content.
Let’s look at more evidence. Figure 2 shows Elsevier’s profits in recent years. Sci-hub was in full effect in the mid-2010s. Interestingly, profits kept growing. They grew, and grew, and grew until something else happened that has nothing to do with Sci-hub: the universities began to wake up from their nightmares and realized that they were being abused by publishers like Elsevier. In 2019, they started boycotting, demanding that publishers sign collective contracts rather than charge case-by-case, and that they greatly reduce their fees. And only in 2019 does a year-over-year pattern of profit growth suddenly reverse direction.
Elsevier’s profits only started to decline after they got canceled. Sci-hub has been irrelevant, because those who cannot afford access will not pay, and those who can afford it, or should have a subscription, certainly won’t pay for Sci-hub downloads; they already do via their libraries. The exception is Elsevier’s refusal to support science, which is causing its own self-inflicted profit loss. Sci-hub is good for science; it is mostly harmless otherwise.
Students pursuing a bachelor or master degree develop labor market skills for an information- and computer-based career, and learn to do science. Not all go on to be scientists. It would seem that those who do not have no impact on scientific knowledge. They took tests and wrote papers in order to earn their degree. The test results and papers are only for their instructor’s eyes, and maybe an occasional parent or other student.
It could be different.
In any given course in any particular discipline, students are taught to do science as a practice. This includes knowledge or ideas about the world; empirical evidence gathered through participation, observation or experimentation; and techniques to maximize the accuracy, efficacy and reliability of that knowledge and those ideas. The students’ own work on assignments or theses is a form of ‘training’, and an instructor then checks or grades their learning. If they are privileged, they get constructive feedback; and if they are highly self-motivated, they actually read the feedback and incorporate it into their future work.
Students are generally aware that their work is for their instructor’s eyes only. At least in my experience, they cannot imagine their work shaping science or public knowledge. They thus have little extrinsic motivation to do more than what the instructor asks of them.
What if the instructor asks them to change the world outside the classroom in some way?
We, as instructors, are teaching students to produce reliable and critical knowledge. They should get a high mark if the work is deemed high quality. Why then does the entirety of their learning and knowledge production end as a forgotten file in a folder, a relic of some semester past? Why don’t they share some of that knowledge? I bet that if they thought their knowledge was valuable and useful beyond degree acquisition, they would be stoked.
A common argument against bachelor and master students trying to disseminate their work is that it is not high enough quality to compete with work from doctoral and post-doctoral researchers. In particular, they have not had enough time to dig deep into a body of literature on their topic of interest and their methodological skills may still be quite underdeveloped. They would probably be ‘wasting’ their time pursuing a journal publication or even a publicly disseminated working paper.
I agree. This points at the root of the problem. Publication-based science. Science as we know it has a rewards system where publications are treated as far more valuable than anything else scientists produce. This is particularly acute in the social sciences where scientific research rarely leads to tech or apps with private market value. When publications are the ‘currency of the trade’, academics, universities, editors, students, even policymakers prioritize publications and citations to those publications as the metric for judging the quality of scientific research. As such, scientists have maximum incentive to produce publications above all else.
Now, bachelor and maybe master students are generally unaware of the severity of this publish-or-perish plague that infects the very spirit of science. But like a fish that is unaware of the properties of the water it lives in, the students are deeply affected by the polluted nature of the academic norms in which they matriculate. How many times has a teacher told a bachelor student that they should submit their term paper to a high impact journal? How often are readings in scientific courses not from journal articles or books? A taskforce in sociology specifically recommended that students read and comment on books and journal articles because this is the ‘best’ type of academic knowledge for them to learn.
With available technologies and new ideas about what constitutes meaningful knowledge, students can have a great impact on science; both now and into the future. Podcasts, blogs, vlogs, Youtube channels and many other forms of social media communication are consumed in high volumes across members of the public, and among students and scientists.
Therefore, when possible, I assign students the task of making a contribution to public knowledge and/or open science instead of writing a term paper. It seems better for all parties involved, and involves more parties because it could reach the public at large. I recently put this into practice in my course, “Open Science in Social Sciences: Crises, Controversies and Change” which I taught as an invited guest lecturer at the University of Zurich (UZH) (syllabus).
Creating or editing a Wikipedia page where knowledge is lacking, or does not exist at all, is a fantastic way to engage in public open science. First, surveys report that more than three-quarters, and up to 90%, of students use Wikipedia in their course research, at least in some English-speaking samples. Second, one’s contributions shape knowledge immediately; seeing them appear on a public knowledge platform is exciting and feels empowering. Third, knowledge is improved, in some cases much-needed knowledge, like that which gives a voice, forum or contribution to underrepresented people or societies.
Three students in my course used Wikipedia to make contributions to public knowledge.
One of the students first edited and then created a Wikipedia page in Ukrainian, based on his Bachelor Thesis topic “A Theory of Generations”. He pointed out in class that most of the information on this topic, and knowledge accessed by Ukrainians in general, is in Russian. Given the history of Russian hegemony in Ukraine, having more Ukrainian resources promotes a Ukrainian language and identity; in other words, it promotes knowledge that is valuable to most Ukrainians.
Additionally, this particular academic topic is contested and misunderstood in the literature, according to this student. He points out that the former page focused on only one theory, when in fact there are many. Thus, he noted that before his edits and page renaming, “Ukrainian users of Wikipedia, by looking through the [former] article Theory of Generations, would be not informed adequately at least and misinformed at most, as Strauss-Howe generational theory is just one of the many other theories in this domain (yet one of the most controversial).”
Another student has an extracurricular passion for wild flora. There is a class of plants whose native habitats are pastures and fields. These ‘pasture flora’ (Ackerbegleitflora in German), as the student pointed out, are “often rare plants that would find suitable living conditions in pastures but are conflicted with the threat of economic means that increase productivity in agriculture. Some of the rarest plants in Central Europe are found in this class.”
At least in German-speaking Wikipedia, there was no page on this topic at the outset. In fact, Wikipedia redirected the student’s search to Unkraut (weeds), reinforcing the public misunderstanding of this class of wild flora as undesirable or economically inefficient intruders in a pasture or field. He later discovered an existing page on Segetalflora, a related class that overlaps with ‘pasture flora’. This shows that knowledge on certain topics can go by different names or be cross-classified, a problem that requires more discussion among Wikipedia users to resolve. He therefore elected to significantly expand the existing Segetalflora page, rather than make a new page about Ackerbegleitflora, so that everything was in one place, including discussions about differences, conservation and utility.
During our course in fall-winter of 2021, many exciting things related to open science were taking place at UZH. For example, the university created an explicit open science policy. Moreover, there was an open science week with speakers and, of course, our open science course. One student noticed that much information about the open science happenings was missing from the UZH Wikipedia page and decided to make it his project to add it.
A major limitation to the open science movement is simply a lack of academic awareness of the issues and their solutions. Wikipedia provides a platform to disseminate such knowledge. Thanks to this student’s research and edits to the German-language page, I was easily able to update the English-language version in tandem. We hope users will follow our lead and update the French and Italian versions as well.
YouTube is now outpacing Wikipedia as students’ ‘go to’ for scientific information, and especially for science communication. Internet searches for ‘how to’ or ‘information about’ now seem as likely to return YouTube or other short instructional videos as text-based entries (depending on cookies, browser, settings, etc., of course). Videos also give a voice to research subjects in ways that cannot easily be expressed via written language. As long as participants consent, students can interview their subjects and post a video on YouTube – an act that requires only minimal technical skill and can be done with free or cheap software.
Another student in my course elected to contribute to knowledge on the topic of colorism – discrimination based on skin color. Unlike racism, the student found, this topic is far less present in academic discussion, at least based on her internet searches. Her impression from a previous course and her searches was that the topic is mostly used in reference to the United States, and less so in the context of the German-speaking countries. In fact, she pointed out that to her it seemed like many people did not think colorism existed in Switzerland or Germany. Therefore, for her project she conducted an interview with a person whose parents are Asian and European, to understand if and what types of color awareness or colorism this person experienced personally or in media and marketing.
The opening of science, and the increase in reliability and transparency of knowledge for academic and public consumption, requires more than a handful of citizen scientists active on social media and Wikipedia. Many closed and unreliable methods are taught as part of the pathology of science. Instructors and students alike are generally unaware that the science they promote is potentially unreliable. This is another way of saying that they have not yet been exposed to the core information of the open science movement. Thus, changes at the curricular and institutional levels are necessary to promote this awareness and foster change.
Three students in my course elected to develop strategies to shape curricula for university and school students so that it fosters awareness of open, ethical science practices.
One student felt that she and her peers were generally uninformed about open science, both at UZH and beyond. Her idea for changing this was to create a student-led social media initiative at the university. As this is nothing that could be achieved, or even approved, in a few months, her project was an action plan for the creation of this service. It would entail the creation of a Student Open Science Organization that, among other things, would maintain social media accounts posting important open science information and resources. This would require several layers of bureaucratic approval and liaising with the Open Science Office.
Another student sought to communicate with, and potentially convince, a primary methods professor in her area of Educational Science to incorporate open science into her courses. The idea was that such a strategy could be deployed across other disciplines, with her area as a test case for developing it. This required first developing persuasive reasons that could be shared in an open and friendly manner with a professor. Like many progressive universities, UZH has resources just waiting to be taken advantage of – a great realization by the student.
According to an exchange with the student, the professor responded to the student’s request, agreed with her claims about the value of teaching open science, and agreed to add open science topics to her seminar plan – specifically, to promote open data and FAIR principles as she teaches qualitative data collection and evaluation.
Another student volunteers her time at a Zurich community center (Gemeinschaftszentrum). As the student points out, these centers “offer space to work, play, learn, meet other people or participate in neighbourhood projects.” The student wanted to test whether open science is a topic that adolescent children might understand or be interested in. She first gained their interest and consent, and then organized a lesson at the center. She used another UZH resource, the Kinderuniversität (Children’s University), as a protocol or concept for helping the adolescents understand open science. The Kinderuniversität is specifically designed to support children in searching for answers and explanations of phenomena and, as the student points out, “serves as a good example to illustrate how science can be made accessible to all.”
Using this test case, the student developed a curriculum for future courses in which children and adolescents discover and understand the concept of open science and why it is important. The guide includes resources that the children can directly interact with, such as Wikipedia (they can even edit it!), Blinde Kuh, Google Scholar, how to find and use a library, and how to register for free at the UZH Children’s University.
Public science is often practiced in the social sciences; it involves researchers engaging with a local community to provide inputs into that community’s own efforts to address its self-identified problems and priorities. At international universities, bachelor and master students are less likely to be from the local community, are only staying temporarily, and may face ethno-linguistic barriers to engaging in public social science locally. However, they can still have dramatic impacts on knowledge affecting their home communities or countries – remotely. That is the beauty of an internet of knowledge: it requires only online access, not physical presence.
Something else to keep in mind is that replacing a traditional term paper with a project to impact public knowledge and/or open science is not realistic in all course types. A broad introduction to open science across disciplines, like my course, makes it relatively easy for students to choose any topic and use it to contribute to public knowledge. In a course that is very theoretical and narrow – for example, critical Marxism or the history of the Holy Roman Empire – student learning might be maximized if they write a conventional essay based on reading the literature, proving they have mastered a well-rehearsed topic. If asked to make a contribution to knowledge specifically in these areas, they might struggle to find things to add to Wikipedia, or find that there are already seemingly unlimited public resources. ‘Might’, then again might not. There are often opportunities to engage technology or do things in different languages.
Ultimately, when students feel they are doing something more than just earning a credential, ticking a box, or trying to maximize their grades, they become more likely to engage with the material. If they think their term paper might actually contribute to local or global communities’ knowledge base, conservation efforts, or capacity to support underprivileged students, they are naturally inclined to do higher quality work and to develop a sense of empowerment and satisfaction. These are bases of self-knowledge and fulfillment that they will hopefully carry with them into the future, staying motivated to impact public knowledge as scientists in academia or as citizen scientists working in any job.
Foreword
This blogpost is an edited and abridged transcription of the talk ‘Open science is just good science’ given by Jon Tennant in 2018 at the annual meeting of DARIAH (“Digital Research Infrastructure for the Arts and Humanities”), a European Union-based network supporting digitally-enabled humanities research and teaching. The talk took place on the eve of major events in the Open Science Movement, to which Jon contributed an enormous amount in a short period of time. The talk, presented here in readable format and appended with links, additional information and citations, provides an excellent primer on open science. This post is offered in honor of his memory; Jon passed away on April 9th, 2020 in a motorbike accident. Any changes to the original wording of his speech are intended to focus, clarify and better communicate his message – a candid voice for open science.
Jon Tennant = open science
My story began about seven years ago… I was a master’s student at Imperial College in London at the time and I was talking to a friend and I was saying ‘you know I’d really like to publish my master’s thesis’. And he said ‘well, you know, just make sure that it’s open access’. I was like ‘what the hell is open access?’, and he said ‘it’s where you make your research freely available to everyone.’ And I thought ‘well doesn’t everyone have access to… oh wait…’, and everything about my academic history up to that point sort of “unraveled”.
When I was a student I would hit paywalls all the time, and it just seemed like a common everyday thing. Like, ‘oh crap this one’s paywalled, guess I can’t use that, move on to my next one’. I realized that despite being in a ridiculously privileged position at a very elite institute in the UK, I still couldn’t actually access the things I needed to do my own research. The more I thought about it, the worse it became.
So what are the things that motivate me in the morning?
These are the big picture things that the open science movement is trying to solve, like the fact that the vast majority of scholarly research is still held hostage by private corporations. Around 75% of all published knowledge, which should be available to humanity, is instead owned by the shareholders of private companies. This disadvantages almost everyone on this planet except for those who are fortunate enough to be in a privileged position at an elite institute. These commercial giants are ruthless racketeers. They have profit margins often in excess of 35 to 40 percent, bigger even than Apple and the big oil companies. The result is that as a global research community, or scholarly community, we are not communicating our results effectively, and so many massive issues affecting our planet suffer for it.
There are major barriers to the dissemination of scholarly knowledge. Copyright is a huge one. It is completely and utterly broken. It does not protect us as content producers; it protects the profits of scholarly publishers. Often we can’t even access our own research results, and we are prohibited from sharing them due to anachronistic copyright laws. As consumers, often we don’t even know what we’re buying until after we’ve bought it. You can pay 40 bucks to access a research article with no idea what’s actually in it or whether it will prove useful, and there’s no way in hell Elsevier is going to give you a refund for that. We have life-saving research, but most cancer and global health research is still hidden behind paywalls. And the real question is: how is any of this helping science or research, or having an impact on the global challenges that we are facing?
Consider the bigger picture. The sustainable development goals set by the United Nations include things like economic growth, industry, innovation and infrastructure, reduced inequality, clean energy, and combating energy insecurity, water insecurity, and hunger. If you believe that research can help us achieve these goals and resolve these issues, then you must also acknowledge the corollary: preventing access to research stops us from achieving these goals. This is exactly the system in which you’re playing: you have an industry that thrives on preventing access to knowledge; that’s how it makes its money. It’s not a bug. That’s a feature. It really is systemic, and it’s parasitic as well, if you want to use an ecology term.
One of the consequences of this is that public trust in research and expertise has plummeted over the last few years[1]. We see expertise effectively dismissed especially from scholarly experts as if what we’re doing is no different than just Googling something. I’ve created a hypothetical conversation here, but this sort of conversation happens even at the highest levels. Like in Congress when researchers go to present evidence, they get rejected because politicians are like ‘you know, isn’t that research published in Science or Nature just ‘fake’ basically?’.
From Jon’s Slideshow
Academic: “This research paper has been published and therefore is scientifically valid.”
Non-academic: “But it’s paywalled. I can’t access it. How do I know it’s valid?”
Academic: “Because it has been peer reviewed.”
Non-academic: “Can you show me the peer reviews?”
Academic: “No. But it was done by two experts in the field.”
Non-academic: “Which experts?”
Academic: “We don’t know. But it’s in a top journal.”
Non-academic: “Why is it in a top journal?”
Academic: “Because it has a high impact factor, so is highly cited.”
Non-academic: “Why does that make the research better?”
Academic: “Trust me. I’m a scientist.”
If you think about it, there’s not really much reason why politicians and the public shouldn’t think that. Trust has to be earned, and opacity does not breed trust. In a world where transparency breeds trust, we shouldn’t actually be surprised when expertise is rejected, because we’re operating within a closed system. If we step outside of that system and look at it, or empathize with those who are outside it, then it actually makes sense why we have a somewhat chaotic relationship with members of the wider public at the moment. The ivory towers of academia are certainly crumbling due to the wider open movement, but is it happening fast enough, and what are the consequences when it doesn’t?
What is open science?
There is actually no universally accepted definition of this. Open science is about using science to help address the major challenges to society. Ironically, if you look at the one systematic review of what open science is (published and paywalled by Elsevier), it says that ‘open science is transparent and accessible knowledge that is shared and developed through collaborative networks.’ So does that mean that open science excludes anything done by an individual? It’s a pretty stupid definition if you ask me. But you can’t read it anyway because it’s paywalled. When people hear ‘open science’, they often think it just means ‘physics, biology, chemistry’, but when I talk about open science I mean it in the most inclusive sense possible. ‘Open science’ is often used interchangeably with open scholarship or open research, but we just have to make sure that we include everyone; humanists, social scientists, artists, engineers, mathematicians, medics, and even citizen scientists are included under this umbrella of what open science encapsulates.
For me as well, open science is based on core principles. I’ve got this nice little table here from Tony Ross-Hellauer.
This is a combination of practical aspects and personal aspects behind open science. For example, accessibility, equality, and rigor are practical aspects; but there are also ones you might miss, like freedom, fairness, justice and truth. For me, these are principles that you should adopt anyway as a good human being; and if you do, then you’re basically an open scientist, or open scholar. You can embed these practices within your everyday life, or at least within the practices you should be following as a researcher. But in the practical aspect, open science is bloody complicated. I don’t want to hold that fact back.
Open science includes things like discovery, analysis, writing and publication, all the way through to different tools used for assessment and evaluation. There are entire workflows here which we need to be trained in, but no one’s really teaching us how to use them. I imagine we are all aware of at least one of the above tools or practices, but integrating all of them into your everyday workflow as a researcher can be quite complicated. Another really important question I think we need to ask is:
How is open science objectively different to science?
Mick Watson, in 2015, just wrote this beautiful article ‘When will open science simply become science?’. Those principles and tools above are just good science. Mick said ‘open science describes the practice of carrying out scientific research in a completely transparent manner (good science) and making results of that research available to everyone. Isn’t that just science?’ And it’s difficult to disagree really. So when we talk about what open science is, it really is just better science; and the opposite of open science is just bad science, because if you’re not sharing in a transparent manner, then you’re basically creating anecdotes rather than research.
Is open science a movement?
A lot of people describe open science as a movement. A movement is defined as a group of people working together to advance their shared political, social, or artistic ideas. The implication of this is that a movement has a direction with shared common goals based on commonality. So if open science is a movement, then who is defining the direction? Who is defining the shared goals? What’s the strategy behind it? Who’s leading it? Is it DARIAH? Is it the Open Science Framework? What happens to those who don’t feel included in that movement? A nice example of this is one time when I had to go to the Humboldt Institute with some open science colleagues as part of an outreach workshop to teach them different methods of doing open science. When we went there, they actually ended up schooling us in doing something way better than what we were doing using virtual machine environments. And we were like ‘oh cool so you’ve basically already done open science anyway!’ and they were like, ‘yeah but we just don’t call it that.’
Is open science a process? A set of principles? A vision, a club, a political agenda, fad, a distraction, is it exclusive?
What happens when we, as a supposed movement or community, actually can’t answer any of these questions? I think that’s kind of important, because it gets to the root of what open science is and how it is objectively different to what we as a scientific community are doing. Answering these questions can then help us define a strategic direction for what we actually want to achieve.
Among academics, there is this mantra publish-or-perish. But there is no publish-or-perish anymore. It’s publish-and-perish. You can publish a lot during grad school and still be told that you’re not qualified enough to get a postdoc. There’s too much competitiveness, too much is driven by funding, and too many people coming through the pipeline are drilled into this narrow ivory tower mindset of how to do academia as soon as they start. One of the first things I was told when I became a PhD student was that within your first year or so you had better publish a high-impact paper. I was like ‘dude, I don’t even have any data yet’. I did publish eventually, but it was a lot of strain and stress to deal with that at Imperial College. Out of the 50 or so PhD students in my cohort, pretty much every single one left with insomnia, alcoholism, depression, anxiety, or stress, because they were treated like farm animals under this publish-or-perish mentality. And we wonder why PhD students have almost twice the rate of mental health problems as people who work in emergency health services. We don’t have any sort of support framework.
These giant mega-publishers are partly to blame. We’ve heard Springer and Elsevier et al. mentioned before. There’s a great quote from the journalist George Monbiot: “academic publishers make Rupert Murdoch look like a socialist.” I think that’s a very ‘positive’ outlook. It’s also known as the industry the ‘internet could not kill.’ Back in 1995 Forbes wrote a really great editorial saying that Elsevier would be the Internet’s first victim. Elsevier went on to have basically unbounded profit increases, which are still ongoing. It was a twenty-five-billion-dollar-a-year industry in 2018. It’s extremely fat and bloated, and 35% profit margins are fairly typical. We still talk about “papers”. I mean, it’s 2018, and we’re still referring to “papers”. What we have for the vast majority of our scholarly communication is a 19th Century process of peer review applied to a 17th Century communication format built around journals and articles. We still mostly use PDFs as well. I think it’s probably about time we adapted to the web of 1995 for scholarly communication, because we’re seriously lagging, and it’s a very strange system to be part of right now.
I’m sure we’ve all heard of Sci-Hub and ResearchGate as well. These are essentially platforms that want to provide increased access to scholarly research, but they are viewed as ‘pirate sites’, as if the liberation of knowledge were equivalent to plundering and murdering. The American Chemical Society, Elsevier and their kin are suing them for millions of dollars and shutting them down, preventing access to this research. For some reason sharing research is illegal; have a think about that one. The more you know, the worse it gets. Only 25% of all scholarly research articles are open access, and that comes about 20 years after the Budapest Open Access Initiative. We’re increasing our rate of free access to knowledge by about 1% every year, so maybe in about 30 or 40 years we’ll finally have substantial access to research knowledge (see OS Timeline Updates at the end of this post for more info).
We have this prestige-based economy where your worth as a researcher is based on the commercial brands, dictated by corporate values, in which you elect to publish for whatever reason. There are various biases in this; for example, if you are a minority researcher, a woman or an early career researcher, then you face heavy bias from the outset[2]. As we all know, researchers write, review, and edit the papers; they generate around 95% of the real value behind scholarly communication. Then we (the researchers) have that content stolen away from us by publishers and sold back to us. If you wonder how they generate 40% profit margins: it’s like going into a restaurant, bringing all of your own ingredients, cooking the meal yourself, and then being charged 40 bucks for a waiter to bring it out to you on a plate.
It’s bullshit. The old saying is, ‘it’s smart people doing stupid things for smart reasons’, and the reason is that our careers depend upon this publish-or-perish mentality. But at the end of the day we’re basically being duped as a global research community. We are no longer researchers. We are the oil for the machine: the provider, the product, and the consumer for this mega corporate entity out there. The market itself is an incredibly dysfunctional part of a wider oligopoly, similar to a monopoly. Yeah, we have lifesaving research about cancer, Ebola and Zika, for example. All hidden behind paywalls – sold off to the highest bidder at the will of Elsevier’s shareholders. The Ebola outbreak was, what, four years ago? Just two days ago, Nature finally decided to announce they would provide open access, for a limited period only, to Ebola research. So a round of applause to Springer Nature for acting four years after they were supposed to.
If anybody’s not angry at scholarly publishers yet, then I’m clearly failing because you should be.
In Germany, you have a national library consortium who are currently revolting against the revolting practices of Elsevier and their kin. At a publishing conference I attended earlier this year in Berlin [APE 2018], Martin Grötschel was a speaker. In 2018 he was the president of the Berlin-Brandenburg Academy of Sciences, and typically at these conferences a speaker like him is supposed to get up, give a nice speech about what a great job everyone’s doing, and give them a little round of applause for being who they are. But this was a conference held by Elsevier and Springer Nature, and instead he followed the vice president of External Relations of Elsevier onto the stage and spent about 15 minutes slamming the crap out of them in the most beautifully German way possible. He is one of the chief negotiators behind Project DEAL. He is actually in the room when these big deal subscription contracts are negotiated with Elsevier et al., and he was saying that he feels like he’s being bullied half the time.
He said during one of these negotiations, “we don’t want to pay Elsevier anymore because we don’t see the value in what you’re doing” and he described what followed:
“One publisher [Elsevier] stated ‘if your country stopped subscribing to our journals science in your country will be set back significantly’. I responded, ‘it is interesting to hear such a threat from a producer of envelopes who does not have any idea of the contents'”
Martin Grötschel, at APE 2018
Pretty harsh. Hilarious at the same time. But whichever side you are on, however you look at this, there are these enormous rifts happening in the world of scholarly communication at this moment. It’s basically big publishers versus everyone else. They are entering the legal realm. They influence copyright, career advancement, the structure of our research institutes, and there are really deep issues happening here. In response there is an open science revolution infiltrating into many of these aspects. Project DEAL is causing quite a mess for big publishing.
In France recently, we had a similar thing between the Couperin Consortium and Springer Nature. Couperin served up the middle finger on a silver platter and said ‘we’re not going to subscribe anymore’. They saved 12 million euros every year in subscriptions, which they are reinvesting into open scholarly infrastructure. Sweden announced two or three days ago that they, the Bibsam Consortium, are doing the same thing. They cancelled all subscriptions to Elsevier journals and they’re like ‘crap, we’ve got all of this money we’ve been wasting on journals for the last 30 years, now what do we do with it?!’ It’s fantastic, and if you look at Taiwan, South Korea, Argentina, and Mexico, they are all gearing up to do the same thing. Oddly, it is just the UK that seems to be not too fond of this.
We have so many awesome web-based technologies to open scholarly communication
Why on earth are we still communicating in PDF articles with thumbnail sized images when we have an entire web at our fingertips? I assume we all know about tools like GitHub, StackExchange and Wikipedia. Why not use something like the moderation or editing system of Wikipedia combined with the reward structure behind StackExchange combined with the version control of GitHub in order to create a fully integrated, community owned, very cheap, open scholarly communication system? And before anybody says ‘it can’t be done’, people are doing this already! I’m not sure if they are in digital humanities, but you know there are communities like computer scientists who are doing these sorts of things already, which brings me on to penguins.
What do Penguins, Cobras and Gimli have to do with open science?
Cultural inertia defines academia. It’s a crowd-based psychological effect and pervades all aspects of the Academy. Have you ever met the average academic? 50% of academics are stupider than the ‘average academic’. They have this publish-or-perish mentality and they are generally terrified of new technologies. It took me about a year just to set up a GitHub profile. We are also really bad at making predictions: 10 years ago, people said ‘open access is never gonna happen’; eight years ago, people said ‘open data is never going to happen’; three or four years ago, people said ‘open peer review is never gonna happen’. Now people are saying preprints are never gonna happen. Yeah… all of these aspects of open science are already happening in one form or another, and it’s fantastic. But in doing so we’ve created a whole range of new technical, social and language barriers around these new developments, and that’s a bit of a problem.
We like to think that openness is supposed to be inclusive, right? It is one of the key founding principles of the open science movement, but is that actually the reality? Have we created a new system that’s open for some and not open for all? I would argue yes, and one of the reasons is that there are immense barriers to change within academia. We have a suite of social, cultural, technological, political and organizational factors that create vast barriers, at both the individual and community level, to progressing in the way we think is most beneficial to our communities.
Three of the biggest stifling effects are fear, particularly for the most underprivileged; competition, because we all want to advance our careers; and the abuse of power dynamics by those at the top. All of this creates inertia, which prohibits course correction. Look at the values driving open science: how to reduce publication bias, how to increase access to knowledge, how to make research more efficient and reliable, how to make it more sustainable, and how to foster collaboration. Almost every barrier to these revolves around fear. For example, fear of being scooped, fear of information overload, fear of wasting time learning new practices, or fear of poor research quality. Fear of errors and public humiliation is a big one for grad students. It’s this concept of fear, and this is where the penguins come in. Researchers are like penguins.
Penguins spend most of their day huddled together on an ice cap. Eventually they all start to get a little bit hungry, and they look longingly at the water. Food is out there, but so are killer whales. And no one wants to be the first one to jump into the water, because they’re afraid of being eaten. Eventually one of them gets so hungry that he slides down off of the iceberg into the water, goes hunting for fish, and he’s very happy. Then all of the others are like ‘oh, well he was safe, so maybe we can go down too’, and eventually, one by one, they all slide down and go off. And no one gets eaten. Well, sometimes someone gets eaten, but that’s life. It’s the same with academics.
There are new technologies and new processes and everyone’s terrified of being the first one to jump in. This fear is coupled with the fact that we’re almost exclusively rewarded for gaining academic capital based on the journals we’ve published in. So we have an academic industry that relies on creating this ‘stifling effect’ over innovation and the progression of a field. We are generating a lot of value for the publishing industry, but we’re losing out as a global research community in the process. People talk about providing incentives to do open science, or ‘sticks and carrots’ to make people do better science; but that misses the point that we should be doing good science in the first place. We should not need to be incentivized to be transparent about our work; that’s the completely wrong way of looking at things in my view. That’s why the penguin analogy sort of works.
The next analogy is cobras. The cobra analogy is about key performance indicators and how having a performance-based evaluation system revolving typically around publication is damaging to academia and to global scholarly research. I call this the ‘cobra effect’, i.e., perverse incentive. There’s this really well-known anecdote about when the British ‘occupied’ India. Administrative officials were concerned that there were too many cobras in Delhi. So they created a new policy that members of the populace would be given money in exchange for any dead cobras. In return, the locals started breeding thousands and thousands of cobras, and then got a lot of money for them. So a policy designed to cull the number of cobras perversely led to a population boom in the end.
And the same thing happens in science if you look at how we are rewarded based on citations and impact factors. That’s what we end up aiming for. There’s another great paywalled article which came out last year that looked at this effect in Italian researchers. It found that within four years of the Italian Research Council’s policy that citation metrics would be used in hiring practices, there was as much as a 179% increase in the number of self-citations. So it was a great idea executed in the wrong way, and it led to an unintended consequence. It’s called Goodhart’s Law:
“When a measure becomes a target, it ceases to be a good measure.”
Goodhart’s Law; quoted in Jon Tennant’s slide
When high impact journals are the target for researchers they shift their priorities from scientific method to ‘how do we get into high impact journals?’. They conclude, ‘oh we have to tell a really good story, we have to get better data for our work.’ And that skews the research process, because the research process should never be about aiming for a high-impact publication. It should be about discovery of truth. Right? But we’ve skewed that.
So this is the game; and people always respond saying ‘well, you know, this is just the system that we’re a part of.’ But the system is made up of people, right? So anybody who is complicit in citation gaming should be accountable for those actions. The fact that we are rewarded for high-impact journals is backwards. If you look at peer-reviewed publications in the top journals, they are typically of lower quality. That research has the highest probability of being retracted, not because more eyes are on it, but because of the higher probability that researchers have committed fraud or cut corners in order to get into those journals[3].
According to this figure, ‘top journals’ mean worse research. It also demonstrates that the impact factor has perhaps nothing to do with the quality of the research itself: the inverse of what we expect. One thing I tell researchers is ‘if you use the impact factor to evaluate the quality of another person’s research or of an individual researcher, then all of your papers that use any form of statistics should be retracted; because you sure as hell don’t know a thing about statistics’.
“If you use the impact factor to evaluate the quality of another person’s research or of an individual researcher, then all of your papers that use any form of statistics should be retracted; because you sure as hell don’t know a thing about statistics”
Jon Tennant, ‘Open science is just good science‘.
That’s a powerful message to tell people especially in senior positions.
How does open science factor into this? Some possible ways forward.
One solution is to use altmetrics and article-level metrics so that we don’t just use one crap proxy to evaluate an incredibly complex system of research. If you haven’t signed the Declaration on Research Assessment (DORA) or looked at things like the Leiden Manifesto or NISO yet, these should be high up on your agenda. But ultimately it’s down to the individual researcher to ‘stop breeding cobras’, because that just contributes to a worse system.
Now, about Gimli. Has anybody not seen Lord of the Rings? There’s a scene towards the end of the movie where, against all hope, the last of the good guys march on the gates of the bad guys; it’s a no-hope situation. They’re basically all worrying themselves to death, and Gimli says, “certainty of death, small chance of success, what are we waiting for?!”, and they all march off and most don’t die; and it’s another perfect analogy for academia. We are told that you can’t do various aspects of open science because they will harm your career; and that’s due to the social, internal barriers mentioned before. A divergent attitude has been imposed upon us: people who want to innovate and explore and create, or do good science, are chased out of the system. The effect is that all of us strain in perpetuity as part of the status quo, and research suffers. Statistically, fewer than 1 out of 200 grad students will get a full-time professorship, according to recent research done in the UK. The question is then: why would you try to be the worst version of yourself via publish-or-perish to get a job that you’re probably not going to get, because you are going to publish-and-perish anyway? Researchers become trapped in this cycle. We feel forced to play the game because leaving academia is perceived as failure. This reinforces the power imbalances, cultural inertia, commercial interests and governing systems of academia, and the cycle continues.
Can we break this cycle through training?
A paper published last year showed that for 60.8% of research articles published in global health journals, the researchers did not self-archive, i.e., post a preprint, even though it was free and allowed within journal policy. This is life-changing research which researchers themselves are not sharing, in a field where you would think access to knowledge is important for saving people’s lives. We have to ask ‘why?’. In the UK, a study showed that 93% of researchers believe that open access is important, but fewer than half that number have actually published in an open access journal. Why, again, this massive discrepancy? It is quite shocking that researchers can expressly promote open access but not practice it, even though publishing in an open access journal statistically increases your citations. The same can be said if you share your data and your code openly: you make your work more reusable and therefore more likely to be cited. In a system where ‘dead cobras’ still count, this is a good thing for you. A lot of people will counter this by saying ‘open access is too expensive.’ If you say that, all you are saying is that you can’t Google properly, because self-archiving costs nothing. There are so many routes out there to free, instantaneous sharing that help to level the playing field for everyone.
“I honestly don’t know, it just it blows my mind that researchers can promote one thing with one hand and then fail to uphold their own values with the other. And I don’t understand why, because all of the evidence points towards being open as enhancing your career”
Jon Tennant, from talk
Now there are these policies and mandates saying ‘you have to publish your work open access’, and publishers swooped in and said ‘we’ll give you open access for three thousand dollars a pop.’ Why would you pay three, four or five thousand dollars for something that you can get for free? Preprints are amazing. Again, sharing is generally good for your career because you generate more citations, faster. More importantly, you get free, rapid communication of your research, which could benefit society and its problems. There has been an explosion of preprint venues in the last five months. The concept here is that it’s your own work; don’t stick it behind a paywall. You have choices about where to publish, the future is definitively going open, and you can already be a part of this.
On the left we have the exponential rise of preprints by platform, and on the right are preprints as a percentage of all papers by discipline. At the same time, open access mandates are appearing across the globe from funders, institutions and governments. Openness isn’t going anywhere, so you might as well ride that wave.
In summary
I think it’s time to change the conversation because open science is pretty awesome. It increases the dissemination and reusability of your research and ultimately enhances your academic profile which is good for you. More importantly it helps to combat the reproducibility crisis, and makes you a better researcher both ethically and methodologically. It disseminates potentially life changing and saving knowledge freely to all.
The first step in achieving this is that we need to take responsibility and educate ourselves about open science, or good science. This is one of the reasons I’m building the Open Science MOOC (Massive Open Online Course): to provide training, support and education for researchers around the world. We hope to use this to empower the next generation to become leaders in their own research fields. There are challenges though. We need to act not just within our own little communities, but across them, to increase interdisciplinarity and community building.
[speaking to the DARIAH audience specifically] I’m honored to be here with humanists and social scientists because I don’t get to speak to you very often and I know that at the conferences I attend [paleontology] that humanists and social scientists often aren’t invited and I think that’s a real problem. We have this gulf between physical science, and humanities and social sciences. We need to be working together building bridges, not walls. Open science for me is about breaking down barriers and generating equity in science. Things that can help us to foster collaboration and increase the power of communities against the entrenched crap which we’re all trying to fight against. This means we have to work together towards a common goal, ultimately that common goal for me is pooling knowledge and resources to create a decentralized scholarly infrastructure. With communities as the actual focus, then we can actually achieve the principles of open science.
Be that penguin. Don’t hold back from trying new things; be one of those people who jumps into the water first, because you’ll be remembered amongst your community as a champion. Be fearless like Gimli. The career pipeline is leaky anyway, so why not diversify your skill set? Go out as an awesome researcher, ‘guns’ blazing, or train yourself to become an awesome researcher through open scientific practices, predicting what the future of your field is going to be rather than doing what professor X tells you to do because it worked 50 years ago. Don’t be a cobra farmer. Focus on good science and responsible evaluation, and let the quality of your research speak for itself.
What open science is
It’s a tautology. Science was always open. This is where we want to get to in 10 years. Eventually, we don’t want ‘open science’ to exist anymore, because this will be the period when we woke up and realized that what we were doing before wasn’t really science. It was anecdote, and we need to change that. Science without open is just anecdote; open science is just good science. That’s your take-home message.
Afterword – OS timeline updates
Between Jon’s 2018 talk and his passing in 2020, major changes took place that would reshape the science landscape. Two in particular are central to Jon’s core values about free knowledge communication, and they reinforce that open science is actually good science. For one, despite many lawsuits and attempted shutdowns, Sci-Hub appears to have had very little impact on publishers’ profits. The figure below shows Elsevier’s parent corporation RELX’s revenue and earnings per share since 2016. Sci-Hub saw massive usage throughout this time, yet RELX’s revenues continued to climb.
What happened in 2019 was not a sudden flocking to Sci-Hub. It was that most of Germany’s institutions, the University of California and many other global institutions cancelled their Elsevier contracts. This clearly had a huge impact. At the same time, already in 2016, researchers on every continent used Sci-Hub en masse and had way more scientific knowledge in their possession than at any time prior. This was a huge achievement for scientific communication, that still continues today. Why didn’t Sci-Hub matter for corporate publishing profits?
Simple: presumably those who use Sci-Hub in the Global South, and members of less well-endowed institutions in the Global North, do not have legal access to the articles they download. These institutions do not have the funding for legal access, so whether their researchers use Sci-Hub or not, the institutes are not a current source of profit for publishers. Moreover, given the persistence of global inequality, it is unlikely that these institutes will ever afford subscriptions; thus they are not potential sources of profit either. Those who use Sci-Hub at well-endowed institutions with subscriptions – maybe they are lazy, unaware of the subscription, or somehow unable to access their library (e.g., a VPN or technical glitch) – are not cutting into profits either. These universities already have subscriptions for the most part, so when their researchers use Sci-Hub, the publishers still profit. Thus, all the noise made by publishers about Sci-Hub eating their profits is greatly overstated.
The percentage of research available through legal open access channels doubled or even tripled between 2015 and 2020, but one of the greatest achievements politically is Plan S, supported by cOAlition S. This is a mandate that all funded research be open access as of 2021. It has completely shaken the for-profit, megalithic publishing system and its role in Europe. Already, two of Jon’s favorite targets, Springer Nature and Elsevier, have come to the table to offer solutions for researchers to be compliant with Plan S. It’s not perfect, as it still calls for often large APCs on the part of the author, but it allows authors to retain CC BY rights to their work. We cannot expect science communication to be perfect, just like human nature, but we can in perpetuity strive to do just good science and continue to push publishers to reduce their fees as we move away from print distribution.
Footnotes
[1] This is one of Jon’s more controversial claims. While there is evidence the public may have trusted science less heading into the 2010s, there is also a lot of evidence that trust has been stable for decades and of course that this varies greatly by country.
[2] And statistically underrepresented in the output. The exception being that female solo authors are not apparently penalized in the peer review and publication process itself.
[3] The evidence on this is mixed.
Original post by Witold Kieńć
[Note from Nate Breznau] This post originally appeared in DeGruyter’s blog. I had assigned it as a reading in my course, but when I followed the link recently, it resolved instead to the home page of the blog. I searched within the blog but it did not turn up. Did DeGruyter remove it? I am also unable to find the author of the post, as his website seems to no longer exist (according to what I think is his OSF page). [update] I Tweeted to them and they were quick to respond.
@degruyter_soc, we are trying to read the blogpost "Global inequalities in science are bigger than those in the economy" in our course, which once appeared in your blog https://t.co/k3pCKU0IkE, but it no longer appears to be there. Have you removed it?
— Nate Breznau (@BreznauNate) June 23, 2021
Hi Nate, the article in question was published on the "Open Science" blog, which was recently discontinued. That is why the article is not available anymore – apologies for the confusion!
— De Gruyter Conversations (@DGConversations) June 24, 2021
As their blog is discontinued, the original blog post is reproduced below as recovered from the Internet Archive.
The lion’s share of all scientific articles published in established academic journals comes from a small number of countries, and some of these leading countries are really small and rich when seen from a global perspective. According to World Bank data, more than 21 thousand papers indexed by the Science Citation Index and Social Sciences Citation Index in 2013 were published by Swiss researchers. This means that Switzerland produced 2,603 top-level papers for each one million of its inhabitants. Denmark, second in this ranking, achieved 2,223 papers per million people, about 15% fewer. A visualization of the number of papers per million inhabitants on the global map shows the indubitable hegemony of rich, northern countries in science.
A country’s scientific publishing output per million people correlates very strongly with Gross Domestic Product per capita (Spearman 0.84!). In short, you have to be rich to have a significant input into science. This might be nothing new for you, but what was quite surprising for me is that global inequalities in science are bigger than those in the economy.
I calculated the Gini coefficient for 4 of the “development indicators” provided by the World Bank. The first is the market capitalization of listed domestic companies – the commercial value of companies registered in a country – which I expected to be the most unequal on a global scale. The second is Gross Domestic Product, a well-established indicator of welfare that is known to be extremely unevenly distributed globally. I also chose electric power consumption, a good indicator of the general level of consumption. The fourth indicator is the number of articles indexed by Thomson Reuters services (this data is provided by the World Bank as well). Each indicator was divided by the country’s number of inhabitants and then the Gini index was calculated.
The result: the contribution to the scientific core is even more unequally distributed among countries than GDP or the value of companies. It is the most unevenly distributed factor that I analysed. Countries are more equal in their share of global wealth than in their impact on the global scientific discussion.
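To make the calculation concrete, here is a minimal sketch in Python of the kind of computation described above. All values and the six-country sample are invented for illustration; this is not the author’s code or the actual World Bank data.

```python
import numpy as np
from scipy.stats import spearmanr

def gini(x):
    """Gini coefficient of a 1-D array of non-negative values.
    0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    # Closed-form expression based on the sorted cumulative sums
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Invented per-capita values for six hypothetical countries, standing
# in for the World Bank series used in the post (not the real data).
papers_per_million = np.array([2603, 2223, 1500, 100, 10, 1])
gdp_per_capita = np.array([85000, 60000, 40000, 9000, 2000, 800])

print(gini(papers_per_million))  # ~0.55: inequality in scientific output
print(gini(gdp_per_capita))      # ~0.53: inequality in wealth
rho, _ = spearmanr(papers_per_million, gdp_per_capita)
print(rho)  # 1.0 in this toy sample; the post reports 0.84 on real data
```

In this toy sample, as in the post’s finding, the papers indicator comes out more unequal than GDP per capita.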
A lot of work has been done to inform citizens of Europe and North America about the dramatic scale of global inequalities. However, these inequalities are so big that average people from wealthy countries still do not fully understand what it means “to live below the absolute poverty line”. So please, now try to imagine that inequalities in science are even bigger than that. Of course, affording food is more important than contributing to the scientific discussion; thus, the results of inequalities are less dramatic in the case of scientific research. However, the case of Ebola might be worth thinking about here. This pathogen was detected for the first time in 1976, quite a long time ago. Would the current therapy for the disease that it causes be the same if it had been discovered in Alaska? The Ebola virus is an extreme and rare case; a lot of science has less to do with life-or-death issues.
However, my comparison to the drastically uneven world of the global economy may help you imagine how different the chances of researchers from the Global South are from those of their colleagues in the Northern countries. And the factors that support the hegemony of the North are not only economic ones. There are significant cultural and social barriers that reinforce the unequal status quo (have a look here).
This blog originally appeared in the COVID-19 Blog of the Collaborative Research Center “The Global Dynamics of Social Policy” at the University of Bremen.
The WHO declared a Public Health Emergency of International Concern on January 30th, 2020, and a global pandemic on March 11th, based on overwhelming evidence that the highly infectious novel coronavirus SARS-CoV-2, and the deadly COVID-19 disease it causes, threatened all of humankind. Despite this clear message, public and government responses varied dramatically by country, city and even neighborhood. Controlling the spread of any global pandemic requires large-scale, cohesive public responses. As there is no global governance, national governments were crucial institutional actors in the pandemic fight.
In Germany, the national government was quick to push German states to adopt cohesive measures in February, and then to ratchet these up in March as infections exploded in places like New York City and Italy and in localized regions and events within Germany. In Figure 1, the dotted line shows deaths from COVID-19 and the solid green line shows the degree of government lockdown measures. At least in the first wave, Germany was highly successful at curbing the spread of the virus. This contrasts sharply with Sweden, displayed in the middle panel of Figure 1.
In Sweden, the constitution prevented lockdown measures outside of wartime. Although the Swedish government encouraged its residents to follow pandemic safety guidelines, the lockdown measures were relatively lax, and the infection rate and resulting deaths were among the worst in the world at the outbreak of the pandemic. The Swedish response, and even the relatively ‘good’ German response, paled in comparison to the swift and effective lockdowns in South Korea and most East Asian countries. In the lowest panel of Figure 1, deaths per capita stay nearly at zero and remained there up to the time of writing.
Government response is not the only factor, as made clear in the second wave of infections starting in October of 2020. Germany and Sweden had similar death rates in the second wave, with slightly stronger government intervention and slightly fewer deaths in Germany. However, in South Korea the government lockdown was similar to Germany’s, yet they had a fraction of the infections and almost no deaths.
Government response is simply a method to control public behaviors. Ultimately the public are the arbiters of infection, and their behaviors explain different outcomes where governments take similar control measures. In Wuhan, China, the public had little control over their behaviors: they were confined to their homes, subject to biosecurity protocols and ‘policed’ by both actual police and Communist Party-led neighborhood watches for at least 76 days. The lockdown halted the infection and death rates locally, but the virus had already hopped China’s borders, leading to the pandemic. By contrast, once the virus arrived in countries like Sweden or the United States, residents were mostly free to behave as they pleased. The fate of the virus was essentially in the public’s hands, because their behaviors – movements, contacts and (lack of) awareness – are the only means by which the SARS-CoV-2 virus can spread.
This means that especially in liberal democratic systems where the governments cannot easily impose lockdown measures, studying human behavior is essential to understanding how to fight a pandemic. Social scientists regularly observe a correlation between sentiments and behaviors. The public forms attitudes toward ‘the virus’ and ‘a pandemic’ from the news and word-of-mouth. Therefore, the contents of media messages play a major role in shaping behaviors indirectly through the information contained in news and editorial articles.
Figure 2 shows how daily infections closely follow the sentiment in media messages. When sentiment is more positive (thick yellow line) it is likely that the public perceive less risk and then engage in less precautionary behavior leading to increases in infections (dashed purple line). At the same time, sentiment is more positive as government restrictions ease (thin green line), thus enabling less precautionary behavior like social gatherings and in-person work; in turn leading to more infections.
What is also striking about Figure 2 is that infection rates in Germany show a weaker correlation with media sentiment than in the United States. This is most likely due to stronger government intervention in Germany, whereby individuals have less control over their decisions, or at least face punishment for not following government guidelines. The apparent association between media sentiment and infections should run through public behaviors, but cross-national behavioral data were scarce during the pandemic. However, during a brief window of opportunity from March 15th to April 7th, 2020, Thiemo Fetzer and colleagues fielded a survey asking about precautionary public behaviors in at least 80 countries. Figure 3 compares these behaviors with average media sentiment in the last week across countries and demonstrates a clear correlation between more positive sentiment in media contents and less precautionary behavior.
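As an illustration only, here is a minimal sketch in Python of the kind of country-level correlation Figure 3 reports. All values are invented; the actual data and code are in the GitHub repository linked at the end of this post.

```python
import pandas as pd

# Invented country-level values standing in for Figure 3:
# average media sentiment in the last week (higher = more positive)
# and an index of precautionary behaviors from the survey.
df = pd.DataFrame({
    "country":    ["A", "B", "C", "D", "E"],
    "sentiment":  [-0.20, -0.05, 0.10, 0.25, 0.40],
    "precaution": [0.81, 0.75, 0.62, 0.48, 0.35],
})

# The pattern described in the text is a negative correlation:
# more positive media sentiment, fewer precautionary behaviors.
print(df["sentiment"].corr(df["precaution"]))  # Pearson; close to -1 here
```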
National governments are in a tough position during pandemics. They cannot enforce lockdown measures beyond a certain ‘breaking’ point, or the public will simply rebel or ignore them in such large numbers that enforcement becomes impossible. It is therefore not unreasonable to conclude that at least in liberal democratic regimes, the most effective pandemic prevention measures, like those taken in Wuhan, are simply not possible. The old adage ‘give me liberty or give me death’ might therefore be reframed as ‘give me liberty and death’ in pandemic times.
Data and code available at GitHub/nbreznau/covid-liberty-death
The introduction of work-injury insurance is seen by many welfare state scholars as a pivotal moment in each country’s history. It created two new institutional features of societies.
1. Work-related risks were re-framed as affecting all of society rather than individual, familial or employer-specific risks. They were redefined as social risks – those that inevitably face all of society across time and space.
2. The state took on the role of arbiter of social risks by developing, mandating and regulating risk-pooling in the form of social insurance – nationally mandated, regulated and/or provided insurance – a process that usually started with regulating work-related accidents.
“Accidents no longer seemed an interpersonal matter to be sorted out between workers and employers in court. Instead, they became a social problem and a target for social policy.”
(Moses 2018:4)
After its late 19th Century origins in Europe, work-injury law in the form of social insurance was written into the law books of nearly every country in the world.
Figure 1 visually demonstrates that after 1990, social insurance for work accidents covered most of the globe. If anything, it understates the prominence of these laws. For example, in the U.S., where there is no national work-injury insurance, nearly all states have their own social insurance laws for work-injury.
The institutionalization of social risk in the hands of national policy appears widespread, but a careful look at coverage suggests it is perhaps not as strong or as widespread as welfare state scholars often claim.
In the Global North, the legal coverage of work-injury law averages 69% of the labor force, whereas in the Global South it is only 37% (for the 169 countries for which data are available). There are some extreme outliers in the Global South with less than 6% coverage. As shown in Figure 2, many countries had very weak coverage as of 2010, despite having social insurance laws.
Given the prevalence of laws explicitly targeting the industrial, blue-collar workforce of so many countries, it is surprising that the coverage rates among the formal labor market are so low.
A main cause of this is a lack of enforcement mechanisms. Many of these laws require extensive legal and governmental institutions. Infrastructure, bureaucracy, a system of enforcement through inspections, safe reporting channels for workers, and measures that prevent corruption of funds or of the execution of justice all tend to be weaker in the Global South.
Another reason is clauses that leave implementation to the whim of certain officials, additional laws that undermine national social insurance laws, or a lack of de facto punishment for not following the laws.
Myanmar and Bangladesh are two cases that help illustrate these points. Both involve high density garment factory production. Myanmar enacted social insurance for workers in 2012, but the coverage for work accidents is less than 6% of the labor force. How is this possible when the law explicitly states “compulsory registration” in the social insurance system for employers? One possible answer is in the wording of certain clauses in the law. For example:
“The President may… exempt the regions which is not yet necessary to implement currently according to the plan to be implemented all or any part of the provisions contained in this Law or any establishment applied by this Law or any type of employer or worker.”
Myanmar Social Security Law, 2012, Art. (99)
Governments regularly grant exempt legal statuses to industries that are deeply embedded in global supply chains, industries that arguably would not be located in their countries if the cost of operation were too high. This might explain why Bangladesh did not bother to implement social insurance and instead relies on outdated forms of work-injury laws such as employer liability and a provident fund.
In 1980, the Bangladesh government passed the Export Processing Zones Act, which created geographic areas that were no longer under the jurisdiction of national law and instead controlled directly by the president’s office. This meant that work-injury laws did not apply to the ‘sweatshops’ feeding the global demand for inexpensive garments. With such an exception in place, the introduction of a social insurance law would not make any difference in coverage or in addressing risk, unless it overrode the exemptions granted to special regions – like those that, in Myanmar, can be granted at the whim of the president.
There have been horrific accidents in Bangladesh, with the collapse of Rana Plaza one of the worst industrial disasters in human history. This led to new coordinated plans between the industrial stakeholders, the Bangladesh government and the ILO, but it remains to be seen whether the coverage rate has increased from 12.5%, where it stood in 2014. Given the capacity for corollary laws to undermine social insurance laws, it remains doubtful. This suggests that the underlying normative notion of social risk, and the government’s role as insurance arbiter, has not been institutionalized in Bangladeshi society and politics the way it has among many Global North countries.
With such low rates of coverage, the idea that social risks and welfare states are institutionalized at a global level may be overstated. Yes, laws are prevalent (see Fig. 1), but, no, they do not actually address the risks affecting nearly all workers (see Fig. 2), for a variety of reasons. As such, they do not address social risk, and they may nurture path-dependent institutions in which work-related risks remain normatively embedded as individual or family risks rather than social risks.
Data sources and replication files available on GitHub.
[1] Young coal miners (left), Bangladesh garment workers (upper-right), Uganda textile workers (lower-right)
References:
Breznau, Nate, and Felix Lanver. 2020. “Global Work-Injury Policy Database (GWIP).” Harvard Dataverse.
ILO. 2014. “Global Programme Employment Injury Insurance and Protection | GEIP Data.”
Moses, Julia. 2018. The First Modern Risk: Workplace Accidents and the Origins of European Social States. Cambridge: Cambridge University Press.
To replicate a study, you need information – probably information that is not fully disclosed in a 6,000–12,000-word journal article. Recent trends toward transparency aside, information such as data and analytical procedures is usually not publicly available. This means you should – or must, in case the data are not retrievable from some other source – contact the original author. Be prepared for rejection. One study demonstrated that among the top sociology journals, less than 30% of replication materials were available (even though as many as 75% of articles claimed otherwise). Political science was only marginally better, at around 50% as of 2015. Professors are likely to ignore emails asking for their data and code. One group of sociology students contacted 53 different authors asking for replication materials and only 15 provided them (28%). Ten never responded to the requests at all, despite several follow-up emails. So don’t take it personally; social scientists are not known for being forthcoming in this area.
Imagine being a student who tries to verify the results of a prolific, senior scholar and cannot. If it were me, I would be anxious that I had made a mistake. But the only real mistake would be to assume that my failure to verify is a refutation of my own skills. Of course, it’s good to double-check everything. Have a colleague look at your work if you are unsure – a teacher or supervisor if you are a student. Unverifiable results are common; there is no need for self-doubt. Mistakes like reverse-coding biological sex so that women appear less supportive of welfare state policies, or accidentally analyzing values of 88 (a missing-data code) as a real value of coital frequency – leading to a surprisingly high rate for older persons – are a normal part of social science.
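To see how easily the second kind of mistake happens, here is a minimal sketch in Python using pandas. The variable name and values are hypothetical illustrations, not taken from the actual study:

```python
import numpy as np
import pandas as pd

# Hypothetical survey column where 88 is the codebook's "no answer" value
df = pd.DataFrame({"coital_freq": [2, 0, 4, 88, 1, 88]})

# Naive analysis treats 88 as a real frequency and wildly inflates the mean
print(df["coital_freq"].mean())  # 30.5 -- nonsense

# Recoding the missing value first gives a plausible estimate
df["coital_freq"] = df["coital_freq"].replace(88, np.nan)
print(df["coital_freq"].mean())  # 1.75
```

Two lines of recoding separate a publishable estimate from an absurd one, and nothing in the software warns you either way.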
When replicating a study, just assume there will be at least one mistake. Like a treasure hunt.
Verification comes down to the availability of the materials. If the data and code are not fully available, it really is a treasure hunt, because you cannot be sure what you are going to find or learn. On the other hand, if the data and code are available and in good order, then it is more like cooking than hunting. This is the difference between teaching replication – the recipe approach, where students should come to the same results every time they follow the exact same steps – and replication as a form of social research – the treasure hunt approach, where researchers (i.e., students) may not have a coherent recipe from the original ‘chef’. But make no mistake(!): even fully transparent studies often come with mistakes in the code or data.
If I am not making mistakes, I am not doing research. You will make mistakes, and there is nothing to fear. There are all kinds of reasons that replication results diverge, and not all of them are mistakes. Recently a well-known and well-respected sociologist retracted his own paper after someone trying to replicate the study identified coding errors. One journal started checking that the data and code of accepted papers actually produced the published results, and almost none were verifiable on the first attempt. In a crowdsourced replication, mostly PhD students, postdocs and a few professors achieved an exact verification of the original study only 82% of the time, despite having the original code!
Designing statistical models using software is like learning a new language. Student replications often involve methods unfamiliar to them. This is a great didactic tool – learning by doing. There is nothing to fear here: professors’ original studies often involve methods that they are not experts in either. One extremely famous scholar and his colleague ran a regression with an interaction term and botched the interpretation of the effects; the results were basically the opposite of what they reported.
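A minimal sketch of how this misreading happens, using simulated data and Python’s statsmodels. The variables and coefficients are invented for illustration, not taken from the study in question:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n)
# True data-generating process: x helps when z == 0, hurts when z == 1
y = 0.5 * x + 1.0 * z - 1.0 * x * z + rng.normal(size=n)

fit = smf.ols("y ~ x * z", data=pd.DataFrame({"y": y, "x": x, "z": z})).fit()
print(fit.params)

# Pitfall: the coefficient on x (~0.5) is NOT the overall effect of x.
# It is the effect of x only when z == 0. When z == 1, the effect is the
# sum of the x and x:z coefficients (~ -0.5) -- the opposite sign.
```

Reading the coefficient on x as a main effect, when it is really a conditional effect, is exactly the kind of error that flips a reported finding.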
Science is a process of exploring the unknown. Replications use what is known as a tool for finding what is unknown.
Fear of rejection, Part II
Students may be interested in publishing their replications – and they should be, because how else will others put their knowledge into practical use? Be prepared, again, for rejection. Journals and reviewers across the social sciences are not very excited about replications. A pair of researchers studied the instructions and aims of 1,151 psychology journals in 2016 and discovered that only 3% explicitly accepted replications. One sociologist pointed out not so long ago that replication is just not the norm in sociology, and another recently came to the same conclusion. The good news is that we don’t need journals anymore to make useful science, at least in theory. Students can immediately publish their results as preprints and share data and code in a public repository. If a student elects to use the Open Science Framework preprint servers, their work will immediately be discoverable in scholarly search engines.
Fear of ego
Scientists tend to overestimate the impact of a negative replication on their reputations. Ego alert. Assume a scientist worried about a replication is a professor. This is a person who is most likely tenured – certainly the highly cited professors are. This is also a person who “professes” knowledge on a topic, meaning that they should be an expert who teaches students, policymakers, the public and really anyone interested in that topic. If any of this professor’s results were shown to be unreliable or false, that would be a critical piece of information for anyone whose goal is to actually profess knowledge on the topic. Unfortunately, professors regularly suffer from a kind of ‘rock-star syndrome’ or ego-mania in which they do science as a means to gain recognition and fame. This leads them to react aggressively against anything that contradicts them. This is very bad for science. If a student replicator can help deflate a runaway professorial ego through replication, then that student is doing a great service to science.
Fear of not addressing fear
In a typical primary or secondary school chemistry class, students repeat basic experiments with chemical reactions that have been done for hundreds of years. These students are learning through replication. They are gaining knowledge in a way that cannot simply be taught in a lecture or by reading a book. They are also affirming the act of science, thus developing a faith that science works. In social science especially, we face a reliability crisis, if not a public image crisis. Students should be reassured that there is a repetitive and reliable nature to doing social science, whether they continue as social scientists or (in the most likely case) not. Part of this reliability can be a lack of reliability: science is simply a process of trying to understand the unknown, and even to quantify this unknown. I fear that without more student replications, we are diminishing the value of social science and contributing to the perception that social science is unreliable.
Good social science should be reliably able to identify unreliability, and this is best taught through conducting replications.