Installment V. If Only Science Had Superheroes
Installment IV. Science Mart
Installment III. Dystopian Tenure
Installment II. In Science We Trust
Installment I. Impact Factor Fetish
Foreword
This blogpost is an edited and abridged transcription of the talk ‘Open science is just good science’, given by Jon Tennant in 2018 at the annual meeting of DARIAH (“Digital Research Infrastructure for the Arts and Humanities”), a European Union-based network supporting digitally enabled humanities research and teaching. The talk took place on the eve of major events in the Open Science Movement, to which Jon contributed an enormous amount in a short period of time. Presented here in readable format and appended with links, additional information and citations, it provides an excellent primer on open science. This post is offered in honor of Jon's memory; he passed away on April 9th, 2020 in a motorbike accident. Any changes to the original wording of his speech are intended to focus, clarify and better communicate his message – a candid voice for open science.
Jon Tennant = open science
My story began about seven years ago. I was a master's student at Imperial College London at the time, and I was talking to a friend, saying ‘you know, I'd really like to publish my master's thesis’. And he said ‘well, you know, just make sure that it's open access’. I was like ‘what the hell is open access?’, and he said ‘it's where you make your research freely available to everyone.’ And I thought ‘well, doesn't everyone have access to… oh wait…’, and everything about my academic history up to that point sort of unraveled.
When I was a student I would hit paywalls all the time, and it just seemed like a common everyday thing. Like, ‘oh crap this one’s paywalled, guess I can’t use that, move on to my next one’. I realized that despite being in a ridiculously privileged position at a very elite institute in the UK, I still couldn’t actually access the things I needed to do my own research. The more I thought about it, the worse it became.
So what are the things that motivate me in the morning?
These are the big picture things that the open science movement is trying to solve, like the fact that the vast majority of scholarly research is still held hostage by private corporations. Around 75% of all published knowledge, which should be available to humanity, is instead owned by shareholders or private companies. This disadvantages almost everyone on this planet except those fortunate enough to be in a privileged position at an elite institute. These commercial giants are ruthless racketeers. They have profit margins often in excess of 35 to 40 percent, bigger even than Apple and the big oil companies. The result is that as a global research community, or scholarly community, we are not communicating our results effectively, and progress on the many massive issues affecting our planet suffers for it.
There are major barriers to the dissemination of scholarly knowledge. Copyright is a huge one. It is completely and utterly broken. It does not protect us as content producers; it protects the profits of scholarly publishers. Often we can't even access our own research results, and we are prohibited from sharing them due to anachronistic copyright laws. As consumers, often we don't even know what we're buying until after we've bought it. You can pay 40 bucks to access a research article with no idea what's actually in it or whether it will prove useful, and there's no way in hell Elsevier is going to give you a refund. We have life-saving research, yet most cancer and global health research is still hidden behind paywalls. And the real question is: how is any of this helping science or research, or having an impact on the global challenges that we are facing?
Consider the bigger picture: the Sustainable Development Goals set by the United Nations include things like economic growth, industry innovation and infrastructure, reduced inequality, clean energy, and combating energy insecurity, water insecurity, and hunger. If you believe that research can help us achieve these goals and resolve these issues, then you must also acknowledge the corollary: preventing access to research stops us from achieving these goals. This is exactly the system you're playing in: an industry that thrives on preventing access to knowledge, because that's how it makes its money. It's not a bug, it's a feature. It really is systemic, and parasitic as well, if you want to use an ecology term.
One of the consequences of this is that public trust in research and expertise has plummeted over the last few years[1]. We see expertise dismissed, especially scholarly expertise, as if what we're doing is no different from just Googling something. I've created a hypothetical conversation here, but this sort of conversation happens even at the highest levels, like in Congress, when researchers presenting evidence get rejected because politicians say ‘you know, isn't that research published in Science or Nature basically just fake?’.
From Jon’s Slideshow
Academic: “This research paper has been published and therefore is scientifically valid.”
Non-academic: “But it’s paywalled. I can’t access it. How do I know it’s valid?”
Academic: “Because it has been peer reviewed.”
Non-academic: “Can you show me the peer reviews?”
Academic: “No. But it was done by two experts in the field.”
Non-academic: “Which experts?”
Academic: “We don’t know. But it’s in a top journal.”
Non-academic: “Why is it in a top journal?”
Academic: “Because it has a high impact factor, so is highly cited.”
Non-academic: “Why does that make the research better?”
Academic: “Trust me. I’m a scientist.”
If you think about it, there's not much reason why politicians and the public shouldn't think that. Trust has to be earned, and opacity does not breed it. In a world where transparency breeds trust, we shouldn't be surprised when expertise is rejected, because we're operating within a closed system. If we step outside of that system and look at it, or empathize with those outside it, then it actually makes sense why we have such a chaotic relationship with the wider public at the moment. The ivory towers of academia are certainly crumbling thanks to the wider open movement, but is it happening fast enough, and what are the consequences when it doesn't?
What is open science?
There is actually no universally accepted definition. Open science is about using science to help address the major challenges facing society. Ironically, if you look at the one systematic review of what open science is (published, and paywalled, by Elsevier), it says that ‘open science is transparent and accessible knowledge that is shared and developed through collaborative networks.’ So does that mean open science excludes anything done by an individual? It's a pretty stupid definition if you ask me. But you can't read it anyway, because it's paywalled. When people hear ‘open science’, they often think it just means physics, biology, chemistry; but when I talk about open science I mean it in the most inclusive sense possible. ‘Open science’ is often used interchangeably with open scholarship or open research, but we have to make sure that we include everyone: humanists, social scientists, artists, engineers, mathematicians, medics, and even citizen scientists are all included under this umbrella of what open science encapsulates.
For me as well, open science is based on core principles. I’ve got this nice little table here from Tony Ross-Hellauer.
This is a combination of practical and personal aspects behind open science. For example, accessibility, equality, and rigor are practical aspects; but there are also ones you might miss, like freedom, fairness, justice and truth. For me, these are principles that you should adopt anyway as a good human being; and if you do, then you're basically an open scientist, or open scholar. You can embed these practices within your everyday life, or at least within the practices you should be following as a researcher. But on the practical side, open science is bloody complicated. I don't want to hold that fact back.
Open science includes things like discovery, analysis, writing, and publication, all the way through to the different tools used for assessment and evaluation. There are entire workflows here which we need to be trained in, but no one is really teaching us how to use them. I imagine we are all aware of at least one of the above tools or practices, but integrating all of them into your everyday workflow as a researcher can be quite complicated. Another really important question I think we need to ask is:
How is open science objectively different to science?
Mick Watson wrote a beautiful article in 2015, ‘When will open science simply become science?’. Those principles and tools above are just good science. Mick said: ‘open science describes the practice of carrying out scientific research in a completely transparent manner (good science) and making the results of that research available to everyone. Isn't that just science?’ And it's difficult to disagree, really. So when we talk about what open science is, it really is just better science; and the opposite of open science is just bad science, because if you're not sharing in a transparent manner, then you're basically creating anecdotes rather than research.
Is open science a movement?
A lot of people describe open science as a movement. A movement is defined as a group of people working together to advance their shared political, social, or artistic ideas. The implication of this is that a movement has a direction with shared common goals based on commonality. So if open science is a movement, then who is defining the direction? Who is defining the shared goals? What’s the strategy behind it? Who’s leading it? Is it DARIAH? Is it the Open Science Framework? What happens to those who don’t feel included in that movement? A nice example of this is one time when I had to go to the Humboldt Institute with some open science colleagues as part of an outreach workshop to teach them different methods of doing open science. When we went there, they actually ended up schooling us in doing something way better than what we were doing using virtual machine environments. And we were like ‘oh cool so you’ve basically already done open science anyway!’ and they were like, ‘yeah but we just don’t call it that.’
Is open science a process? A set of principles? A vision, a club, a political agenda, fad, a distraction, is it exclusive?
What happens when we, as a supposed movement or community, can't actually answer any of these questions? I think that's important, because it gets to the root of what open science is and how it is objectively different from what we as a scientific community are already doing. Answering it can then help us define a strategic direction for the future and for what we actually want to achieve.
Among academics there is this mantra, publish-or-perish. But there is no publish-or-perish anymore; it's publish-and-perish. You can publish a lot during grad school and still be told that you're not qualified enough to get a postdoc. There's too much competitiveness, too much is driven by funding, and too many people coming through the pipeline are drilled into a narrow ivory-tower mindset of how to do academia from the moment we start. One of the first things I was told when I became a PhD student was that within your first year or so you had better publish a high-impact paper. I was like ‘dude, I don't even have any data yet’. I did publish eventually, but it was a lot of strain, and it's a lot of stress to deal with that at Imperial College. Of the 50 or so PhD students in my cohort, pretty much every single one left with insomnia, alcoholism, depression, anxiety, or stress, because they were treated like farm animals under this publish-or-perish mentality. And we wonder why PhD students have almost twice the rate of mental health problems of people who work in emergency health services. We don't have any sort of support framework.
These giant mega-publishers are partly to blame. We've heard Springer, Elsevier, et al. mentioned before. There's a great quote from the journalist George Monbiot: “academic publishers make Rupert Murdoch look like a socialist.” I think that's a very ‘positive’ outlook. It's also known as the industry the ‘internet could not kill.’ Back in 1995, Forbes wrote a really great editorial predicting that Elsevier would be the internet's first victim. Elsevier instead went on to post basically unbounded profit increases, which continue to this day. It's a twenty-five-billion-dollar-a-year industry in 2018, extremely fat and bloated, and 35% profit margins are fairly typical. We still talk about “papers”. It's 2018, and we're still referring to “papers”; what we have for the vast majority of our scholarly communication is a 19th-century process of peer review applied to a 17th-century communication format built around journals and articles. We still mostly use PDFs as well. It's probably about time we adapted to the web of 1995 for scholarly communication, because we're seriously lagging, and it's a very strange system to be part of right now.
I'm sure we've all heard of Sci-Hub and ResearchGate as well. These are platforms that aim to provide increased access to scholarly research but are viewed as ‘pirate sites’, as if the liberation of knowledge were equivalent to plundering and murdering. The American Chemical Society, Elsevier, and their kin are suing them for millions of dollars and shutting them down, preventing access to this research. For some reason, sharing research is illegal; have a think about that one. The more you know, the worse it gets. Only 25% of all scholarly research articles are open access, and that comes about 20 years after the Budapest Open Access Initiative. We're increasing our rate of free access to knowledge by about 1% every year, so maybe in 30 or 40 years we'll finally have substantial access to research knowledge (see OS Timeline Updates at the end of this post for more info).
We have a prestige-based economy in which your worth as a researcher depends on the commercial brands, dictated by corporate values, that you elect to publish in. There are various biases in this: if you are a minority researcher, a woman, or an early-career researcher, the system is heavily biased against you from the outset[2]. As we all know, researchers write, review, and edit the papers, so they generate around 95% of the real value behind scholarly communication. Then we researchers have that content stolen away from us by publishers and sold back to us. If you wonder how they generate 40% profit margins, it's like going into a restaurant, bringing all of your own ingredients, cooking the meal yourself, and then being charged 40 bucks for a waiter to bring it out to you on a plate.
It's bullshit. The old saying is ‘it's smart people doing stupid things for smart reasons’, and the reason is that our careers depend upon this publish-or-perish mentality. But at the end of the day we're basically being duped as a global research community. We are no longer researchers; we are the oil for the machine, the provider, the product, and the consumer for this mega-corporate entity. The market itself is incredibly dysfunctional, part of a wider oligopoly. We have lifesaving research about cancer, Ebola, and Zika, for example, all hidden behind paywalls, sold off to the highest bidder at the will of Elsevier's shareholders. The Ebola outbreak was, what, four years ago? Just two days ago, Nature finally announced it would provide open access, for a limited period only, to Ebola research. So a round of applause to Springer Nature for acting four years after they were supposed to.
If anybody’s not angry at scholarly publishers yet, then I’m clearly failing because you should be.
In Germany, a national library consortium is currently revolting against the revolting practices of Elsevier and their kin. At a publishing conference I attended earlier this year in Berlin [APE 2018], Martin Grötschel was a speaker. He was at the time president of the Berlin-Brandenburg Academy of Sciences, and typically at these conferences the speaker is supposed to get up, give a nice speech about what a great job everyone is doing, and hand out a little round of applause. But this was a conference held by Elsevier and Springer Nature, and instead he followed the vice president of External Relations of Elsevier onto the stage and spent about 15 minutes slamming the crap out of them in the most beautifully German way possible. He is one of the chief negotiators behind Project DEAL; he is actually in the room when these big-deal subscription contracts are negotiated with Elsevier et al., and he said that he feels like he's being bullied half the time.
He said during one of these negotiations, “we don’t want to pay Elsevier anymore because we don’t see the value in what you’re doing” and he described what followed:
“One publisher [Elsevier] stated ‘if your country stopped subscribing to our journals science in your country will be set back significantly’. I responded, ‘it is interesting to hear such a threat from a producer of envelopes who does not have any idea of the contents'”
Martin Grötschel, at APE 2018
Pretty harsh. Hilarious at the same time. But whichever side you are on, however you look at this, there are these enormous rifts happening in the world of scholarly communication at this moment. It’s basically big publishers versus everyone else. They are entering the legal realm. They influence copyright, career advancement, the structure of our research institutes, and there are really deep issues happening here. In response there is an open science revolution infiltrating into many of these aspects. Project DEAL is causing quite a mess for big publishing.
In France recently, we had a similar thing between the Couperin Consortium and Springer Nature. Couperin served up the middle finger on a silver platter and said ‘we're not going to subscribe anymore’. They saved 12 million euros a year in subscriptions, which they are reinvesting into open scholarly infrastructure. Sweden announced two or three days ago that the Bibsam Consortium is doing the same thing: they cancelled all subscriptions to Elsevier journals and are now like ‘crap, we've got all of this money we've been wasting on journals for the last 30 years, now what do we do with it?!’ It's fantastic, and if you look at Taiwan, South Korea, Argentina, and Mexico, they are all gearing up to do the same thing. Oddly, it is just the UK that seems not too fond of this.
We have so many awesome web-based technologies to open scholarly communication
Why on earth are we still communicating in PDF articles with thumbnail sized images when we have an entire web at our fingertips? I assume we all know about tools like GitHub, StackExchange and Wikipedia. Why not use something like the moderation or editing system of Wikipedia combined with the reward structure behind StackExchange combined with the version control of GitHub in order to create a fully integrated, community owned, very cheap, open scholarly communication system? And before anybody says ‘it can’t be done’, people are doing this already! I’m not sure if they are in digital humanities, but you know there are communities like computer scientists who are doing these sorts of things already, which brings me on to penguins.
What do Penguins, Cobras and Gimli have to do with open science?
Cultural inertia defines academia. It's a crowd-based psychological effect, and it pervades all aspects of the Academy. Have you ever met the average academic? 50% of academics are stupider than the ‘average academic’. They have this publish-or-perish mentality and are generally terrified of new technologies; it took me about a year just to set up a GitHub profile. We are also really bad at making predictions. Ten years ago, people said ‘open access is never gonna happen’; eight years ago, ‘open data is never going to happen’; three or four years ago, ‘open peer review is never gonna happen’. Now people are saying preprints are never gonna happen. Yeah… all of these aspects of open science are already happening in one form or another, and it's fantastic. But in doing so we've created a whole range of new technical, social, and language barriers around these developments, and that's a bit of a problem.
We like to think that openness is supposed to be inclusive right? It is one of the key founding principles of the open science movement but is that actually the reality? Have we created a new system that’s open for some and not open for all? I would argue yes, and one of the reasons for this is that there are immense barriers to change within academia. We have a suite of social, cultural, technological, political, organizational things that create vast barriers, to each of us on an individual or community level, to actually progress in a way in which we think is most beneficial to our communities.
Three of the biggest stifling effects are fear, particularly for the most underprivileged; competition, because we all want to advance our careers; and the abuse of power dynamics by those at the top. All of this creates inertia, which prohibits course correction. Look at the values driving open science: reducing publication bias, increasing access to knowledge, making research more efficient, reliable, and sustainable, and fostering collaboration. Almost every barrier to these revolves around fear: fear of being scooped, fear of information overload, fear of wasting time learning new practices, fear of poor research quality. Fear of errors and public humiliation is a big one for grad students. It's this concept of fear, and this is where the penguins come in. Researchers are like penguins.
Penguins spend most of their day huddled together on the ice. Eventually they all start to get a little hungry, and they look longingly at the water. Food is out there, but so are killer whales. And no one wants to be the first to jump into the water, because they're afraid of being eaten. Eventually one of them gets so hungry that he slides down off the ice into the water and goes hunting for fish, and he's very happy. Then all the others are like ‘oh, well, he was safe, so maybe we can go in too’, and one by one they all slide down and off they go. And no one gets eaten. Well, sometimes someone gets eaten, but that's life, and it's the same with academics.
There are new technologies and new processes, and everyone is terrified of being the first to jump in. This fear is coupled with the fact that we are rewarded almost solely for gaining academic capital based on the journals we've published in. So we have an academic industry that relies on creating this ‘stifling effect’ over the innovation and progression of a field. We generate a lot of value for the publishing industry, but we lose out as a global research community in the process. People talk about providing incentives, ‘sticks and carrots’, to make people do better science; but that misses the point that we should be doing good science in the first place. We should not need to be incentivized to be transparent about our work; that is the completely wrong way of looking at things, in my view. That's why the penguin analogy sort of works.
The next analogy is cobras. The cobra analogy is about key performance indicators, and how a performance-based evaluation system revolving around publication damages academia and global scholarly research. I call this the ‘cobra effect’, i.e., a perverse incentive. There's a well-known anecdote from when the British ‘occupied’ India. Administrative officials were concerned that there were too many cobras in Delhi, so they created a policy of paying members of the populace for any dead cobras. In response, the locals started breeding thousands and thousands of cobras and collecting the money for them. So a policy designed to cull the number of cobras perversely led to a population boom.
And the same thing happens in science if you look at how we are rewarded based on citations and impact factors: that's what we end up aiming for. Another great (paywalled) article, which came out last year, looked at this effect among Italian researchers. It found that within four years of the Italian Research Council's policy that citation metrics would be used in hiring practices, self-citations increased by as much as 179%. A great idea executed the wrong way, leading to an unintended consequence. It's called Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.
“When a measure becomes a target, it ceases to be a good measure.”
Goodhart’s Law; quoted in Jon Tennant’s slide
When high impact journals are the target for researchers they shift their priorities from scientific method to ‘how do we get into high impact journals?’. They conclude, ‘oh we have to tell a really good story, we have to get better data for our work.’ And that skews the research process, because the research process should never be about aiming for a high-impact publication. It should be about discovery of truth. Right? But we’ve skewed that.
So this is the game; and people always respond by saying ‘well, you know, this is just the system that we're a part of.’ But the system is made up of people, right? So anybody who is complicit in citation gaming should be accountable for those actions. The fact that we are rewarded for publishing in high-impact journals is backwards. Peer-reviewed publications in the top journals are typically of lower quality, and their research has the highest probability of being retracted, not because more eyes are on it, but because of the probability that researchers committed fraud or cut corners to get into those journals[3].
According to this figure, ‘top journals’ mean worse research. It also suggests that the impact factor has perhaps nothing to do with the quality of the research itself: the inverse of what we expect. One thing I tell researchers is ‘if you use the impact factor to evaluate the quality of another person's research or of an individual researcher, then all of your papers that use any form of statistics should be retracted; because you sure as hell don't know a thing about statistics’.
“If you use the impact factor to evaluate the quality of another person’s research or of an individual researcher, then all of your papers that use any form of statistics should be retracted; because you sure as hell don’t know a thing about statistics”
Jon Tennant, ‘Open science is just good science‘.
That’s a powerful message to tell people especially in senior positions.
How does open science factor into this? Some possible ways forward.
One solution is to use altmetrics and article-level metrics, so that we don't use one crap proxy to evaluate an incredibly complex system of research. If you haven't yet signed the Declaration on Research Assessment (DORA) or looked at things like the Leiden Manifesto or NISO, these should be high on your agenda. But ultimately it's down to the individual researcher to ‘stop breeding cobras’, because that just feeds a worse system.
Now, about Gimli. Has anybody not seen Lord of the Rings? There's a scene towards the end of the movie where, against all hope, the last of the good guys march on the gates of the bad guys in a no-hope situation. They're basically all worrying themselves to death, and Gimli says, “certainty of death, small chance of success, what are we waiting for?!”, and they all march off and most don't die. It's another perfect analogy for academia. We are told that you can't do various aspects of open science because they will harm your career, due to the social and internal barriers mentioned before: a divergent attitude has been imposed upon us, whereby people who want to innovate, explore, create, or simply do good science are chased out of the system. The effect is that all of us keep straining in perpetuity to maintain the status quo, and research suffers. Statistically, fewer than 1 out of 200 grad students will get a full-time professorship, according to recent research done in the UK. The question, then, is why you would try to be the worst version of yourself via publish-or-perish to get a job you're probably not going to get, because you are going to publish-and-perish anyway. Researchers become trapped in this cycle. We feel forced to play the game because leaving academia is perceived as failure. This reinforces the power imbalances, cultural inertia, commercial interests and governing systems of academia, and the cycle continues.
Can we break this cycle through training?
A paper published last year showed that for 60.8% of research articles published in global health journals, the researchers did not self-archive, i.e., post a preprint, even though it was free and allowed within journal policy. This is life-changing research which researchers themselves are not sharing, in a field where you would think access to knowledge is important for saving people's lives. We have to ask why. In the UK, a study showed that 93% of researchers believe that open access is important, but less than half that number have actually published in an open access journal. Why this massive discrepancy? It is quite shocking that researchers can expressly promote open access but not practice it, even though publishing in an open access journal statistically gains you more citations. The same can be said for sharing your data and code openly: you make your work more reusable and therefore more likely to be cited. In a system where ‘dead cobras’ still count, this is a good thing for you. A lot of people counter this by saying ‘open access is too expensive.’ If you say that, all you are really saying is that you can't Google properly, because self-archiving costs nothing. There are so many routes out there to free, instantaneous sharing that help level the playing field for everyone.
“I honestly don’t know, it just it blows my mind that researchers can promote one thing with one hand and then fail to uphold their own values with the other. And I don’t understand why, because all of the evidence points towards being open as enhancing your career”
Jon Tennant, from talk
Now there are policies and mandates saying ‘you have to publish your work open access’, and publishers have swooped in and said ‘we'll give you open access for three thousand dollars a pop.’ Why would you pay three, four, or five thousand dollars for something you can get for free? Preprints are amazing. Again, sharing is generally good for your career because you generate more citations, faster. More importantly, you get free, rapid communication of your research, which could benefit society and its problems. There has been an explosion of preprint venues in the last five months. The point is that it's your own work; don't stick it behind a paywall. You have choices about where to publish, the future is definitively going open, and you can already be a part of it.
On the left we have the exponential increase in the rise of preprints by platform, and on the right are preprints as a percentage of all papers by discipline. At the same time, open access mandates are appearing across the globe from funders, institutions and governments. Openness isn’t going anywhere so you might as well ride that wave.
In summary
I think it’s time to change the conversation because open science is pretty awesome. It increases the dissemination and reusability of your research and ultimately enhances your academic profile which is good for you. More importantly it helps to combat the reproducibility crisis, and makes you a better researcher both ethically and methodologically. It disseminates potentially life changing and saving knowledge freely to all.
The first step in achieving this is to take responsibility and educate ourselves about open science, or good science. This is one of the reasons I'm building the Open Science MOOC (Massive Open Online Course): to help train, support, and educate researchers around the world. We hope to use it to empower the next generation to become leaders in their own research fields. There are challenges, though. We need to act not just within our own little communities, but across them, to increase interdisciplinarity and community building.
[speaking to the DARIAH audience specifically] I'm honored to be here with humanists and social scientists, because I don't get to speak to you very often, and I know that at the conferences I attend [paleontology], humanists and social scientists often aren't invited. I think that's a real problem. We have this gulf between the physical sciences and the humanities and social sciences. We need to be working together, building bridges, not walls. Open science, for me, is about breaking down barriers and generating equity in science: things that help us foster collaboration and increase the power of communities against the entrenched crap we're all trying to fight. This means we have to work together towards a common goal; ultimately, that common goal for me is pooling knowledge and resources to create a decentralized scholarly infrastructure. With communities as the actual focus, we can truly achieve the principles of open science.
Be that penguin. Don't hold back from trying new things; be one of those who jumps into the water first, because you'll be remembered in your community as a champion. Be a fearless Gimli. The career pipeline is leaky anyway, so why not diversify your skill set? Go out as an awesome researcher, guns blazing, or train yourself to become one through open scientific practices, anticipating what the future of your field will be rather than doing what Professor X tells you to do because it worked 50 years ago. Don't be a cobra farmer. Focus on good science and responsible evaluation, and let the quality of your research speak for itself.
What open science is
It's a tautology. Science was always open. This is where we want to get in 10 years: we don't want ‘open science’ to exist anymore, because this will have been the period when we woke up and realized that what we were doing before wasn't really science. It was anecdote, and we need to change that. Science without open is just anecdote; open science is just good science. That's your take-home message.
Afterword – OS timeline updates
Between Jon's 2018 talk and his passing in 2020, major changes took place that would reshape the science landscape. Two in particular are central to Jon's core values about free knowledge communication, and reinforce that open science is actually good science. For one, despite many lawsuits and attempted shutdowns, Sci-Hub has had very little apparent impact on publishers' profits. The figure below shows the revenue and earnings per share of Elsevier's parent corporation, RELX, since 2016. Sci-Hub saw massive usage throughout this time, yet RELX's revenues continued to climb.
What happened in 2019 was not a sudden flocking to Sci-Hub; it was that most of Germany's institutions, the University of California, and many other institutions worldwide cancelled their Elsevier contracts. This clearly had a huge impact. At the same time, already in 2016, researchers on every continent were using Sci-Hub en masse and had far more scientific knowledge in their possession than at any time prior. This was a huge achievement for scientific communication, and it continues today. So why didn't Sci-Hub matter for corporate publishing profits?
Simple: those who use Sci-Hub in the Global South, or at less well-endowed institutions in the Global North, presumably do not have legal access to the articles they download. Their institutions cannot afford subscriptions, so whether their researchers use Sci-Hub or not, those institutes are not a current source of profit for publishers. Moreover, given the persistence of global inequality, it is unlikely that these institutes will ever afford subscriptions, so they are not potential sources of profit either. Those who use Sci-Hub at well-endowed institutions with subscriptions (maybe they are lazy, unaware of the subscription, or somehow unable to access their library due to, e.g., a VPN or technical glitch) are not cutting into profits either: their universities already have subscriptions for the most part, so when their researchers use Sci-Hub, the publishers still profit. Thus, all the noise made by publishers about Sci-Hub eating their profits is greatly overstated.
The percentage of research available through legal open access channels doubled or even tripled between 2015 and 2020, but one of the greatest achievements politically is Plan S, supported by cOAlition S. This is a mandate that all funded research be open access as of 2021. It has thoroughly shaken the for-profit megalithic publishing system and its role in Europe. Already, two of Jon's favorite targets, Springer Nature and Elsevier, have come to the table with options for researchers to be compliant with Plan S. It's not perfect, as it still often involves large APCs paid by the author, but it allows authors to retain CC BY rights to their work. We cannot expect science communication to be perfect, any more than human nature is, but we can strive in perpetuity to do just good science and keep pushing publishers to reduce their fees as we move away from print distribution.
Footnotes
[1] This is one of Jon’s more controversial claims. While there is evidence the public may have trusted science less heading into the 2010s, there is also a lot of evidence that trust has been stable for decades and of course that this varies greatly by country.
[2] And statistically underrepresented in the output. The exception being that female solo authors are not apparently penalized in the peer review and publication process itself.
[3] The evidence on this is mixed.
WHAT
By now you’ve heard the term “open science”. Although it has no global definition, its advocates tend toward certain agreements. Most definitions focus on the practical aspects of accessibility.
“…the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.”
FOSTER, open science teaching resource
Some definitions enter the realm of ethics, feminism and social justice.
“…to imagine and design inclusive infrastructures, practices, and workflows for scientific practice that intentionally enable meaningful participation and redress (these new) forms of exclusion.”
Others focus on the communicative interplay between scientists and the public.
“Openness in Open Science also means opening up science to society… The democratic ideal of Open Science argues for equal two-way communication with the public: one should not solely focus on the question of how to foster the uptake of science in society, but also on how to foster the uptake of societal insights in science.”
Anne-Floor Scholvinck, ZBW Mediatalk
Whatever the ontology, open science inevitably challenges the status quo in science. Usage of the term implies there is something undesirable about science as currently practiced; otherwise advocates would simply advocate “science”.
The “open” part of the concept refers to any number of things depending on whom you ask. Commonly it means:
Open access – making the results of scientific techniques, research and theory accessible to everyone; as opposed to only in paywalled journals.
Transparency <open process> – making all methods, code, data, and any biases or conflicts of interest known before and after the research is conducted, so long as doing so does not harm human subjects or violate any laws.
Open source – on the technology side of science, all programs, apps, algorithms, tools and scripts should be transparent and usable by others. This means that when a scientist develops a new technology, anyone else’s technologies can interact and interface with it. Moreover, anyone can modify the technology to better suit their own needs.
Open academia <open communication/democracy/feminism> – allowing anyone to participate in academia. That academia has the goal of eliminating inequalities, prejudice and domination from academia that take place in the social world. That academia embraces feminism and critical race theory in its methods and institutional practices. That everyone has the same place in scientific discussions, and no science is conducted by pressuring others or taking advantage of existing power structures. That no science takes place in secret, except for research that requires obfuscation for its completion.
Again, the definitions cover a broad range. The above are just a snippet, although they strike me as the most common usages; ‘open academia’ is the exception, reserved for certain justice-motivated scholars.
WHY
Although I do not proclaim to be the arbiter or knower of right or wrong in academia (and life in general), the following facts seem wrong to me.
Double-work and the co-opting of journals
Scientists provide their work as editors and reviewers because the peer review and publication process is the centerpiece of science; peer reviewers and editors are its only consistent form of quality control. The academic journal was a functional response to earlier forms of knowledge transmission, which required direct scientist- or practitioner-to-student interaction, were geographically limited, and reached a very narrow audience.
The journal made it possible to transmit knowledge across the globe. Moreover, the journal reduced science's problems of simultaneous discovery and re-discovery: previously, no one could prove they had discovered something first, and researchers unknowingly worked on problems that were already solved. It represents one of the first ‘open science’ movements, because it was driven by the idea that science was at an impasse and could only move forward through the transparent and open exchange of ideas, arbitrated by becoming part of the public record through publishing.
Ironically, the journal format came full circle and began to undermine science. After over two centuries of journals run by non-profit academic associations, for-profit publishing houses began ‘offering’ their services to meet the growing global demand for journals and their content and the rising costs of editing and distribution. In many cases, these publishing houses were able to purchase the journals by offering the academic societies the exclusive right to determine what went in them. Within just 30 years, five conglomerates owned the titles, content or certain features of over 50% of all journal articles published globally.
The content, as always, is still the product of scientists and the voluntary work of editors and peer reviewers. The publishing houses make large profits but pay nothing to these workers. The editors and peer reviewers earn their income mostly from universities, the very universities that then pay high fees for the right to provide the journals in their libraries. This is a double tax on universities: paying the producers of content to produce, and then paying the distributors of that content in order to consume it. The content does not change at any point between these two payments; in other words, the publishers add no scientific value to it.
Matters got even worse with the publishing houses over the past decades. As creative and deceitful profit-seekers, some realized they could generate even more profit by collaborating with the private sector. Pharmaceutical companies' profits, for example, are directly determined by the findings of studies published in journals. Pharmaceutical companies, or any companies whose profits depend on the outcomes of scientific experiments, would be willing to invest in shaping those outcomes if they could. Enter a novel concept pioneered by Elsevier: selling journals or journal space to private companies to boost their profits. Win-win for them. Elsevier also pioneered the monetization of open science by purchasing SSRN, engaging in massive lawsuits designed to stop the free sharing of (their) copyrighted knowledge, and trying to copyright intellectual activities such as peer review.
Other ventures create journals that prey on scholars who do not know better, or who seek easy publications to add to their CV. These are often labeled “predatory publishers”: journals that “publish work without proper peer review and which charge scholars sometimes huge fees to submit should not be allowed to share space with legitimate journals and publishers, whether open access or not” (predatoryjournals.com). They also sometimes mimic reputable journals by copying their styles and names and soliciting content from scholars, a practice known as “hijacking“.
Publish-or-perish begets questionable research practices
Thanks to the advent of the scientific journal, knowledge could be evaluated, used, and further transmitted across space and time. The utility of the journal, and of other forms of academic publication such as books, proved so great that they became the primary basis for evaluating the importance of scientists and their work. This gave rise to the norm we are all familiar with: publish-or-perish.
In a survey of psychologists, John et al. (2012) found that 50% claimed they had selectively reported studies that supported their hypothesis (as in, selectively excluding those that didn’t). Moreover, 35% admitted to reporting unexpected findings as having been predicted from the start. Nearly 2% outright admitted to faking data.
Publish-or-perish and questionable research practices have a causal relationship. Except for the occasional sociopathic or psychotic individual, there is no reason for a scientist to engage in questionable research practices. No reason, except that a scientist's very existence as a scientist may depend on it. In reality, so many studies lead to results that go in all directions, support the null, or (most importantly) provide no groundbreaking new results.
Through the peer review and editorial process, journals select studies that are path-breaking. Studies that will move knowledge forward and be of the greatest interest to readers. When faced with prospects of not getting tenured, not getting grant funding and being forced out of academia, a human’s (scientist’s) rational calculations change. Suddenly, rounding that p-value from 0.054 to < 0.05 or even adding some cases to the data becomes a cognitively defensible decision.
Like any profession, science is competitive. Those who publish more, or get more citations to their publications tend to get ahead. Those who don’t, don’t. Professional athletes use incredible tactics to gain competitive advantage. Of course steroids are well-known, but other tactics are much harder to detect. For example, endurance athletes often use blood transfusions to boost recovery and performance. This is what it means to be human, scientist or not.
One of the most radical events in the social and behavioral sciences was Diederik Stapel's entire career of faking data and results, published in at least 54 articles and consuming millions of euros in funding. It took almost two decades for critics and whistleblowers to finally out him. Psychology is not alone. In political science, LaCour and Green published a study in Science claiming that attitudes toward gay marriage could be changed if heterosexual people listened to a homosexual person's story; as Broockman uncovered, LaCour had fabricated the results of a follow-up survey that never took place. In economics, Reinhart and Rogoff published numerous studies identifying a negative impact of high debt rates on national economic growth, when in fact several points in their dataset had conspicuously missing values; when these values were added, there was no longer support for their claim, as identified by Herndon, Ash and Pollin.
I suspect that most questionable research practices are not intentional. The sociopathic (~psychotic) Stapels of the world are rare. The pressure to find a job after doctoral studies and then to get tenured forces a trade-off between conducting science in its ideal form (learning as much as possible about the existing literature, mastering the necessary methods, executing the research, possibly over several iterations, and facing the prospect of null results) and science in a form that will lead to publication as fast as possible.
This ‘fast as possible’ leads to amateur science. For example, in the rush to get my first publication I attempted to use “multiple imputation”, but lacked the time to properly learn the method. Instead I simply generated several imputed datasets, averaged them into one, and re-ran the analysis on that single dataset. This was not an intentional misuse of a method; it was a questionable research practice born of context. Think about matrix algebra: it is the basis of many advanced statistical techniques regularly used by social scientists. How many of us have a strong grasp of matrix mathematics? I don't. And yet I've published several studies using structural equation modeling.
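To make concrete what goes wrong with that shortcut, here is a minimal sketch in Python. The data and model are invented for illustration and come from no actual study; it contrasts averaging the imputed datasets and fitting once with Rubin's rules, the standard pooling procedure for multiple imputation, which preserves the between-imputation uncertainty that averaging throws away.

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 5, 100  # number of imputations, sample size

# Pretend these are m completed datasets from an imputation step;
# here they are just noisy copies of one underlying predictor matrix.
X_true = rng.normal(size=(n, 2))
imputed = [X_true + rng.normal(scale=0.3, size=(n, 2)) for _ in range(m)]
y = X_true @ np.array([0.5, -1.0]) + rng.normal(size=n)

def fit_ols(X, y):
    """OLS fit returning coefficients and their sampling variances."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    resid = y - X1 @ beta
    sigma2 = resid @ resid / (len(y) - X1.shape[1])
    return beta, sigma2 * np.diag(np.linalg.inv(X1.T @ X1))

# The shortcut described above: average the m datasets into one, fit once.
# Point estimates may look fine, but the uncertainty is understated because
# the between-imputation variability vanishes.
beta_shortcut, var_shortcut = fit_ols(np.mean(imputed, axis=0), y)

# Rubin's rules: fit the model on each imputed dataset, then pool.
fits = [fit_ols(X, y) for X in imputed]
betas = np.array([b for b, _ in fits])
variances = np.array([v for _, v in fits])

beta_pooled = betas.mean(axis=0)             # pooled point estimate
within = variances.mean(axis=0)              # within-imputation variance
between = betas.var(axis=0, ddof=1)          # between-imputation variance
total_var = within + (1 + 1 / m) * between   # Rubin's total variance

print("shortcut SEs:", np.sqrt(var_shortcut))
print("pooled SEs:  ", np.sqrt(total_var))
```

The pooled standard errors are typically larger than the shortcut's, which is exactly the honesty the averaging trick loses.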
WHAT & WHY in SOCIOLOGY
I am aware of nothing about sociology that suggests it needs a special adaptation of open science. Most research cannot be strictly delineated as sociology or not-sociology anyway. The boundaries of a discipline, especially within the social sciences, exist mostly in the institutional structure of universities. Eliason suggested that sociology is unique because it overemphasizes quantitative techniques, has needlessly long articles, lacks writing for the popular press, and emphasizes research at the expense of teaching. In my experience, the previous sentence perfectly describes all social and behavioral science disciplines at once. Even article length, something I thought might be peculiar to sociology, is not special; political science and management research have very long articles. Consider that ASR and ESR, for example, limit articles to 9,000 and 8,000 words or fewer, which is relatively average, if not short, for social science.
Actually, I would argue the most unique thing about sociology at the moment relates to open science. Two points in particular: (A) that sociology has not had the same incredible scandals as other disciplines and (B) that sociology lags behind other social sciences in promoting open science.
A lack of scandals, not scandalousness
Could sociologists be more scientific and ethical in their research behaviors than those in other disciplines? Given identical institutional and career structures that favor productivity and innovation over replicating or checking each other's work, I doubt it. Sociology journals and their editors, for example, rarely retract articles despite evidence of serious methodological mistakes. Carina Mood once accurately pointed out mistakes in the interpretation of odds ratios in some American Sociological Review articles, but the editors refused to publish her comments, much less consider retractions. She shared her exchange with ASR in an email to me and discusses some of it in a working paper. An exceptional recent event was the retraction of one of Legewie's sociological studies, but this required that he himself initiate the retraction after someone pointed out errors in his work. Until 2020, the Retraction Watch database (www.retractiondatabase.org) listed no retractions from the top sociology journals, and only two among the well-known ones: one in Sociology and another in Social Indicators Research.
This year, something new happened. Five articles published in Social Problems, Criminology, and Law & Society Review were retracted. These articles had a common co-author, Eric Stewart, and it turns out that the data he provided were faked. There is no other logical conclusion after the exceptionally rigorous work of Pickett (a co-author of Stewart), who provided evidence that the Stewart studies had consistently incorrect means and standard deviations, unverifiable surveys (sources, methods, original materials), magically changing case numbers despite identical statistical results, duplicate cases (sometimes for half the data), and impossible clustering structures in the data.
As an aside, one of Pickett's findings was that the data had non-uniform terminal-digit distributions. This means that the right-most digits of the reported statistics differ markedly from a uniform distribution: at the third digit, the numbers 0-9 should each appear roughly 10% of the time, yet in one of the papers zeros appear less than 2% of the time. If you are considering faking data, keep in mind that it is roughly impossible to do so in a way that cannot be detected by careful investigation. Any algorithm used to generate results (even copying and pasting) leaves its statistical marks.
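For the curious, such a check is easy to sketch. Below is a minimal, hypothetical example in Python: the reported statistics are invented, and a chi-square goodness-of-fit test stands in for the more careful forensics Pickett used.

```python
# Terminal-digit check: under honest reporting, the final digits of
# means/SDs should be roughly uniform on 0-9; a chi-square goodness-of-fit
# test flags strong departures. These statistics are made up for illustration.
from collections import Counter
from scipy.stats import chisquare

reported = ["12.304", "0.517", "3.228", "45.691", "7.746", "19.853",
            "2.142", "88.079", "5.435", "31.968", "4.221", "10.117"]

counts = Counter(int(s[-1]) for s in reported)    # last digit of each statistic
observed = [counts.get(d, 0) for d in range(10)]  # counts for digits 0..9

# chisquare defaults to uniform expected frequencies across categories.
stat, p = chisquare(observed)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
# A real audit would pool hundreds of reported statistics per paper;
# with a sample this small the test has almost no power.
```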
Perhaps we sociologists should be partly relieved, as this is just confirmation that we are as much a part of social science, and its problems, as any other discipline. However, the Stewart retractions, which should have been breaking news for sociology, went mostly unnoticed. The results of the investigation leading to the retractions are not published in a flagship sociology journal, where they belong; instead they appear in Econ Journal Watch, something unlikely to be read by any sociologist. Moreover, the retraction notices from the original journals do not cite outright fraud. Stewart continues to promote his work in print, claiming the main findings still hold, and several other of his studies with similar irregularities have not been retracted.
Another extremely important event was a case of ethnomethodological research conducted by Lindsay, Boghossian, and Pluckrose in the mid-2010s. This is sociological self-examination at its best, although their backgrounds are mostly outside the discipline of sociology. They wrote a series of 20 papers presenting fake results and making arguably unethical claims, inventing the papers to mimic the style of articles published in journals well known for sociological research on topics of identity, hegemony, and marginalization. Seven of their papers were published or had revise-and-resubmit recommendations before whistleblowing forced them to cancel the project. Some highlights: one paper contained sections from Hitler's Mein Kampf; another suggested men should be trained like dogs to prevent rape; and a third proposed that white men should be forced to sit in chains on the floors of university classrooms instead of at normal desks. I am not commenting on the merit of these ideas, only noting that the papers all contained faked data, non-existent methods, or conclusions not supported by the data. That these studies flew easily under the radar of a number of high-impact journals shows how easy it is to publish without doing the necessary research work.
Lagging behind closed doors
October 6th, 2020. I entered the search terms “open science” (with quotations to search the exact phrase) and “sociology” (with quotations to only return results that contain the word) into Google Scholar. Six pages of results without a single sociology journal. On page 7, Merton’s “Priorities in scientific discovery: a chapter in the sociology of science” appears. Publication date 1957.
In 1973, Wilson, Smoke and Martin found that 80% of studies published in the top three sociology journals of that time rejected the null hypothesis; in other words, they reported p-values below a threshold. This suggests publication bias, if not p-hacking. Sahner (Table 5) analyzed all article submissions to the Zeitschrift für Soziologie from 1972-1980: of those that contained significance tests, 70% were significant at p < 0.05, suggesting that authors prefer to submit significant results. More recently, Gerber and Malhotra (2008) reviewed articles published in the American Journal of Sociology, American Sociological Review and The Sociological Quarterly, looking specifically at the boundary of t = 1.96 (i.e., p < 0.05), and found that as many as four out of five published studies were ‘significant’. This, too, suggests publication bias. Sociology has yet to see a systematic review of p-hacking that compares p-values within the ‘significant’ range. Meanwhile, psychology and political science, for example, are teeming with papers on “p-hacking” and “publication bias”.
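The logic behind such boundary checks can be made concrete with a so-called caliper test: count how many published test statistics fall just below versus just above t = 1.96. The sketch below uses invented counts and assumes scipy is available; it illustrates the logic rather than reproducing Gerber and Malhotra’s analysis.

```python
# A minimal caliper-test sketch (counts are invented for illustration).
from scipy.stats import binomtest

# Published test statistics falling within a narrow band on either
# side of the t = 1.96 threshold.
just_below, just_above = 14, 46

# Absent publication bias, a statistic landing in this narrow band
# should be roughly equally likely to fall on either side.
result = binomtest(just_above, n=just_below + just_above, p=0.5)
print(f"two-sided p = {result.pvalue:.4f}")
# A large excess just above 1.96 is the classic signature of
# publication bias and/or p-hacking.
```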
Sociology is rather intransparent. An estimated 78% of the major sociology journals have long-standing transparency policies. Unfortunately, these policies are mostly artifacts on paper with little enforcement. For example, only 37% of sociology articles published in mainstream journals between 2012 and 2014 include shared data and/or materials. In 2015, a small group of sociologists tried to obtain materials from the authors of 53 prominent sociological studies. They obtained them from just 19%, and only 20% of all the authors they contacted bothered to respond, despite several requests. This suggests sociologists are free to hide the data and materials that led to their findings without recourse, despite such guidelines.
Other disciplines have embraced the Transparency and Openness Promotion (TOP) Guidelines. The TOP guidelines, stewarded by the Center for Open Science, help journals improve science. Journals can become signatories of TOP, and in doing so they either adopt and enforce new transparency guidelines or certify that they already meet certain transparency standards. Most of the top psychology journals and several political science journals have signed on. Other major journals, such as the Journal of Applied Econometrics and later the American Economic Review, adopted their own enforced transparency guidelines.
Until 2017, the only higher-ranking sociology journals that had signed TOP were Sociological Methods & Research and the American Journal of Cultural Sociology. In 2017, Elsevier dictated that all its journals adopt the guidelines, which added Social Science Research to the list. At the time of writing, the flagship journals American Journal of Sociology and American Sociological Review have neither signed TOP nor enforce guidelines of their own. Of the top German sociology journals, the Kölner Zeitschrift für Soziologie und Sozialpsychologie is the only signatory.
If intransparency is pervasive in sociology, then research cannot be (a) checked for errors, (b) reproduced or (c) simply critiqued. Even when exact reproducibility is not the goal, as is often the case with context-specific interpretive research, most research methods remain shrouded in mystery. This requires readers to take a giant leap of faith in what others report. Part of the problem is that sociologists express little interest in reproducing or checking others’ work. There are few replications in the history of sociology, and if anything they decreased over time until recently. For example, searching the articles in the American Journal of Sociology and American Sociological Review reveals 22 replication studies from 1950-1980 and only 8 from 1981-2010.
Something telling about the lack of willingness to open sociology comes from its most ‘powerful’ society, the American Sociological Association, which in 2019 petitioned the US government not to make data transparency a requirement attached to grant funding.
NOW
What to do about it? Here are some simple steps to consider, aimed especially at sociologists. They are similar to the steps many others have advocated for graduate students, for academic institutions, or for all of us.
Make all the materials – research design, methodological steps, data (when legally and ethically possible), analyses, conflict-of-interest statements and any software code – available online. The practical reason is that others can follow your work and expand on it in the future. Doubly practical is that you don’t need to respond to email requests for your materials. So long as you are not a deceitful sociopath, you want others to be interested in your work and to replicate it. Even if a study seems to ‘prove you wrong’, the fact that it replicated your work is evidence of the importance of your work and its topic. You are a piece of a much larger community of knowledge construction. Constructive exchange can lead to collaboration with critics to generate better future research without personal conflicts.
The immediate value of transparency is that being transparent forces you to be careful. Knowing that everything will be public information increases the value of attention to detail. Put conversely: not sharing your workflow publicly can indirectly foster lower quality standards, in addition to creating possibilities for misconduct. All this enables rather than hinders knowledge, and increases inter-researcher trust.
Transparency should not be much extra work. During the research process you should take high-quality notes for yourself anyway: you will often return to your data and research in the future and will need them. This is a best practice with or without sharing your work. When you engage in it, you gain a deep familiarity with your data, can draw meaningful conclusions and, in the case of qualitative research, can easily redact identifying characteristics in your data. If you cannot share data, you can still reveal the design and expectations, or allow controlled access to the data. Human subjects must be protected at all costs, and yes, this often means data sharing is not possible.
The ‘transparency work’ of the qualitative research process can be reduced by software platforms that provide semi-automated annotation and coding. Even if you do not share data, you can build an open workflow from the beginning that allows others to understand every step of the data-generating process. However, this work can also be extremely tedious, and the incentives are not immediately clear. More fruitful discussion, if not research-assistant funding, is needed in this area moving forward.
If you are using quantitative methods, immediately stop hiding your work. If you ran 100 models and 99 did not support your hypothesis, then this is your finding. If a journal does not want to publish this, point the editors and reviewers to the importance of null results and the problems of publication bias. If they still refuse, consider boycotting this journal and sharing your negative experience in public.
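A small simulation shows why hiding the other 99 models is so corrosive: even with pure noise, searching across enough specifications almost guarantees some ‘significant’ results. The sketch below is illustrative; all numbers are arbitrary.

```python
# Toy demonstration: run 100 'models' on pure noise and count how many
# come out 'significant' at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_models = 200, 100

# Outcome and 100 candidate predictors, all independent noise:
# the true effect of every predictor is exactly zero.
y = rng.normal(size=n_obs)
X = rng.normal(size=(n_obs, n_models))

# A bivariate correlation test per predictor stands in for a model.
p_values = [stats.pearsonr(X[:, j], y)[1] for j in range(n_models)]

significant = sum(p < 0.05 for p in p_values)
print(f"{significant} of {n_models} null models came out 'significant'")
# Roughly 5 will, by chance alone. Reporting only those, and hiding
# the rest, manufactures findings out of nothing.
```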
Preregistration can drastically reduce bias and hacking before data collection even begins. When you clearly outline your plans, including how you will analyze the data, before conducting the research, there is little room for hacking so long as you stick to the plan. Preregistration can even be done directly with a journal, although sociology journals are laggards here because they generally do not offer this option. In a preregistration, even if you just put a pre-analysis plan or a research design and goals online, you must think much harder about factors such as meaning, causality, inter-subjectivity and ‘how the world probably works’. You cannot hide behind results in this process, and therefore you must anticipate counterarguments and explore counterfactual logic. This improves the clarity of theory and research, creating an immense gain in efficiency and effectiveness.
Regardless of the methods you use, there are many opportunities to take advantage of preregistration. Some forms of qualitative research, for example those involving grounded theory and interpretivist methods, require decisions during the research process that cannot be foreseen. This uncertainty can be outlined in a preregistration that states explicitly when flexibility is and is not admissible. Moreover, simply putting a qualitative research plan online prior to conducting the research is equivalent to a pre-analysis plan. This need not compromise your data collection, because you can register the plan on a platform like the Open Science Framework and then embargo it, so that it is preserved but not made public until after the research concludes. Some scholars using quantitative methods might assume that preregistration is not possible because they work with secondary survey data. But the regularity and release schedule of these surveys are known in advance, and these scholars can preregister their studies before the next round of data is collected, knowing which questions and countries will be available.
The central functions of the scientific publishing industry are printing and disseminating knowledge, which historically solved the problem of how to share knowledge across universities and countries. The business functions of publishing, however, come with harmful byproducts. Publishing firms extract profits from scientists twice. First, scientists provide free labor in the form of editing and peer reviewing, in addition to producing the results for the articles to be printed. Next, researchers, or their employers, must purchase the product of their own labor – labor not paid for by the publishers. The journal article as a product comes at a high cost, and often only in bundled packages of journals, meaning that universities have to pay for extra material their scholars do not use.
Sometimes publishing houses neglect science in favor of profits, and Elsevier has been particularly problematic. They sponsored weapons fairs; created and sold ‘fake’ journals to pharmaceutical companies to publish ‘results’ supporting their drugs; purchased the Social Science Research Network and then paywalled or removed legally shared working versions of articles; charge fees for open access articles; and actively lobbied against open access legislation (for a concise summary with links, see Tal Yarkoni’s blog entry). This brought massive countermovements against Elsevier in the scientific community (for example, The Cost of Knowledge). You can take action and refuse to review for or publish with unethical publishers if you feel it is justified. To do so, inform yourself about the publishers; your library is a good source of information, because libraries deal with the business side of publishing.
If you are in Europe, check whether your institution is a signatory of ProjektDEAL. Through ProjektDEAL, a consortium of universities is collectively bargaining with publishers, demanding that they reduce fees and eliminate the double paying by universities. The primary objective is for publishers to sign country-wide subscription agreements that enable access for all universities at once. Wiley agreed to such a model, and this marks a paradigm change: it indicates how the publishing industry will look in the future, so long as the OS Movement proceeds. If you are not in Europe, consider starting a similar initiative. For example, the entire University of California system – 10 universities, 5 medical centers and several research institutions that collectively produce roughly 10% of the world’s academic publications – recently followed ProjektDEAL and boycotted Elsevier.
You can work around the publishing business. Prior to submitting an article, or after it is published, you have the right to share a preprint – a draft of the paper that you share publicly, so long as it is not published elsewhere or sold for profit. Posting preprints reduces the power that publishing firms have over science, in addition to giving others immediate access to your work. But simply posting preprints on your academic website is not open enough. Use a preprint service, for example through the Open Science Framework, to ensure that your preprints appear in search engines such as Google Scholar. SocArXiv, for example, is the go-to location for sociology. This enables scholars to find and directly access research results based on the words they contain, uninhibited by paywalls – a crucial aspect of practicing sociology in the Global South. Preprint services are free and open access.
Science and religion parted ways long ago. Theirs is a historical struggle over power. When science disproves that the earth is the center of the universe, or shows that evolution undermines creation, it falsifies religious doctrine, said to be the word of a God or Gods and thus the ultimate Truth. Religions rely on their claims to this Truth to convert people to submit to their institutions. If science undermines this Truth, it undermines religious power. And power is something that changes human behavior; humans might lie, cheat, steal and kill to get or preserve it.
But science and religion followers are not that different. Actually, they are the same. They are human.
Power is another way of describing status and prestige. In science, we know all about status. Scientific status comes from recognition. From making scientific discoveries and claims that garner attention. In particular, attention in the form of citations.
The absence of market prices results in prestige becoming the main reward and high prestige becoming the measure of exceptional ability. Rent seeking in academia, therefore, produces ego-maniacs and much destructive behavior.
Sørensen (1996, p. 1358)
The seeking of status, what economists and Sørensen label a form of ‘rent-seeking’, is presumably the reason scientists p-hack and engage in other forms of malpractice. In some cases they ‘must’ p-hack in order to meet the demands of reviewers: mostly, statistical research requires significance stars to attain publication. This is changing with the Open Science Movement in recent times, but only at the margins. Research using qualitative methods requires its own form of ‘p-hacking’. To be published, a paper must extract novel ideas from observational data; whether these ideas reflect the actual data, or are based on actual data at all, seems irrelevant as long as the story looks good to reviewers – just like the significance stars that look all sparkly and comforting to reviewers of quantitative research.
So humans (scientists) cheat to attain status, intentionally or even unintentionally – without malicious intent, because they are conditioned to play with their data until the stars appear. Therefore it should be no surprise to humans (scientists) that other humans (religious followers) also cheat.
If p-hacking in science is playing around with models so that they represent the data in a way that matches the researcher’s desire for status, rather than portraying the results of scientific tests, then p-hacking in religion must be interpreting the dictates of God (or Gods) to fit one’s own, or one’s group’s, status goals.
The conflict between science and religion is like p-hacking. It’s a power struggle. Who has the power to make claims about the way the world is and the way it should be? Religious followers attribute this authority to God, and then to themselves as seekers and messengers of God. Science followers attribute it to factual knowledge about the world, and then to themselves as the testers and reducers of uncertainty who ‘uncover’ those facts. In both cases the process is corrupted by status seeking, a fundamental fallibility of humans. Whether acting as scientists or spiritual seekers, we are fundamentally still primates, and as such tend toward hierarchy, with many of us human-primates willing to cause harm to others in order to attain higher and higher positions.
For religious followers to gain status through p-hacking, they would have to adjust the ‘word of God’, the ultimate Truth, so that it (a) is no longer a religious or spiritual truth and (b) serves their own ends. Do we have evidence of this practice? Wars fought in the name of religion do not really fit the criteria, as wars can be justified as right, as God’s (or the Gods’) will: the Christian New Testament’s Revelation 19:11 describes the righteous warring against the (presumably) non-righteous (i.e., ‘evil’); the Christian/Jewish Old Testament’s Deuteronomy 20 calls the Israelites to war against cities that do not accept their terms; in Islam, Qur’an 22:39 advocates war in self-defense and possibly 4:74 fighting in the name of God; and Buddhism takes the stance that war might be necessary in defense but is not justified for an aggressor.
The point is that it is difficult to find direct evidence of p-hacking by religious followers seeking status for themselves or their group. The same problem lies in detecting p-hacking among scientists. Given that all sides in all wars tend to claim righteousness under God (or Gods), it seems obvious that some (or all) are misinterpreting what should be God’s will for their own gain. Given that so many p-values lie below 0.05 in published research, we can assume that not all are derived from a clean research design, method and presentation of results.
Openness is not needed because we are untrustworthy; it is needed because we are human.
(Nosek, Spies and Motyl 2012, p. 626)
It is not only religious and scientific institutions that are antagonistic in their seeking of power. Political institutions, which wield a monopoly on force in a modern world divided into sovereign nation-states, also do not always get along with religious or scientific institutions, as they have their own p-values to guard.