
A fake journal, algorithmic plagiarism and tricking Google Scholar

A case of a fake journal that passed Google Scholar’s bot and algorithm checks to become indexed. The journal contained articles that were completely plagiarized but assigned new titles, abstracts and authors.

See the updates at the end of this post for a table of plagiarized papers, and for a small win for open science after the domain host revoked the content behind the plagiarized papers’ URLs.

A colleague of mine was searching Google Scholar using certain words and the number two hit appeared with the following title and direct download link.

The PDF looks like a journal article. The journal has an ISBN instead of DOIs, which is not unheard of (though journals normally carry ISSNs; ISBNs are for books). It even has an impact factor listed: 7.11. It must be good.

It turns out that this seemingly interesting paper, with an alleged author who is an expert in this area, is plagiarized.

How do I know it is plagiarized? Just ask the real authors of the paper, which appeared with a different title and abstract in Social Science Research one year earlier. Oh, and with identical word-for-word text, apart from some mistakes relating to equations, possibly the residue of a failed scraping algorithm.

The journal title is Journal of Current Research and Review, but it is extremely difficult to find through an internet search. Finding it requires endless scrolling (as of 10.08.2023), or removing the ending of the download URL to arrive at the index page.

Scrolling down on the home page reveals the “Latest Articles”. Here I find the same title that appeared in Google Scholar. However, there is no download link, only the title and abstract. In other words, the URL that Google Scholar used to download the paper is not reachable through any link on the journal’s website.

How is it possible that Google Scholar would pick up and index such garbage?

I asked ChatGPT how Google Scholar works, and the answer suggests that someone, or something, with advanced knowledge and programming skills provided the right metadata and materials on the journal webpage to pass the automated Google Scholar bots that constantly scrape the web for academic papers and publications.

I prompted ChatGPT with “How does a journal get indexed in Google Scholar?” (09.08.2023) and received the following answer. Note that I removed the descriptions of each point, as they are mostly self-evident.

Google Scholar indexes scholarly literature, including journal articles, conference papers, theses, and other academic resources. In order for a journal to get indexed in Google Scholar, there are several steps and criteria to consider:

  1. Quality Content
  2. Digital Format
  3. Open Access and Accessibility
  4. Consistent Publishing Schedule
  5. Meta Tags and Structured Data
  6. Proper Article Metadata
  7. Citations and References
  8. Author Profiles

It’s important to note that Google Scholar’s indexing process is automated, and there is no formal application process for journals to get indexed. Google Scholar’s algorithms discover and index content based on various factors. However, journals can follow best practices to increase their chances of being indexed and improving their visibility within Google Scholar’s search results.
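Point 5 is the load-bearing one. Google Scholar’s inclusion documentation asks publishers to embed bibliographic meta tags (for example the Highwire Press tags citation_title, citation_author and citation_pdf_url) in each article’s landing page, and its crawler reads them from the HTML. Below is a minimal sketch of how such tags can be extracted from a page, to illustrate the mechanism; the URL is a placeholder, and this is of course not Google’s actual crawler code.

```python
from html.parser import HTMLParser
import urllib.request

# Sketch: collect the citation_* meta tags a Scholar-style crawler
# would look for on an article landing page.
class CitationMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.citations = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name") or ""
        if name.startswith("citation_"):  # e.g. citation_title, citation_pdf_url
            self.citations.append((name, attrs.get("content") or ""))

url = "https://example.org/journal/article/123"  # hypothetical landing page
with urllib.request.urlopen(url, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

parser = CitationMetaParser()
parser.feed(html)
for name, content in parser.citations:
    print(f"{name}: {content}")
```

Anyone who can generate these tags programmatically for every fake article has already satisfied a large part of the indexing checklist.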

The journal website and metadata had to pass Google Scholar’s algorithm. This is no lucky accident. Although Google does not reveal any usage of AI to evaluate journals for indexing, it clearly has an advanced system of scraping, filtering and crawling. Whoever, or whatever, designed this journal, or really this website that looks like a journal, understood how to pass the test. For example, the “Latest Articles” appear to be from “Volume 14”, suggesting to potential crawling bots that the journal has been published for 14 years now.

This is of course false. No articles are readily accessible from the journal’s website other than the two that appear (the other article is also plagiarized, in case you were wondering). But a closer look at the URL reveals that the article I am discussing has the number “800” at the end of its URL.

https://zapjournals.com/Journals/index.php/jcrr/article/view/800

Changing this number yields other papers, at least as far back as number 795; numbers prior to that yield 404 errors. Moreover, for numbers 795-798 I can find only titles and abstracts, not the papers themselves.
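For the curious, this kind of neighboring-ID probe takes only a few lines. A sketch using Python’s standard library; the base URL is the journal’s real article path from above, the ID range is the one just discussed, and the server may of course block or throttle automated requests:

```python
import urllib.request
from urllib.error import HTTPError

# Article URLs end in a sequential numeric ID, so nearby IDs can be probed.
BASE = "https://zapjournals.com/Journals/index.php/jcrr/article/view/{}"

for article_id in range(794, 801):
    url = BASE.format(article_id)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(article_id, resp.status)   # 200: a landing page exists
    except HTTPError as err:
        print(article_id, err.code)          # 404: nothing behind this ID
```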

The journal website’s main page yields something even stranger. It looks like what a computer science student might create as a sample page, to sell as a template or to demonstrate the services on offer for a website-construction gig. The content has absolutely nothing to do with journals, academia or publishing.

It even comes with fake testimonials. I wonder whose pictures were stolen for this…

The real question for me is: ‘what to do about this?’. Clearly this is fraud and constitutes both an ethical and a legal infringement on science. Looking up the journal’s domain host using ICANN reveals that it is the Lithuanian company Hostinger.

Looking at this company’s website suggests they are legitimate. They provide contact information to report abuse. So that is what I did, sending them the information in this blog post.

Hostinger replied within two days and told me they take abuse of their services very seriously and asked me for more evidence so they could pursue the case. I gave them the two links for the plagiarized papers and their original versions published one year earlier in Social Science Research.

https://zapjournals.com/Journals/index.php/jcrr/article/download/799/1237

https://zapjournals.com/Journals/index.php/jcrr/article/download/800/1238

Word for word plagiarized from:

https://www.sciencedirect.com/science/article/pii/S0049089X22000321

https://www.sciencedirect.com/science/article/pii/S0049089X22000333

For all the negative publicity surrounding Elsevier, they are still a player in the academic world. In 2020 they owned roughly 16% of the academic publishing market. If there is anyone who would want to prevent abuse of their services, including plagiarism of their work, it is Elsevier. They have a trove of lawyers fighting and winning battles to protect their content across the world. As the two plagiarized papers that I can download are from the Elsevier journal Social Science Research, it made sense to contact them as well. Elsevier’s due process suggests contacting the journal editor first. Thus, I have sent this information to the lead editor.

The journal also lists an ISBN. But this number is fake, returning an invalid result from the ISBN lookup tool.
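Even without a lookup tool, an ISBN’s internal checksum can expose a made-up number. Here is a small sketch of the standard ISBN-13 check (digits weighted alternately 1 and 3 must sum to a multiple of 10); the first example below is a well-known valid ISBN, the second is invented so that it fails:

```python
# ISBN-13 checksum: weight digits 1, 3, 1, 3, ...; the total must be
# divisible by 10. A structurally invalid number can never be real.
def isbn13_is_valid(isbn: str) -> bool:
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_is_valid("978-0-306-40615-7"))  # True: textbook valid example
print(isbn13_is_valid("978-1-234-56789-0"))  # False: fails the checksum
```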

Google certainly would not want its products indexing fake journals with plagiarized papers, so I took the liberty of contacting them as well.

What is striking about the journal is that they have a long editorial board list. Internet searching reveals that these are real scholars. I also contacted them. I will continue to report on this case as it unfolds.

The question for me is: ‘what motivated someone to create this site?’. There is clearly no profit associated with it. There is also no status gain, because the plagiarized papers are assigned authors who are not the original authors and clearly not players supporting the fake journal; they are highly established scholars in their respective sub-fields. These falsely assigned authors are perfect examples of what an AI might attach to a given topic. I tested this by asking ChatGPT if Seamus McGuinness could have written the abstract. The response points at AI as a plausible source both for re-writing the abstracts of the plagiarized papers and for finding suitable authors to assign to them. ChatGPT said:

Yes, Seamus McGuinness could be a potential author who might have written this abstract. Seamus McGuinness is known for his research on labor market issues, including education-job mismatches, gender disparities, and remote work. His expertise aligns with the themes discussed in the abstract, making him a plausible candidate as one of the authors who could have written it.

It remains a mystery for now what is up with this website and the fake journal. Was it a computer science project that accidentally got picked up by Scholar, one that was never intended for public consumption? Was it an attempt to create a journal, but try and hide the real content of the journal until it could pick up real submissions? Was it entirely AI generated, to showcase the power of an AI?

[Update 14/08/2023]

Thanks to Random Cat on Twitter, I learned that there are more papers than just two. Using Google Scholar they searched by journal.

This allowed me to compile a table of plagiarized papers.

Table of Plagiarized Papers in JCRR, found via Google Scholar

| Original Title | Original Author(s) | Original Journal | JCRR Title | JCRR Authors | Link to Original Article | Link to Plagiarized Article |
| --- | --- | --- | --- | --- | --- | --- |
| The motherhood wage gap and trade-offs between family and work: A test of compensating wage differentials | Nick Wuestenenk & Katia Begall | Social Science Research | COMPENSATING WAGE DIFFERENTIALS AND THE MOTHERHOOD WAGE GAP: A COMPARATIVE ANALYSIS | HK Kleven & CL Landais | link | link |
| Conflicting signals: Exploring the socioeconomic implications of gender discordant names | Andrew Francis-Tan & Aliya Saperstein | Social Science Research | BREAKING BOUNDARIES: EXAMINING THE INTERSECTION OF GENDER DISCORDANT NAMES AND SOCIOECONOMIC ATTAINMENT | AL Roberts & M Rosario | link | link |
| Gender overeducation gap in the digital age: Can spatial flexibility through working from home close the gap? | Ana Santiago-Vela & Alexandra Mergener | Social Science Research | BRIDGING THE GENDER OVEREDUCATION GAP: EXPLORING THE ROLE OF WORKING FROM HOME IN THE DIGITAL ERA | SMG McGuinness | link | link |
| How the Great Recession changed class inequality: Evidence from 23 European countries | Jad Moawad | Social Science Research | CLASS INEQUALITIES IN THE WAKE OF THE GREAT RECESSION: A STUDY OF 23 EUROPEAN COUNTRIES | PD Allison | link | link |
| Higher education and high-wage gender inequality | Natasha Quadlin, Tom VanHeuvelen & Caitlin E. Ahearn | Social Science Research | ASSESSING THE CONTRIBUTION OF EDUCATION TO GENDER WAGE DISPARITIES IN HIGH-EARNING PROFESSIONS | J Jacobs | link | link |
| Knowledge Discovery: Methods from data mining and machine learning [b] | Xiaoling Shu & Yiwan Ye | Social Science Research | TRANSFORMING RESEARCH: EXPLORING THE INTERPLAY OF DATA MINING, MACHINE LEARNING, AND KNOWLEDGE DISCOVERY | Robert M. Bond & Christopher J. Fariss | link | link |
| SOCIAL MEDIA ADDICTION: PROPOSED INDICATORS AND STAGES | Zakaria I. Saleh & Omar Zakaria Saleh | International Journal in Commerce, IT and Social Sciences | UNPACKING THE CYCLE OF SOCIAL MEDIA ADDICTION: UNDERSTANDING SYMPTOMS, PROGRESSION, AND RECOVERY [a] | OZ Saleh | link | link |

[a] Published twice in JCRR with different authors

[b] Not indexed in Google Scholar

All are from Social Science Research except one. There are also other papers for which I cannot find an original. These might actually be original papers, or stolen working papers that are not easy to find online.

Interestingly, some papers that appear not to have been plagiarized display differently in PDF form, including contact details for the journal. I thus emailed those contacts as well to inform them that they are committing ethical and legal fraud.

[Update 23.08.2023]

Hostinger investigated the reported problem, determined that the user of their domain had violated the terms of ethical/legal usage and removed the plagiarized papers. However, the journal website is still up. In other words, a journal that obviously intentionally plagiarized several articles, possibly to boost its reputation and encourage others to submit and thus pay the 45-dollar fee, is still out there lurking. Moreover, this journal is part of a larger company called Zenodo Publishing. On their main page they list dozens of journals. If I had to guess, I would assume these journals are also predatory and may contain plagiarized content. Only further investigation will tell.

More to come.

Academic-Status-Seeking Anonymous

  1. We admitted we were powerless over our results. That our status-seeking was unmanageable.
  2. Came to believe that the power of truth-seeking would restore us to sanity.
  3. Made a decision to turn our research and our careers over to the power of truth-seeking as we understood it.
  4. Made a searching and fearless moral inventory of our scientific conduct.
  5. Admitted to ourselves and to another trusted scientist and in public the exact nature of our questionable research practices (QRPs).
  6. Were entirely ready to let pursuit of truth remove our QRPs.
  7. Humbly asked truth-seeking to replace our hacking and bias.
  8. Made a list of all hacked results and became willing to update or retract them all.
  9. Made such updates and retractions, ensuring that in doing so no innocent co-authors’ reputations were harmed.
  10. Continued to take academic-status-seeking inventory and when we engaged in QRPs promptly admitted it.
  11. Sought through logic and self-reflection to improve our conscious pursuit of truth, as we understood truth-seeking, praying only for the least biased research practices possible.
  12. Having had a scientific awakening as a result of these steps, we tried to carry this message to academic-status-seekers, and practice these principles in all our affairs.

Open science in sociology. What, why and now.

WHAT

By now you’ve heard the term “open science”. Although it has no global definition, its advocates tend toward certain agreements. Most definitions focus on the practical aspects of accessibility.

“…the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.”


FOSTER, open science teaching resource

Some definitions enter the realm of ethics, feminism and social justice.

“…to imagine and design inclusive infrastructures, practices, and workflows for scientific practice that intentionally enable meaningful participation and redress (these new) forms of exclusion.”


Denisse Albornoz, OCSDNet

Others focus on the communicative interplay between scientists and the public.

“Openness in Open Science also means opening up science to society… The democratic ideal of Open Science argues for equal two-way communication with the public: one should not solely focus on the question of how to foster the uptake of science in society, but also on how to foster the uptake of societal insights in science.”


Anne-Floor Scholvinck, ZBW Mediatalk

Whatever the ontology, open science inevitably challenges the status quo in science. Usage of the term indicates there is something undesirable about science as practiced; otherwise advocates would simply advocate “science”.

The “open” part of the concept refers to any number of things depending on whom you ask. Commonly it means:

Open access – making the results of scientific techniques, research and theory accessible to everyone; as opposed to only in paywalled journals.

Transparency <open process> – making all methods, code, data and any biases or conflicts of interest known before and after the research is conducted. So long as doing this does not harm human subjects or violate any laws.

Open source – on the technology side of science, all programs, apps, algorithms, tools and scripts should be transparent and usable by others. This means that when a scientist develops a new technology, anyone else’s technologies can interact and interface with it. Moreover, anyone can modify the technology to better suit their own needs.

Open academia <open communication/democracy/feminism> – allowing anyone to participate in academia. That academia has the goal of eliminating inequalities, prejudice and domination from academia that take place in the social world. That academia embraces feminism and critical race theory in its methods and institutional practices. That everyone has the same place in scientific discussions, and no science is conducted by pressuring others or taking advantage of existing power structures. That no science takes place in secret, except for research that requires obfuscation for its completion.

Again, the definitions cover a broad range. The above are just a sample, although they strike me as the most common usages, except for ‘open academia’, which is reserved for certain justice-motivated scholars.

WHY

Although I do not proclaim to be the arbiter or knower of right or wrong in academia (and life in general), the following facts seem wrong to me.

Double-work and the co-opting of journals

Scientists provide their work as editors and reviewers, because the peer review and publication process is the centerpiece of all of science. Peer reviewers and editors are the only consistent form of quality control in science. The academic journal was a functional response to previous forms of knowledge transmission that required direct scientist/practitioner to student interactions which were geographically limited and reached a very narrow audience.

The journal made it possible to transmit knowledge across the globe. Moreover, the journal reduced science’s simultaneous-discovery and re-discovery problems: previously, no one could prove they had discovered something first, and others unknowingly worked on problems that were already solved. It represents one of the first ‘open science’ movements, because it was driven by the idea that science was at an impasse and could only move forward through the transparent and open exchange of ideas, arbitrated by entering the public record through publication.

Ironically, the journal format came full circle and began to undermine science. After over two centuries of journals run by non-profit academic associations, for-profit publishing houses began ‘offering’ their services to meet the growing global demand for journals and their content and the rising costs of editing and distribution. In many cases, these publishing houses were able to purchase the journals by offering the academic societies the exclusive right to determine what went in them. Within just 30 years, five conglomerates owned the titles, content or certain features of over 50% of all journal articles published globally.

The content, as always, is still a product of the scientists and the voluntary work of editors and peer reviewers. The publishing houses make large profits but pay nothing to these workers. The editors and peer reviewers earn their income mostly from universities; the very universities that pay high fees for the right to provide the journals in their libraries. This is a double tax on the universities: paying the producers of content to produce, and then paying the distributors of that content to consume it. The content does not change at any point between these two forms of payment; in other words, the publishers add no scientific value to this content.

Matters got even worse with the publishing houses over the past decades. As creative and deceitful profit seekers, some publishing houses realized they could generate even more profit by collaborating with the private sector. For example, pharmaceutical companies’ profits were directly determined by the findings of studies published in journals. Pharmaceutical companies, or any companies whose profits depend on the outcomes of scientific experiments, would be willing to invest in shaping those outcomes if they could. Enter a novel concept pioneered by Elsevier: selling journals or journal space to private companies to boost their profits. A win-win for them. Elsevier also pioneered the monetization of open science by purchasing SSRN, engaging in massive lawsuits designed to stop the free sharing of (their) copyrighted knowledge and trying to copyright intellectual activities such as peer review.

Other ventures create journals that prey on scholars who do not know better, or who seek easy publications to add to their CVs. These are often labeled “predatory publishers”: publishers that “publish work without proper peer review and which charge scholars sometimes huge fees to submit” and that “should not be allowed to share space with legitimate journals and publishers, whether open access or not” (predatoryjournals.com). They also sometimes mimic reputable journals by copying their styles and names and soliciting content from scholars, a procedure known as “hijacking”.

Publish-or-perish begets questionable research practices

Thanks to the advent of the scientific journal, knowledge could be evaluated, used and further transmitted across space and time. The utility of the journal and other forms of academic publication such as books, proved so effective that they became the primary source for others to evaluate the importance of scientists and their work. This gave rise to the norm we are all familiar with, publish-or-perish.

In a survey of psychologists, John et al. (2012) found that 50% claimed to have selectively reported studies that supported their hypothesis (that is, selectively excluding those that did not). Moreover, 35% admitted to reporting unexpected findings as having been predicted from the start. Nearly 2% outright admitted to faking data.

Publish-or-perish and questionable research practices have a causal relationship. Except for the occasional sociopathic or psychotic individual, there is no reason for a scientist to engage in questionable research practices. No reason, except that scientists’ very existence as scientists may depend on it. In reality, so many studies lead to results that go in all directions, support the null or (most importantly) do not provide groundbreaking new results.

Through the peer review and editorial process, journals select studies that are path-breaking. Studies that will move knowledge forward and be of the greatest interest to readers. When faced with the prospect of not getting tenure, not getting grant funding and being forced out of academia, a human’s (scientist’s) rational calculations change. Suddenly, reporting that p-value of 0.054 as < 0.05, or even adding some cases to the data, becomes a cognitively defensible decision.

Like any profession, science is competitive. Those who publish more, or get more citations to their publications tend to get ahead. Those who don’t, don’t. Professional athletes use incredible tactics to gain competitive advantage. Of course steroids are well-known, but other tactics are much harder to detect. For example, endurance athletes often use blood transfusions to boost recovery and performance. This is what it means to be human, scientist or not.

One of the most radical events in the social and behavioral sciences was Diederik Stapel’s career of faking data and results, published in at least 54 articles and consuming millions of euros in funding. It took almost two decades for critics and whistleblowers to finally out him. Psychology is not alone. In political science, LaCour and Green published a study in Science claiming that attitudes toward gay marriage could be changed if heterosexual people listened to a homosexual person’s story; it turned out LaCour had fabricated the results of a follow-up survey that never took place, as uncovered by Broockman. In economics, Reinhart and Rogoff published numerous studies identifying a negative impact of high debt rates on national economic growth, when in fact several points in their dataset were conspicuously missing; when these values were added, there was no longer support for their claim, as identified by Herndon, Ash and Pollin.

I suspect that most questionable research practices are not intentional. The sociopathic (~psychotic) Stapels of the world are rare. The pressure to find a job after doctoral studies and then to get tenure forces a trade-off between conducting science in its ideal form (learning as much as possible about the existing literature on a subject, mastering the necessary methods, executing the research, possibly over several iterations, and facing the prospect of null results) and science in a form that will lead to publication as fast as possible.

This ‘fast as possible’ leads to amateur science. For example, in the rush to get my first publication I attempted to use “multiple imputation”, but lacked the time to properly learn the method. Instead, I simply generated several datasets, averaged them into one and re-ran the analysis on this single dataset. This was not an intentional misuse of a method; it was a questionable research practice born of context. Think about matrix algebra. It is the basis of many advanced statistical techniques regularly used by social scientists. How many of us have a strong grasp of matrix mathematics? I don’t. And yet I’ve published several studies using structural equation modeling.
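For context, the established way to combine multiply-imputed analyses is Rubin’s rules: fit the model on each imputed dataset, average the point estimates, and pool the within- and between-imputation variance. A minimal sketch with made-up numbers (the estimates and variances below are illustrative, not from any real analysis):

```python
import numpy as np

# Suppose we fit the same model on m = 5 multiply-imputed datasets and
# collect one coefficient and its squared standard error from each run.
m = 5
estimates = np.array([0.42, 0.45, 0.39, 0.44, 0.41])       # coefficient per dataset
variances = np.array([0.010, 0.011, 0.009, 0.010, 0.012])  # SE^2 per dataset

# Rubin's rules: the pooled point estimate is the mean of the estimates...
pooled_est = estimates.mean()

# ...but the pooled variance combines within-imputation variance with
# between-imputation variance, inflated by (1 + 1/m).
within = variances.mean()
between = estimates.var(ddof=1)
pooled_var = within + (1 + 1 / m) * between

print(f"pooled estimate: {pooled_est:.3f}")
print(f"pooled SE: {pooled_var ** 0.5:.3f}")
# Averaging the m datasets into one and running a single analysis gives
# roughly the same point estimate but understates the SE, because the
# between-imputation uncertainty is thrown away.
```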

WHAT & WHY in SOCIOLOGY

I am aware of nothing about sociology that suggests it needs a special adaptation of open science. Most research cannot be strictly delineated as sociology or not-sociology anyway. The boundaries of a discipline, especially within the social sciences, exist mostly in the institutional structure of universities. Eliason suggested that sociology is unique because it overemphasizes quantitative techniques, has needlessly long articles, lacks writing for the popular press and emphasizes research at the expense of teaching. In my experience, that sentence perfectly describes all social and behavioral science disciplines at once. Even article length, something I thought might be peculiar to sociology, is not special. Political science and management research have very long articles. Consider that ASR and ESR, for example, limit articles to 9,000 and 8,000 words or fewer; this is about average, if not short, for social science.

Actually, I would argue the most unique thing about sociology at the moment relates to open science. Two points in particular: (A) that sociology has not had the same incredible scandals as other disciplines and (B) that sociology lags behind other social sciences in promoting open science.

A lack of scandals, not scandalousness

Could sociologists be more scientific and ethical in their research behaviors than those in other disciplines? Given identical institutional and career structures that favor productivity and innovation over replicating or checking each other’s work, I doubt it. Sociology journals and their editors, for example, rarely retract articles despite evidence of serious methodological mistakes. Carina Mood once accurately pointed out mistakes in the interpretation of odds ratios in some American Sociological Review articles, but the editors refused to publish her comments, much less consider retractions. She shared her exchange with ASR in an email to me and discusses some of it in a working paper. An exceptional recent event was the retraction of one of Legewie’s sociological studies, but this required that he himself initiate the retraction after someone pointed out errors in his work. Until 2020, the Retraction Watch database (www.retractiondatabase.org) listed no retractions from the top sociology journals, and only two among the well-known ones: one in Sociology and another in Social Indicators Research.

This year, something new happened. Five articles published in Social Problems, Criminology, and Law & Society Review were retracted. These articles had a common co-author, Eric Stewart. It turns out that the data he provided were faked. There is no other logical conclusion after exceptionally rigorous work by Pickett (a co-author of Stewart) provided evidence that the Stewart studies had consistently incorrect means and standard deviations, unverifiable surveys (sources, methods, original materials), magically changing case numbers despite identical statistical results, duplicate cases (sometimes half the data) and impossible clustering structures in the data.

As an aside, one of Pickett’s findings was that the data had non-uniform terminal digit distributions. This means that the right-most digits in the reported statistics differ markedly from a uniform distribution. At the third digit, numbers should be uniformly distributed, with 0-9 each appearing roughly 10% of the time. In one of the papers, zeros appear less than 2% of the time. If you are considering faking data, keep in mind that it is roughly impossible to do so in a way that cannot be detected by careful investigation. Any algorithm used to generate results (even copying and pasting) leaves its statistical marks.
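A terminal-digit check in this spirit is easy to run. Here is a sketch using only the standard library; the digit list is invented for illustration, and with real data you would feed in the last digits of every reported statistic and use scipy.stats.chisquare for an exact p-value:

```python
from collections import Counter

# Under honest measurement and rounding, the last digit of reported
# statistics should be roughly uniform over 0-9. A chi-square
# goodness-of-fit test flags marked departures.
digits = [3, 7, 1, 9, 4, 4, 8, 2, 6, 5, 7, 1, 3, 9, 8, 2, 4, 6, 7, 5]

counts = Counter(digits)
n = len(digits)
expected = n / 10  # uniform expectation per digit

# Pearson chi-square statistic against uniformity
chi2 = sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))
print(f"chi-square = {chi2:.2f} on 9 degrees of freedom")
# Compare to the critical value (about 16.92 at alpha = 0.05);
# a much larger statistic means the digits are suspiciously non-uniform.
```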

Perhaps we sociologists should be partly relieved, as this is confirmation that we are as much a part of social science, and its problems, as any other discipline. However, the Stewart retractions, which should have been breaking news for sociology, went mostly unnoticed. The results of the investigation leading to the retractions are not published in a flagship sociology journal where they belong; instead they appear in Econ Journal Watch, something unlikely to be read by any sociologist. Moreover, the retraction notices from the original journals do not cite outright fraud. Stewart continues to promote his work in print, claiming the main findings still hold, and several other of his studies with similar irregularities have not been retracted.

Another extremely important event was a case of ethnomethodological research conducted by Lindsay, Boghossian, and Pluckrose in the mid-2010s. This is sociological self-examination at its best, although their backgrounds are mostly outside the discipline of sociology. They wrote a series of 20 papers presenting fake results and making arguably unethical claims. They crafted the papers to mimic the style of articles published in journals well known for sociological research on topics of identity, hegemony and marginalization. Seven of their papers were published or had revise-and-resubmit recommendations before whistleblowing forced them to cancel the project. Some highlights: one paper contained sections from Hitler’s Mein Kampf; another suggested men should be trained like dogs to prevent rape; and a third proposed that white men be forced to sit in chains on the floors of university classrooms, instead of at normal desks. I am not commenting on the merit of these ideas, only noting that the papers all contained faked data, non-existent methods or conclusions not supported by the data. That these studies easily flew under the radar of a number of high-impact journals shows how easy it is to publish without doing the necessary research work.

Lagging behind closed doors

October 6th, 2020. I entered the search terms “open science” (in quotation marks, to search the exact phrase) and “sociology” (in quotation marks, to only return results containing the word) into Google Scholar. Six pages of results without a single sociology journal. On page 7, Merton’s “Priorities in scientific discovery: a chapter in the sociology of science” appears. Publication date: 1957.

In 1973, Wilson, Smoke and Martin found that 80% of studies published in the top three sociology journals of that time rejected the null hypothesis; in other words, they had p-values below a threshold. This suggests publication bias, if not p-hacking. Sahner (Table 5) analyzed all article submissions to the Zeitschrift für Soziologie from 1972-1980: of those that contained significance tests, 70% were significant at p < 0.05, suggesting that authors prefer to submit significant results. More recently, Gerber and Malhotra (2008) reviewed articles published in American Journal of Sociology, American Sociological Review and The Sociological Quarterly, looking specifically at the boundary of t = 1.96 (i.e., p < 0.05), and found that as many as 4 out of 5 published results were ‘significant’. This too suggests publication bias. Sociology has yet to see a systematic review of p-hacking that compares p-values within ‘significant’ results. Meanwhile, psychology and political science, for example, are teeming with papers on “p-hacking” and “publication bias”.
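The logic behind the Gerber and Malhotra analysis can be illustrated with a caliper test: compare the counts of test statistics falling just below versus just above the significance cutoff, which should be roughly equal absent publication bias. A sketch with invented counts, not their actual data:

```python
from math import comb

# Caliper test: how many reported z-statistics fall just below vs. just
# above the 1.96 cutoff? Absent publication bias, a result should be
# about equally likely to land on either side of a narrow window.
just_below = 9    # z in [1.76, 1.96), illustrative count
just_above = 31   # z in [1.96, 2.16), illustrative count

n = just_below + just_above
# One-sided binomial probability of at least `just_above` results landing
# above the cutoff when each side is equally likely (p = 0.5).
p_value = sum(comb(n, k) for k in range(just_above, n + 1)) / 2 ** n
print(f"{just_above}/{n} just above the cutoff, p = {p_value:.5f}")
# A tiny p-value says the pile-up just past significance is unlikely
# by chance, consistent with publication bias or p-hacking.
```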

Sociology is rather intransparent. An estimated 78% of the major sociology journals have long-standing transparency policies. Unfortunately, these policies are mostly artifacts on paper with little enforcement. For example, only 37% of sociology articles published in mainstream journals between 2012 and 2014 include shared data and/or materials. In 2015, a small group of sociologists tried to obtain materials from the authors of 53 prominent sociological studies. They obtained them from just 19%, and only 20% of all the authors they contacted bothered to respond, despite several requests. This suggests sociologists are free to hide the data and materials behind their findings without recourse, despite such guidelines.

Other disciplines have embraced the Transparency and Openness Promotion (TOP) Guidelines. The TOP guidelines, developed with the help of the Center for Open Science, support journals in improving science. Journals can become signatories of TOP, and in doing so they either adopt and enforce new transparency guidelines or certify that they already meet certain transparency standards. Most of the top psychology journals and several political science journals have signed on. Other major journals, such as the Journal of Applied Econometrics and later the American Economic Review, adopted their own enforced transparency guidelines.

Until 2017, the only higher-ranking sociology journals that had signed TOP were Sociological Methods and Research and American Journal of Cultural Sociology. In 2017, Elsevier dictated that all its journals adopt guidelines, which added Social Science Research to the list. At the time of writing, the flagship journals American Journal of Sociology and American Sociological Review have neither signed TOP nor enforce guidelines of their own. Of the top German sociology journals, the Kölner Zeitschrift für Soziologie und Sozialpsychologie is the only signatory.

If intransparency is pervasive in sociology, then research cannot be (a) checked for errors, (b) reproduced or (c) simply critiqued. Even when exact reproducibility is not the goal, as often is the case with context-specific interpretive research, most research methods remain shrouded in mystery. This requires readers to take a giant leap to trust what others report. Part of the problem is that sociologists express little interest in reproduction or checking others’ works. There are few replications in the history of sociology, and if anything, they decreased over time until recently. For example, searching the articles in American Journal of Sociology and American Sociological Review reveals 22 replication studies from 1950-1980 and only 8 from 1981-2010.

Something telling about the lack of willingness to open sociology comes from sociology’s most ‘powerful’ society, the American Sociological Association: in 2019 it petitioned the US government not to make data transparency a requirement attached to grant funding.

NOW

What to do about it? Here are some simple steps to consider, especially for sociologists. They are similar to steps advocated by many others, for graduate students, for academic institutions, or for all of us.

Transparency

Make all the materials (research design, methodological steps, data when legally and ethically possible, analyses, conflicts of interest and any software code) available online. The practical reason is that others can follow your work and expand on it in the future. Doubly practical is that you do not need to respond to email requests for your materials. So long as you are not a deceitful sociopath, you want others interested in your work and replicating it. Even if a study seems to ‘prove you wrong’, the fact that it replicated your work is evidence of the importance of your work and your topic of study. You are a piece of a much larger community of knowledge construction. Constructive exchange can lead to collaboration with critics and better future research, without personal conflicts.

The immediate value of transparency is that being transparent forces you to be careful. Knowing everything will be public information increases the value of attention to detail. Put in its converse: not sharing your workflow publicly can indirectly foster lower quality standards, in addition to creating possibilities for misconduct. All this enables rather than hinders knowledge, and increases inter-researcher trust.

Transparency should not be much extra work. During the research process you should take high-quality notes for yourself; you will often return to your data and research in the future and will need those notes. This is a best practice with or without sharing your work. When you engage in this best practice, you develop a deep familiarity with your data, can draw meaningful conclusions and, in the case of qualitative research, can easily redact identifying characteristics in your data. If you cannot share data, you can still reveal the design and expectations, or allow controlled access to the data. Human subjects must be protected at all costs, and yes, this often means data sharing is not possible.

The ‘transparency work’ of the qualitative research process can be reduced by software platforms that provide semi-automated annotation and coding. Even if you do not share data, you can build an open workflow from the beginning that allows others to understand every step of the data-generating process. However, this work can also be extremely tedious and the incentives not immediately clear. More fruitful discussion, if not research-assistant funding, is needed in this area moving forward.

If you are using quantitative methods, immediately stop hiding your work. If you ran 100 models and 99 did not support your hypothesis, then this is your finding. If a journal does not want to publish this, point the editors and reviewers to the importance of null results and the problems of publication bias. If they still refuse, consider boycotting this journal and sharing your negative experience in public.

Preregistration

Preregistration can drastically reduce bias and hacking, before any data are collected. When you clearly outline your plans, including how you will analyze the data, before conducting the research, there is little room for hacking so long as you stick to the plan. Moreover, preregistration can be done directly with a journal, although sociology journals are laggards here because they generally do not offer this option. In a preregistration, even if you just put a pre-analysis plan or research design and goals online, you must think much harder about factors such as meaning, causality, inter-subjectivity and ‘how the world probably works’. You cannot hide behind results in this process, and therefore you must anticipate counterarguments and explore counterfactual logic. This improves the clarity of theory and research, creating an immense gain in efficiency and effectiveness.

Regardless of the methods you use there are many opportunities to take advantage of preregistration. Some forms of qualitative research, for example those involving grounded theory and interpretivist methods, require decisions during the research process that cannot be foreseen. This uncertainty can be outlined in a preregistration stating explicitly when flexibility is and is not admissible. Moreover, simply putting a qualitative research plan online prior to conducting the research is equivalent to a pre-analysis plan. This research design need not compromise your data collection work because you can register the plan on a platform like the Open Science Framework and then embargo it, so that it is preserved but not made public until after the research concludes. Some scholars using quantitative methods might assume that preregistration is not possible because they work with secondary survey data. But the regularity and release of these survey data are known in advance, and these scholars can preregister their studies before the next round of data are collected with the knowledge of which questions and countries will be available.

Decommodify science

The central functions of the scientific publishing industry are printing and disseminating knowledge, which historically solved the problem of how to share knowledge across universities and countries. The business functions of publishing, however, come with harmful byproducts. Publishing firms extract profits from scientists twice. First, scientists provide free labor in the form of editing and peer reviewing, in addition to producing the results for the articles to be printed. Next, researchers, or their employers, must purchase the product of their own labor; labor not paid for by the publishers. The journal article as a product comes at a high cost, and often only in packages of journals, meaning that universities have to pay for extra material their scholars do not use.

Sometimes publishing houses neglect science in favor of profits, and Elsevier has been particularly problematic. They sponsored weapons fairs, created and sold ‘fake’ journals to pharmaceutical companies to publish ‘results’ supporting their drugs, purchased the Social Science Research Network and then created paywalls or removed legally shared working versions of articles, charged fees for open access articles, and actively lobbied against open access legislation (for a concise summary with links, see Tal Yarkoni’s blog entry). This brought massive counter-movements against Elsevier in the scientific community (for example, The Cost of Knowledge). You can take action and refuse to review for or publish with unethical publishers if you feel it is justified. To do so, inform yourself about the publishers. Your library is a good source of information, because it deals with the business side of publishers.

If you are in Europe, check whether your institution is a signatory of ProjektDEAL, a consortium of universities collectively bargaining with publishers and demanding that they reduce fees and eliminate the double payment by universities. The primary objective is that publishers sign country-wide subscription agreements that enable access for all universities at once. Wiley agreed to such a model, and this marks a paradigm change; it indicates how the publishing industry will look in the future, so long as the open science movement proceeds. If you are not in Europe, consider starting a similar initiative. For example, the entire University of California system, with 10 universities, 5 medical centers and several research institutions that collectively produce roughly 10% of the world’s academic publications, recently followed ProjektDEAL’s lead and boycotted Elsevier.

You can also work around the publishing business. Prior to submitting an article, or after it is published, you have the right to share a preprint: a draft of the paper that you share publicly, so long as it is not published elsewhere or sold for profit. Posting preprints reduces the power that publishing firms have over science, in addition to giving others immediate access to your work. But simply posting preprints on your academic website is not open enough. Use a preprint service, for example through the Open Science Framework, to ensure that your preprints appear in search engines such as Google Scholar. SocArXiv, for example, is the go-to location for sociology. This enables scholars to find and directly access research based on the words it contains, uninhibited by paywalls, a crucial aspect of practicing sociology in the Global South. Preprint services are free and open access.

P-hacking. Religion and science aren’t that different

Science and religion parted ways long ago. This is a historical struggle over power. If science claims to disprove that the earth is the center of the universe, or shows that evolution undermines creation, it might falsify religious doctrine, said to be the word of a God or Gods and thus the ultimate Truth. Religions rely on their claims to this Truth to convert people to submit to their institutions. If science undermines this Truth, it undermines religious power. And power is something that changes human behavior; people might lie, cheat, steal and kill to get or preserve it.

[Images from Wikimedia Commons: Thinker; Passion]

But science and religion followers are not that different. Actually, they are the same. They are human.

Power is another way of describing status and prestige. In science, we know all about status. Scientific status comes from recognition. From making scientific discoveries and claims that garner attention. In particular, attention in the form of citations.

The absence of market prices results in prestige becoming the main reward and high prestige becoming the measure of exceptional ability. Rent seeking in academia, therefore, produces ego-maniacs and much destructive behavior

Sørensen (1996, p. 1358)

The seeking of status, what economists and Sørensen label a form of ‘rent-seeking’, is presumably the reason scientists p-hack and engage in other forms of malpractice. In some cases they ‘must’ p-hack to meet the demands of reviewers. Mostly, statistical research requires significance stars to attain publication. This is changing with the Open Science Movement, but only at the margins. Research using qualitative methods has its own equivalent of ‘p-hacking’: to be published, a paper must extract novel ideas from observational data, and whether these reflect the actual data, or are even based on actual data at all, seems to be irrelevant as long as the story looks good to reviewers. Just like the significance stars that look all sparkly and comforting to reviewers of quantitative research.

So humans (scientists) cheat to attain status, intentionally or even unintentionally (without malicious intent, because they are conditioned to play with their data until the stars appear). Therefore it should be no surprise to humans (scientists) that other humans (religious followers) also cheat.

If p-hacking in science is playing around with models so that they represent the data in a way that matches the researcher’s desire for status, rather than portray the results of scientific tests, then p-hacking in religion must be to interpret the dictates of God (or Gods) to fit one’s, or one’s group’s own status goals.

The conflict of science and religion is like p-hacking. It’s a power struggle. Who has the power to make claims about the way the world is and the way it should be? Religious followers attribute this authority to God, and then to themselves as seekers and messengers of God. Science followers attribute it to factual knowledge about the world, and then to themselves as the testers and reducers of uncertainty who ‘uncover’ those facts. In both cases the process is corrupted by status seeking, a fundamental fallibility of humans. When acting as scientists and spiritual seekers, we are fundamentally still primates, and as such tend toward hierarchy, with many of us human-primates willing to cause harm to others in order to attain ever higher positions.

For religious followers to gain status through p-hacking they would have to adjust the ‘word of God’ or the ultimate Truth in a way that it (a) is no longer a religious or spiritual truth so that it (b) serves their own ends. Do we have evidence of this practice? Wars fought in the name of religion do not really fit the criteria, as wars can be justified as right, as God’s (or the Gods’) will, for example Christian New Testament Revelation 19:11 about the righteous warring against the (presumably) non-righteous (i.e., ‘evil’); Christian/Jewish Old Testament Deuteronomy 20 calls Israelites to war against cities that do not accept their terms; Islam Qur’an 22:39 advocates war in self-defense and possibly 4:74 to fight in the name of God; and Buddhism taking the stance that war might be necessary in defense but not justified as an aggressor.

The point is that it is difficult to find direct evidence of p-hacking by religious followers seeking status for themselves or a group. The same problem lies with detecting p-hacking in scientists. Given that all sides in all wars tend to claim righteousness under God (or Gods), it seems obvious that some (or all) are misinterpreting what should be God’s will for their own gain. Given that so many p-values lie below 0.05 in published research, we can assume that not all are derived from a clean research design, method and presentation of results.

Openness is not needed because we are untrustworthy; it is needed because we are human

(Nosek, Spies and Motyl 2012, p. 626)

It is not only religious and scientific institutions that are antagonistic in their seeking of power. Political institutions, which wield a monopoly on force in a modern world divided into sovereign nation states, also do not always get along with religious and scientific institutions, as they have their own p-values to guard.