Science is often celebrated for its precision and impartiality, and the public tends to accept research published in scientific journals as absolute truth. However, numerous concerning studies, especially within life sciences, suggest that scientific publishing may not be as trustworthy as we typically assume.
10. The Struggle to Replicate Many Pre-Clinical Studies

Extensive research is currently underway to understand the mechanisms of cancer. However, turning these insights into treatment targets has proven to be quite challenging, with clinical trials in oncology experiencing a failure rate higher than in many other fields. A significant portion of these failures can be attributed to flaws in the pre-clinical research that these trials were based on.
A review discovered that only six out of 53 landmark pre-clinical cancer studies could be successfully replicated. Many of the non-reproducible studies didn’t use blinding, meaning the researchers knew whether they were working with the control or the experimental group, which invites investigator bias. Other studies selectively reported results that supported their hypotheses, even when these didn’t accurately reflect the full dataset. Shockingly, there is no specific rule against this, and papers are often accepted for publication without including the complete data collected.
Another study examining 67 research papers, primarily in oncology, found that less than 25 percent of the data could be replicated in the lab without significant inconsistencies. This issue has become so widespread that venture capital firms have an informal understanding that about 50 percent of academic studies will be impossible to reproduce in industrial labs.
9. Negative Results Are Often Left Unpublished

A negative result occurs when researchers expect a certain outcome but fail to achieve it. One study analyzed over 4,600 papers from various fields between 1990 and 2007, finding that the publication of positive results had increased by 22 percent during this period. By 2007, an astounding 85.9 percent of papers reported positive findings. This trend is also supported by research indicating that negative phrases like “no significant differences” have become less common, while significant results are more likely to be fully reported. When negative results are published at all, it is usually in low-impact journals.
This publication bias creates several significant issues. For one, scientists often cannot tell whether a study has already been conducted, resulting in unnecessary repetition. It also skews meta-analyses, which attempt to evaluate all the research on a given topic. Publication bias likewise puts immense pressure on researchers to produce positive results, which can lead to exaggerated conclusions, research misconduct, or less risk-taking. After all, if your career success hinges largely on your ability to publish positive results, this inevitably influences how you design and interpret your studies.
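To see why this skews meta-analyses, consider a toy simulation. The Python sketch below (the effect size, sample size, and significance cutoff are invented for illustration, not taken from any study mentioned here) runs many small studies of the same weak effect and then “publishes” only the ones that come out positive. Averaging just the published studies badly overestimates the true effect:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # small real effect, in standard-deviation units (assumed)
N_PER_GROUP = 30    # participants per group in each simulated study (assumed)
N_STUDIES = 1000    # number of simulated studies

def run_study():
    """Simulate one two-group study; return its effect estimate and
    whether it counts as a 'positive result'."""
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Crude significance check: the difference must exceed about
    # two standard errors in the hypothesized direction.
    se = (statistics.variance(control) / N_PER_GROUP
          + statistics.variance(treated) / N_PER_GROUP) ** 0.5
    return diff, diff > 2 * se

results = [run_study() for _ in range(N_STUDIES)]
all_effects = [diff for diff, _ in results]
published = [diff for diff, positive in results if positive]

print(f"True effect:                 {TRUE_EFFECT}")
print(f"Mean effect, all studies:    {statistics.mean(all_effects):.3f}")
print(f"Mean effect, published only: {statistics.mean(published):.3f}")
print(f"Share of studies 'published': {len(published) / N_STUDIES:.1%}")
```

A meta-analysis built on the published subset inherits this inflation, which is exactly the damage done by leaving negative results in the file drawer.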
8. Peer Review Often Misses Major Mistakes

The peer review process is considered the gold standard for validating research papers. However, when researchers deliberately submitted flawed biomedical papers to peer reviewers at a leading academic publisher, they found the reviewers only identified an average of 2.58 out of nine major errors. Even more troubling, training to improve performance had only a minimal impact. In fact, a quarter of the 607 reviewers tested detected one or fewer errors. (To be fair, some reviewers rejected the paper before completing their review, which could have led to discovering more errors if they had finished.)
Reviewers struggled the most with spotting errors related to “data analysis and inconsistencies in reporting results.” This may stem from the limited statistical knowledge among biologists. The area least impacted by training was “placing the study in context, both in terms of existing literature and its implications for policy or practice.” This is understandable, as becoming proficient in a specific area of literature requires extensive time and effort.
Other studies have arrived at similar findings. One presented 221 reviewers with a previously published paper that had been altered to introduce eight errors. On average, the reviewers detected only two of these new errors. Another study found that reviewers missed at least 60 percent of the major errors in a paper on migraines, although most reviewers still didn’t recommend accepting the paper for publication. Taken together, these studies suggest that the peer review process in biomedical sciences has significant room for improvement.
7. Peer Reviewers’ Performance Deteriorates Over Time

A 14-year study published in 2009 further highlighted problems with the peer review process. In the study, journal editors rated the quality of every review they received, and the performance of peer reviewers turned out to decline by an average of about 0.8 percent per year. These results are supported by a 2007 paper published in PLOS Medicine, which indicated that no specific type of training or experience was linked to reviewer quality, but that younger scientists generally provided higher-quality reviews. A 1998 survey likewise revealed that younger scientists performed better as peer reviewers, while those on editorial boards tended to perform worse.
The reasons for this trend are unclear, although possible explanations include cognitive decline, increased competing interests, or growing expectations for reviewers making the task more difficult over time. There is also some evidence that older reviewers are more prone to making hasty decisions, failing to follow the required structure for reviews, and possibly having a reduced knowledge base as their training becomes outdated. This is concerning, as older reviewers often hold more authority.
6. Most Rejected Papers Find a Home Elsewhere

After undergoing the peer review process, journals will either accept or reject a paper for publication. But what becomes of the rejected papers? One study tracked papers rejected by Occupational and Environmental Medicine, discovering that 54 percent of them were eventually published in other journals. Interestingly, more than half of these were published in a group of seven other prominent journals in the same field.
Another study found that 69 percent of papers rejected by a general medical journal were published elsewhere within 552 days. A separate study revealed that at least half of the studies rejected by Cardiovascular Research ultimately appeared in other journals. A fourth study showed that 56 percent of papers rejected by the American Journal of Neuroradiology eventually found another publishing outlet.
Before you panic, this may simply indicate that rejected papers are improved before being submitted to other journals. Moreover, not all rejected papers suffer from poor methodology. Popular journals often have space constraints, leading to rejections even when the paper’s quality is high. Additionally, many papers are rejected because the topic may not align perfectly with the journal's focus. (The Cardiovascular Research study accounted for this, finding that most rejected papers later published elsewhere were not declined due to topic mismatch.)
Studies have indicated that top-tier journals are under significantly more scrutiny. In fact, well-regarded journals often retract more papers than their less prestigious counterparts, simply because of the increased attention they receive after publication. The problem arises when a flawed paper is shopped from journal to journal until one agrees to publish it, allowing it to enter the scientific record through a less scrutinized outlet. Taken together, these studies suggest a real gap in the quality of papers across different scientific journals.
5. Questionable Practices

With the immense pressure on researchers to publish positive outcomes, there are signs that falsification and other unethical practices are relatively common. A meta-analysis of 18 surveys found that 1.97 percent of scientists admitted to falsifying their own work, 33 percent confessed to other unethical practices, 14.1 percent acknowledged knowing colleagues who falsified their work, and 72 percent reported being aware of other unethical behaviors by peers. The actual figures might be higher: surveys of researchers and trainee researchers suggested that larger percentages would be willing to engage in such practices, even if they did not confess to actually doing so.
Some of the questionable actions identified included “discarding data points based on intuition,” “failing to report data that contradicts previous research,” and “altering the study design, methodology, or results in response to pressure from a funding source.” These behaviors can be extremely difficult to prove, and preventing them entirely may be impossible without moving away from a system that prioritizes publishing only positive results.
4. Scientists Don’t Share Enough Information

While scientists are trained to document experiments in such a way that they can be fully replicated from the provided details, a 2013 study on resource sharing in biomedical research found that 54 percent of research materials (like antibodies or organisms used in experiments) were not described in sufficient detail to allow proper identification. Additionally, research on animal studies showed that only 59 percent of papers included relevant details about the number and characteristics of the animals used. Without this crucial information, accurately reproducing experiments becomes difficult, hindering the ability to confirm results or identify potential methodological issues with specific resources.
Efforts have been made to address this issue, such as the ARRIVE guidelines, which aim to improve the reporting of animal research. The Neuroscience Information Framework has developed antibody registries, and the esteemed journal Nature has advocated for better reporting of the resources used in research.
Despite these efforts, the authors of the 2013 study pointed out that only five out of 83 biomedical journals enforced rigorous reporting standards. Furthermore, a concerning survey of biomedical researchers revealed that willingness to share all resources with other researchers declined by 20 percent between 2008 and 2012. Some researchers indicated they would be open to sharing data if requested but were less inclined to make it publicly available in an online database.
3. Studies Done On Mice Often Can’t Be Extrapolated To Humans

Biomedical research frequently relies on animal models, especially mice, due to their considerable genetic and biological similarities with humans. Approximately 99 percent of human genes have a corresponding homolog in mice, making them a key model organism for various studies.
Despite the successes of mouse studies, translating these findings to human conditions such as cancer, neurological diseases, and inflammatory diseases has been problematic. Although mice and humans share many genetic similarities, recent studies reveal that the gene sequences encoding specific regulatory proteins differ significantly between the two species.
These genetic variations are particularly significant in the nervous system, where the differences in brain development are pronounced. The mouse neocortex, which is crucial for perception, is less developed than its human counterpart. Additionally, many neurological diseases involve cognitive symptoms, and the gap in cognitive function between mice and humans is substantial.
So the next time you come across a groundbreaking medical discovery in the news, check whether the research was conducted on mice or on humans.
2. Psychology As A Case Study Of The Issues

Psychology is deeply affected by many of the challenges mentioned in this list. Studies have shown that approximately 97 percent of published psychology research reports positive results, a trend that has persisted since at least 1959. Additionally, a survey found that more than 50 percent of psychologists would delay publishing until they achieved positive results, and over 40 percent admitted to only reporting studies with positive outcomes. This may be due to psychology journals favoring novel findings over methodological rigor. As a result, psychology, along with psychiatry and economics, is among the fields most vulnerable to publication bias.
Another issue within psychology is the infrequent replication of studies. Since many journals refuse to publish simple replications, researchers often lack the incentive to invest the necessary effort. A potential solution is PsychFileDrawer, which publishes replication attempts. However, the professional benefits of submitting are minimal, and some researchers are deterred by the potential for peer criticism.
On a more positive note, there has been a recent push for replication, with one promising project successfully reproducing 10 of the 13 studies it attempted, yielding the same results. Although this doesn’t address the absence of published negative results or the frequent issues with statistical analysis, it’s still a step forward.
1. Hoax Papers

In 1994, physicist Alan Sokal submitted an article to the cultural studies journal Social Text, intentionally filled with unfounded “nonsense” while also aligning with the editors' ideological biases. Among other claims, Sokal's paper connected quantum field theory with psychoanalysis and suggested quantum gravity had significant political consequences. His aim was to demonstrate that postmodernist theory had detached itself from reality: “Incomprehensibility becomes a virtue; allusions, metaphors, and puns replace evidence and logic. My article is, at best, a modest example of this established genre.”
Though Sokal targeted cultural studies, hoax papers have also emerged in computer science. In 2014, an astonishing 120 papers were retracted from journals owned by Springer and the Institute of Electrical and Electronics Engineers after being exposed as computer-generated nonsense. These papers were created using SCIgen, a program originally developed at MIT in 2005 to highlight the inadequate vetting process at academic conferences.
SCIgen strings randomly chosen words together using a context-free grammar, producing papers such as one that humorously claimed to “focus our efforts on disproving that spreadsheets can be made knowledge-based, empathic, and compact.” The software turned out to be more popular than its creators anticipated: the 120 retracted papers were successfully submitted to Chinese conferences and published. Despite Springer’s assertion that papers undergo peer review before publication, it remains puzzling that 16 of these fraudulent papers made it through.
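The core trick behind SCIgen, recursively expanding a hand-written context-free grammar with random choices, fits in a few lines of Python. Here is a minimal sketch in the same spirit; the grammar rules and vocabulary below are invented for illustration and are not SCIgen’s actual grammar:

```python
import random

# A toy context-free grammar in the spirit of SCIgen. Keys are
# nonterminal symbols; each maps to a list of possible expansions.
# All rules and vocabulary here are made up for illustration.
GRAMMAR = {
    "SENTENCE": [["We", "VERB", "that", "NOUN", "can be made", "ADJ_LIST", "."]],
    "VERB": [["demonstrate"], ["argue"], ["focus our efforts on disproving"]],
    "NOUN": [["spreadsheets"], ["neural networks"], ["operating systems"]],
    "ADJ_LIST": [["ADJ"], ["ADJ", ",", "ADJ_LIST"]],
    "ADJ": [["knowledge-based"], ["empathic"], ["compact"], ["stochastic"]],
}

def expand(symbol):
    """Recursively expand a symbol by picking a random production rule;
    anything not in the grammar is treated as a literal word."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(expand(part))
    return words

# Tidy up spacing around punctuation and print one nonsense sentence.
sentence = " ".join(expand("SENTENCE")).replace(" ,", ",").replace(" .", ".")
print(sentence)
# e.g. "We focus our efforts on disproving that spreadsheets
# can be made empathic, compact."
```

Every run yields a new sentence that is grammatical but meaningless, which is exactly why a reviewer who skims only for fluent-sounding prose can be fooled.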
Another hoax paper, authored by John Bohannon, which claimed to examine the anti-cancer properties of a particular lichen, was accepted by 157 journals (98 rejected it). This occurred despite the fact that “any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately.” Bohannon aimed to expose how certain open-access journals prioritize making money from publication fees, often at the expense of publishing reliable research.
