Misconduct in the biomedical sciences part 1: issues and causes

By Dr Gerald H. Lushington and Rathnam Chaguturu

Science pursues truth. Real advances in biomedical sciences improve our quality of life and save lives, but the path to these advances is cluttered with the distraction of irreproducible results – an affliction that has reached epidemic proportions and is now a global crisis spanning developed and developing countries alike, with much of the problem arising from scientific misconduct.

Since scientific progress builds incrementally upon a solid knowledge foundation laid by our forebears, improprieties not only damage trust in scientific endeavours, but also hinder the ability of honest scientists to produce legitimate research.

This article, the first of a two-part series on research malfeasance in the biomedical arena, characterises some of the key forms of deliberate misconduct, including falsification of results, peer-review rigging, data over-interpretation and improper or willfully selective sampling practices.

The discussion also explores problematic grey areas such as choice of inappropriate analytical protocols, the failure to retract erroneous findings and the use of textual plagiarism for manuscript assembly.

Figure 1: Misconduct in the biomedical sciences quotes

These headlines are sensational, but in fact they represent only the most drastic manifestations of a systematic malady – the norm rather than the exception – that has recently pervaded many technical disciplines, including the biomedical sciences.

There is a growing chorus in scientific media and the public press decrying this increasingly commonplace behaviour. The Hewlett Foundation has funded scientists at Rutgers and Stanford Universities to assess the prevalence of problematic practices that produce questionable scientific findings, alongside the feasibility and potential impact of proposed intervention strategies to improve ‘scientific integrity’ (https://hewlett.org/facts-are-stubborn-things-except-when-theyre-not/). President Obama’s Council of Advisors on Science and Technology has begun to address the ‘irreproducibility problem’ as one of its priorities.

Scientific misconduct, as defined by the British mathematician Charles Babbage in the 1800s, is a deliberate effort to ‘cook’ or ‘trim’ data to support a stated hypothesis (1). Hypothesis-driven research, a cornerstone of knowledge discovery, is the foundation for our current system of awarding grants to conduct innovative research and for publishing in high-impact journals. It is ironic and unfortunate, therefore, that the greatest malaise in technological research areas today is largely due to abuse of the basic scientific method – research that is driven not by a desire to determine objectively whether a hypothesis is valid, but rather by the will to make hypotheses appear true.

The extent to which this attitude has taken root in our community is alarming. Consider, for example, that in the journal Science (widely reputed to publish the highest-impact scientific papers, ranked just ahead of the Proceedings of the National Academy of Sciences), two-thirds of recent retractions have been due to demonstrable or suspected fraud. In other words, most of the important scientific conclusions that are later proven false arise not from honest error, but from an intentional effort to demonstrate findings that are simply untrue.

Daniel Koshland, a former editor of Science, maintained as recently as 1987 that 99% of published reports are ‘accurate and truthful’. Several years later, the National Academy of Sciences reiterated that ‘fraud in science seems to be quite low’ (1). A quarter century later, we lament that only 10% of published science articles are reproducible (2).

Let us consider the public funds that have been wasted on producing such irreproducible findings, especially in this era of shrinking research budgets. We do not live in a society where the royal treasury sustains handpicked research, as was the case with the support of Galileo Galilei by the Grand Duke of Tuscany, or of Isaac Newton by Prince George of Denmark (1).

Present-day biomedical research is increasingly supported through competitive grants provided by governmental agencies (eg, the National Institutes of Health (NIH) in the United States, the Medical Research Council in the United Kingdom, etc), non-profit organisations (eg, the Susan Komen Foundation, the ALS Foundation) or the pharmaceutical industry (eg, Bayer’s Grants4Leads, Sanofi’s Early2Candidate).

The NIH’s budget for FY2015 is $30.362 billion, derived from public, taxpayer money. Francis Collins and Lawrence Tabak, leaders at the NIH, which funds most of the biomedical research in the United States, acknowledge the prevalence of ‘data irreproducibility’ but argue that there is no evidence to suggest that irreproducibility is caused by scientific misconduct (3). This is contrary to the findings of Fang et al, who found misconduct to be the sole or primary reason for 67.4% of the 2,047 retracted papers they examined in PubMed (4).

Retractions are on the rise – a 10-fold increase over the past 10 years – and the irreproducibility phenomenon has reached epidemic proportions. To use a medical analogy, today’s epidemic, when not addressed appropriately, becomes tomorrow’s pandemic, with catastrophic consequences. The oft-quoted inability of Amgen, Bayer and the ALS Therapy Development Institute to reproduce seminal biomedical studies published in high-impact journals is a testament to this malady (2,5). This is a dire warning to the biomedical science community that one simply cannot take published findings at face value – even those reported in high-impact journals such as Cell, Nature or Science.

Science has long been considered ‘self-evaluative as well as self-correcting’, since it perennially lays a foundation for future studies (1,3). Self-correction is a slow, arduous process, however, and the greater the volume of scientific conjecture that requires correction, the worse the outlook for long-term progress. Warren Buffett’s oft-praised maxim that ‘the market corrects itself’ has been applied to scientific research as well, but one recalls that vast sectors of the global marketplace recently collapsed in the ‘housing bubble’.

As scientists, we are supposed to be guardians of the discipline, not pillagers. One of the authors (RC) has cautioned the readers in his annual editorial to the journal he edits regarding the gravity of the situation and how scientific misconduct can upset our cherished apple cart bearing ‘Science, Peace and Prosperity’ (6). If we are to heed and propagate such warnings, it is helpful to have a full understanding of the underlying issues, how they arise and how they may be detected and prevented.

Many viewpoints discussed in this commentary are drawn from the personal experiences of the authors who have led core facilities involved in drug discovery research in academia, managed discovery research projects in industry, facilitated collaborative projects with academia and contract research organisations and led panel discussions on the subject at domestic and international conferences. As grant application reviewers, journal editors and manuscript referees, we are uniquely positioned to shed light on this global biomedical crisis. The perspectives are also quite personal as illustrated in the section dealing with plagiarism.

Definitions of scientific misconduct

According to the Office of Research Integrity (ORI), United States Department of Health and Human Services (DHHS), Research Misconduct means fabrication, falsification or plagiarism in proposing, performing, or reviewing research, or in reporting research results (http://ori.hhs.gov/definition-misconduct):

a) Fabrication: making up data or results and recording or reporting them, including writing of non-existent research (ghost writing).

b) Falsification: manipulating research materials, equipment or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.

c) Plagiarism: appropriation of another person’s ideas, processes, results or words without giving appropriate credit.

According to ORI, research misconduct does not include honest error or differences of opinion.

Lack of reproducibility in biomedical research

Scientists are the navigators in the ocean of knowledge guiding our passengers, the public-at-large. The associated technological progress is an apple cart bearing wellness, peace and prosperity, all achieved incrementally by scientists standing on the shoulders of their forebears, replicating and extending prior observations toward greater achievements. As such, we require the following from scientific work:

-Reproducibility, as an essential principle of the scientific process, and

-Acceptance that a discovery is valid only if any scientist in any lab can conduct the same experiment under the same conditions and obtain the same results.

Without reproducibility, we cannot distinguish scientific fact from error or chance, and the enterprise can falter as it attempts to build tomorrow’s breakthroughs on today’s errors. Consider the human genome as an analogy: within such a huge volume of information (3.2 billion base pairs), a single mutation may set a normal cell on a cancerous trajectory.

Similarly, the insertion of erroneous precepts into scientific canon upsets this Darwinian type of natural progression and evolution of ideas. Given the societal importance of efficient and accurate biomedical progress in areas such as genomic interrogation for identifying new drug targets and the associated modulators, any artificial introduction and propagation of error into a field can produce a huge and potentially devastating cost.

In the modern era, the earliest case of scientific misconduct can be attributed to William Summerlin of the Sloan-Kettering Institute in New York, who faked transplantation experiments in white mice by blackening patches of their skin with a pen (7). This was shocking at the time of its revelation in 1974, but many more cases of faked and fabricated research have since been reported in both the scientific and popular press.

From 1970-96, there were about 235 retracted biomedical publications, with 40% of these retractions attributed to some type of misconduct, whereas a staggering five-fold increase (1,164 retractions) has since occurred from 1997 through to 2009; 55% of these retractions were due to misconduct (8). Some retractions are voluntary, but most are forced by editors, publishers or external adjudicators of misconduct complaints.
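To put those counts in perspective, the back-of-the-envelope calculation below converts the cited totals into approximate per-year rates. It is a minimal sketch using only the figures quoted above from reference 8; the derived annual rates are our own illustration, not numbers from the original study.

# Back-of-the-envelope check of the retraction figures cited above (reference 8).
# Counts and percentages come from the text; the per-year rates are derived here
# purely for illustration.
early_years = 1996 - 1970 + 1            # 27 years (1970-96 inclusive)
early_retractions = 235
late_years = 2009 - 1997 + 1             # 13 years (1997-2009 inclusive)
late_retractions = 1164

early_rate = early_retractions / early_years     # ~8.7 retractions per year
late_rate = late_retractions / late_years        # ~89.5 retractions per year

print(f"~{early_rate:.1f}/yr vs ~{late_rate:.1f}/yr "
      f"(a roughly {late_rate / early_rate:.0f}-fold jump in the annual rate)")
print(f"Misconduct-linked retractions: ~{0.40 * early_retractions:.0f} vs ~{0.55 * late_retractions:.0f}")

Viewed this way, the five-fold increase in raw totals corresponds to roughly a ten-fold increase in the annual retraction rate, consistent with the trend noted earlier.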

Recognition of the growing misconduct problem by scientists and institutions was tepid until recently, with the 1990s being a decade of response. With the rise of the digital age (the internet and social media), reporting of scientific misconduct now occurs practically in ‘real time’. Retractions are typically much slower to emerge, however. They may take years to unfold, and it is frequently unclear just what aspect of a reported study has been retracted.

The cause of advancing clean, meticulous science is now being championed effectively through dedicated media such as the watchdog blog Retraction Watch (RW), run by Ivan Oransky and Adam Marcus (www.retractionwatch.com). The blog monitors research misconduct and serves as a pulse on scientific integrity and transparency. The award of a $400,000 grant in December 2014 from the prestigious MacArthur Foundation acknowledges the relevance of the RW mission, furthering its aim of providing a ‘comprehensive and freely available database of retractions’.

RW’s relentless pursuit of truth has provided convenient access to detailed information (2,000+ posts, 15 million page views since August 2010) regarding withdrawn papers and, most importantly, the reasons for retraction. RW has shone light on the surprising fact that retractions are not as rare as one would have thought, and many are due not to honest errors as the community once believed but more frequently to deliberate efforts to artificially validate initial hypotheses, regardless of what the real data may suggest.

Some top retractions that occurred in 2014 are listed:

1. Haruko Obokata et al (2014). Bidirectional developmental potential in reprogrammed cells with acquired pluripotency. Nature 505, 676–680; doi:10.1038/nature12969.

2. Haruko Obokata et al (2014). Stimulus-triggered fate conversion of somatic cells into pluripotency. Nature 505, 641–647; doi:10.1038/nature12968. Readers detected significant problems with the research, and Haruko Obokata, who led the studies, was ultimately unable to replicate the findings. Nature has defended its decision to publish the articles, saying editors could not have detected the errors. Science, however, had earlier rejected one of the manuscripts for being too flawed to publish. One of Obokata’s colleagues, Yoshiki Sasai, committed suicide following the scandal.

3. Han, D et al (2012). Retraction: eliciting broadly neutralising antibodies against HIV-1 that target gp41 MPER. Retrovirology 2012, 9(Suppl 2):P362. Retracted: Retrovirology. 2014 Feb 6; 11(1): 16.

A former researcher at Iowa State University (ISU) spiked rabbit blood samples with human blood to make it look as though his HIV vaccine was working. Dong-Pyou Han is now facing criminal charges and ISU was forced to pay back nearly $500,000 of his salary – both rare events.

4. Kramer, A et al (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci USA 111:8788-8790.

Just two weeks after publishing a paper on the psychology of Facebook users, PNAS issued an Expression of Concern about the work. The article’s many critics complained that the study violated ethical norms because it did not alert participants that they were taking part in a research project. As The Atlantic put it: “Even the Editor of Facebook’s mood study thought it was creepy.”

5. Cyril Labbé of Joseph Fourier University in Grenoble uncovered more than 120 bogus papers, produced by the random text generator SCIgen, that had been published between 2008 and 2013. Sixteen appeared in publications by Springer and more than 100 were published by the Institute of Electrical and Electronics Engineers (IEEE).

6. Kajstura, J et al (2012). Cardiomyogenesis in the Aging and Failing Human Heart. Circulation 126(15): 1869–1881 (retraction in Circulation, 2014 April 22; 129(16): e466). Legal counter-attack: the study, led by a group of Harvard heart specialists headed by the leading cardiologist Piero Anversa, was retracted over concerns about corrupted data, and the university is investigating. Anversa, along with a colleague, filed a suit against the institution on the grounds that the inquiry was damaging to his career prospects.

Excerpted with permission from The Scientist, December 2014. http://www.the-scientist.com/?articles.view/articleNo/41777/title/The-Top-10-Retractions-of-2014/

The true cost of misconduct

Scientific misconduct does not merely produce conceptual failures that hinder our ability to understand the world around us. The economic impact is every bit as real as that arising from financial impropriety; it affects us broadly by degrading technical productivity and effective innovation, and it can produce the same sort of direct, unvarnished fiscal pain encountered with more traditional forms of criminal fraud. One disheartening example affected a research group in Toronto that suspected the antibody used in a pancreatic cancer biomarker study was faulty. After two years, $500,000 spent and thousands of patient samples consumed in follow-up characterisation, the group determined that the antibody had been misassigned and actually recognised the ovarian cancer marker CA125 (9). As this aptly illustrates, pursuing false research findings almost certainly:

-Reduces research efficiency.
-Increases the cost of discovery.
-Wastes public and private capital.
-Diminishes the rate of practical discovery.
-Delays drug development and delivery.
-Affects life expectancy and human health.

The fiscal impact on pharmaceutical productivity is staggering as well. For example, in 2011, Bayer halted nearly two-thirds of its drug-target validation projects based on ‘existing published data’ because of the inability of its researchers to substantiate key findings reported in the literature (10). In 2012, Pfizer incurred a $750 million loss after failing to reproduce results published in The Lancet relating to the use of Dimebon for the treatment of Alzheimer’s disease.

Similarly, researchers at Amgen were able to reproduce only 11 of 53 landmark cancer studies, with millions of dollars estimated wasted (11). In March 2014, Steve Perrin, Chief Scientific Officer of the ALS Therapy Development Institute, reported the institute’s failure to validate published reports that roughly 100 potential drugs slowed the fatal neurodegenerative disease amyotrophic lateral sclerosis. This can be a devastating blow to patients who may have been optimistically awaiting new treatment options from such studies.

Why does misconduct happen?

As a research community, we are realising that misconduct happens more often than anyone might wish. As we shall discuss in detail in the Summer 2015 issue of Drug Discovery World, mitigation policies and procedures are emerging, with the US HHS/ORI leading the way, but despite this progress we do not yet fully understand the origins and causes of misconduct, or the most effective preventative measures. One of the authors (RC) has led panel discussions on the issue at various conferences in recent years, but the overall causality remains complex and nebulous at best. Here are some consensus factors:

Personal character flaw: The catalyst for indulging in misconduct, scientific or otherwise, may be an inherent flaw in one’s character. Contributing traits include carelessness, narcissism and an ‘I am too smart to get caught’ presumption.

Personal life: Family and personal difficulties, however extenuating, must be kept uncoupled from professional conduct; crossing the line may cause irreparable damage. Claims of situational factors are almost always a screen to deflect responsibility for one’s actions.

Funding environment: Grant funding and tenure, the primary factors for career advancement, often dictate that academics embrace the ‘publish or perish’ dogma. This in turn drives recognition-hungry scientists to fabricate results to amplify publishing successes, gain recognition from peers, garner grant funding and accelerate career advancement.

Hypothesis-driven research: This is the hallmark of academic research, and is key to innovation, but graduate students and postdocs are frequently under enormous pressure to generate data that support preconceived hypotheses, and may suffer repercussions if such data are not produced. To satisfy this pressure, researchers may handpick a single corroborative data set rather than build a rationale based on broader consensus. Given the complexity of biomedical research (see the RNAi screening example below), it is proving increasingly difficult to duplicate or reproduce even valid studies, thus making it easier to propagate fraudulent work in support of a hypothesis.

Inadequate training: Any scientist planning or executing a research project must have rigorous training with regards to experimental reagents, study design, validation and statistical analysis. Without this basis, resulting research studies may produce flawed conclusions that may not be caught in subsequent review.

An example of endemic continuing failure: RNAi screening

The most important measures of methodological utility comprise a dyad of 1) reproducibility, and 2) applicability. In other words, in order for a researcher to use a given method with confidence, he or she must fully understand the approximate range of errors that this method may produce, and must be able to map his or her application into a regime within which the method is expected to produce an accuracy and precision level commensurate with the project goals.

Science is rife with projects that fail to align goals with appropriate methods. Such failure does not necessarily rise to the level of misconduct, because many instances of inadequate protocol validation arise out of basic naiveté rather than from a deliberate intent to mislead. It is fair, however, to refer to such shortcomings as malpractice, since the conclusions produced by questionable protocols stand a reduced chance of reproducibility, are likely to consume resources in a wasteful manner, and may well lead numerous other researchers down fruitless paths of research pursuit.

A prodigious example of malpractice entailing dubious investment of money, time and energy revolves around massive efforts aimed at identifying specific drug targets using RNA interference (RNAi) screening. As the subject of the 2006 Nobel Prize in Physiology or Medicine, RNAi promised a rapid, cost-effective experimental technique for gauging the physiological and cellular effects of gene-specific knockdown experiments – a tactic of tremendous prospective value in ascertaining which genes to focus on for a diverse array of phenotypic applications, including most cancers.

Although RNAi screening prompted tremendous interest in basic science and pharmaceutical research in the 15 years that followed its introduction in 1998, we must now admit, most disturbingly, that the technology has not yielded a single new lucrative drug target, and the capability is falling into an increasing state of disuse. This disillusionment has arisen largely from a level of experimental unreliability far in excess of original expectations.

Experimental outcomes apparently demonstrate substantial dependence on imprecise transcription rates. Rather than producing truly gene-specific outcomes as advertised, the technique abounds with off-target effects that quite plausibly disrupt intracellular equilibria through mechanisms largely unrelated to the specific target (12,13). As a consequence of these mechanistically complex underpinnings, major variations in experimental findings are encountered, even when care is taken to apply comparable analytical procedures.

Hit lists produced in one study frequently exhibit little or no overlap with the results of similar analyses, and supposedly validated target candidates produced in one campaign are often not revalidated under independent scrutiny (13). The combination of various sources of experimental error has unfortunately produced numerous studies whose scientific conclusions rest on data not tangibly more valuable than what might be elicited from a random number generator, corresponding to a tremendous waste of taxpayer and corporate capital.
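To make the notion of ‘little or no overlap’ concrete, the sketch below scores the agreement between two hit lists with a simple Jaccard index (shared hits divided by total distinct hits). The gene names and the choice of metric are purely illustrative assumptions on our part, not data or methodology from the cited studies.

# Hypothetical illustration: quantify overlap between two RNAi screening hit lists.
def jaccard_index(hits_a, hits_b):
    """Return shared hits / total distinct hits (0 = disjoint, 1 = identical)."""
    a, b = set(hits_a), set(hits_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

screen_1 = {"GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"}   # placeholder names
screen_2 = {"GENE_C", "GENE_F", "GENE_G", "GENE_H", "GENE_I"}

print(f"Jaccard overlap: {jaccard_index(screen_1, screen_2):.2f}")   # prints 0.11

Independent campaigns against the same biology would ideally score close to 1; the comparisons cited above frequently land far nearer to 0.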

Given the past intense and enthusiastic scientific interest in the RNAi technology, many studies producing ambitious but fallacious pronouncements of novel prospective drug targets can be forgiven any underlying ethical lapse arising from over-optimistic faith in the technology. Unfortunately, many erroneous papers from the debacle still clutter the research annals.

Outright retractions of irreproducible results, such as that voluntarily undertaken by Lipardi and Paterson (14), are rare; conversely, the imprecision inherent in the methodology has produced an unfortunate temptation to fraudulently claim validation of indefensibly erroneous findings (12). Ultimately, even the best scientists may be duped by the false promises of exciting new technologies, but perhaps it is a measure of greatness to admit such mistakes and help to ensure that errors are not propagated to the next generation of research studies.

Scientific misconduct in clinical trials

Misconduct in basic science can cause damage to patients when translational research starts with the wrong premise. When misconduct and fraud involve clinical research, it compounds quickly, increases costs and can contribute to safety risks and even death. This is not a trivial issue – at least 2% of medical researchers admitted to fabricating, falsifying or modifying data at least once and 17% of surveyed clinical trial authors knew of research fabrications over a 10-year period (15).

Some famous, and chilling, examples of clinical research misconduct include:

-Robert Fiddes and his Southern California Research Institute falsified more than 90 studies on human reactions to drugs intended for treatment of numerous conditions, including hypertension, diabetes, asthma and vaginitis (Los Angeles Times, September 16, 1998).

-Werner Bezwoda falsified data from his South African clinical trials of high-dose chemotherapy and bone marrow transplantation for lymph node-positive and metastatic breast cancer (New York Times, March 11, 2000).

-Ann Kirkman-Campbell fabricated clinical trial data on the antimicrobial drug telithromycin for the treatment of outpatient upper respiratory infections and pneumonia (Wall Street Journal, May 1, 2006).

Peer review rigging

One form of misconduct that has been on the rise in recent years entails rigging the peer review process. Peer reviewers are the primary gatekeepers that researchers must satisfy in order to move their scientific findings from their own laboratories to the public eye. The most active arena for peer review is manuscript publication (more than one million publications each year), although comparable scrutiny can be found in the selection of presentations for many scientific conferences, and elaborate peer review panels are often formed to adjudicate the allocation of research grants and contracts.

Any form of competition pitting intelligent competitors against one another in a struggle for limited resources or a finite modicum of valuable exposure will tempt people to game the system. Manuscript evaluation has proven to be a ripe target for such gamesmanship. The prodigious rate of publication growth (the number of peer-reviewed papers published each year has been doubling roughly every nine years (16), equivalent to an increase of 8-9% per year) has taxed the ability of journals to provide quality and timely peer scrutiny to all of the submissions they receive.
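The equivalence between those two growth figures is easy to check: an output that doubles every nine years grows each year by a factor of 2^(1/9). A one-line sketch of the arithmetic:

# Convert a nine-year doubling time into an equivalent annual growth rate.
doubling_time_years = 9
annual_growth = 2 ** (1 / doubling_time_years) - 1
print(f"{annual_growth:.1%} per year")   # prints 8.0%, consistent with the 8-9% figure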

Some of the highest impact journals triage many (and in some cases a large majority of) publications based on editorial instinct without tangible peer review, while other journals aspire to review all submissions but find it difficult to sustain levels of quality control attained in previous decades. The most challenging task is the identification of suitable and willing peers to partake in the review process.

Many journals still rigorously draw their manuscript referees from a pool of authorities publishing in a closely related discipline, but numerous others rely substantially on suggestions provided by the manuscript authors themselves. The right of authors to recommend specific people whom they feel are well qualified to evaluate their work, and the courtesy of permitting an investigator to disqualify potentially biased rivals from judgement, seem sensible, but this unfortunately provides a mechanism through which substantial abuse can be injected into the process.

According to Ferguson et al, recent egregious examples of referee-stacking have taken place whereby authors recommend sending their manuscripts not merely to personal friends, but even to fictitious email addresses that the authors personally administer (17). There are agencies that offer ghost writing of manuscripts and provide fabricated email contacts and peer reviews. Email addresses that appear fictitious, and peer reviews that are returned very quickly, are among the red flags that an editorial team may use to identify sham reviews.
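As a purely hypothetical illustration of how such red flags might be screened for, the sketch below checks an author-suggested reviewer against a few simple heuristics. The field names, domain list and turnaround threshold are our own assumptions for illustration; they do not represent any journal's actual policy or software.

# Hypothetical reviewer red-flag screen (illustrative heuristics only).
SUSPECT_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}
MIN_PLAUSIBLE_TURNAROUND_DAYS = 2

def review_red_flags(reviewer_email, stated_affiliation_domain, turnaround_days):
    flags = []
    domain = reviewer_email.rsplit("@", 1)[-1].lower()
    if domain in SUSPECT_DOMAINS:
        flags.append("free webmail address rather than an institutional one")
    if domain != stated_affiliation_domain.lower():
        flags.append("email domain does not match the stated affiliation")
    if turnaround_days < MIN_PLAUSIBLE_TURNAROUND_DAYS:
        flags.append("review returned implausibly quickly")
    return flags

print(review_red_flags("dr.expert@gmail.com", "example-university.edu", 1))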

The plagiarism debate

Plagiarism is like the proverbial street lamp beneath which scientific integrity is searching for lost house keys. Even though integrity may have lost its keys far off down some dark alley, it spends much of its time searching beneath the light because that is where it is easiest to detect anything. Indeed, plagiarism is the most facile form of scientific misconduct to spot since one need only scan prior literature to uncover it, rather than having to rigorously repeat and methodologically scrutinise experimental protocols for evidence.
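As a rough sketch of why textual overlap is so easy to spot, the snippet below flags shared five-word ‘shingles’ between a new passage and an earlier one. The example sentences, the shingle length and the scoring are illustrative assumptions; commercial similarity checkers are of course far more sophisticated.

# Minimal text-overlap check using word n-gram 'shingles' (illustrative only).
def shingles(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

prior = "the assay was performed in triplicate using a standard luciferase reporter protocol"
draft = ("samples were prepared and the assay was performed in triplicate "
         "using a standard luciferase reporter protocol as described")

draft_shingles = shingles(draft)
shared = len(draft_shingles & shingles(prior)) / len(draft_shingles)
print(f"{shared:.0%} of the draft's 5-word shingles also occur in the earlier text")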

But do the detection, prosecution and prevention of the writing of unattributed ideas truly merit a major fraction of our collective focus? Perhaps a worthwhile conjugate question is to wonder what the state of scientific misconduct would be if we, in a triumph of diligence, were to completely eliminate all plagiarism?

Among the many forms of wrongdoing discussed in this paper, plagiarism is somewhat unusual in that it usually does not promulgate flawed or questionable research. In this sense it does not undermine science through the insinuation of indefensible conclusions, because plagiarised material is no more likely (in general it may be less likely) to promote misguided or irreproducible technical conclusions than research that has been formulated and communicated in a legitimate manner (18).

It is nonetheless a dishonest practice that, in its worst form, can harm the fabric of scientific achievement. In particular, the direct, unattributed repetition of other researchers’ novel technical findings and insightful new scientific interpretations can damage the trust and collegial spirit that is a cornerstone of collaborative, incremental advances in knowledge.

To understand the potential harm of plagiarism, we need to recognise that any scientific advance achieved today is a pyramidal apex made possible by foundational contributions by many prior findings – a set of conceptual dependencies described for drug discovery by Lushington et al (19). The implication of this is that by publishing the results of any single research study, a researcher opens the door for colleagues (and indeed for even competitors) to build on these advances in ways that may produce increasingly important or lucrative outcomes.

The scenario, whereby published release of findings is little more than a sacrificial invitation for this work to be superseded, is made more palatable by assurances of commensurate credit as a cornerstone of the burgeoning pyramid. Severe plagiarism (in which dishonest researchers claim others’ actual data and discoveries as their own) can erode these assurances, potentially stifling communal brainstorming by encouraging silo-minded protectionism in which researchers hold back their most intriguing results.

Fortunately, the electronic distribution of most research studies in recent years (as well as the gradual retroactive digitisation of earlier work) has provided a powerful basis for efforts by the publishing industry to reduce the incidence of plagiarism. Admittedly, the sheer volume of newly published material still hinders the exhaustive validation of novelty for every emerging publication, but key steps are being taken in this direction.

Ultimately, however, a large portion of the plagiarism present in science and technology, although dishonest, distasteful and not to be encouraged, produces a far less deleterious impact. Specifically, for plagiarism of sentences and paragraphs describing well-established knowledge (eg, descriptions of scientific protocols or introductory background context), it is difficult to convincingly demonstrate a degradation of core knowledge or of prospects for technical advancement.

Ironically, one might even argue that some degree of plagiarism might aid in scientific advances, such as for papers crafted by researchers with poor English language skills wherein textual plagiarism may enhance the transmission of useful information relative to poorly composed original text. As a practical illustration of this consideration, journal editors are quick to point out that the most obvious flag for textual plagiarism is the presence of lucid, well-crafted phrases in an otherwise poorly written paper.

Research supervisors similarly report suspicions when suddenly encountering fluid prose by a researcher whose writing is known to be weak. Ultimately, an increasingly large preponderance of scientific achievement in the modern world is now being accomplished by an emerging generation of researchers who, in growing measure, are not native English speakers and are receiving their postgraduate and postdoctoral training in non-English-speaking communities.

In many cases, scientific writing is the only medium for which these researchers have any motivation to acquire English language skills. Not surprisingly, many of them either produce poor manuscripts or succumb to the temptation to borrow profusely from explicative material in previously published texts. The conditions under which modern technical scholarship is carried out thus tend to culminate in three undesirable alternatives:

1. A substantial portion of potentially relevant technical developments will be communicated to the community in poor text of limited intelligibility.
2. An increasing portion of prospective developments will not be communicated to the broader community at all due to poor English language skills on the part of the aspiring researchers.
3. Researchers with poor language skills will borrow text extensively from prior sources, sometimes (for reasons that will be elaborated shortly) without due attribution.

This unpleasant choice raises the question of how our scientific community got itself into this mess, and what it can do to mitigate the problem in practice. In particular, how can global science foster and effectively utilise the energy and insight manifest in non-English cultures in a manner that productively unveils such contributions to the broader global community?

Fundamentally, the technical community has developed a substantial disconnect between a growing portion of global innovators and their ability to conveniently relate their innovations within a broadly accessible medium. This disconnect is exacerbated by highly standardised expectations for the content and format of papers – the assumption that a technical paper will contain Abstract, Introduction, Methods, Results, Discussion and Conclusions components of fairly predictable format.

Many new manuscripts submitted today report incremental research achievements that share substantial similarities (in protocol, application or both) with prior studies. This fact largely guarantees that previously existing papers will be available to potentially serve as templates for sections such as the Introduction and Methodology. Linguistically challenged researchers are thus presented with a tempting source from which to borrow text.

While it can be morally defensible to copy (with attribution) sizable paragraphs describing introductory or methodological material, many manuscript reviewers and journal editors view this as the antithesis of originality. That assumption may be apt for some studies, but it is easy to overlook key exceptions in which researchers use well-established protocols for unique goals or on unprecedented systems.

The non-originality stigma may induce authors to compound the original weakness of borrowing text by prompting them to omit citations that would expose the extent of the borrowing. In other words, in years prior to automated text-matching software, some researchers may have formed a perverse (but sometimes justified) impression that a poorly referenced manuscript that contains textual plagiarism will appear to be more novel (and hence more publishable) than a well-referenced text.

The availability of applicable templates can hardly be viewed as the most fundamental cause of textual plagiarism, however. The more serious issue is unquestionably a technological research environment where information is exchanged primarily in English – a popular but syntactically very challenging language. The rise of English as a global language of technology can be ascribed largely to the politics and economics of a postwar world which saw the once-mighty German cultural and economic influence practically shattered, the French and Italian societies largely humbled, and Russia marginalised behind the iron curtain, while the net pre-eminence of American and British spheres collectively peaked even beyond the heyday of the British Empire.

This sociopolitical imbalance arose at very nearly the precise moment when the economic values of science and technology had become more abundantly obvious than at any prior historical juncture. Among the manifold implications of this imbalance was an implicit understanding that in order for any technological advancement to receive proper credit (and hence be financially lucrative to the authors), it must be communicated to the world in English, irrespective of the cultural background of the originating researchers. As this standard manifested itself across subsequent decades since World War 2, it began to take an obvious toll on the linguistic quality of scientific communication.

It is difficult to definitively trace the origin of the semi-humorous adage that ‘the language of science is broken English’, but one may at least plausibly point to a thought-provoking article by Uwe Justus Wenzel (20), who describes to a lay audience the implications of a quasi-concerted, decades-long effort by the scientific community to implement a standard linguistic medium for information exchange. Until recently, the value of a common language for international technical discourse was obvious – knowledge and ideas could be expected to spread much more freely if all significant concepts were communicated within a single syntactical framework. Upon scrutinising the impact of this policy, readers should ask themselves:

-Where do the ultimate effects reside on a scale between enhanced discourse on one hand, and some mélange of hypocrisy, stultification and sociopolitical discrimination on the other?

-Is this linguistic standard likely to remain important, as emerging automated translation utilities are further refined?

-Finally, if a new world order is emerging within which the imposition of a 20th century model of a scientific Lingua Anglica becomes obsolete, what form might the new mechanism of communication take, and how might this improve upon the old?

Impact of a common dialect: facilitation or obstacle?

Has a common linguistic standard for technological information exchange enhanced our global dialogue by placing within a single medium (ie, the English scientific print literature) most of the disparate pieces of knowledge necessary to solve challenging multidisciplinary problems? Conversely, has this stipulation suppressed large volumes of potentially important findings that have been intuited by researchers without the linguistic wherewithal to adhere to the strict grammatical and formulative standards required to communicate in high-impact journals?

As editors of a mid-tier scientific journal, we (GL, RC) have frequently found ourselves mediating between frustrated referees and our own sense of the underlying technical merit of many poorly written papers. Our own scientific experience, however, clearly shows that even the barest consideration by high-impact periodicals requires exemplary presentation skills. While the more tolerant publishing environment of our own journal ultimately permits us the latitude to coax and cajole potentially interesting papers toward the reasonable level of intelligibility required for scientific relevance and publication, we are fully aware that such patience is not viable for journals with the highest submission rates.

What this means is that poorly-written papers may ultimately get published, but only in low- to mid-impact periodicals, and thus may never get a chance at the broadest audience exposure, no matter how exceptionally novel or important the underlying science may be.

To what lengths might a researcher go to counteract this disadvantage? Editors and journal referees frequently exhort authors to enlist the assistance of a native English speaker for manuscript preparation and revision, but this is not an easy proposition for all international scientists – many might not have had opportunities to network broadly enough to acquire linguistically-skilled contacts to call upon, and may not have the funds to pay for professional writing services.

Furthermore, even for those researchers who can take advantage of such an option, there are associated ethical pitfalls currently under debate. National Institutes of Health Director Francis Collins has decreed that publishing text whose authorship is attributed to people who did not write the material, and/or failing to credit with authorship those people who did contribute to the writing, can be considered a form of plagiarism (21). The same principle extends equally to grant writing.

For a primer on authorship and its associated nuances, the reader is well advised to consult: http://en.wikipedia.org/wiki/Academic_authorship. Although ghost writing is extensively utilised by administrators in most academic disciplines, as well as in corporate and government settings, it is difficult to find any practical measure by which such writing is any more ethically sound than plagiarism.

One might argue that the unattributed writers are complicit in the fraudulent misattribution and generally receive remuneration for their complicity, but comparing an administrator who pays to use material that a surrogate author has anonymously prepared with a researcher from the developing world who cannot afford to have a manuscript professionally proofread amounts as much to a difference in financial status as to a distinction in moral mandate. Neither the affluent administrator nor the poor researcher exemplifies scholastic honesty, but in one case money is being used as a substitute for morality.

Self-plagiarism

Like plagiarism, the issue of duplicate publication (often called self-plagiarism) falls into the streetlight category of scientific misconduct. Specifically, it embodies only marginal harm to the composite scientific edifice, but instances of uncited repetition of material from prior papers frequently receive a substantial amount of attention due to the ease with which they may be detected.

From an ethical perspective, one may argue that self-plagiarism is an offence of lesser magnitude than plagiarism, since the latter involves the borrowing of copyrighted material from other authors and other publishers, whereas the former involves misappropriation only from copyright-holding publishers.

Just as with plagiarism, the act of text-recycling is unlikely to exacerbate the critical problem of endemic irreproducibility among scientific findings and may, in a small way, reduce the problem since a recycled description of scientific protocol may be easier to interpret and reimplement than a deliberately obfuscated version.

The incontrovertible fact remains, however, that many instances of self-plagiarism constitute violations of both the copyright ownership of the originating paper and the publishing agreement associated with any subsequent overlapping paper. In this sense, the moral breakdown lies in the fact that self-plagiarism is a form of copyright theft, and thus represents a legal liability within the arena of civil law.

Just as the practical damage associated with duplicate publication is generally less than that of the other unethical practices profiled in this paper, so are the tangible rewards relatively modest. It has been argued that self-plagiarism is a mechanism for scientists to artificially boost their publication records. This is hard to confirm or deny without having surveyed known plagiarisers, but such a tactic seems dubious considering that most metrics of publication success dwell less on the number of publications and more on the quality of the publishing medium.

Multiple duplicate (or near-duplicate) publications in low-impact journals thus rarely endow much prestige, while attempts to publish highly similar material in higher-impact periodicals are destined for failure due to the ease with which manuscript originality may be checked. More likely scenarios involve the mistake of being, as the famous serial self-plagiariser Jonah Lehrer admitted, ‘incredibly lazy’ (22).

Such laziness is tempting in cases where there is some plausible rationale for desiring republication, but where the crafting of completely novel text seems unnecessary. One specific example may include the desire to introduce a given research study to an audience that does not regularly read the journal of original publication (for example, authors of a paper that bridged the biomedical and information science disciplines might be tempted to publish comparable papers in journals corresponding to these two very distinct fields).

A second plausible case might involve a study in which an interesting incremental achievement is obtained shortly after an original manuscript has been published – the authors may wish to have this new finding entered into the public record and may feel compelled to provide suitable background and methodological treatment, as may have been offered in the original paper.

Interim thoughts

Scientific misconduct exacts a serious fiscal and social cost in ways that are just as serious as accounting malfeasance and other forms of fraud. But just as numerous people who commit white collar crime or become embroiled in political campaign finance improprieties are intelligent individuals who can (with some justification) claim to be idealistic and well-intentioned, many researchers who stumble into scientific misconduct do not wish to damage the fabric of our discipline.

Many are driven more by desperation than greed or ego. It is fair to say that every person alive with a PhD is guilty of straying at least slightly into a morally grey area from time to time; this is in our very nature as imperfect human beings. Just as is the case with abuse of performance-enhancing drugs in professional sports, it truly may be easiest to view the growing problem as a systemic malaise that feeds off a fundamental research environment that does as much to encourage malfeasance as it does to discourage it.

We have reached the point where it is critical to begin scrutinising the systemic problem and seeking ways to tip the environmental balance to truly favour good practice. In the second segment of this two-part discussion of scientific misconduct (to be published in the Summer 2015 edition of Drug Discovery World) (23) we aim to provide some insight that may help practicing scientists identify prospective cases of flawed research. Based on the suggestions of various authorities, we will further raise some prospective community strategies that might help to reduce the incidence of highly damaging research practices. DDW

 

Acknowledgements

We thank our many colleagues who have influenced us in innumerable ways over the years; we have been the beneficiaries of their collective wisdom. We are particularly indebted to Hakim Djaballah (CEO, Institut Pasteur-Korea), David Vaux (Walter and Eliza Hall Institute of Medical Research), Deborah Collyar (Patient Advocates in Research), Elizabeth Iorns (Science Exchange) and Ivan Oransky (Retraction Watch) for shaping our ideas, but the viewpoints expressed here are our own.

This article originally featured in the DDW Spring 2015 Issue

 

Dr Gerald H. Lushington, an avid collaborator, focuses primarily on applying simulation, visualisation and data analysis techniques to help extract physiological insight from structural biology data, and on relating the physical attributes of small bioactive molecules (drugs, metabolites, toxins) to their physiological effects. Most of his 150+ publications have involved work with experimental molecular and biomedical scientists, covering diverse pharmaceutical and biotechnology applications. His technical expertise includes QSAR, quantum and classical simulations, statistical modelling and machine learning. Key interests include applying simulation and artificial intelligence techniques to extract insight from biomedical data. After productive academic service, Lushington now runs a consultancy practice that supports R&D and commercialisation efforts for clients in academia, government and the pharmaceutical and biotechnology industries. Dr Lushington serves as Informatics Section Editor of the journal Combinatorial Chemistry & High Throughput Screening, Bioinformatics Editor for WebMedCentral, and is on the editorial boards of Current Bioactive Compounds, Current Enzymology and the Journal of Clinical Bioinformatics.

Rathnam Chaguturu is the Founder & CEO of iDDPartners, a non-profit think-tank focused on pharmaceutical innovation. He has more than 35 years of experience in academia and industry, managing new lead discovery projects and forging collaborative partnerships with academia, disease foundations, non-profits and government agencies. He is the Founding President of the International Chemical Biology Society, a Founding Member of the Society for Biomolecular Sciences and Editor-in-Chief of the journal Combinatorial Chemistry and High Throughput Screening. He serves on several editorial and scientific advisory boards, has been the recipient of several awards and is a sought-after speaker at major national and international conferences, passionately discussing the threat of scientific misconduct in biomedical sciences and advocating the virtues of collaborative partnerships in addressing the pharmaceutical innovation crisis. ‘Collaborative Innovation in Drug Discovery: Strategies for Public and Private Partnerships’, edited by Rathnam, has just been published by Wiley.

References
1 Miller, DJ and Hersen, M (eds) (1992). Research Fraud in the Behavioral and Biomedical Sciences. John Wiley & Sons.

2 Pritsker, M (2012). http://www.jove.com/blog/2012/05/03/studies-show-only-10-of-published-science-articles-are-reproducible-what-is-happening.

3 Collins, FS and Tabak, LA (2014). Nature 505 (7485): 612-613.

4 Fang, FC et al (2012). Proc. Natl. Acad. Sci. USA 109 (42): 17028-17033.

5 Dolgin, E (2014). Nature Rev. Drug Discovery 13: 875-876.

6 Chaguturu, R (2014). Combinatorial Chemistry & High Throughput Screening 17 (1): 1.

7 Smith, R (2006). J R Soc Med. 99(5): 232–237.

8 Budd, JM et al (2011). In Association of College and Research Libraries National Conference Proceedings, pp 390-395 (Philadelphia, PA).

9 Translational reproducibility, Sigma-Aldrich White paper, December 2014. http://investor.sigmaaldrich.com/releasedetail.cfm?ReleaseID=887220.

10 Mullard, A (2011). Nature Reviews Drug Discovery 10: 643-644.

11 Begley, CG and Ellis, LM (2012). Nature 483 (7391), 531-533.

12 Bhinder, B and Djaballah, H (2013). Drug Disc. World. 14: 31-41, and references therein.

13 Bhinder, B and Djaballah, H (2014). Drug Disc. World. Summer 15: 9-19.

14 Lipardi, C et al (2011). Proc Natl Acad Sci USA 108 (36), 15010.

15 Gupta, A (2013). Perspect Clin Res 4(2): 144-147.

16 van Noorden, R (2014). Nature News Blog, May 7, 2014. http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html.

17 Ferguson, C et al (2014). Nature 515: 480-482.

18 Beebe, DC (2013). The Scientist. 36177, June 25, 2013, http://www.thescientist.com/?articles.view/articleNo/36177/title/Opinion–Unethical-Ethics-Monitoring.

19 Lushington, GH et al (2013). Combinatorial Chemistry & High Throughput Screening. 16, 764-776.

20 Wenzel, UJ (2008). Neue Zürcher Zeitung. http://www.nzz.ch/nachrichten/kultur/literatur_und_kunst/the-language-of-science-is-broken-english-1.747112.

21 Kaiser, J (2011). Science Magazine, July 13, 2011. http://news.sciencemag.org/education/2011/07/penn-psychiatrist-accuses-five-colleagues-plagiarism.

22 O’Rourke, S (2012). PopMatters, July 11, 2012. http://www.popmatters.com/post/160599-jonah-lehrer-and-the-debate-over-self-plagiarism/.

23 Lushington, GH and Chaguturu, R (2015). Drug Discovery World (summer edition), in press. 
