Translational Chemical Biology: Gap assessment for advancing drug discovery, development and precision medicine
As yesterday’s lead molecule enters today’s clinical trial, the standard operating script seems to call for product developers and clinicians to push away the originating basic scientists, lest their lofty impractical ideals disrupt a delicate balance of strategic compromise.
Indeed, many scientists may not understand the competing push and pull of efficacy versus toxicity and quality versus production cost. They do, however, know the molecule in question intimately.
Scientists' insight into the subtle vagaries of the underlying chemical biology can prove invaluable in clearing unexpected hurdles. The sophisticated techniques they applied to advance the molecule this far have not lost their analytical magic.
They may very well save your bacon!
Translational research should never be considered a relay race hand-off, but rather a march together, hand-in-hand, toward a shared victory. In order to foster pragmatic collaboration, we endeavour to examine key technologies that not only enable early stage discoveries, but can also bridge the late-stage pitfalls that may threaten promising drug candidates on the path to market.
Translational chemical biology sits at the nexus of chemistry and biology. While the application of chemical biology principles helps in designing a bona fide chemical probe, it is translational chemical biology that translates basic research into meaningful clinical applications. This trajectory from chemical biology to its applied domain is interdisciplinary, and one that academia has yet to master. The successful outcome of any translational chemical biology effort is guided by several key factors: the impact of precision medicine, the reality of the ‘valley of death’ and what natural compounds and their clinical use can teach us.
These issues are critical to improving drug development and lowering the barriers to translation into clinical utility and commercial value. Target-based drug discovery, a solely bottom-up rather than top-down approach, limits effective translation, particularly when viewed as the progression from laboratory to clinic. Observational therapeutics, guided by the principles of reverse pharmacology – the bedrock of traditional medicine but lately forgotten by the pharmaceutical industry – holds the key to a holistic, bidirectional bench-to-bedside exchange.
Current drug approvals
While biotechnological advances, genomics, high-throughput screening and combinatorial and asymmetric syntheses have long promised new vistas in drug discovery, the pharmaceutical industry is facing a serious innovation deficit.
The costs of drug development have escalated, the number of drug withdrawals has risen to historic highs and the transition from bench to bedside remains long and arduous. It is instructive to evaluate the ‘valley of death’ conundrum from the perspective of the drug approval process and drug product output. In 2016, the US FDA approved 22 New Molecular Entities (NMEs) and new Biologic License Applications (BLAs) from a total of 41 actual filings.
This was the worst year since 2010, when only 21 NME and BLA drugs were approved from 23 filings. This is captured and summarised for the past 10 years in Figure 1, which shows the total number of NME/BLA filings and approvals as a function of each calendar year (1).
New drugs approved in 2016 included eight fast track designations (drugs that address unmet medical need); seven were deemed breakthrough treatments (clinical evidence indicated substantial improvement over other therapies); 15 received a priority review (six months instead of the standard 10 months); and six were accorded accelerated approval (approval based on a surrogate endpoint for serious or life-threatening disease indications).
The FDA has introduced these designations over the recent past in order to expedite the drug discovery and development process and hasten superior drug performance candidates to market in order to meet urgent, unmet medical need.
Janet Woodcock, Director of the Center for Drug Evaluation and Research at the FDA, noted that “… a lower than average number of novel drugs” were approved in 2016 (1). However, she claimed that the quality, impact and unique contributions to improved patient medical care were extremely high for these approved NME and BLA therapeutics. For example, eight of the 22 approved drugs were first-in-class, representing new drug-target mechanisms of action.
In addition, nine approved drugs received rare or orphan disease status. This designation applies to disease indications that afflict fewer than 200,000 individual patients, and is significant because such patients often have no available therapeutic options. Woodcock argued that “more important than the quantity of novel drugs approved in 2016 is their medical value and the important new roles they are serving to advance patient care” (1).
The ‘quality versus quantity’ argument has some validity given recent trends in marketing new drugs that possess only modest, incremental performance gains over other drugs in the same class (eg statins). However, given the vast number of disease indications with minimal or ineffective treatment options, it is imperative to increase the quantity as well as the quality and effectiveness of new drugs coming to market.
In that regard it is revealing to review the past 70 years of new drug activity at the FDA (2). There has been a slow but notable trend towards an increased number of new drugs introduced into the US market per decade. For example, 192 new drugs were introduced in the 1940s (1940-49), followed by 200 in the 1950s, 151 in the 1960s, 170 in the 1970s, 217 in the 1980s, 311 in the 1990s, 235 in the 2000s and a projected 321 for 2010-2019 (2).
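The per-decade figures above can be turned into a rough quantitative trend. The short sketch below (pure Python, using only the counts quoted from reference 2; the 2010-2019 figure is the projection mentioned in the text) fits an ordinary least-squares line to the decade counts:

```python
# Per-decade counts of new drugs introduced to the US market (from ref. 2 in the text).
decade_counts = {
    1940: 192, 1950: 200, 1960: 151, 1970: 170,
    1980: 217, 1990: 311, 2000: 235, 2010: 321,  # 2010s figure is a projection
}

def trend_slope(data):
    """Ordinary least-squares slope of counts versus decade index."""
    xs = list(range(len(data)))
    ys = [data[d] for d in sorted(data)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

slope = trend_slope(decade_counts)
print(f"Average gain: {slope:.1f} extra new drugs per decade")
```

Despite the dip in the 1960s-70s and again in the 2000s, the fitted slope is positive (roughly 19 additional introductions per decade), consistent with the "slow but notable" upward trend described above.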
Based on these approved drug numbers and trends, proponents of the pharmaceutical sector argue that the drug discovery and development process is functional and that innovation and creativity are alive and well. Some also argue that the ‘valley of death’ is overhyped and overemphasised.
Translational chemical biology – the knowledge deficit
Since the launch of the National Institutes of Health (NIH) Roadmap Initiative, NIH has continued to invest millions of dollars in harnessing the repertoire of druggable therapeutic targets that emerged from the human genome-sequencing project. This effort made academic scientists appreciate, if nothing else, the ingenuity of their brethren in the pharmaceutical industry in discovering and developing drugs.
NIH’s investment produced exciting assay technologies, brought high-throughput screening into academic corridors, screened thousands of compounds, generated millions of data points, made data analytics a mainstream language, made inroads into optimising screening hits and strengthened chemical biology as a discipline for designing chemical probes.
While this list of accomplishments is seemingly impressive, the chemical probes discovered through this effort (almost) never saw the light of day, especially from a clinical perspective. The inability to secure intellectual property rights and the dearth of medicinal chemistry expertise proved too steep a hill for the academic scientist to climb. A lack of translational medicine experience meant that a therapeutic target hit rarely made the leap from in vitro active compound to candidate with clinical potential.
This is translational chemical biology, and it is still in its infancy in academia as well as in many budding biotechnology companies. NIH has made a strategic effort to address this knowledge deficit by instituting the Small Business Innovation Research (https://sbir.nih.gov/) grant program to bring pharmaceutical experience to entrepreneurs and to support research and development with strong commercialisation potential. Even with research dollars available, it is incumbent upon the chemical biologist to learn and implement the principles of translational medicine discussed in the ensuing sections.
Deploy modern tools to augment scientists’ repertoire
The mismatch between academic efforts and those of industry is clearly of some concern. The pharmaceutical sector may argue that the drug discovery and development process is already efficiently aligned. We would suggest that the efforts of the academic community may need to be realigned if they are to contribute to and improve the drug discovery and development process. In part this can be done by more careful consideration of a useful translational chemical systems biology approach.
Modern-day combinatorial chemistry (CombiChem) and high-throughput synthesis and screening have been amenable only to producing libraries of flat, planar molecules with a remarkable lack of chirality and scaffold diversity. Creative biology has been retrofitted to mundane, uninspired and uninspiring chemistry.
Moreover, plant-derived molecules with embedded chirality, or those synthesised with high stereospecificity, have been neglected for many years due to the proclaimed difficulty of securing intellectual property and perceived problems in obtaining large quantities of material. Critics suggest: “We have become high throughput in technology, yet have remained low throughput in thinking.” Post-marketing failures of blockbuster drugs have become major concerns for industry, leading to a significant shift from single- to multi-targeted drugs and affording greater respect to traditional knowledge.
The typical reductionist approach of modern science is being revisited against the background of systems biology and the holistic approaches of traditional practices. If translational chemical biology is to be useful for drug discovery, it is imperative that newer tools and techniques are brought to the forefront. Sophisticated organic synthesis must still be the order of the day.
Professor Robert Burns famously stated: “The art and science of chemistry is one that is more easily exemplified and epitomised than it is articulated and summarised” (3). We discuss herein some approaches and technologies that can pay rich dividends in discovering and progressing a bioactive from mind to marketplace.
Reverse pharmacology and observational therapeutics
Reverse pharmacology is the development of drug candidates by validation of clinically-documented experiential data on plant-derived natural products (4,5).
In this approach, scientists and clinicians integrate bedside-documented experiential clinical data from plants used in Indian Traditional Medicine (Ayurveda) and from Chinese Traditional Medicine. Reverse pharmacology involves isolation, structure elucidation and biological mechanism of action studies on active compounds and assessment of novelty.
One can expand on the already-known activities of natural products derived from reverse pharmacology by selective functionalisation, by generation and derivatisation of privileged structures and by conducting selective pre-clinical development studies. As the starting points are compounds with established safety and efficacy, one can then establish early proof of concept and differentiation in humans.
Scientifically-validated and technologically-standardised botanical products may be explored on a fast track using innovative approaches such as reverse pharmacology and systems biology, which are based on traditional medicine knowledge (Figure 2).
Many modern drugs have their origin in ethnopharmacology and traditional medicine. The Indian Ayurvedic and traditional Chinese systems are living ‘great traditions’. Ayurvedic knowledge and experiential databases can provide new functional leads that reduce time, money and toxicity – the three main hurdles in drug development. Extensive information from Ayurvedic medicine research, clinical experiences, observations and available data on actual use in patients can serve as a starting point.
Principles of systems biology, where holistic yet rational analysis addresses multiple therapeutic requirements, are coalesced with chemistry. Since the safety of the materials is already established from traditional-use track records, pharmaceutical development, safety validation and pharmacodynamic studies can be undertaken in parallel with controlled clinical studies. Thus, drug discovery based on Ayurveda follows a reverse pharmacology path from clinic to laboratory (Figure 3).
Renaissance of plant-based natural products
Natural products have been historically regarded as rich sources of novel molecules of broad utility and architectural complexity (4,5). The natural world is a source for inspiration for chemists and biologists. The exquisite and varied architecture of natural products provides a rich palette for discovery. Natural products can be considered ‘pre-validated by nature’, having been optimised for interaction with biological macromolecules through evolutionary selection processes.
Embedded in these bioactive natural products are a number of diverse, chiral functional groups that are potential sites for protein binding. This diverse source of novel, active agents provides leads and scaffolds for elaboration into desperately-needed efficacious drugs for a multitude of disease indications. The chemical entities present in a natural product may or may not be useful as drugs, just like the hits from a target-based high-throughput screen.
However, their activity can be enhanced either by making modifications or by combining them with other molecules of similar pharmacological activity to make the resulting hybrid molecules more drug-like. Scientists must aim to reconfigure products into chemical hybrid ‘molecular Legos’ and to screen the deck of diverse compounds against targets. Investigations should centre on:
i) Expansion of the already-known activities of natural products derived from reverse pharmacology by selective functionalisation, generation and derivatisation of privileged structures and conducting selective studies on pre-clinical development.
ii) Designed organic synthesis of high recognition libraries focused on specific biological targets.
iii) Synthesis of building blocks/scaffolds/high value intermediates.
Expanding chemical space
The ability to easily access new chemical space, however defined, is a major challenge for discovery chemists. Although advances in diversity-oriented synthesis have made great strides towards expanding the accessibility of synthetic compounds that have high levels of diversity (including stereochemical, shape and bond connectivity), there is room for further improvement.
The diversity expansion inherent in transformations that mimic the metabolism of small molecules and natural products can, in conjunction with modern synthetic organic chemistry, provide a new direction in the pursuit of natural product-derived substances. Simple and well-controlled transformations (oxidation, halogenation and alkylation) can afford compounds that have unique structures that possess a wide range of physicochemical and biological properties (Figure 4).
Indeed, such biomimetic transformations are the source of remarkable levels of diversity in natural products and investigational agents. Enthusiasm for natural products discovery has sometimes been dampened by difficult syntheses. A significant disadvantage of natural products, with the exception of those derived from fermentation, is the daunting organic synthesis and medicinal chemistry effort required for commercialisation or further functionalisation.
In many cases, the natural product has not been available in sufficient quantities for the various biological assays, limiting its exploration. Plant-based natural products are at times available in greater quantity.
Recently, the concept of hybrid molecules has gained currency. Couplings of diverse molecules such as artemisinin and chloroquine with vitamin K3 have generated new molecules with effectiveness in oncology. The therapeutic properties of several natural and synthetic products can be modulated by such hybridisation, possibly yielding alternative therapeutic indications; hybrids can be structural or functional.
Hybrid molecules can also be made with different molecular scaffolds in the same structure, as in the synthesis of differently substituted 2-[2-amino-6-(2-chloroquinolin-3-yl)-5,6-dihydropyrimidin-4-yl]phenols (see diagram).
In recent years flow chemistry has become a powerful tool for the rapid optimisation and refinement of organic reactions and processes (6,7). While most systems are employed on a micro scale for proof-of-principle studies, modifications allow flow also to be used for bulk production of candidate compounds of interest. The three most common variants of flow chemistry are micro flow, meso flow and tethered-reagent flow.
Flow offers many advantages over conventional batch chemistry: pressurisation allows reactions to be performed at temperatures exceeding the boiling points of solvents, multi-step reactions can be performed in continuous sequence and gaseous reagents can be incorporated into reaction sequences with limited effort. The processes are also amenable to rapid automation, improving throughput in the discovery phase.
Additionally, improved impurity profiles – one of the benefits of the controlled mixing and heat-transfer characteristics the method offers – are typically observed, as in the case of microwave-mediated synthesis. Because the flow approach allows rapid assembly of diverse libraries of building blocks for synthesis, one can envision its use in optimising a number of intra- and intermolecular transformations, including bioconjugations, PEGylations and glycosylation of NP building blocks.
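As a back-of-the-envelope illustration of why flow lends itself to scale-up, the sketch below estimates residence time and continuous daily output for a plug-flow setup. The reactor volume, flow rate, feed concentration and conversion are hypothetical illustrative numbers, not values from any system discussed here:

```python
def residence_time_min(reactor_volume_ml: float, flow_rate_ml_min: float) -> float:
    """Residence time (min) of a plug-flow reactor: tau = V / Q."""
    return reactor_volume_ml / flow_rate_ml_min

def daily_output_g(flow_rate_ml_min: float, conc_mol_l: float,
                   mw_g_mol: float, conversion: float) -> float:
    """Mass of product (g) collected over 24 h of continuous operation."""
    litres_per_day = flow_rate_ml_min * 60 * 24 / 1000
    return litres_per_day * conc_mol_l * mw_g_mol * conversion

# Illustrative numbers only: a 10 mL coil at 0.5 mL/min with a 0.2 M feed,
# product MW 250 g/mol, 90% conversion.
tau = residence_time_min(10, 0.5)         # 20 min residence time
out = daily_output_g(0.5, 0.2, 250, 0.9)  # about 32 g per day
```

The point of the arithmetic is that a small reactor running continuously accumulates multigram quantities per day, which is why the same micro-scale chemistry can be repurposed for bulk production of candidate compounds.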
Bound reagents: The use of bound reagents and scavengers is an emerging technology frequently employed to streamline reaction set-up, work-up and purification. A key advantage of bound reagents is the ease of converting batch reactions into flow format. This has been demonstrated by Ley et al in a number of examples, such as the synthesis of oxomaritidine (6). There is a huge variety of commercially-available and made-for-purpose resins that can be employed, including oxidants, reductants, bases, acids and bound or encapsulated catalysts.
It is tempting to regard molecular modelling and chemical informatics as predominantly ‘basic science’ tools – abstract engines for hypothesis generation rather than rigorous techniques for addressing the advanced, fine-tuning requirements of translational drug optimisation, wherein promising leads are tweaked for efficacy, selectivity, toxicity and deliverability.
In truth, however, this preconceived notion of computational analysis has little pragmatic or logical basis. When both the methodological strengths of computational chemistry and the specific requirements of translational science are considered, it is actually easier to argue that modelling and informatics have significantly greater aptitude for the later stages of the therapeutic optimisation process. Indeed, rigorous computational exploitation of the data on hand may provide a critical helping hand in navigating the ‘valley of death’.
To understand the strengths of computational chemistry at various stages in the drug development process, let us consider how such tools are used. Early in the basic science stage of a chemical biology investigation, modelling and informatics methods may (with a level of confidence similar to a primary screening assay) provide glimpses of factors that could play roles in a given physiological process and the modulation thereof.
Such early phase investigations, however, tend to be idea-rich yet data-poor. Without a solid basis of reliable analytical measurements or empirical understanding, such speculative glimpses bear little predictive value. The veracity of computer-generated observations may be difficult to assess in such an environment, or if the analysis elucidates real phenomena, it may not be immediately clear that these features have a strong bearing on the key issues of medical interest.
By contrast, in the later, translational stages of an investigation, one is almost always better armed with extensive data and empirical experience. Much of this insight can be directly leveraged to intelligently select computational protocols that can be shown to reproduce what is already known, and to foster effective method and parameter calibration that optimise the methodological fidelity for de novo calculations.
As a practical example, the early-stage application of molecular modelling and chemical informatics analyses (eg docking and QSAR, respectively) toward hit elaboration will typically be constrained by either the small size, or the great heterogeneity, of bioactivity data. Later on, after many laboratory studies have compiled homogeneous data sets spanning a good number (one can expect at least several dozen) of close structural analogs of the lead compound, this basis of knowledge enables far more rigorous computational perception of subtle SAR trends. This perception can foster accurate refinements in deliverability, efficacy, target specificity, toxicity or a host of other important concerns.
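A minimal sketch of the kind of late-stage SAR perception described above: once a homogeneous set of analog measurements exists, even a one-descriptor linear QSAR can be fitted and used to predict an untested analog. All descriptor and potency values below are hypothetical:

```python
# Minimal single-descriptor QSAR sketch. The logP and pIC50 values below are
# hypothetical placeholders standing in for a homogeneous set of lead analogs.
def fit_linear(xs, ys):
    """Ordinary least squares: returns (slope, intercept) of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

logp  = [1.2, 1.8, 2.1, 2.6, 3.0, 3.5]   # hypothetical descriptor values
pic50 = [5.1, 5.6, 5.9, 6.3, 6.6, 7.0]   # hypothetical measured potencies

slope, intercept = fit_linear(logp, pic50)
predicted = slope * 2.4 + intercept      # predicted pIC50 for an untested analog
```

In practice a real QSAR would use many descriptors, cross-validation and an applicability-domain check; the sketch only shows why a homogeneous analog series is the raw material that makes such trend-fitting possible.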
When the existing data contains biomolecular structural insight (eg, receptor crystal structures or NMR), one can employ data trends to train detailed Comparative Binding Energy (COMBINE) models that prioritise specific ligand-receptor interactions according to favourable or problematic contributions to the observed SAR (Figure 5) (8).
If the data do not contain receptor structure information, computational methods can still prove very helpful. Machine learning methods are exceptionally well suited for parsing the properties of specific hits or lead-analogs and contrasting those features against therapeutic efficacy, selectivity or toxicity assessments (9). Such analyses can pinpoint the precise combination of desirable molecular attributes that favour some candidate ligands over others, and can suggest opportunities for further refinement, as well as plausible tradeoffs, such as those between efficacy and drug deliverability, or between pathogen cytotoxicity versus host toxicity.
An impressive array of lead optimisation issues may be plausibly addressed through protocols that rely at least in part on molecular modelling or chemical informatics. Such issues include (among other things) many aspects of oral availability (10), volume of distribution (11), blood brain barrier permeability (12), pharmacodynamics (13) and pharmacokinetics (14). Computational techniques are also relied upon for the design of non-drug delivery formulation components suitable for stabilising a therapeutic (15) or facilitating its integration into pill form (16).
Great reliance may also be placed on modellers and informaticians in Phase II and Phase III clinical filings for the assessment of trace-component toxicity. Specifically, while there is no substitute for careful and incontrovertible in vivo toxicology testing of the primary therapeutic ingredient of any drug formulation, late-stage scrutiny of medicinal formulations places significant emphasis on prospective complications that may arise from trace contaminants present in the formulation as synthetic impurities, or that may emerge (post-administration) due to metabolic conversions.
Since trace impurities and metabolites may number in the dozens or hundreds of distinguishable chemical entities, few regulatory boards require that they all be subject to exhaustive toxicology testing. However, some quantifiable risk assessment is nonetheless required, and a commonly sought level of diligence entails computational assessment of any toxic or mutagenic indications, or other potential adverse consequences.
Requested evaluations rarely require unanimous declaration of non-toxicity across a broad range of predictive measures. Instead, approval is generally granted if the petitioner can provide assurances from at least two unrelated computational techniques that the threshold of adverse effect for any given trace entity is below the quantity presented in a standard dose, or in the metabolic process thereof. A significant number of free internet-accessible computational resources are available for such determinations, including packages based on either QSAR model training (17) or on empirical toxic effect rules (18).
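The rule-based branch of such assessments can be caricatured in a few lines. The toy screen below flags ‘structural alerts’ by naive substring matching on SMILES strings; real rule-based tools use proper SMARTS substructure matching and curated alert libraries, so this is an illustration of the workflow only:

```python
# Toy "structural alert" screen. Real systems perform SMARTS substructure
# matching against curated alert sets; plain SMILES substring tests are a
# deliberate simplification for illustration, and the alert list is illustrative.
TOXIC_ALERTS = {
    "[N+](=O)[O-]": "aromatic nitro (mutagenicity alert)",
    "N=N":          "azo linkage (mutagenicity alert)",
    "C(=O)Cl":      "acid chloride (reactive intermediate)",
}

def flag_alerts(smiles: str):
    """Return descriptions of every alert whose pattern occurs in the SMILES."""
    return [desc for pat, desc in TOXIC_ALERTS.items() if pat in smiles]

hits = flag_alerts("c1ccc(cc1)[N+](=O)[O-]")  # nitrobenzene: one alert fires
clean = flag_alerts("CCO")                     # ethanol: no alerts
```

A filing would pair such a rule-based verdict with a second, unrelated method (eg a trained QSAR model) so that the two independent techniques required by reviewers agree on each trace entity.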
One final area where chemical informatics is blending with sophisticated new bioinformatics methods to enhance the translational aspects of pharmaceutical practice is precision medicine, where new software tools (19) are emerging that exploit personalised genome profiling, in combination with extensive chemical biological data, to predict toxicity and efficacy at the level of the individual patient.
Process chemistry/route selection
Knowledge of process chemistry is pivotal to activities along the path of a drug from concept to commercialisation. Medicinal chemistry synthesis routes are often low yielding and fraught with capricious reactions, tedious chromatography and scale-up problems. Research by the authors and their collaborators and teams led to the development of novel, cost-efficacious, scalable processes for chiral molecules, both natural and synthetic.
The processes were efficient: development of new methodologies resulted in more than 100-fold reductions in cost and dramatic increases in reaction efficiency and yield. Prominent among these drugs were Tiagabine (Gabitril), Ziprasidone, Celiprolol, CMI-977 and CMI-392. These concepts of ‘process chemistry-driven drug discovery’ have been back-integrated into drug discovery methodologies and have led to the discovery of new chemical entities for clinical development. A unique combination of creativity with sophisticated technology, strategic collaboration, global commerce and refined logistics led to the discovery of new drugs.
The essential complement of expertise in chemistry and infrastructure facilities, allied with strong institutional linkages built up with various universities and the pharmaceutical industry, ensured successful upscaling, seamless technology transfer and implementation of new technologies.
Drug metabolism: the case of cytochrome P-450
Polypharmacy, involving co-administration of several drugs, is common among the elderly and chronically ill. It is a risk factor for adverse drug reactions and drug-drug interactions (DDIs). One plausible DDI occurs when one drug interferes with another, causing irreversible changes to the formation of metabolites from one or both. Such suppression or attenuation of metabolism can cause variances in toxicity and efficacy.
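The logic of such a DDI screen is simple to sketch. In the toy example below the drug names and their CYP substrate/inhibitor assignments are hypothetical placeholders; a real implementation would draw on curated tables such as the FDA's published lists of CYP substrates and inhibitors:

```python
# Toy CYP-mediated DDI flagger. All drug names and enzyme assignments below
# are hypothetical placeholders, not clinical data.
SUBSTRATE_OF = {"drug_A": {"CYP3A4"}, "drug_B": {"CYP2D6"}, "drug_C": {"CYP3A4"}}
INHIBITS     = {"drug_B": {"CYP3A4"}}

def potential_ddis(regimen):
    """Flag (perpetrator, victim, enzyme) triples where one drug in the
    regimen inhibits a CYP enzyme that metabolises another drug in it."""
    flags = []
    for perpetrator in regimen:
        for victim in regimen:
            if perpetrator == victim:
                continue
            shared = INHIBITS.get(perpetrator, set()) & SUBSTRATE_OF.get(victim, set())
            for enzyme in sorted(shared):
                flags.append((perpetrator, victim, enzyme))
    return flags

ddis = potential_ddis(["drug_A", "drug_B", "drug_C"])
# drug_B inhibits CYP3A4, which metabolises both drug_A and drug_C
```

Even this crude lookup captures the core asymmetry of a metabolic DDI: the perpetrator's pharmacology is unchanged while the victim's exposure rises because its clearance pathway is blocked.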
In humans and other animals, most drugs are metabolised in the liver. Many drug metabolites are formed by oxidative mechanisms catalysed primarily by heme- and cytochrome-containing enzymes. Most biological oxidations involve primary catalysis provided by the cytochrome P-450 mono-oxygenase enzymes. All heme proteins that are activated by hydrogen peroxide, including catalases, peroxidases and ligninases, function via a two-electron oxidation of the ferric resting state to an oxoferryl porphyrin cation radical.
While this oxidation state has yet to be characterised for the cytochromes P-450, most of their reactions, and those of their biomimetic analogues, can be accounted for by oxygen transfer to a variety of substrates to give characteristic reactions such as hydroxylation, epoxidation and heteroatom oxidation. Other products resulting from hydroxyl and hydroperoxyl radicals have also been detected. These in vivo metabolic processes contribute in substantial measure to the efficacy, side-effects and toxicity of a pharmaceutical entity.
These factors are responsible for the success or failure of a clinical candidate. Metabolic processes of drugs are always the subject of intense scrutiny in pharmaceutical companies. Pharmacologists have traditionally been involved with isolation and identification of the metabolites of a drug. It is imperative to conduct such studies early in the drug development process. Toxicological and pharmacological studies on the metabolites form a crucial segment in the identification of a clinical candidate (20-26).
Several problems are currently associated with the use of biological systems in studying drug metabolism:
i) In vitro studies produce very small quantities of the product. Primary metabolites are often hydrophilic and difficult to isolate. Most of the reactive metabolites and unstable intermediates are reacted away by the biological nucleophiles.
ii) Animal studies necessitate the sacrifice of animals and are extremely expensive to conduct. Liver slice preparations are of variable potency; it is difficult to quantitate the precise stoichiometry of the oxidant.
iii) Pharmacologists do not know, in advance, the structure of the metabolites they should seek.
iv) Many of the metabolites are not amenable to organic synthesis by conventional routes.
Metalloporphyrins as chemical mimics of cytochrome P-450 systems
It is useful to study metalloporphyrins as mimics of in vivo metabolic processes. Efficient, sterically-protected and electronically-activated organic biomimetic catalysts have now been developed. Early synthetic metalloporphyrins were found to be oxidatively labile: few catalytic turnovers were seen due to rapid destruction of the porphyrin macrocycle. Introduction of halogens on to the aryl groups (of meso-tetraarylporphyrins) and on to the beta-pyrrolic positions of the porphyrins increases the turnover of catalytic reactions by decreasing the rate of porphyrin destruction.
The combined electronegativities of the halogen substituents are transmitted to the metal atom making the oxo-complexes more electron deficient and more effective catalysts. Conventional catalysts for oxidation are prone to oxidative dimerisation with low catalytic turnover numbers (around 5 to 10).
These catalysts function with catalytic efficiency reflecting turnover numbers exceeding 100,000. Structural scaffolds incorporate the aza macrocycle into the primary structure. Further structural variants are effected by modulation of the size of macrocycle (number of rings), the substitution pattern at the periphery of the aromatic rings, the substitution on the internal hydrogens, the metal ions, the choice of axial ligands, the inorganic counterions and polymers used for immobilisation (20-26).
Diversified analogues through automated oxidation chemistry
Development and implementation of automated oxidation chemistry to obtain diversified analogues, both as new chemical entities in their own right, and also as substrates for further synthetic conversions, hold significant promise. The oxidation procedure is extremely facile as compared to biochemical and enzymatic processes. This approach affords an efficient method for the systematic preparation and identification of the entire spectrum of metabolites from a chosen drug.
One could take an existing library and create another library of new compounds easily and efficiently. The cost is relatively low, since the starting library has already been made, and the approach provides new compounds that are more polar and water-soluble and that contain handles for further derivatisation.
The pharmaceutical industry assumes that drugs developed and marketed against a disease are effective across the entire patient population. This is not the case: a particular disease-targeted drug often affords clinical efficacy in only a fraction of that population. The need for translational chemical biology is greater than ever to ensure that drugs bring therapeutic efficacy to all patients afflicted with a disease; where they do not, what differentiates one patient from another, and what must be done to the drug molecule to make it effective in unresponsive patients?
The current modus operandi of modern medicine is based on determining an individual’s symptoms, an associated diagnosis, and the subsequent response to a specific treatment as compared with a statistically similar and relevant patient population dataset or database. The current healthcare system tends to be reactive, providing treatment post-onset of disease, with limited attempts at prevention and prediction.
This reliance on comparative analysis of an individual against a defined population tends to neglect human individuality, complexity and variability. It also fails to recognise the systems-level interconnectedness of human molecular biology, biochemistry, metabolism and physiology in the form of systems biology (27).
The lack of progress in the effective diagnosis and treatment of disease, a growing awareness of the complexity and variability of individual patients, and our limited understanding of the causal mechanisms of onset and progression of most 21st century diseases have led to a growing demand for paradigm change. This clamour has driven the emergent growth of ‘P-Medicine’, which includes Personalised and Precision Medicine, and a call for a more effective drug discovery and development process that includes Translational Chemical Biology (28).
Precision versus Personalised Medicine
The US National Research Council Report in 2011 (29) attempted to define and differentiate Precision Medicine from Personalised Medicine: Precision Medicine is the tailoring of medical treatment to the individual characteristics of each patient. It does not literally mean the creation of drugs or medical devices that are unique to a patient, but rather the ability to classify individuals into subpopulations that differ in their susceptibility to a particular disease, in the biology and/or prognosis of those diseases they may develop, or in their response to a specific treatment.
Preventive or therapeutic interventions can then be concentrated on those who will benefit, sparing expense and side-effects for those who will not. Although the term ‘personalised medicine’ is also used to convey this meaning, that term is sometimes misinterpreted as implying that unique treatments can be designed for each individual. For this reason, the Committee thinks that the term ‘precision medicine’ is preferable to ‘personalised medicine’ to convey the meaning intended in this report.
A Precision Medicine approach utilises individuals and defined (sub)population-based cohorts that share a common knowledge network of disease (or health) taxonomy. In addition, it requires an integrated molecular and clinical profile of both the individual and the sub-population-based cohort. Zhang has described Precision Medicine, predicated on the individual patient/sub-population model, as “one-step-up” from the individual patient focus of Personalised Medicine (30).
Implicit in his statement is that Personalised Medicine is based on the single individual “N-of-1” model whereas Precision Medicine uses a “1-in-N” model predicated on widely-used biostatistical data analysis and ‘big data’ analytical tools. Precision Medicine can best be described as an amalgam of Personalised Medicine and modern conventional medicine.
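The ‘1-in-N’ idea can be caricatured in a few lines of Python: each patient is assigned to the molecularly nearest subpopulation, and response rates are then estimated per subgroup rather than for the population as a whole. All profiles, centroids and responses below are invented for illustration.

```python
from statistics import mean

# Hypothetical patients: (molecular profile as a feature vector, responded?).
patients = [
    ((0.9, 0.1), True), ((0.8, 0.2), True), ((0.7, 0.3), False),
    ((0.1, 0.9), False), ((0.2, 0.8), False), ((0.3, 0.7), True),
]

# Two illustrative subpopulation centroids (eg from prior clustering).
centroids = {"subtype-A": (0.8, 0.2), "subtype-B": (0.2, 0.8)}

def nearest(profile):
    """Assign a profile to the closest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda c: sum((p - q) ** 2
                                 for p, q in zip(profile, centroids[c])))

def response_by_subtype(patients):
    """Response rate per subpopulation rather than for the whole cohort."""
    rates = {}
    for name in centroids:
        group = [resp for prof, resp in patients if nearest(prof) == name]
        rates[name] = mean(group)  # True/False average -> response rate
    return rates

rates = response_by_subtype(patients)
```

The point of the sketch is only that a drug looking mediocre “on average” can show clearly different response rates once patients are stratified; real analyses would use validated clustering and far richer molecular profiles.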
Alzheimer’s Disease – a case study in precision medicine
It was recently suggested that the “...goal of Precision Medicine is to deliver optimally targeted and timed interventions tailored to an individual’s molecular drivers of disease” (28). As we have already suggested, the utilisation of Translational Chemical Biology and systems biology tools in this type of endeavour is clearly synergistic and can and will facilitate such efforts.
However, what specifically can Precision Medicine provide for patients in the form of safe and effective therapeutic treatments?
We will use the minefield of drug discovery and development efforts in Alzheimer’s Disease (AD) as a focal point for consideration. The fate of AD drug candidates in the drug development process over the past 20 years stands at a remarkable 99.8% failure rate (31). In addition, the cost of those failures to the pharmaceutical industry has been in excess of $15 billion for amyloid-beta trials alone during that same time period.
Currently there are 23 Phase I, 47 Phase II and 18 Phase III AD candidate drugs in US clinical trials. However, evaluation of the individual clinical trial drug candidates reveals that the vast majority are focused on individual druggable targets in the amyloid-beta or tau pathways. Such narrowly-targeted approaches have already proved somewhat futile, and have been compounded by the lack of understanding of AD causal onset, as well as the limited application of Translational Chemical Biology.
Translational chemical and systems biology drug discovery
The emergence of systems biology in concert with the development of a suite of accompanying analytical and bioinformatics tools and technologies has facilitated the evaluation and unravelling of complex disease mechanisms (27). Therefore it is not surprising that we and others have suggested such an approach should ultimately find widespread use in understanding causal onset, progression and effective treatment of any complex disease such as AD.
Recently we and others have suggested an approach to drug discovery and development for effective and safe AD drugs using a systems biology approach (31,32). We have argued that any lead drug candidate must disrupt molecular networks “...that lead to the accumulation of AD neuropathology and trigger the neurodegenerative process that leads to cognitive decline and ultimately to the clinical manifestations of cognitive impairment and dementia due to AD.”
Note the emphasis on targeting a network as opposed to a single pathway such as the amyloid-beta pathway. Based on Bennett’s suggestions as well as our own experiences, we would propose the following broad-based chemical and systems biology approach to drug discovery:
i) Network biology discovery – Multi-omics analysis at the gene, protein and metabolite level. This should include DNA methylation, miRNA and mRNA transcriptomic data from human brain material derived from brain regions at the hub of neural networks subserving cognition, as well as AD pathologic and clinical quantitative traits, to nominate genes, and therefore proteins, from networks and nodes involved in the molecular pathways leading to AD. We were the first group to develop a systems biology approach for integrating multi-level ‘omics’ data derived from a mammalian system, and such approaches are now relatively routine (33).
ii) Identification of potential targets – The network biology analysis should provide a prioritised list of target genes. An assortment of analytical tools primarily employing mass spectrometry should be used to generate protein expression data, which can be analysed with standard regression techniques, pathway analyses, and structural equation models to provide empirical support for biologic pathways linking proteins to AD endophenotypes. As Bennett notes “the empirical data will be fed back to the network modelling stage to refine the network analyses. This component ensures that the candidate genes generated from the systems biology component above are translated in the human brain and that the measured protein variants themselves are related to AD endophenotypes” (32).
iii) Functional validation – Utilisation of RNAi (knockdown) and overexpression screens for each of the selected genes in neurons. This provides an efficient, high-throughput approach to generating the data required for the analysis of transcriptional networks. RNA profiles derived from each experimental condition will allow the empirical reconstruction of the molecular networks in the target human cell types, confirming the pathways identified in our initial integrative analyses. This component of the pipeline has several purposes:
a) It will refine the networks and confirm that the genes nominated in systems biology analyses actually have the expected effects when they are disrupted on an individual basis.
b) It will identify other genes which may be involved in the network because they have similar functional consequences when their expression is perturbed.
c) It will identify transcriptional programmes or ‘gene sets’ that, in vitro, capture aspects of the function of a given pathway and can be used as outcome measures in future drug screening.
d) It will also identify nodal points in each network, ‘hubs’ for a given pathway that may make particularly effective targets for the disruption of a given cellular pathway.
iv) Drug candidate screen – Selected, prioritised molecular targets that are expressed in the ageing brain and are associated with pathologic and/or clinical AD quantitative traits are interrogated with the appropriate chemical or biological library. It is important to note that the experimental pipeline is by design driven by empirical observations from the first three stages. This should culminate in the selection of target nodal points for molecular screening which, a priori, were not preselected. Bennett has further suggested that any target should be a nodal point in a network related to AD pathology and a clinically quantitative phenotype clearly identified by integrative omic data analysis from human brain tissue. The target must be expressed in the brain and also clearly demonstrated to be related to AD pathology and clinical phenotype. In addition, we would argue that any consideration and scoring of the viability of the target need not simply fit into the amyloid, tau or amyloid-tau pathway of AD. We believe that the target selection criteria should take into account the systems analysis of the causal onset involving genetics, amyloid plaques, tau tangles, and neurovascular and neuroinflammatory events that occur as a function of time.
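As a toy illustration of the ‘nodal point’ idea in the steps above (the gene names and edges below are placeholders, not a real AD network), ranking nodes by degree centrality is the simplest way to surface candidate hubs:

```python
from collections import Counter

# Hypothetical gene-gene interaction edges (placeholders, not real AD data).
edges = [
    ("GENE_A", "GENE_B"), ("GENE_A", "GENE_C"), ("GENE_A", "GENE_D"),
    ("GENE_B", "GENE_C"), ("GENE_D", "GENE_E"),
]

def hub_ranking(edges):
    """Rank nodes by degree; high-degree nodes are candidate hubs."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return degree.most_common()

ranking = hub_ranking(edges)
# GENE_A touches 3 edges, so it tops the ranking as the candidate hub.
```

Real network analyses would use weighted co-expression or regulatory networks and richer centrality measures than raw degree, but the principle of prioritising highly connected nodes as screening targets is the same.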
Translational chemical biology – the gap assessment
Science in general, and the biomedical sciences in particular, has tended towards reductionism in its efforts to solve complex problems. Although this can provide a series of achievable goals, it runs against the reality that biological systems are exactly that: complex systems exhibiting frequently unpredictable behaviours. We can no more put every component on a digital camera’s parts list into a box and expect the result to function as a camera than we can predict, from a roster of football players alone, how the team will perform in a game.
Complexity and systems behaviours are key to enabling function, adaptation and survival, but they also work against predictability in most instances. Thus a significant gap will almost always result when a problem is approached solely from the bottom-up perspective in terms of predicting true systems function or, in the case of disease, dysfunction; equally, disassembling a complex system does not guarantee that each sub-component can be adequately analysed. Optimally, a combined, iterative approach provides the best likelihood of success.
Currently the rapid development and implementation of new technologies, including chemical synthesis, has contributed to increasing the size of this gap (Figure 7).
This figure emphasises the relationship between ‘data’ and ‘information’, ie ‘data’ that has been cleaned to remove redundancies, and its conversion into ‘knowledge’ and ‘clinical utility’. Data generation and cleaning comprise part of the bottom-up approach, but the ability to translate to clinical utility commonly remains elusive because of a failure to distinguish between ‘unmet clinical need’ and ‘unstated, unmet clinical need’.
Why does this gap exist and how can we effectively close it? As noted earlier, most efforts in biomedical research have used a ‘bottom-up’ approach, developing technologies that each provide access to a new, but exceedingly narrow, window on biological complexity. This leads to a model of the valley of death that includes only discovery, development and regulatory approval (Figure 8) (34), but neglects critical aspects of real world clinical practice and the complexity of real world patients.
The result is a limited potential to close the gap that translates to clinical utility. Attempts to overcome these gaps with ‘big data’ are likewise limited by their bottom-up approach because, as W. Edwards Deming wrote: “If you do not know how to ask the right question, you discover nothing.”
One of the first steps in framing the right question involves recognising the incomplete understanding of terms commonly used to describe these problems, including: translational research, precision medicine, unmet clinical needs, biomarkers, pathways, comparative effectiveness, etc. We address several of these concepts below, expanding the discussion to include the top-down (patient/physician) perspective and concerns in what has been a primarily bottom-up definition.
Precision medicine:
Precision medicine provides an example of the difference between ‘theory and practice’. The NIH definition, from the All of Us Research Program (formerly the Precision Medicine Initiative) website, discusses the “intersection of lifestyle, environment, and genetics to produce new knowledge with the goal of developing more effective ways to prolong health and treat disease” (35), although a more operational definition is “medical care designed to optimise efficiency or therapeutic benefit for particular groups of patients, especially by using genetic or molecular profiling” (36).
The two differ only in their apparent emphasis on the use of genetic or molecular profiling to accomplish their goals, but neither adequately states the importance of also understanding the patient’s clinical history, which rarely contains complete lifestyle and environmental exposure information.
Accurate medicine:
Very early in a scientist’s career, the difference between accuracy and precision is taught: “Accuracy is the proximity of measurement results to the true value; precision is the repeatability, or reproducibility, of the measurement” (37).
Precision in a diagnosis, ie the concurrence of multiple clinicians, is a laudable goal, although not always achievable because of limitations in diagnostic guidelines, diagnostic testing and individual clinician experience. Accuracy in a diagnosis can be limited by the reality that patients commonly present with syndromes and/or complex disorders whose signs, symptoms and laboratory results are ambiguous (Figure 9) (see diagnosis, below).
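The textbook distinction is easy to show numerically. The sketch below assumes a true analyte value and two hypothetical sets of replicate measurements: one assay is precise but inaccurate (tight but biased), the other accurate but imprecise (centred but noisy).

```python
from statistics import mean, stdev

TRUE_VALUE = 10.0  # assumed true concentration, arbitrary units

precise_but_biased = [12.1, 12.0, 11.9, 12.0, 12.1]  # tight, off-target
accurate_but_noisy = [8.5, 11.4, 10.2, 9.1, 10.9]    # centred, scattered

def accuracy_error(measurements):
    """Bias: distance of the mean from the true value (lower is better)."""
    return abs(mean(measurements) - TRUE_VALUE)

def precision_spread(measurements):
    """Repeatability: spread of replicate measurements (lower is better)."""
    return stdev(measurements)
```

Neither assay is ‘better’ without knowing the clinical question; a biased-but-precise test may still be useful for tracking change over time, while an unbiased-but-noisy one may be preferable for a one-off threshold decision.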
Translational medicine:
Conventionally, translational medicine is defined as a rapidly-growing discipline in biomedical research that aims to expedite the discovery of new diagnostic tools and treatments by using a multi-disciplinary, highly collaborative ‘bench-to-bedside’ approach (38). This goal has unfortunately met with somewhat limited success because it promotes translation in only one direction, ignoring the critical issue, as Deming noted, of first identifying the question that needs to be addressed.
Initiating a ‘bedside-to-bench’ first step, to identify critical clinical needs, offers a higher probability of successfully transitioning research results into clinical practice. That first step involves the critical communication of unmet clinical needs and an even greater appreciation of unstated, unmet clinical needs (see below).
Valley of death:
As noted in Figure 8, the valley of death conventionally refers to the progression from discovery to regulatory approval, the costs associated with this process and limited access to adequate funding (39). At least two opportunities exist to change this dynamic.
First, Figure 8 does not adequately include two key considerations that impact potential clinical and commercial value, namely: “will the physician prescribe the medication?” and “will the patient take the medication as prescribed?”. Failure to incorporate these considerations adequately in early planning and evaluation of drug (target) development can lead to clinical (trial) success and approval, at great expense, but even greater commercial failure.
Second, front-loading investment into a better understanding of the disease process, real world clinical practice and the complexity of real world patients can result in more directed, smaller and shorter clinical trials (40). This reduces overall development cost but, more importantly, can extend the lifetime of the drug for revenue generation under patent protection, even if the total treated population is reduced.
Clinical trial data versus clinical data:
Simply put, patients who are enrolled in clinical trials, ie meet the inclusion/exclusion criteria, rarely reflect the real world population: they are selected to optimise efficacy results and minimise side-effects. It is well recognised that most patients have multiple co-morbid conditions (41,47,48), previously treated, currently being treated or as yet undiagnosed, as well as associated poly-pharmacy, including over-the-counter products.
These real world patients are not typically enrolled in clinical trials yet confront the clinician and impact accuracy in diagnosis and response to treatment. Most current diagnostic and treatment guidelines, however, do not adequately incorporate these factors in their development.
Drug safety is typically considered in preclinical studies and focuses on dosing, toxicology and potential for notable side-effects (42). Ongoing debate exists about the use and adequacy of animal testing models in reflecting human response and significant failure of these models to replicate human response is widely known. New testing paradigms are a major area of development including stem cells and also integration of computational modelling of systems biology.
Of note, however, is the increasing interest in potentially accelerating the administration of drugs, post-safety evaluation, into humans rather than following current regulatory procedures (43). It will be critical to consider how to enhance safety screening to incorporate aspects of precision medicine that should consider heterogeneity in the target population and differences in diagnostic guidelines, preexisting conditions and likely concurrent medications. This will present significant challenges to the development of new preclinical screening methods and procedures.
Comparative effectiveness research (CER):
CER involves the direct comparison of existing healthcare interventions to determine which work best for which patients and which pose the greatest benefits and harms; in practice, however, the focus is on comparative efficacy (44). To truly consider the effectiveness of an intervention, it is critical to consider whether, upon regulatory approval, a physician will utilise or prescribe it, how readily it can be incorporated into professional guidelines, and whether a patient will appropriately follow the physician’s recommendation. A drug that is highly efficacious but not prescribed, or not taken by the patient, is not effective.
Biomarkers:
There is a tendency to over-utilise biomarkers, as reported in the literature, as surrogates for functional response (45). Acceptance without adequate validation is one problem, given the lack of reproducibility that has become too common in the published literature. In addition, the vast number of published biomarkers presents significant challenges to identifying and evaluating the potential value of an individual biomarker, or set of biomarkers.
George Poste, in a frequently misquoted article, wrote that as of 2011 there were 150,000 published articles on biomarkers, with fewer than 100 biomarkers in clinical use (46). (This is often misquoted as 150,000 biomarkers rather than publications.) We have analysed a subset of only the oncology literature and found 42,440 studies in which 38,426 biomarkers were reported. Of these, 24 were EMEA-approved and 30 were FDA-approved, with 23 approved by both agencies. Only a very limited subset of these has proven commercially successful (47).
The enhanced sensitivity of new analytical methods will require significant advances in the processes for biomarker validation. Even diagnostic test results can be misleading, particularly when different methods are used to measure the same marker. For HER2/neu determination in breast cancer, both IHC (immunohistochemistry) and FISH (fluorescence in situ hybridisation) are approved, but one measures protein expression levels while the other measures gene copy number.
These methods are not equivalent, and in large studies where both tests are administered, approximately 22% discordance exists (41). A common limitation in current translational research is the use of biomarkers as targets when they show only a correlative, not causal, relationship to disease.
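Discordance here is simply the fraction of paired test results that disagree. In the sketch below the 2×2 counts are invented to reproduce a ~22% discordance across 100 patients; only that headline figure comes from the studies cited.

```python
# Hypothetical paired (IHC, FISH) HER2 calls for 100 patients.
# Counts are invented; only the ~22% discordance figure is from the literature.
paired_counts = {
    ("+", "+"): 10,   # protein over-expressed, gene amplified
    ("-", "-"): 68,   # neither
    ("+", "-"): 12,   # protein over-expressed, gene not amplified
    ("-", "+"): 10,   # gene amplified, protein not over-expressed
}

def discordance(paired_counts):
    """Fraction of paired (IHC, FISH) results that disagree."""
    total = sum(paired_counts.values())
    disagree = sum(n for (ihc, fish), n in paired_counts.items()
                   if ihc != fish)
    return disagree / total

rate = discordance(paired_counts)
```

The off-diagonal cells are exactly the patients whose treatment decision could flip depending on which approved test their hospital happens to run, which is the clinical point being made above.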
As noted above, real world patients rarely present with a single, isolated disease but with co-morbid conditions that may have been treated (under management), concurrent or as yet undiagnosed. In addition, each patient is dealing with a poly-pharmacy situation that may extend to 15 medications, plus over-the-counter and dietary treatments, as well as variability in adherence to prescribed dosing (48,49).
This complexity, as we have observed, is apparent in the analysis of nationalised healthcare data, ie a unified patient-based health record, and reflects the reality that physicians may not fully utilise such data because of limitations in existing guidelines, their personal experience and/or patient pressure. The impact on the quality of diagnosis and on response to clinical testing and treatment can be readily appreciated, and reflects the limitations of attempting to apply highly refined molecular methods to diagnose or assess individual patients.
Compliance:
Compliance considers the clinician’s tendency to follow established clinical guidelines and protocols. Guidelines are frequently developed by professional societies but rarely consider the complexity of real patients in terms of likely co-morbid conditions and medications, except those noted to present significant side-effects and contraindications. Since randomised clinical trials are considered the highest level of evidence for the development of evidence-based guidelines, it should be noted that typical inclusion/exclusion criteria limit the presence of co-morbid conditions and of many additional drugs that a patient may be taking.
Separately, many guidelines are developed as ‘consensus guidelines’ and thus may not be uniformly consistent in their quality, or in the developers’ support, across their multiple elements of decision support. Finally, physicians recognise these deficiencies: guidelines are only guidelines, and physicians frequently practise based on personal and personally-shared experience, eg off-label prescribing.
Adherence:
Patient adherence to prescribed drug/treatment regimens is acknowledged to cost at least US$300 billion annually in the US because of failures in adequate disease management and the need for hospital readmission (50). In addition, there is loss of prescription value to pharma and biotech companies. Most efforts to improve adherence focus on monitoring patient behaviour post prescription-filling and/or on reminders and counselling.
These attempts to apply technology may miss the critical issues involving individual patient behaviour, which may begin at the time of diagnosis and communication with the physician: the patient’s perception of the risk of disease versus the risk from medications, patient preferences, and concerns about impact on lifestyle (eg impotence) or appearance (eg acne). These observations emphasise the need to include psycho-social and cultural factors in models that represent patient progression through the entire disease process and interaction with the healthcare ecosystem.
Diagnosis:
As noted above, accuracy in diagnosis is critical, and is potentially one of the most important factors in patient management and in the successful development of new drugs and interventions. Physician evaluation of a patient needs to establish a more accurate diagnosis than is currently available in terms of disease stratification (or subtyping based on temporal presentation) by evaluating the progression of multiple clinical parameters over time, determining how far along this complex disease trajectory a patient has progressed, ie their disease stage, and how quickly they are progressing, ie their disease velocity.
A late-stage, slow-moving disease may be treated very differently from one exhibiting early stage, aggressive progression. These factors are of particular value in dealing with current diagnoses involving syndromes rather than specific diseases (41).
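Disease velocity, in the sense used above, is simply the rate of change of a clinical parameter over time; an ordinary least-squares slope across longitudinal visits is a minimal estimator. The visit times and cognitive scores below are invented for illustration.

```python
def disease_velocity(times, scores):
    """Least-squares slope of a clinical score versus time (units/year)."""
    n = len(times)
    t_bar = sum(times) / n
    s_bar = sum(scores) / n
    num = sum((t - t_bar) * (s - s_bar) for t, s in zip(times, scores))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

# Hypothetical annual cognitive scores for two patients (higher = better).
slow = disease_velocity([0, 1, 2, 3], [28, 27.5, 27, 26.5])  # gradual decline
fast = disease_velocity([0, 1, 2, 3], [28, 25, 22, 19])      # aggressive decline
```

Combined with an estimate of stage (how far along the trajectory the patient already is), even this crude velocity separates the late-stage, slow-moving patient from the early-stage, aggressive one, which is exactly the distinction argued for above.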
Disease state versus disease trajectory:
As noted above, disease, apart from the initial impact of trauma, is a process that evolves over time, and diagnosis and management need to incorporate these temporal processes to improve outcomes. In addition, in chronic diseases, eg diabetes and Alzheimer’s, it is critical to recognise that the patient will be undergoing normal developmental and physiological changes throughout the disease course, and it may be necessary to deconvolute the influences of each to provide the best diagnosis, treatment and outcome (51).
Unmet clinical need/unstated unmet clinical need:
While it is obvious that fundamental (and discovery) research can best impact healthcare by addressing unmet clinical need, it is important to understand how to identify and validate that need (41). The current pressures and practices in healthcare present the clinician with little opportunity to explore the needs of their practice much beyond what addresses daily operational issues.
This challenge is amplified with both the rapid development and commercialisation of new technologies for diagnosis and treatment as well as exploding, and frequently inadequately vetted, generation of research publications. As Henry Ford wrote: “If I had asked my customers what they wanted, they would have said a faster horse.” In initiating research into the development of new solutions to clinical problems, it is critical to go beyond that simple question of “what do you need?”
In extension of the discussion of HER2/neu testing above, we can examine triple negative breast cancer, in which HER2, ER and PR testing all produce negative results (52). Variability in HER2 testing was noted above, but differences in ER and PR testing also exist, along with variation among hospitals/laboratories in establishing threshold values for positive/negative calls in such tests.
While it might be expected that three negative tests would be an easy discriminator for establishing a diagnosis, this is not the case, and the ambiguity may interfere with additional testing and analysis that relies on consistency of that diagnosis. And while it might be suggested that further genomic and/or molecular markers observed in tissue could enhance the diagnosis and potential stratification, it is worth noting that significant heterogeneity has been observed and classified in breast cancer tissue, yielding discernible patterns that can confound conventional diagnoses and, more recently, have been associated with the high degree of variability in patient response to targeted therapeutics (53).
We have tried here to initiate a discussion about the complexities that face drug development, patient management and healthcare beyond their current horizons. Our perspective is that current basic research could benefit from understanding the reality of the multiple, hierarchical systems that impact the translation of laboratory results into clinical practice and utility.
These systems operate at the molecular, personal, societal and population levels, but they do not function independently of their interactions across these levels (54). We propose an innovative network science that addresses the general need for improved compound diversity for a variety of applications in bioscience and translational chemical biology. The approach applies a combination of natural product isolation, synthesis, diversification, automated purification and structure analysis. Such programmes can draw upon the highly complementary expertise and resources of leading academic and industrial research teams.
From foundation to capstone, any translational endeavour exemplifies a pyramidal enterprise. In all cases, an extensive knowledgebase yields a narrower subset of opportunities that are subsequently honed and winnowed down to a single critical new capability. This structure applies regardless of whether one is translating from general scientific understanding to broader health principles, from specialised chemical biology to a specific new drug, or beyond this from patient-specific omics data toward precision medicine, or from broad health records toward optimal medical outcomes.
While we can appreciate how translational chemical biology will contribute to improving patient management and reducing healthcare costs in the short term, we believe that only through the development and evaluation of models of the true complexity of the healthcare ecosystem, real world clinical practice and real world patients will long-term benefits be realised. Again, correlation does not imply causality, and modelling cause and effect will be required to create new systems for research, translation and delivery of improved healthcare.
We need to also understand that simply asking a clinician what they need will not necessarily result in understanding what the real need may be. Physicians are accustomed to working with what they have to address critical issues on a daily basis and not on exploring either the limitations that may exist or the potential for developing new approaches.
To really understand unmet clinical need requires an immersion in observing and modelling the clinical process. These models can serve as the basis for asking more directed questions and gaining the confidence and access to the experience of the physician in revealing their perceptions about what really works and what does not.
Acknowledgements
We thank our many colleagues who have influenced us in innumerable ways over the years; we have been the beneficiaries of their collective wisdom. DDW
This article originally featured in the DDW Winter 16/17 Issue
Dr Mukund Chorghade is a serial entrepreneur, President and Chief Scientific Officer of THINQ Pharma/THINQ Discovery. He has had Adjunct Research Professor/Visiting Fellow/Scientist appointments at Harvard, MIT, Princeton, Cambridge, Caltech, University of Chicago, Northwestern and Strathclyde. He directed research groups at Dow Chemicals, Abbott, CytoMed and Genzyme. His current research interests are in Traditional Medicine-derived New Chemical Entities and the discovery of new ‘chemosynthetic livers’ with utility in drug metabolism, valorisation of biomass and environmental remediation. Dr Chorghade received his PhD at Georgetown University, with postdoctoral appointments at the University of Virginia and Harvard University. A recipient of three ‘Scientist of the Year Awards’, he has been a featured speaker at several national and international conferences. He has been honoured by election as a Fellow to the Maharashtra, Andhra Pradesh and Telangana Academies of Sciences, Royal Society of Chemistry, New York Academy of Sciences, American Chemical Society, American Institute of Chemists, AAAS, Sigma Xi, and Indian Society of Chemists and Biologists.
Dr Michael Liebman is the Managing Director of IPQ Analytics, LLC and Strategic Medicine, Inc after serving as the Executive Director of the Windber Research Institute from 2003-07. Michael is Chair of the Informatics Program and also Chair of Translational Medicine and Therapeutics for the PhRMA Foundation. He serves on several scientific advisory boards including the International Society for Translational Medicine and on the Editorial Boards of the Journal of Translational Medicine, Clinical and Translational Medicine, Molecular Medicine and Therapeutics, ClinicoEconomics and Outcomes Research and Biomedicine Hub, and the International Park for Translational Biomedicine (Shanghai). His research focuses on computational models of disease progression that stress risk detection, disease processes and clinical pathway modelling, and disease stratification from the clinical perspective. He utilises systems-based approaches and design thinking to represent and analyse risk/benefit in pharmaceutical development and healthcare.
Dr Gerald Lushington, an avid collaborator, focuses primarily on applying simulation, visualisation and data analysis techniques to extract physiological insight from structural biology data, and to relate the physical attributes of small bioactive molecules (drugs, metabolites, toxins) to their physiological effects. Most of his 150+ publications have involved work with experimental molecular and biomedical scientists, covering diverse pharmaceutical and biotechnology applications. His technical expertise includes QSAR, quantum and classical simulations, statistical modelling and machine learning. After productive academic service, Lushington’s consultancy practice now supports R&D and commercialisation efforts for clients in academia, government and the pharmaceutical and biotechnology industries. Dr Lushington serves as Editor-in-Chief of the journal Combinatorial Chemistry & High Throughput Screening and Bioinformatics Editor for WebmedCentral, and is on the editorial boards of Current Bioactive Compounds, Current Enzymology and the Journal of Clinical Bioinformatics.
Dr Stephen Naylor is the Founder and CEO of ReNeuroGen LLC, a virtual pharmaceutical company developing precision medicine therapies for the treatment of stroke. In addition, he is the Founder, Chairman and CEO of MaiHealth Inc, a systems/network biology-level diagnostics company in the health/wellness and precision medicine sector. He was also the Founder, CEO and Chairman of Predictive Physiology & Medicine (PPM) Inc, one of the world’s first personalised medicine companies. He also serves as an Advisory Board Member of CureHunter Inc, a computational biology drug discovery company, and as a business adviser to the not-for-profit Cures Within Reach. In the past he has held professorial chairs in Biochemistry & Molecular Biology, Pharmacology, Clinical Pharmacology and Biomedical Engineering, all at the Mayo Clinic in Rochester, MN, USA. He holds a PhD from the University of Cambridge (UK), and undertook an NIH-funded fellowship at MIT, located in the ‘other’ Cambridge, USA.
Dr Rathnam Chaguturu is the Innovation Czar, Founder & CEO of iDDPartners (Princeton Junction, NJ, USA), a non-profit think-tank focused on pharmaceutical innovation, and was most recently Deputy Site Head of the Center for Advanced Drug Research at SRI International. He has more than 35 years of experience in academia and industry, managing new lead discovery projects and forging collaborative partnerships with academia, disease foundations, non-profits and government agencies. He is the Founding President of the International Chemical Biology Society, a Founding Member of the Society for Biomolecular Sciences and Editor-in-Chief Emeritus of the journal Combinatorial Chemistry & High Throughput Screening. Rathnam passionately advocates the need for innovation and entrepreneurship and the virtues of collaborative partnerships in addressing the pharmaceutical innovation crisis, and vigorously warns of the threat of scientific misconduct in the biomedical sciences. He received his PhD with an award-winning thesis from Sri Venkateswara University, Tirupati, India. Correspondence can be addressed to him at email@example.com
1 US Food and Drug Administration. Novel Drugs Summary 2016. http://www.fda.gov/Drugs/DevelopmentApprovalProcess/DrugInnovation/ucm534863.htm.
2 US Food and Drug Administration. Summary of NDA Approvals and Receipts, 1938 to the Present https://www.fda.gov/AboutFDA/History/ProductRegulation/default.htm.
3 Woodward, RB. Nobelprize.org. Nobel Media AB 2014. Web. 16 Jan 2017. http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1965/woodward-lecture.html.
4 Raut, AA, Chorghade, MS and Vaidya, A. Chapter 4 in Innovative Approaches in Drug Discovery, ed. Bhushan Patwardhan and Rathnam Chaguturu, Elsevier. ISBN: 978-0-12-801814-9 (2016).
5 Chorghade, MS, Patwardhan, B, Vaidya, A and Joshi, SP. Current Bioactive Compounds, 4, 201-212 (2008).
6 Baxendale, IR, Deeley, J, Griffiths-Jones, CM, Ley, SV, Saaby, S, Tranmer, GK. Chemical Communications. 24, 2566-8 (2006).
7 Ley, SV, Sheppard, TD, Myers, RM and Chorghade, MS. Bull. Chem. Soc. Japan, 82 (8), 1451-1472 (2007).
8 Lushington, GH, Guo, JX, Wang, JL. Curr Med Chem. 14: 1863-1877 (2007).
9 Murphy, RF. Nat Chem Biol. 7: 327-330 (2011).
10 Dressman, JB, Thelen, K, Willmann, S. An update on computational oral absorption simulation. Expert Opin Drug Metab Toxicol. 7: 1345-1364 (2011).
11 Berellini, G, Springer, C, Waters, NJ, Lombardo, F. In Silico Prediction of Volume of Distribution in Human Using Linear and Nonlinear Models on a 669 Compound Data Set. J. Med. Chem. 52: 4488-4495 (2009).
12 Liu, H, Wang, L, Lv, M, Pei, R, Li, P, Pei, Z, Wang, Y, Su, W, Xie, X-Q. AlzPlatform: An Alzheimer’s Disease Domain-Specific Chemogenomics Knowledgebase for Polypharmacology and Target Identification Research. J Chem Inf Model. 54: 1050–1060 (2014).
13 He, YY, Liew, CY, Sharma, N, Woo, SK, Chau, YT, Yap, CW. PaDEL‐DDPredictor: Open‐source software for PD‐PK‐T prediction. Journal of Computational Chemistry. 34: 604-610 (2013).
14 Pires, DEV, Blundell, TL, Ascher, DB. J Med Chem. 58: 4066-4072 (2015).
15 Shimpi, SL, Mahadik, KR, Paradkar, AR. Study on mechanism for amorphous drug stabilization using Gelucire 50/13. 57: 937-942 (2009).
16 Bergström, CAS, Charman, WN, Porter, CJH. Computational prediction of formulation strategies for beyond-rule-of-5 compounds. Advanced Drug Delivery Reviews. 101: 6-21 (2016).
17 Young, DM. The Toxicity Estimation Software Tool (T.E.S.T.). Presented at New England Green Chemistry Networking Forum, Boston, MA, December 16, 2010.
18 Patlewicz, G, Jeliazkova, N, Safford, RJ, Worth, AP, Aleksiev, B. An evaluation of the implementation of the Cramer classification scheme in the Toxtree software. SAR QSAR Environ Res. 19: 495-524 (2008).
19 He, J, Leung, RK, Li, Z, Cheng, R, Chen, Y, Pan, Y, Ning, L. Virtual Pharmacist: A Platform for Pharmacogenomics (2016).
20 (i) Andersen, KE, Begtrup, M, Chorghade, MS, Lau, L, Lee, EC, Lundt, BF, Petersen, H, Sorensen, PO and Thogersen, H. Tetrahedron, 50 (29), 8699 (1994); erratum cited in Tetrahedron, 52 (10), 3375 (1996). (ii) Celebuski, JE, Chorghade, MS and Lee, EC. Tetrahedron Lett. 35 (23), 3837 (1994); corrigendum published in Tetrahedron Lett. 36 (52), 9414 (1995).
21 Andersen, JV, Chorghade, MS, Dezaro, DA, Dolphin, DH, Hill, DR, Lee, EC, Hansen, KT and Pariza, RJ. Bioorganic and Medicinal Chemistry Letters, 4 (24), 2867 (1994).
22 Chorghade, MS, Dolphin, DH, Hill, DR, Hino, F, Lee, EC, Zhang, L-Y and Pariza, RJ. Pure and Appl. Chem., 68 (3), 753 (1996).
23 Hill, DR, Celebuski, JE, Pariza, RJ, Chorghade, MS, Levenberg, M, Pagano, T, Cleary, G, West, P and Whittern, D. Tetrahedron Lett. 37 (6), 787 (1996).
24 Chorghade, MS, Dolphin, DH, Dupre, D, Hill, DR, Lee, EC and Wijesekara, TP. Synthesis 1320 (1996).
25 Chorghade, MS (Editor) and Lee, EC (Associate Editor). Pure and Appl. Chem., 70 (2), Proceedings of the XXth IUPAC Symposium on the Chemistry of Natural Products, Chicago, September 1996; preface, page vi (1998).
26 Chorghade, MS. Metalloporphyrins as Synthetic Livers, published in Drug Metabolism: Databases and High Throughput Testing during Drug Design and Development, International Union of Pure and Applied Chemistry: DMDB Working Party, Ed. Erhardt, PW. Blackwell pp.152-162 (1999).
27 Naylor, S and Chen, JY. Unraveling human complexity and disease with systems biology and personalized medicine. Personal. Med. 7; 275-289 (2010).
28 Naylor, SJ. Precision Med. 2; 15-29 (2015).
29 US National Research Council. US National Academies Press, Washington DC USA (2011). http://www.nap.edu/catalog/13284/toward-precision-medicine-building-a-knowledge-network-for-biomedical-research.
30 Zhang, XD. Pharmacogenomics & Pharmacoproteomics, 6, e144. doi: 10.4172/2153-0645.1000e144 (2015).
31 Waring, SC and Naylor, SJ. Precision Med. 5; 38-53 (2016).
32 Bennett, DA, Yu, L and De Jager, PL. Biochem. Pharmacol. 88: 617-630 (2014).
33 Clish, C, Davidov, E, Oresic, M et al. OMICS 8:3-13 (2004).
36 Precision Medicine, Wikipedia.
37 Precision vs Accuracy, Wikipedia.
38 Wang, X. Clinical and Translational Medicine, 1:5 (2012).
40 PharmaFocus Asia, 9, 1-8 (2008).
41 Liebman, MN, Franchini, M and Molinaro, S. Technology and Healthcare vol. 23, no. 1, pp. 109-118, (2015).
42 Pre-clinical studies, Wikipedia.
43 Kaplan, S. Winners and losers of the 21st Century Cures Act. STAT News. (5 December 2016).
45 Barker, M. Nature, 533, 452-454 (2016).
46 Poste, G. Nature, vol 469, p156-157 (2011).
47 Liebman, MN. In collaboration with Excelra, unpublished results.
48 Braithwaite, D, Tammemagi, CM, Moore, DH, Ozanne, EM, Hiatt, RA, Belkora, J, West, DW, Satariano, WA, Liebman, M, Esserman, L. Int J Cancer. 124 (5): 1213-1219 (2009).
49 Franchini, M, Pieroni, S, Fortunato, L, Molinaro, S and Liebman, MN. Current Pharmaceutical Design, 21 (6), 791-805 (2015).
50 Iuga, AO and McGuire, MJ. Risk Management and Healthcare Policy, 7, 35-44 (2014).
51 Franchini, M, Pieroni, S, Fortunato, L, Knezevic, T, Liebman, MN and Molinaro, S. Clinical and Translational Medicine, 5:24, 1-13 (2016).
53 Maskery, S, Zhang, Y, Jordan, R, Hu, H, Hooke, J, Shriver, C and Liebman, M. IEEE Transactions on Information Technology in Biomedicine, 10 (3), 497-503 (2006).
54 Liebman, MN. Translational Scientist, 2701-2705 (2016).