Innovate While Derisking Drug Development: Yes We Can!
The outlook for the drug industry remains bleak in the context of productivity and success rates. In spite of ongoing increases in R&D expenses and technology revolutions in the genomics and proteomics areas, nearly 95% of drug programmes that enter clinical development fail.
This is to be expected, as serendipity and surprise play a big role in almost all drug successes. What is the probability of the same individual winning the lottery more than once in his/her lifetime?
Given that clinical development costs are significantly higher than those of pre-clinical research, the ability to fail fast through stringent qualification prior to entering the clinic is urgently needed. In this landscape, the goal of successfully developing differentiated drugs that are best-in-class or first-in-class is challenging.
Industry trends and implications
The scientific and business trends have cumulatively created a perfect storm. Industry R&D costs are increasing year on year, with flat or declining productivity in terms of the number of new drug approvals. In addition, patent expiries of blockbuster drug products are lowering barriers to entry for generic players. The high failure rates indicate that currently used pre-clinical models are not adequate for making go/no-go decisions, and their limited transparency does not provide adequate insight into the workings of a drug.
The industry’s R&D cost as a percentage of sales (18% and above) is the highest among all industries, and its development cycles (10-12 years) are among the longest. Implicitly included in this cost structure are the costs of failed drug programmes. Patent expiries have translated into loss of monopoly and the resulting margin pressure from generic drugs. Traditionally, the industry managed despite these inherently high and inefficient cost structures and the low number of drug approvals because blockbuster drugs remained under patent protection.
From 2011 to 2015, patent exclusivity will expire for drugs totaling more than $250 billion in sales. The pricing and margin pressures are also being compounded as healthcare payers such as governments and insurance companies pursue Results-Based Payment (RBP) structures.
The industry is broadly addressing this by cutting R&D, focusing on personalised therapeutics and increasing sales in emerging markets. The personalised-medicine approach segments patients into smaller sub-populations to which a drug can be marketed. This sub-population, though a relatively captive audience, is not large enough to sustain the industry’s inefficient research and development cost structure.
Expansion into geographies such as Asia as consumer markets, rather than purely as a cost-arbitrage strategy, is also in play. The challenge, though, is that while the target consumer base in these countries is large, the price points in these markets are well below industry cost structures and are more elastic. This indicates that the same product and development strategy cannot be applied across geographic segments.
Why high failure?
The high failure rate of drug discovery programmes qualifies the industry as several notches more complex than rocket science, if that metric can be used as a surrogate for complexity. Analyses of research spending per new drug across the industry have ranged from $3.7 billion to $12 billion, reflecting differences in failure rates across companies. The higher the failure rate, the higher the amortised cost per drug programme. The often-stated reasons for the long development times, high costs and high failure rates are the complexity of disease physiology and the unknowns in biology.
This has resulted in a serendipity- or discovery-driven methodology rather than rational design. High-throughput screening based on single phenotypes or markers to narrow down therapeutic candidates is the norm. The high complexity has also justified scientific teams specialising in specific targets and pathways without a holistic perspective.
The industry has transitioned from natural drugs such as aspirin, whose mechanism of action and targets are still being discovered, to a focus on highly specific drugs. The consequence of inhibiting a single target or pathway has been toxicity and drug resistance as the other cellular pathways kick in to compensate for the drug manipulation.
A case in point is inhibitors targeting TNF, EGFR, JAK and others, which have resulted in toxicity and drug-resistance issues. Conventional drug design embraces the ‘one gene, one drug, one disease’ philosophy. Polypharmacology, which focuses on multi-target drugs, has emerged as a new paradigm in drug discovery.
Derisking attempts
Given the failure rates in the industry, the oft-repeated word nowadays is derisking. To a large extent this is pursued through funding proven drug development teams, working on safe or validated targets and in-licensing late-stage molecules. There are not many teams that have successfully taken drugs from start to finish, so the concept of financing validated teams has not really played out.
Given the industry’s success rates and development timelines, the odds of the same team successfully taking two drugs through the process, using current techniques, are minuscule.
Biology is the big unknown in the industry, and the industry has derisked this aspect through a focus on ‘proven’ targets, effectively making the biology an equal risk across competitors. This also plays to the strengths of big Pharma, which builds large barriers to entry and high margins through patents on new chemical entities.
The derisking strategy, therefore, has primarily been validated biological targets plus best-in-class chemistry. This unfortunately creates a pipeline of ‘me-too’ programmes around so-called validated targets, which may have very similar long-term safety profiles or the same weaknesses in treating patients.
The other strategy to manage risk is business innovation based on in-licensing drug programmes from small Pharma players. Here the risk is essentially transferred to start-up teams, financed through traditional equity-based VC routes, which depend on big Pharma partnerships or the IPO route (when that option existed) for exits.
The personalisation of therapies and the use of diagnostics to identify patient sub-populations that respond to a drug is a trend that is becoming mainstream in oncology and emerging in other disease indications. Here the hope is that there are specific therapies that are very effective in subsets of the population and that these subsets are identifiable, cleanly distinguishable and profitable from an economic perspective.
Reformulating drugs for the same indication has been used quite effectively by the Pharma industry for lifecycle management, although the scope is limited from an IP-barrier perspective. Repurposing existing drugs in new formulations and across indications is another derisking strategy being adopted. Because these drugs have already been used in humans, exploring their use for new indications can drastically reduce development time and eliminate many of the uncertainties of drug development.
Why do other hi-tech industries not have similar failure rates?
The difference between drug development and industries such as aeronautics and semiconductors is the discovery versus rational-design methodology. Extensive qualification and validation of designs, by simulating massive numbers of scenarios, is the norm in industries with high success rates. Emulation of complex systems, use of standardised models and reuse of validated building blocks are some of the core approaches that have enabled the high success rates in these industries.
The ability to emulate the system interactions and run ‘what if’ experiments with extensive validation is a key capability. Currently, decisions in drug development are based on studies in cell lines and animal models. The limited ability to see inside these closed systems and run thousands of different ‘what if’ scenarios makes the decision process error prone.
For example, in oncology, cell lines do not represent most of the disease phenotypes, and xenograft-based animal models do not incorporate the tumour microenvironment. However, these are the models on which decisions are made. There is a need for an in vivo model of cancer where one could create patient profiles using mutation information, environmental differences such as hypoxia and normoxia and variations in the dominance of pathways to mimic potential drug resistance conditions.
The other strategic advantage these hi-tech industries enjoy is the reuse of validated building blocks, avoiding the reinventing-the-wheel syndrome. This strategy allows one to move up the value chain, as some of the underlying components do not require redesign. The partnership based on AstraZeneca’s MEK inhibitor and Merck’s AKT inhibitors was a step in that direction.
Emulation of disease physiology and prediction of clinical outcomes
Orthogonal industries such as semiconductor engineering have met similar complexity challenges through extensive modelling prior to fabrication of silicon. Between 1990 and 2010 the average complexity of a semiconductor device grew from around 25,000 to more than two billion transistors. Over the same period, development times and costs fell dramatically, with greater than 98% success in working silicon post-fabrication.
In drug development, the human physiology already exists, and hence the modelling challenge is to create a comprehensive representation of disease physiology from existing disaggregated information. In cancer, for example, for such models to be relevant they should include signalling networks for all phenotypes based on the stromal, angiogenic and inflammatory components, along with the tumour cell type. The sheer quantity of disaggregated information available from peer-reviewed published research, combined with the biological unknowns, has made such a development challenging to attempt.
Is the quantum of data available today adequate to integrate and standardise for running predictive studies that can be validated and used for making decisions? Or should one wait for all biological unknowns to be resolved before initiating building such an approach and technology? Voltaire’s saying: “Don’t let the perfect be the enemy of the good”, seems to be the appropriate philosophy for making this decision.
Simulation of disease physiology – the Holy Grail of pharma
Simulation is relevant across various abstractions depending on the question being asked and the clarity needed. These can range from representation of gene networks, to drug-protein structural binding, to functional representation of signalling and metabolic networks, to tissue- and organ-level modelling, to pharmacodynamics and pharmacokinetics, to clinical trial design. Extensive validation of the predictions based on simulations is a non-negotiable requirement for the adoption and deployment of such systems.
Key decisions, such as selecting the right level of abstraction for the model (functional, causal or physics-based modelling), are critical to the problem being solved. Understanding the biology of drug-disease interaction and designing therapies based on novel targets, repurposed drugs or combinations of repurposed drugs requires a representation of disease physiology at the level of signalling and metabolic pathway networks, where the functional mechanism of action of a drug can be represented.
Another large impediment to building such functional models by aggregating information from publications is the lack of standardisation in the existing published data. The urge and lure to automate the extraction and aggregation process would introduce noise and errors from many factors, including contradictory data sets, that would be impossible to identify and resolve. A complex static network that cannot be perturbed, dynamically assayed for trends and dynamically validated would be of limited value. Hence, simulation models need to be developed through manual aggregation and review, explicit interpretation and hypothesis testing to resolve differences.
Concepts from software engineering for implementing regression-based validation infrastructure are very applicable to such an approach. This would entail running, in batch mode, thousands of studies with automated comparisons between predicted readouts and retrospective and prospective experimental assays. Meeting the extensive validation requirements will also expose contradictory data sets and unknown biology, as they would cause failures in the regressions.
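As a hedged illustration of what such a regression infrastructure might look like in practice, the sketch below compares simulated readouts against experimental reference values for a batch of studies and flags deviations. The study names, readouts and the 20% tolerance are hypothetical, not taken from any actual validation suite.

```python
# Minimal sketch of a regression-style validation harness.
# Study names, readouts, values and tolerance are illustrative only.
from dataclasses import dataclass

@dataclass
class Study:
    name: str
    predicted: dict   # readout name -> simulated value
    observed: dict    # readout name -> experimental assay value

def run_regressions(studies, rel_tol=0.2):
    """Return descriptions of studies whose predictions deviate from assays."""
    failures = []
    for study in studies:
        for readout, observed in study.observed.items():
            predicted = study.predicted.get(readout)
            if predicted is None:
                failures.append(f"{study.name}: no prediction for {readout}")
                continue
            # A persistent failure here flags contradictory source data or
            # unknown biology that the model does not yet capture.
            if abs(predicted - observed) > rel_tol * abs(observed):
                failures.append(
                    f"{study.name}: {readout} predicted {predicted}, observed {observed}")
    return failures

if __name__ == "__main__":
    batch = [
        Study("TNF-knockdown", {"IL6": 0.45}, {"IL6": 0.50}),
        Study("EGFR-inhibition", {"pERK": 0.90}, {"pERK": 0.30}),
    ]
    for failure in run_regressions(batch):
        print("FAIL:", failure)
```

Run nightly over thousands of such studies, this kind of harness turns validation into an automated, repeatable gate rather than a one-off exercise.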
Repurposing-based therapeutics approaches
Drug repurposing, which refers to the redevelopment of existing drugs with known pharmacology, safety and toxicity profiles for new indications, is a strategy adopted in the industry to eliminate some of the risk factors associated with drug development. Drugs used for repurposing fall into one or more of the following categories: approved and off-patent; approved and on-patent; safe but failed in late-stage clinical development due to lack of efficacy, or abandoned for other reasons.
Historically, finding new indications for existing drugs has mainly been accomplished through serendipity. Approaches include high-throughput screening of existing drug libraries in new disease cell-line models. Retrospective analysis of clinical trial data to leverage unforeseen side-effects or benefits is another widely used technique for identifying repurposing opportunities, as happened in the case of Thalidomide.
Finding new indications based on later-discovered off-target effects of a drug, or on newly characterised targets, is another approach. Gleevec is a case study of a targeted drug whose later-discovered targets made it applicable to new indications. Discovering the relevance of the same target or pathway across disease indications, as with anti-angiogenesis drugs, has made it possible to position the same drug for both oncology and macular degeneration.
What is inherently lacking in all the existing repurposing approaches, however, is the systematic design of an innovative multi-target mechanism of action using existing drugs to achieve a novel inhibition or activation of the targets.
‘Leap frog’ development path to generate differentiated products
Complex diseases such as autoimmune disorders, cancer and others have multiple phenotypes with interlinked signalling that the therapy must impact. Molecularly targeted drugs offer the ability to manipulate specific molecular interactions but have the vulnerability that the disease may eventually activate other, previously redundant, paths thereby bypassing the drug’s mechanism of action.
Attacking such diseases and preventing activation of parallel pathways suggests a strategy based on careful smothering of the disease through manipulation of multiple targets whose effects converge and synergise in the disease phenotypes.
Combination of molecular targets or therapies offers the only means to create a robust mechanism of action to treat such diseases. This strategy was echoed by Dr Ross L. Cagan, Professor and Associate Dean at Mount Sinai School of Medicine, when he stated that: “Scientists are beginning to recognise that single-target drugs can be problematic. I believe that, within the next five years, we’ll see more drugs entering clinical trials that use rational Polypharmacology as the basis of drug discovery.” Combination therapies can be designed by developing and using novel chemicals or by repurposing and reusing existing chemicals.
Designing combination therapies with novel chemicals carries the complexity of understanding not only how these chemicals work individually but also how they may interact with each other, chemically as well as biologically. Combination therapies created using existing drugs remove the complexity of understanding the safety and metabolism profiles of the individual drugs and eliminate potential toxicity-related surprises. Extensive data on the metabolism paths of the individual drugs also reduces the potential for unanticipated drug-drug interactions.
Rational design of novel and efficacious drug combinations is of combinatorial complexity if screened exhaustively. Screening a library of, say, 100 targeted drugs in combinations of twos and threes already entails more than 160,000 in vivo studies at a single dosage. Designing an optimal combination that demonstrates efficacy at sub-therapeutic doses via synergy of mechanism of action requires dose ranging for every component, pushing the number of in vivo studies well past a million.
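As a rough illustration, the sketch below counts the pair and triple combinations from a 100-drug library (the figure used above) and shows how adding even a couple of dose levels per drug, an assumption introduced here purely for illustration, pushes the total past a million studies.

```python
# Back-of-envelope count of the exhaustive screening space. The 100-drug
# library is from the text; the number of dose levels per drug is an added
# assumption to show how dose ranging multiplies the study count.
from math import comb

def study_count(n_drugs, dose_levels=1):
    """In vivo studies needed to screen all pairs and triples of drugs."""
    pairs   = comb(n_drugs, 2) * dose_levels ** 2
    triples = comb(n_drugs, 3) * dose_levels ** 3
    return pairs + triples

for doses in (1, 2, 3):
    print(f"{doses} dose level(s): {study_count(100, doses):,} studies")
# 1 dose level(s):   166,650 studies
# 2 dose level(s): 1,313,400 studies
# 3 dose level(s): 4,410,450 studies
```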
The other complexity that needs to be managed is the potentially high rate of false positives when using in vitro studies as a screening mechanism. In vitro models are weak surrogates for disease systems and may identify many potential hits that are ineffective in clinical models. Running such a quantity of studies in in vivo animal models is, for all practical purposes, impossible.
Simulation-based predictive disease models offer a tractable method of designing such combinations. Such validated technologies enable an equivalent in vivo study to be completed in less than 30 minutes, with a transparent view into thousands of disease phenotype and biomarker assays. Using a cloud architecture of more than a thousand CPUs, such million-plus simulation studies can be scheduled and completed within a month. This number of studies generates enough assay data to make decisions on the shortlisted therapy candidates.
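For a sense of scale, the arithmetic below uses the 30-minute study time and thousand-CPU figures quoted above, together with the simplifying assumption of perfect parallelism, to estimate the wall-clock time for a million-study screen.

```python
# Rough wall-clock estimate for the screen. The per-study time and CPU count
# are from the text; perfect parallelism is an assumption made for simplicity.
studies           = 1_000_000
minutes_per_study = 30
cpus              = 1_000

total_cpu_hours = studies * minutes_per_study / 60    # 500,000 CPU-hours
wall_clock_days = total_cpu_hours / cpus / 24         # about 21 days

print(f"{total_cpu_hours:,.0f} CPU-hours, roughly {wall_clock_days:.0f} days wall-clock")
```

Under these assumptions the screen finishes in roughly three weeks, consistent with the within-a-month figure above.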
Automated engines that analyse this standardised assay data to design a novel therapy meeting criteria such as efficacy at sub-therapeutic dosages, synergy and PK/PD compatibility provide a development pathway for rational, design-based repurposing of drugs. This combination of benefits is very compelling: it reduces the risk of failure, lowers the cost of development and enables a ‘leap frogging’ of the current development process.
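A minimal sketch of such an analysis step is shown below, assuming hypothetical candidate records and placeholder thresholds; it simply filters standardised simulation output against efficacy, synergy, dosing and PK/PD criteria and rank orders what survives. The field names and values are not the actual criteria or data.

```python
# Illustrative filter-and-rank pass over standardised simulation output.
# Field names, thresholds and example records are placeholders.
candidates = [
    {"combo": ("drug_A", "drug_B"), "efficacy": 0.72, "synergy": 1.8,
     "dose_fraction": 0.4, "pkpd_compatible": True},
    {"combo": ("drug_C", "drug_D"), "efficacy": 0.55, "synergy": 0.9,
     "dose_fraction": 0.9, "pkpd_compatible": False},
]

def meets_criteria(c):
    return (c["efficacy"] >= 0.7           # efficacy threshold from target criteria
            and c["synergy"] > 1.0         # super-additive (synergistic) effect
            and c["dose_fraction"] <= 0.5  # sub-therapeutic dosing of components
            and c["pkpd_compatible"])      # compatible PK/PD profiles

shortlist = sorted((c for c in candidates if meets_criteria(c)),
                   key=lambda c: (c["efficacy"], c["synergy"]), reverse=True)
for c in shortlist:
    print(c["combo"], c["efficacy"], c["synergy"])
```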
Case study: demonstrating novel derisked development strategy
The predictive simulation-based approach and technology were validated on a drug development programme for the autoimmune disorder Rheumatoid Arthritis (RA). The therapeutic candidate, CWG952, has a novel biological mechanism of action and went from design to animal validation in less than nine months. The design process was initiated by first defining the target criteria, or cost function, that the designed therapeutic should meet.
In the context of Rheumatoid Arthritis, this is a therapeutic showing efficacy, measured using American College of Rheumatology (ACR) criteria, in TNF-responder and non-responder patients, based on parameters such as swollen joints, tender joints, pain, and interleukin and chemokine biomarkers.
A digital library of molecularly targeted drugs from different indications was used for the simulation-based in vivo screening. These drugs were screened individually and in combinations, translating to more than one million in vivo-equivalent simulation studies. A similar screening in conventional cell and animal systems, with such transparency, is practically impossible. Each of these studies was qualified against the target criteria and rank ordered.
The scientific objectives for this therapeutic were efficacy, measured using ACR70 criteria, at minimum sub-therapeutic dosages of the individual drugs in combination, demonstrating synergy. The business objectives concerned drug accessibility through partnerships or patent status. The candidate therapeutic lead went from simulation-based design studies to showing efficacy in the very first high-bar animal study (the murine collagen-induced arthritis model, the gold-standard model for RA).
The combination of predictive studies and the use of repurposed chemicals eliminated in vitro studies, lead optimisation and dose-ranging studies to identify the effective dose, dramatically reducing the time and cost of identifying a novel therapy de novo. The validation of efficacy in animals for a therapeutic candidate that is a combination of drugs approved for other indications sets the stage for IND-enabling activities based on referencing individual drug data via the 505(b)(2) pathway.
Constraints and challenges
While the clear benefits of therapeutic programmes based on combinations of repurposed drugs are very compelling, the perceived challenges relate to IP protection, commercialisation and the regulatory approval pathway. The successful commercialisation of programmes such as Qnexa from Vivus or Thalidomide from Celgene, among many others, demonstrates that the derisking advantages clearly outweigh the perceived challenges. Traditionally, composition-of-matter patents based on new chemical structures are the standard IP protection strategy used in the drug industry.
In the case of therapies based on a novel biological mechanism of action using combinations of repurposed drugs, IP fencing and barriers arise from multiple levels of non-obvious innovation. The primary IP fencing mechanisms are a novel method of use and composition of matter based on the combined formulation. These legal protections are further enhanced for such fixed-dose combination therapies if the doses used are sub-therapeutic and the individual drug components do not by themselves impact the new disease indication.
The perceived challenge for the commercialisation of repurposed drugs is the potential off-label use of individual drug components if they are available off-patent for the original indication. Although Pharma companies cannot legally promote or explicitly market off-label usage, this is a potential threat for single-drug repurposing. Thalidomide from Celgene is an example of a single drug repurposed with the original indication off-patent. Off-label use of the drug approved for its original indication of leprosy is prevented through prescriptions governed by the supporting System for Thalidomide Education and Prescribing Safety (STEPS) questionnaire.
Commercialisation is better managed for non-obvious combinations of sub-therapeutic products, as generic manufacturers cannot create a copycat product: such a drug would violate the method-of-use and composition patents. Drug manufacturers are also prohibited from marketing a ‘kit’ containing the individual drugs in the doses specified in the approved combination-therapy product but not approved for the original indication. This inability to market the drugs at arbitrary doses makes it impractical to assemble the novel combination therapy through off-label use of the individual drugs.
Additionally, any complex extended-release formulation change to any of the drugs within the combination therapy makes it practically impossible to have a non-Abbreviated New Drug Application product on the market.
The final perceived challenge is the regulatory approval pathway and the potential mandating of factorial-arm designs to prove that each of the drug components contributes to efficacy. The fact that the combination therapy is composed of drugs with human data facilitates a faster and lower-risk development path through the US FDA 505(b)(2) pathway. The requirement to check the impact of each of the individual drugs, and of combinations of them, can translate into a greater number of arms in Phase I and IIa trials.
This would be less of an issue for combinations of repurposed drugs whose components have never been implicated in the new indication, as the regulatory body, for ethical reasons, would not subject a patient population to a therapy proven irrelevant in the disease context. In any case, the trade-off between a few extra trial arms and greatly improved odds of success nullifies the challenge.
Conclusion
Given the high R&D costs and the low productivity and approval rates, drug development approaches need serious reconsideration and a change from a discovery-driven focus to rational design of programmes. It is well established that human physiology is very complex, with many biological unknowns, and new insights are generated daily thanks to financing for basic scientific research from bodies such as the NIH. In spite of this, and the large quantum of data in existence today, the information is disaggregated and not standardised.
The development of a validated disease physiology model that allows decision-making prior to committing to clinical development is a game-changer for an industry where the success rate is less than 5%. The ability to design novel therapeutic programmes based on the reuse of proven, safe drugs with human data from other indications provides a ‘leap frogged’ approach to innovative scientific and commercial development pathways. This approach makes the design of therapeutic programmes truly biology- or pathway-agnostic and not biased by implicit and explicit scientific preferences.
The CWG952 case study in Rheumatoid Arthritis shows how such an approach has been leveraged to design novel drug programmes reusing approved drugs from other indications, demonstrating efficacy comparable to the standard of care in animal models. The perceived IP and regulatory challenges of this approach compared to novel chemistry are diminished as repurposing-based success stories from Celgene, Pozen, Vivus and others are validated from a business perspective. DDW
Acknowledgement
Damian Doherty, Features Editor of Drug Discovery World, described the simulation-based therapy design approach as ‘leap frogging’, a term the authors have used and incorporated in this write-up.
—
This article originally featured in the DDW Summer 2012 Issue
—
Taher Abbasi is CEO of Cellworks Group. He has more than 20 years of experience in technology and in setting up and managing global operations. His specialisation is automation engineering in the context of semiconductor engineering, rationally designed therapeutics and online learning. He was part of the core team involved in establishing the Cellworks R&D operations and processes in Asia, global research collaborations and the strategy for implementing Cellworks’ proprietary technology automation infrastructure to emulate disease physiology computationally and design a therapeutics pipeline. He holds a BS in Electronics from the University of Bombay, an MS in Computer Engineering from California State University, Northridge, and an MBA from the University of California, Los Angeles and the National University of Singapore.
Pradeep Fernandes is President of Cellworks Group. He has more than 20 years of experience in semiconductor engineering and life sciences. He has applied engineering approaches and technologies to develop the mathematical solver engines and automation infrastructure for the Cellworks proprietary technology for emulating disease physiology computationally. Under his management, the computational technology engine met development milestones of supporting more than half a million interactions in 20 minutes or less. He managed the implementation of the pre-clinical validation for the therapeutic programmes and the US legal and finance functions. He holds an MS in Electrical Engineering and a BS in Computer Science.
Dr Shireen Vali is the Chief Scientific Officer of Cellworks Group. She has more than 15 years of experience in molecular and cellular biology research and development. She was instrumental in building the Cellworks R&D organisation spanning multiple disease indications, growing it from initiation to more than 100 people in six years. Under her management, the R&D organisation met development milestones for in vivo-equivalent technology representing disease physiology with complexity in the range of half a million cross-talk interactions and more than 30,000 biological players. She also oversaw the development of a pipeline of therapeutic programmes for Rheumatoid Arthritis and oncology. She has presented and co-authored many international publications, holds a PhD in Neuroscience from UC Davis and completed a fellowship at Stanford University.
Professor Gurkirpal Singh is Adjunct Clinical Professor of Medicine in the Division of Gastroenterology and Hepatology at Stanford University School of Medicine. He is renowned for his work on clinical epidemiology and outcomes in rheumatology, pain management, cardiology and gastroenterology and has conducted numerous large randomised clinical trials. Professor Singh has worked closely with the US Federal Government. In these roles he was invited for expert testimonies on drug safety issues by both the US Senate and the US House of Representatives, with ongoing work on such issues with the US Senate Finance Committee.