Biologically relevant chemistry
By Dr Stephen A. Hill
Spring 2001

A large proportion of R&D expenditure relates to the investigation of compounds which ultimately fail to reach the market as useful medicines. We discuss why early drug discovery, using high throughput, automated in vitro and in silico methods, rather than clinical development, must become the primary testing ground for novel compounds.

In the context of the pharmaceutical industry, biologically relevant chemistry ultimately means chemistry that has been optimised to generate medicines. A successful prescription medicine is a biologically relevant chemical structure by virtue of specific features. Those features include a clearly defined risk-benefit profile, which is acceptable to patients, healthcare providers and healthcare purchasers, as well as a carefully controlled and predictable process for manufacturing the medicine.

To take a compound from clinical development stage on to the market requires that regulatory authorities such as the Food and Drug Administration (FDA) and the European Medicines Evaluation Agency (EMEA) approve a New Drug Application (NDA), based on safety, efficacy and manufacturing controls. Chemistry underpins all of this. By and large, efficacy in humans is determined by binding of the chemical structure (ie the medicine) to the biological target site of interest. For example, Tagamet® and Zantac® prevent and treat ulcers by blocking H2 receptors in the stomach. Sufficient potency at the site of action is a pre-requisite for effect. In turn, chemical structure is a prime determinant of the strength of binding site interaction and thus potency. Unfortunately, many chemical structures bind not only to the intended site of action, but also to other biological targets which are unrelated to the disease being treated.

This non-selectivity can result in serious toxicity and unwanted side-effects. Increased specificity for the intended binding site is therefore a desired feature of biologically relevant chemistry as exemplified by the improved side-effect profiles of COX-2 inhibitors over less selective non-steroidal anti-inflammatory drugs (NSAIDs) in inflammatory conditions. While much of the rational design approach to early stage drug discovery has focused on optimising compounds for their potency and selectivity at a desired biological target, these two properties alone are insufficient to guarantee a safe and effective medicine. For a small molecule drug to be safe and effective, it must be absorbed and distributed to its site of action in the body following oral administration. It must then be metabolised and eliminated in a way which is not harmful and does not lead to accumulation of toxic metabolites or parent drug over time. Biologically relevant chemistry is therefore chemistry which creates potent, selective compounds with appropriate ADMET (absorption, distribution, metabolism, elimination, toxicity) characteristics in human beings.

It is worth noting that while proteins, which are administered primarily by injection, comprise a significant proportion of therapeutic compounds, the preference of both patients and healthcare providers, from the perspective of cost, convenience and compliance, is for once-a-day, orally dosed medicines.

Finally, in terms of defining biologically relevant chemistry, we will eventually need to consider individuals as opposed to diverse population groups. The final decision on whether a medicine has an acceptable risk-benefit profile is made at the level of the individual patient. For the individual patient, the average level of safety and efficacy of a medicine in a population is irrelevant; the individual wants to know what risk accrues to them for a given level of efficacy. Biologically relevant chemistry must therefore become increasingly focused on events at the individual level. Pharmacogenomics and ADMET characterisation (especially in terms of drug metabolism) have the potential to allow chemical design to be focused on effects in ever smaller subsets of the potential patient population – perhaps ultimately even at the individual level. This fragmentation of markets is inevitable, and chemical design today should be undertaken with that future in mind.

An integrated process
Compound design and optimisation is now the major bottleneck between the targets derived from the genomics revolution and clinical development. The goal is to bridge that gap by integrating technologies in three areas: intelligent design of compounds, high throughput automated chemistry for the synthesis of compounds, and a highly integrated, parallel process for drug discovery.

Intelligent design
There has long been debate over the relative merits of intelligent design of molecules as drug candidates versus ‘blind chance’ through ultra high throughput screening of ever increasing numbers of compounds. Given the scarcity of reported successes from large-scale, blind screening campaigns, I take the view that the role of intelligent design will continue to increase in importance in the future. The number of small molecules which could be made using those elements of the periodic table which typically make up medicines has been estimated at 10²⁰⁰. There has been neither the elapsed time nor the raw material available in the whole universe to make even a single copy of each of these potential compounds. We are unlikely, therefore, to see the day when we can make a quantitatively representative sample of the potential small molecules of ‘chemistry space’.

If drug discovery is truly that – ie the discovery of a biologically relevant molecule as opposed to the design of a biologically relevant molecule – then ever larger libraries and ultra high throughput screening present the risk of building an ever bigger haystack in which to search for a needle, without even the confidence that the needle occupies that particular haystack. In those situations where little or no information about a target binding site is available, a qualitatively representative sample of chemistry space is needed. We need to create the maximum diversity in the minimum number of compounds. Moreover, each new library that we make for this purpose should cover a different part of chemistry space.
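As an illustration of the kind of selection involved, the short Python sketch below shows a simple greedy max-min picker that chooses a small, structurally diverse subset from a larger candidate pool. It assumes each compound has already been encoded as a binary fingerprint; the fingerprints, the Tanimoto similarity measure and the toy pool are illustrative placeholders, not a description of any particular library design system.

# Illustrative sketch: greedy max-min selection of a structurally diverse
# subset from a candidate pool, assuming each compound is already encoded
# as a binary fingerprint (held here as a set of "on" bit positions).
# The fingerprints below are toy data; in practice they would come from a
# cheminformatics toolkit.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints held as sets of bits."""
    if not fp_a and not fp_b:
        return 1.0
    common = len(fp_a & fp_b)
    return common / (len(fp_a) + len(fp_b) - common)

def max_min_pick(fingerprints, n_picks):
    """Greedily pick n_picks compounds, each time taking the candidate whose
    nearest already-picked neighbour is least similar (max-min picking)."""
    picked = [0]  # seed with the first compound (choice of seed is arbitrary)
    while len(picked) < min(n_picks, len(fingerprints)):
        best_idx, best_score = None, -1.0
        for i, fp in enumerate(fingerprints):
            if i in picked:
                continue
            # Similarity to the closest compound already in the picked set
            nearest = max(tanimoto(fp, fingerprints[j]) for j in picked)
            score = 1.0 - nearest  # larger score = more dissimilar
            if score > best_score:
                best_idx, best_score = i, score
        picked.append(best_idx)
    return picked

# Toy pool of five "compounds" with hypothetical fingerprint bits
pool = [{1, 2, 3}, {1, 2, 4}, {7, 8, 9}, {2, 3, 4}, {10, 11}]
print(max_min_pick(pool, 3))  # indices of three mutually dissimilar compounds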

As more information becomes available regarding the three-dimensional binding site structures of important targets, library design must shift away from maximum diversity, which relies on chance interactions, in favour of compounds designed for their ability to bind uniquely and tightly to the desired target. This rational design has been a feature of modern drug discovery for many years and has resulted in such breakthroughs as the protease inhibitors for the treatment of HIV. Advances in computer performance and the development of new modelling tools are increasing the potential for in silico design and should result in improved efficiency of rational design for potency and selectivity.

Ongoing collaborations aim to exploit novel in silico approaches to de novo chemotype (ie core scaffold structure) design. Using limited Structure Activity Relationship (SAR) data derived from compound screening, or known binding site/ligand structure data, the aim is to co-develop in silico technology capable of defining novel chemotypes. Linked into combinatorial library design and synthesis capabilities, this will permit the creation of multiple lead series of compounds, providing the often desired ‘back-up’ compounds in the event the lead possesses an unanticipated liability. Such ‘chemotype hopping’ is used to design analogues primarily according to potency and selectivity. We recognise, however, that tight binding affinity and high selectivity alone are not sufficient to define a medicine. Indeed, the majority of failures during clinical development relate to metabolic liabilities (Figure 1). Reducing the failure rate due to those deficiencies provides a major opportunity both to improve the efficiency of pharmaceutical R&D and to create medicines of higher overall quality. Our recent merger with Camitro provides us with the technology to apply predictive ADMET modelling in the earliest stages of compound library design. Using Camitro’s in silico technology, we are now able to correlate chemical structure with likely ADMET profiles in humans and focus our compound design, synthesis and screening on only those compounds with the greatest likelihood of success in clinical development. Moreover, when ongoing discovery programmes encounter insurmountable barriers to optimisation, Camitro’s predictive ADMET capability combined with Nanodesign’s chemotype shifting technology has the potential to redirect synthetic efforts down a more productive path.
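By way of crude illustration only (the predictive models referred to above are proprietary and considerably more sophisticated), a rule-of-five-style developability pre-filter applied to a virtual library before synthesis might look like the following Python sketch. The descriptor values are assumed to be supplied by a descriptor calculator and the compound records are hypothetical.

# Crude illustration only: a rule-of-five-style developability pre-filter
# applied to virtual compounds before any synthesis effort is committed.
# Real predictive ADMET models are far more sophisticated; the descriptor
# values below are assumed inputs, not calculated in this snippet.

def passes_prefilter(descriptors):
    """Return True if a virtual compound clears simple oral drug-likeness
    cut-offs (after Lipinski's 'rule of five')."""
    return (descriptors["mol_weight"] <= 500
            and descriptors["logP"] <= 5
            and descriptors["h_bond_donors"] <= 5
            and descriptors["h_bond_acceptors"] <= 10)

# Hypothetical virtual library with pre-computed descriptors
virtual_library = [
    {"id": "cmpd-001", "mol_weight": 342, "logP": 2.1,
     "h_bond_donors": 2, "h_bond_acceptors": 5},
    {"id": "cmpd-002", "mol_weight": 611, "logP": 6.3,
     "h_bond_donors": 4, "h_bond_acceptors": 9},
]

to_synthesise = [c["id"] for c in virtual_library if passes_prefilter(c)]
print(to_synthesise)  # only the compounds worth making and screening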

High throughput automated chemistry
Intelligent design as described above holds great potential for increased efficiencies in drug discovery. It will not, however, translate into real productivity gains without efficient methods for the production of real compounds in sufficient quantity and purity to facilitate experimental confirmation of predicted biological profiles. Even the best in silico design is unlikely to deliver the exact chemical structure that will end up in animal and human studies. Before the chemical structure is ‘locked in’ prior to animal Good Laboratory Practice (GLP) toxicology studies as a prelude to Investigational New Drug (IND) filing and human phase I studies, real compounds will have to be made, subtle differences among real analogues explored, and a number of iterative production cycles for novel analogues undertaken. A return to the days of traditional medicinal chemistry, whereby a single chemist would spend two weeks and thousands of dollars making a single compound, cannot be entertained. We need the power of high throughput, automated chemistry to deliver an appropriate number of analogues, to a specific design, in milligramme quantities, in excess of 90% purity, and as discrete compounds rather than mixtures. ArQule’s Automated Molecular Assembly Plant (AMAP™) chemistry operating system has developed to the point where we have implemented approximately 200 chemical transformations which allow us to approach the ‘real’ synthesis of most virtually-designed libraries in a highly efficient manner (Figure 2).

Parallel processes for drug discovery
The real driver for the future of discovery productivity is not individual tools or technologies, but the opportunity to integrate the best of what is currently available into an efficient and effective process. Increasingly, the process of drug discovery must become a parallel process whereby the multiple biological parameters which differentiate a safe and effective medicine from a failed compound are assessed at each and every stage of the discovery programme. This should probably start as early as the point of target selection and validation. There is an argument that a time-consuming effort at validating a particular target via animal ‘knockout’ models is wasted if, subsequently, no small molecule agonist/antagonist can be identified for that target. An alternative approach would be to screen unvalidated targets and only progress those for which a small molecule agonist/antagonist can be readily identified. That same molecule could then be used not only to validate the target in biological systems before further investments are made, but also to serve as the lead compound for subsequent optimisation. Such ‘hit-based’ target validation offers the additional advantage of demonstrating the chemical ‘tractability’ of the target early in the discovery process.

The next opportunity to apply parallel processes also relates to target selection. The potential for significant increases in the number of disease targets derived from genomics means a greater choice of important targets will become available for drug discovery. Our aim is to make smaller screening libraries with maximal structural diversity. By focusing the libraries on the minimum number of compounds that are still structurally diverse, it then becomes cost effective to screen against larger numbers of targets. It also becomes feasible to generate in vitro ADMET screening data simultaneously with that for potency and selectivity. The choice of which leads to follow up then depends on finding a ‘high quality’ hit on a multi-parametric basis, not just identifying the most potent compound. Only when the combination of biological target and molecule delivers a hit of sufficient quality on a multi-parameter basis would a lead optimisation programme be initiated. Obviously, by incorporating in silico modelling of ADMET properties in the design of the original libraries, the frequency of ‘high quality’ hits resulting from those libraries will be enhanced.
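A minimal sketch of what such a multi-parametric hit filter might look like is given below in Python; the parameter names, thresholds and screening records are illustrative assumptions, not the read-outs of any particular screening platform.

# Minimal sketch of a multi-parameter hit filter: a hit is only progressed to
# lead optimisation when potency, selectivity and early in vitro ADMET
# read-outs are all acceptable. Field names and thresholds are illustrative.

def is_quality_hit(hit):
    return (hit["ic50_nM"] <= 500                    # adequate potency at the target
            and hit["selectivity_fold"] >= 30        # margin over off-target activity
            and hit["microsomal_t_half_min"] >= 20   # not rapidly metabolised
            and hit["solubility_uM"] >= 10           # sufficient aqueous solubility
            and not hit["cyp450_flag"])              # no cytochrome P450 liability

# Hypothetical screening results for two hits against the same target
screen_results = [
    {"id": "hit-A", "ic50_nM": 120, "selectivity_fold": 85,
     "microsomal_t_half_min": 45, "solubility_uM": 40, "cyp450_flag": False},
    {"id": "hit-B", "ic50_nM": 15, "selectivity_fold": 200,
     "microsomal_t_half_min": 6, "solubility_uM": 2, "cyp450_flag": True},
]

leads = [h["id"] for h in screen_results if is_quality_hit(h)]
print(leads)  # hit-B is the most potent but fails on ADMET grounds; only hit-A progresses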

Using this initial filtering to identify high quality hits, it is also feasible to use the de novo chemotype design capabilities, described earlier, to identify one or more back-up series of compounds in the event the lead series proves refractory to optimisation. At every subsequent step in the lead optimisation process, it should be possible to use a combination of predictive modelling, experimental profiling and rapid iterative compound synthesis to focus on the analogues most likely to be suitable for animal toxicology and subsequent human Phase I studies. No longer should we spend many months identifying the most potent compound in a series, many further months identifying the most selective compound, only to find out after this effort that the compound series has, for example, insurmountable cytochrome P450 interactions. This sequential process with late failures must be replaced by a multi-parameter filter at every stage of the discovery process – this is the fundamental approach to parallel drug discovery.

Conclusion and thoughts on the future
The foregoing discussion and consideration of how to evolve the current drug discovery paradigm gives realistic hope for the future of the pharmaceutical industry. Society will continue to reward innovation in the form of novel, cost-effective medicines with appropriate risk-benefit profiles for individual patients. Changes to our current processes are, however, a prerequisite for a successful future. Business as usual in terms of drug discovery productivity will not suffice. Nor will slow, incremental improvements get us to where we, as an industry, need to be. Total pharmaceutical industry spending on research and development has more than trebled over the last decade and is now estimated to exceed $50 billion per annum. Over that same period there has been no more than a 20% increase in the number of compounds reaching Investigational New Drug (IND) status and, at most, a 35-40% increase in the number of New Drug Applications (NDAs) submitted and approved by the FDA (representing compound annual growth rates of only 2% and 4% respectively).
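As a back-of-envelope check, the growth-rate comparison above reduces to simple compound-interest arithmetic; the short Python sketch below illustrates the calculation, taking the 10-year period and the 20% and 40% cumulative increases from the figures quoted above.

# Back-of-envelope check of the compound annual growth arithmetic quoted
# above: a 20% rise over a decade corresponds to roughly 2% a year, and a
# 35-40% rise to roughly 3-4% a year.

def cagr(total_growth, years):
    """Compound annual growth rate implied by a total fractional increase."""
    return (1.0 + total_growth) ** (1.0 / years) - 1.0

print(f"{cagr(0.20, 10):.1%}")  # ~1.8% a year for IND candidates
print(f"{cagr(0.40, 10):.1%}")  # ~3.4% a year for NDAs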

To generate a single approved medicine is now estimated to cost in excess of $600 million. Of this, as much as one-third is spent on discovery activities leading to phase I studies in humans. Most of this expenditure relates to the investigation of compounds which ultimately fail to reach the market as useful medicines. Our goal must be to identify the causes of those failures and focus a greater proportion of our efforts and finances on those compounds with the greatest potential to succeed. Moreover, we must identify those causes of failure much earlier in the R&D process – not at the end of phase III studies in humans, when most of the resource and effort has already been spent. Early drug discovery – using high throughput, automated in vitro and in silico methods – must become the primary testing ground for novel compounds, not clinical development. We must aim to integrate predictive and experimental ADMET technologies, rational design for potency and selectivity, pharmacogenomics for patient selection, and highly efficient chemistry with which to make the compounds of interest. If we are successful, we may look forward to the time when clinical trials in human subjects will simply confirm a highly predictive risk-benefit profile derived from pre-clinical experiments. Clinical trial programmes will therefore become less risky, attrition rates will fall, higher quality drugs will result, and resources will be freed up to focus on greater innovation throughout the drug discovery process.

The industry must in the meantime ask how best to create the working environment most conducive to innovative drug discovery, as this is where the greatest long-term value will be created. Will this be in the large, multi-merged pharmaceutical companies, the emerging biotechnology companies, or smaller independent technology development and lead discovery organisations? One thing is clear – those companies that develop a more efficient process for discovering high quality IND candidates, with low clinical attrition rates, leading to medicines for unmet medical needs, will thrive in the future. Biologically relevant chemistry will be at the core of that success.

Stephen Hill, BM BCh, MA, FRCS joined ArQule as President and CEO in April 1999. Prior to joining ArQule, Dr Hill served as the Head of Global Drug Development at F Hoffmann-La Roche Ltd. He joined Roche in 1989 as Medical Adviser to Roche Products in the United Kingdom. He held several senior positions there, including that of Medical Director, with responsibility for clinical trials of compounds across a broad range of therapeutic areas. Subsequently, he served as Head of International Drug Regulatory Affairs at Roche headquarters in Switzerland, where he led the regulatory submissions for seven major new chemical entities globally. He was also a member of Roche’s Portfolio Management, Research, Development and Pharmaceutical Division Executive Boards. Prior to Roche, Dr Hill served for seven years with the National Health Service in the United Kingdom, in General and Orthopaedic Surgery. Dr Hill is a Fellow of the Royal College of Surgeons of England, and holds his scientific and medical degrees from St Catherine’s College at Oxford University.