Hit to lead and lead to candidate optimisation using multi-parametric principles
As ever greater numbers of potential target proteins become available to pharmaceutical companies of all sizes, the drive towards more cost-effective drug discovery assumes greater significance.
It is a truism to state that the biggest bottlenecks in the drug discovery process are the hit-to-lead and lead-to-candidate optimisation phases. One philosophical approach to the lead optimisation phase that is gaining wider acceptance as a methodology for improving this process is that of parallel optimisation.
This article focuses on how to approach the issue of high throughput investigation of physical and physicochemical properties such that data can be acquired in real time alongside potency and selectivity.
The genomics and proteomics revolution of the last decade has given unprecedented access to new target proteins to pharmaceutical companies of all sizes, either through their own efforts or in collaboration with academics or specialist biotech companies. This fact, combined with technological advances in high throughput screening (HTS), rapid assay development and high throughput parallel synthesis and compound husbanding has meant that for each new protein target, a plethora of ‘hits’ can be reasonably expected from an HTS campaign.
Herein lies the problem, for the next phase of drug development is the optimisation of the properties of these ‘hits’ such that an exploratory development ‘candidate’ compound or compound series can be identified. This phase represents a multi-dimensional technical challenge that will prove to be resistant to simplification and automation. In short, the industry can identify ‘hits’ for many new targets with reasonable efficiency, but the next phase of lead optimisation represents the new bottleneck for efficient drug discovery (Figure 1).
The process of taking a screening ‘hit’ to a ‘lead’ to a ‘candidate’ depends to a large degree upon the definition of ‘hits’, ‘leads’ and ‘candidates’ in each research organisation, and while there is broad agreement on these definitions there are still some considerable differences – especially when one considers at which stage of the process a given result is apposite. For the purposes of this article, and using our in-house terminology, ‘hit’, ‘lead’ and ‘candidate’ are defined in Table 1.
We believe that candidate compounds evolve over time with various improvements in the indicated properties and we break down the classification of candidates into Bronze, Silver and Gold to reflect their relative value to an organisation. It should be noted that these definitions are by no means fixed but are variable from project to project and compound series to compound series. The figures in brackets are average timelines for progression from one stage to another and are again highly variable.
It is a truism to say that the industry has in the past been too focused on the optimisation of potency to the exclusion of other biological (selectivity), physical (solubility) and physicochemical (pharmacokinetic (PK)-related) properties. As a result, many compounds have been elevated to candidate status which, in retrospect, should not have been. Their consequent failure in pre-clinical or clinical studies raises the attrition rate of compounds progressing through the discovery pipeline, with obvious cost and time penalties.
The traditional project management of ‘hit’-to-‘lead’ and lead optimisation has been to perform the seductive optimisation of potency first, followed by an assessment of pharmacokinetic and toxicological parameters. Medicinal chemists are then caught in the trap of trying to optimise a feature such as absorption while maintaining high potency.
A typical problem is that of potency versus absorption: given that absorption can usually be improved by reducing molecular weight, but potency by increasing it, medicinal chemists left with a subnanomolar compound bearing all the features that guarantee non-absorption find themselves in a cul-de-sac. A typical project will see several promising series of compounds failing at these late hurdles, with the consequent requirement to develop a subsequent series. Such ‘serial’ lead optimisation can therefore be lacking in efficiency (Figure 2).
Consequently, an alternative strategy of simultaneously optimising all the properties necessary for the drug at the outset is a rational one. Hence, the now commonly accepted ‘best practice’ for lead optimisation is that of ‘parallel’ optimisation of potency, selectivity, physical property measurement and PK related properties (Figure 3).
We refer to this approach as ‘multi-parametric optimisation’. Such an approach places a more equal value on the data generated for all of the above properties, and its net result is compounds that are more rounded in their overall characteristics and more likely to pass the first pre-clinical and clinical investigations of their utility. The aim is to improve the efficiency of the lead optimisation process and to reduce the attrition rate downstream of this position.
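The idea of weighing several properties at once, rather than rescuing a single one, can be made concrete with a simple scoring sketch. The property names, target ranges and the geometric-mean combination below are hypothetical illustrations, not Argenta's actual criteria; real programmes would tune these per project and per series.

```python
def desirability(value, zero_at, full_at):
    """Linear ramp: 0 below zero_at, 1 above full_at (higher is better)."""
    if value >= full_at:
        return 1.0
    if value <= zero_at:
        return 0.0
    return (value - zero_at) / (full_at - zero_at)

def multi_param_score(compound, targets):
    """Geometric mean of per-property desirabilities, so one very poor
    property drags the whole score down - no single-parameter rescue."""
    score = 1.0
    for prop, (zero_at, full_at) in targets.items():
        score *= desirability(compound[prop], zero_at, full_at)
    return score ** (1.0 / len(targets))

# Hypothetical, illustrative target windows (zero credit, full credit):
targets = {
    "pIC50":            (5.0, 8.0),    # potency
    "selectivity_log":  (0.0, 2.0),    # log10 fold-selectivity
    "solubility_ug_ml": (1.0, 50.0),   # aqueous solubility
    "papp_1e6_cm_s":    (1.0, 10.0),   # permeability (Papp x 10^-6 cm/s)
}
```

On these assumed windows, a well-rounded compound outscores a subnanomolar one with poor solubility and permeability, which is exactly the ranking that serial, potency-first optimisation fails to produce.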
However, it should be stated that multi-parametric optimisation is not without its own issues and problems. Firstly, while synthesis, analysis and screening can be geared up to be considered ‘high throughput’, this is not the case in the measurement of some of the PK-related properties such as S9 microsomal assays and Cytochrome P450 induction or of physical chemistry measurements such as solubility. It is clearly important that such data can be sourced in synchrony with the other information acquired.
Secondly, with each compound in the lead optimisation process now generating many more data points from the different ‘assays’, many of which are duplicated, there is a real danger of information overload. There is a requirement for software packages that assimilate the data from this exercise and portray the information in easy to digest and preferably graphical formats. Given that lead optimisation is, by its very nature, an iterative process whereby such information is generated, assimilated, interpreted and then translated into further action then it is imperative that the process does not itself cause a bottleneck.
Each of the following properties associated with in vitro PK (absorption, metabolism, solubility) is now discussed with reference to its relative importance to the drug discovery process.
In recent times, the efforts of parallel synthesis chemists in producing large numbers of compounds have been tempered by the suggestions of physical chemists that the ‘drug-like’ nature (or not) of the compounds is of equal value. The work by Chris Lipinski (1) and the wide acceptance of the ‘rule of 5’ is testament to the importance of using appropriate design criteria for the production of such lead discovery libraries (2).
The rule of 5 (compounds should not:
a) exceed a molecular weight of 500
b) have a cLogP greater than 5
c) have more than 5 H-bond donors
d) have more than 10 H-bond acceptors)
is in large part a prediction that a compound satisfying its dictates will have a small enough polar surface area (PSA) to diffuse passively across a cell membrane and thus be effectively absorbed across the gut surface membranes and into the blood stream.
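The four criteria above translate directly into a simple computational filter. A minimal sketch follows; note that Lipinski's original formulation flags poor absorption or permeation as more likely only when two or more of the rules are broken.

```python
def rule_of_5_violations(mol_wt, clogp, h_donors, h_acceptors):
    """Count Lipinski 'rule of 5' violations for one compound."""
    violations = 0
    if mol_wt > 500:       # molecular weight over 500
        violations += 1
    if clogp > 5:          # cLogP over 5
        violations += 1
    if h_donors > 5:       # more than 5 H-bond donors
        violations += 1
    if h_acceptors > 10:   # more than 10 H-bond acceptors
        violations += 1
    return violations

def likely_poor_absorption(mol_wt, clogp, h_donors, h_acceptors):
    """Lipinski flags poor absorption/permeation as more likely
    when two or more rules are broken."""
    return rule_of_5_violations(mol_wt, clogp, h_donors, h_acceptors) >= 2
```

Such a filter is cheap enough to apply to every virtual member of a parallel-synthesis library before any compound is made.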
More recently, we and others (3) have been routinely predicting such behaviour using experimental models of absorption. At its simplest, the use of immobilised artificial membrane (IAM) and parallel artificial membrane permeability assay (PAMPA) systems can predict passive diffusion across cell membranes. Caco-2 cells, a human colon adenocarcinoma cell line, are a closer approximation of the in vivo situation.
Caco-2 cell monolayers have been used to examine compound absorption by modelling the epithelial cell layer barrier and absorption from the gut lumen to the blood stream. Typical studies determine transport in the apical to basolateral direction in vitro, although comparison with transport in the basolateral to apical direction can provide indications of the involvement of active drug efflux mechanisms, eg P-glycoprotein (Pgp) activity.
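The quantities typically derived from such a monolayer study can be sketched as follows. The standard relation is Papp = (dQ/dt)/(A·C0); the 2-fold efflux-ratio rule of thumb in the comment is a common working assumption, not a universal cutoff, and the function names here are illustrative.

```python
def apparent_permeability(flux_mol_per_s, area_cm2, c0_mol_per_ml):
    """Apparent permeability Papp (cm/s) = (dQ/dt) / (A * C0), where
    dQ/dt is the rate of compound appearance in the receiver chamber,
    A the monolayer area and C0 the initial donor concentration."""
    return flux_mol_per_s / (area_cm2 * c0_mol_per_ml)

def efflux_ratio(papp_b_to_a, papp_a_to_b):
    """Ratio of basolateral->apical to apical->basolateral Papp;
    ratios well above 1 (commonly > 2) suggest active efflux, eg Pgp."""
    return papp_b_to_a / papp_a_to_b
```

For example, a flux of 1e-12 mol/s across a 1 cm2 monolayer from a 100 uM (1e-7 mol/ml) donor gives a Papp of 1e-5 cm/s, a value generally consistent with good passive permeability.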
Bovine brain microvascular endothelial cells (BBMEC) can be used in an analogous manner to predict Blood Brain Barrier (BBB) penetration, as part of the optimisation of potential CNS drugs or in other projects where CNS penetration is deleterious.
Metabolic stability is an important factor affecting the progression of potential lead compounds. In vitro metabolism studies, using liver S9 or microsomal subcellular fractions, are widely used in the optimisation of compound selection. The relative stability of compounds can be measured as loss of parent compound by HPLC-MS analysis. With the addition of the appropriate enzyme cofactors, the phase II conjugation pathways of a drug, eg glucuronidation, can also be predicted.
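Loss of parent compound in such incubations is commonly summarised as an in vitro half-life, obtained by fitting ln(% parent remaining) against time and taking t1/2 = ln 2 / k. A minimal sketch of that calculation, assuming first-order (log-linear) disappearance:

```python
import math

def in_vitro_half_life(times_min, pct_remaining):
    """Fit ln(% parent remaining) vs time by least squares and return
    the in vitro half-life (min) as ln(2) / elimination rate constant.
    Assumes first-order loss of parent compound."""
    ln_y = [math.log(p) for p in pct_remaining]
    n = len(times_min)
    t_mean = sum(times_min) / n
    y_mean = sum(ln_y) / n
    slope = (sum((t - t_mean) * (y - y_mean)
                 for t, y in zip(times_min, ln_y))
             / sum((t - t_mean) ** 2 for t in times_min))
    return math.log(2) / -slope
```

Ranking compounds by this half-life across a plate of microsomal incubations gives the relative-stability readout described above, at a throughput compatible with parallel synthesis.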
Cytochrome P450 enzyme induction and inhibition
The cytochrome P450-dependent group of enzymes are the principal enzymes involved in the oxidative metabolism of drugs. In response to certain drugs, CYP450 enzyme levels, particularly in the liver, may increase (be induced) several-fold to cope with the increased insult to the tissues. P450 enzyme induction is thus an early marker of the possible toxic side-effects of a drug. P450 expression can be monitored by determining protein mass, enzyme activity or message levels. Argenta has developed a high throughput method to evaluate P450 induction by drugs as part of its drug optimisation process.
Inhibition of the cytochrome P450 enzymes can also lead to clinical drug-drug interactions. P450 enzyme inhibition can be assessed by performing in vitro inhibition studies using recombinant P450 enzymes and high throughput 96-well plate fluorometric assays. Among the companies that offer such metabolism studies as a service are Cerep SA (France) and Cyprotex (UK). Lion Biosciences (Germany), which acquired Trega’s Navicyte capability, also offers such assays in 96-well kit form. Alternatively, inhibition of individual enzymes in liver fractions can be monitored by the use of selective substrates, the metabolism of which can be followed by LC-MS analysis.
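The plate-based fluorometric readout reduces to two calculations: percent inhibition of each well relative to an uninhibited control and a no-enzyme blank, and an IC50 estimated from the resulting concentration series. The log-linear interpolation below is a simple sketch of the latter; production analyses would typically fit a full sigmoidal curve instead.

```python
import math

def percent_inhibition(signal, control, blank):
    """Percent inhibition from raw fluorescence: 0% at the uninhibited
    control signal, 100% when signal falls to the no-enzyme blank."""
    return 100.0 * (1.0 - (signal - blank) / (control - blank))

def ic50_by_interpolation(concs_um, inhibitions):
    """Estimate IC50 by log-linear interpolation between the two
    concentrations bracketing 50% inhibition.  Expects concentrations
    in ascending order; returns None if 50% is never reached."""
    for i in range(1, len(concs_um)):
        if inhibitions[i] >= 50.0:
            lo_c, hi_c = concs_um[i - 1], concs_um[i]
            lo_i, hi_i = inhibitions[i - 1], inhibitions[i]
            frac = (50.0 - lo_i) / (hi_i - lo_i)
            return 10 ** (math.log10(lo_c)
                          + frac * (math.log10(hi_c) - math.log10(lo_c)))
    return None
```

Running every compound against a panel of recombinant P450 isoforms in this way flags likely drug-drug interaction liabilities while the chemistry is still malleable.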
While not strictly an in vitro pharmacokinetic parameter, the aqueous solubility of a compound is possibly the most underestimated and yet most critical property that a potential lead compound possesses. To measure aqueous solubility in a high throughput manner, it is only practical to use turbidimetric methods to measure kinetic solubility (1) utilising robotic technologies. In this way, it is possible to determine aqueous solubilities with reasonable accuracy and at a throughput comparable with the above early ADMET screens. It is suggested that an aqueous solubility of at least 1 mg/ml is required for an orally dosed candidate.
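The turbidimetric readout can be sketched as a simple scan: the compound is titrated into aqueous buffer at ascending concentrations and the kinetic solubility is reported as the highest concentration at which light scattering stays at the blank baseline, ie before precipitation. The cutoff value below is an illustrative assumption; real assays calibrate it against the instrument's blank variability.

```python
def kinetic_solubility(concs_ug_ml, turbidity, baseline, cutoff=0.05):
    """Turbidimetric kinetic solubility estimate: scan ascending
    concentrations and return the highest one at which turbidity stays
    within `cutoff` of the blank baseline (no precipitation detected).
    Returns None if even the lowest concentration precipitates."""
    soluble_at = None
    for c, t in zip(concs_ug_ml, turbidity):
        if t - baseline > cutoff:
            break  # precipitation detected; stop scanning
        soluble_at = c
    return soluble_at
```

Because each well needs only an absorbance or nephelometric read, this scales to full plates on standard robotics, which is what makes solubility measurable in synchrony with the potency and permeability data above.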
In summary, the concept of multi-parametric optimisation is now being embraced throughout the pharmaceutical and biotechnology industries and it is envisaged that this approach will yield positive results by increasing the speed of candidate discovery while reducing the attrition rate. DDW
This article originally featured in the DDW Winter 2001 Issue
Dr Anthony Baxter is CEO of Argenta Discovery Ltd and co-founded the company in 2000. Before Argenta, Dr Baxter was Chief Scientific Officer at Oxford Asymmetry International Ltd and prior to that worked in ‘big pharma’ (Ciba and Glaxo) as a research manager and medicinal chemist.
Dr Peter Lockey is Director of Biochemistry at Argenta. Prior to Argenta, Dr Lockey was head of the high throughput screening group at Aventis UK, and has more than 14 years’ experience in the pharmaceutical industry. He holds degrees in biochemistry and medicinal chemistry in addition to a PhD which focused on drug targeting in cancer chemotherapy.
1 Lipinski, CA, Lombardo, F, Dominy, BW, Feeney, PJ. Adv. Drug Delivery Rev. 2001, 46, 1.
2 Baxter, AD. Current Opinion Chem. Biol. 1997, 1, 79.
3 Balimane, PV, Chong, S, Morrison, RA. J. Pharmacol. Toxicol. Methods 2001, 44, 301-312.