Overcoming Bottlenecks in Drug Discovery
Developing a new drug is an expensive and time-consuming process. The best current estimate puts the cost of developing a single new drug at close to $1 billion (1).
After a drug candidate has entered the clinical trial process, it takes, on average, 5.1 years for it to gain regulatory approval and widespread reimbursement (2). And it can take much longer just to reach that stage, through a process of generating lead compounds, performing cycles of lead optimisation, profiling pharmacokinetics and carrying out toxicity studies. It may not be possible to speed up clinical trials, but bottlenecks in lead generation and optimisation can be tackled.
When it first came on the scene 20 years ago, High-Throughput Screening (HTS) was expected to revolutionise drug discovery. To some extent it did, and for many Big Pharma companies it still plays a major role in identifying leads against drug targets.
In principle HTS can be used to test a large number of compounds very rapidly for their potential activity against a known drug target. Automation of key processes using techniques such as robotic liquid handling, detection, data processing and modelling has improved efficiency – if this is simply measured in terms of the number of datapoints acquired per person-hour deployed.
But gathering this data is only the start of the drug development journey: chemical modification of each potential lead is then usually performed as part of the process of lead optimisation to increase the potency and selectivity of the chosen compound, and to obtain derivatives with more favourable pharmacokinetic properties and toxicity profiles. Most experienced drug discoverers agree that the quality of leads is far more important than the number of leads that are available.
Problems with HTS
HTS is a useful tool, but over-reliance on it does lead to bottlenecks in drug discovery: it is expensive to set up, expensive to run, and it can take a great deal of time and effort to generate hits with any potential.
Drug discovery has changed markedly in the last 10 years. Today, financial constraints are greater because of the global recession, the patents on many blockbuster drugs are nearing expiry and reimbursement budgets are tighter. Even the large pharmaceutical companies that have historically carried out most drug research cannot sustain as many HTS facilities as they would like.
Smaller organisations, including academic labs, research institutions, spin-outs and small start-ups, find it impossible to invest in their own HTS systems, so they outsource the generation of ‘hits’ as starting points for drug research. Many research groups in small and large companies, and in academia, are also questioning the mass-screening approach and are using lateral thinking to devise intelligent strategies for generating new leads.
“Brute force HTS seems to have little value in pursuing more difficult targets. Very large libraries seem to contain very large numbers of non-druglike compounds and screening such libraries wastes a great deal of time and resources. I’m much more in favour of fragment-based screening and virtual screening of carefully selected libraries,” stresses David Selwood (University College London, UK).
Post-genomic expansion of target choice
It seems ironic, but one of the major challenges facing drug discovery is the explosion in the number of potential drug targets identified in the post-genomic era. The traditional drug development process of 30 years ago relied heavily on testing a small number of compounds in pharmacology models. This approach enabled many major drugs to be developed, but it was slow and could not easily be scaled up.
In the early 2000s, the Human Genome Project sequenced 22,000 human genes, and 1,600 of these have known disease associations. In the last decade, 3,000 expressed proteins with druggable domains have been described, and over a third of these are relevant to specific diseases. Despite this, only 266 have been exploited commercially as drug targets, leaving more than 80% as an untapped resource.
Another recent explosion in the number of therapeutic targets available for drug discovery has come from improved understanding of the mass of protein interactions that regulate many cellular processes. Many of these protein interactions will be druggable and, as we discover more about the protein structures involved, we can start to fully exploit such targets. There is no doubt that a large and growing number of high-quality screening assays are available [for a detailed review, see Colas, 2008 (3)], which should create more interest in building drug discovery pipelines that include protein-protein interaction (PPI) inhibitors.
Too many needles in the haystack
HTS can be likened to searching for a needle in a haystack – but using a very expensive and sophisticated bulldozer. HTS screens an enormous, random library of compounds to generate hits that are then further tested to find the most ‘drug-like’. Hundreds of thousands, or even millions, of compounds must be collected before HTS can begin, creating a huge database of potential screening compounds. Equipment such as assay platforms and automated machinery must also be purchased, set up and calibrated.
Preparing for HTS is often a forgotten time-sink. It can take three to six months to establish a robust assay suitable for HTS and to obtain enough high-quality protein or cells to screen the whole compound collection. Eventually, once the equipment is up and running, somewhere between 10,000 and 100,000 compounds a day can pass through an HTS system, but this may need to be repeated for weeks to screen the millions of compounds needed to generate the required number of hits.
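The scale of this commitment is easy to underestimate, and a back-of-the-envelope calculation makes the point. The sketch below uses the throughput figures quoted above; the two-million-compound library is an assumed, illustrative size rather than a figure from any particular facility.

```python
# Rough HTS campaign timing, using the daily throughput range quoted
# in the text. Library size is illustrative, not from a real facility.

def campaign_days(library_size: int, compounds_per_day: int) -> float:
    """Days of screening needed to pass a library through HTS once."""
    return library_size / compounds_per_day

for rate in (10_000, 100_000):
    days = campaign_days(2_000_000, rate)
    print(f"2M compounds at {rate:,}/day: {days:.0f} screening days")
```

Even at the upper end of that range a single pass takes weeks of uninterrupted operation; at the lower end it takes most of a year.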
“High throughput techniques can certainly have an impact,” comments Selwood, but he warns that a common problem with high-throughput assays has been loss of quality. “We can’t just sacrifice quality because assays may be cheap and quick; the quality of high-throughput data has to improve,” he says.
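The sources here do not name a specific quality metric, but in practice the assay quality Selwood refers to is most often quantified with the Z′ factor (Zhang et al., 1999), computed from the positive and negative control wells on each plate; values of 0.5 or above are conventionally taken to indicate an assay robust enough to screen. A minimal sketch, with illustrative control readouts:

```python
import statistics

def z_prime(pos: list[float], neg: list[float]) -> float:
    """Z' factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    A standard measure of HTS assay quality; >= 0.5 is considered robust."""
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Illustrative control-well signals from one plate (arbitrary units)
print(f"{z_prime([98, 102, 101, 99], [9, 11, 10, 12]):.2f}")  # ~0.90
```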
Advances in compound screening
Screening technology is advancing, and it is being combined with more sophisticated assays and screening methods.
Screening in miniature
One of the most striking developments over the last five years has been the miniaturisation of screening systems. This results in higher capacity and faster operation, and reduces the amount of biological material needed for each assay. “Traditional screening set-ups had 96-well formats, but this has gone to 384, and even to 1536 and beyond in some systems,” notes Jeroen Kool, from the Department of Chemistry and Pharmaceutical Sciences at the VU University Amsterdam, The Netherlands.
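The arithmetic behind that progression shows why it matters: each format step doubles the rows and columns of the plate, so a given library occupies far fewer plates and consumes far less reagent. The per-well volumes in this sketch are rough, typical figures used purely for illustration.

```python
# Reagent-saving arithmetic behind plate miniaturisation. Plate sizes
# are the standard formats; per-well volumes are rough typical values.

formats = {96: 100, 384: 25, 1536: 5}  # wells per plate: assay volume (uL)

library = 1_000_000  # illustrative library size
for wells, vol_ul in formats.items():
    plates = -(-library // wells)          # ceiling division
    litres = library * vol_ul / 1_000_000  # total assay volume in litres
    print(f"{wells}-well: {plates:,} plates, ~{litres:.0f} L of assay mix")
```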
This development is apparent for the more straightforward assay formats, but also for cellular/high-content screening (HCS) formats. “Just entering the HTS arena now are many different cellular HTS assays that are based on confocal microscope imaging. Other systems incorporate flow cytometric technologies (4). These enable real biological responses to be incorporated in decision-making much earlier in the hit-to-lead development stage,” he explains.
In some fields of drug development, HTS of cellular ligand-mediated effects can now be implemented efficiently. In ion channel drug discovery, for example, the ‘gold standard’ methods have been elaborate and slow patch-clamping techniques. “A major advance is that automated patch-clamping assays are now being used in HTS formats – such as 96-well plate-based automated patch-clamping assays – and make discovery of compounds that interact with ion channels more efficient,” says Kool.
Acoustic pipetting techniques for library transfers can make this process faster, more accurate and convenient. Technologies such as microfluidics and microarray printing are now also starting to be incorporated into drug discovery workflows. Although still in development, some are in place as auxiliary techniques alongside mainstream screening pipelines. As miniaturisation progresses they are expected to be given a more prominent position in screening.
In the future this trend should allow more elaborate screening, as pipetting, reagent and target costs go down while screening speed increases. “It should become possible for multiple parallel processes (assays) to be analysed in the same amount of time as one screening assay is now conducted,” predicts Kool.
New assay formats
Today, many potential drug targets and auxiliary proteins can be expressed in a variety of cellular systems, creating diverse functional assay formats. Different fluorescent and/or luminescent proteins can be incorporated in cells to create functional assay readouts at different points in signal transduction cascades. This is true even in high-content formats, where multiple readouts are measured. “More assay formats, such as TR-FRET or FRET-based assays, can be recognised in which multiple binding partners create a signal as they bind, because of a functional response of the ligand,” says Kool.
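In such formats the readout is typically ratiometric: acceptor emission divided by donor emission, a ratio that rises only when the two labelled partners come together. A minimal sketch, in which the wavelengths and plate-reader counts are illustrative rather than taken from any particular instrument:

```python
# Sketch of a ratiometric (TR-)FRET readout: acceptor emission only
# appears when the two labelled binding partners are close together.

def fret_ratio(acceptor_signal: float, donor_signal: float) -> float:
    """Acceptor/donor emission ratio; rises as the two partners bind."""
    return acceptor_signal / donor_signal

# Illustrative counts (e.g., 665 nm acceptor over 620 nm donor emission)
bound     = fret_ratio(acceptor_signal=5200, donor_signal=8000)  # partners interact
inhibited = fret_ratio(acceptor_signal=900, donor_signal=8200)   # binding blocked
print(f"bound: {bound:.2f}, inhibited: {inhibited:.2f}")
```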
Assay formats that amplify the signals from such interactions can also be used to generate more sensitive readouts with less background interference. There is competition from different assay developers, particularly for assays that screen for components involved in signalling cascades.
The label-free methodology Surface Plasmon Resonance (SPR), often used in the drug discovery pipeline for secondary screening and lead optimisation, is increasing in throughput and has become faster, with some systems now offering screening in parallel. SPR biosensors allow analysis of interactions between the lead compound and the target in real time. The most advanced systems are able to use small-molecule fragment libraries, which are now widely recognised as a potentially very efficient way to access novel chemistry.
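The real-time curves (sensorgrams) these instruments produce are conventionally fitted with a simple 1:1 binding model, dR/dt = ka·C·(Rmax − R) − kd·R, from which the affinity follows as KD = kd/ka. A minimal simulation sketch; the rate constants and analyte concentration are illustrative values, not data for any real compound:

```python
# Sketch of the standard 1:1 model behind SPR sensorgram fitting.
# Real instruments fit ka and kd to the measured curve; the values
# here are illustrative only.

def sensorgram(ka, kd, conc, r_max, t_assoc=120, t_dissoc=180, dt=0.1):
    """Euler integration of dR/dt = ka*C*(Rmax - R) - kd*R."""
    r, curve = 0.0, []
    steps_assoc = int(t_assoc / dt)
    for step in range(steps_assoc + int(t_dissoc / dt)):
        c = conc if step < steps_assoc else 0.0  # analyte washed out
        r += (ka * c * (r_max - r) - kd * r) * dt
        curve.append(r)
    return curve

curve = sensorgram(ka=1e5, kd=1e-2, conc=1e-6, r_max=100)  # KD = kd/ka = 100 nM
print(f"peak response {max(curve):.1f} RU, end of dissociation {curve[-1]:.1f} RU")
```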
Automation of biology and chemistry
“Automated biology, such as the incredible genome sequencers, seems likely to have the biggest short-term impact on target identification,” says Selwood. Disease is increasingly defined in genetic terms and these analyses are already identifying new drug targets. “Beyond that, I think the next big development will be in automated chemistry using the new mild methods that are currently being developed. We are now seeing catalysts and bases that have hitherto undreamt of selectivity and can work at room temperature. Combinatorial chemistry was a disaster for the industry but that doesn’t mean that automated chemistry is a bad idea,” he explains.
In this area, future advances are imminent, but neither the chemistry methodology nor the automation technology has quite advanced far enough to make them happen. “Anyone who’s used one of the large automated synthesisers and seen it stall at the first sign of a viscous solvent/reagent mix will know what I mean…” Selwood notes. Combinations of solid-supported reagents, new catalysts, and automated work-up and purification are eagerly awaited. “Are we there yet – not quite, but anyone following Steve Ley’s work can see how the field is developing,” he says (http://www.ch.cam.ac.uk/staff/svl.html).
Using screening intelligently – protein-protein interactions
Defining protein-protein interactions (PPI) is fundamental to understanding the aetiology of many diseases. Metabolic, signalling, immune and gene-regulatory networks at the cellular, tissue and organism level are influenced by PPI, and it is reasonable to suggest that understanding more about how proteins interact may ultimately form the basis of novel and viable drug discovery programmes. Although PPI are in principle attractive drug targets, they do pose a challenge: a vast amount of data on different interactions is being generated, and new methods are required to speed development.
“PPI is a challenging class of targets,” stresses Zhengrong Zhu, Manager of the Lead Discovery – Soluble Target Group at GlaxoSmithKline (Waltham, MA, USA), who co-authored another recent review of HTS technologies (5). “The interaction surface of most PPI is large and featureless, and it is thus very hard to find small-molecule inhibitors,” he adds. However, he notes that allosteric sites may exist for small-molecule inhibitors. “With the right technology it is still possible to find active compounds for PPI. High-throughput affinity-based technologies provide an innovative approach for drug discovery. DNA-encoded library technology exemplifies this and we have used it with some success against PPI targets,” Zhu reports.
According to Pierre Colas, of the French National Centre for Scientific Research (CNRS; Roscoff, France), successful discovery of PPI inhibitors relies on two main factors. “The first is to choose good targets by identifying protein interactions that are druggable, as opposed to protein interactions that cannot be targeted by small molecules. In some interactions, the binding interface is too large and/or flat, for instance,” he explains.
The second prerequisite is to screen the right molecules. Although HTS has the power to screen large libraries of compounds, hit rates can be maximised very effectively by using virtual screening techniques to assemble libraries with a clear focus on protein-interaction data.
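One common first step in assembling such a focused library is a computational drug-likeness pre-filter, such as Lipinski’s rule of five, applied before any compound is plated. A minimal sketch using the open-source RDKit toolkit; the SMILES strings are illustrative examples rather than a real screening library:

```python
# Drug-likeness (rule of five) pre-filter for a virtual library.
# Requires RDKit; the two SMILES below are illustrative compounds.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparsable structure
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

library = ["CC(=O)Oc1ccccc1C(=O)O",    # aspirin: passes
           "CCCCCCCCCCCCCCCCCC(=O)O"]  # stearic acid: fails on logP
print([s for s in library if passes_rule_of_five(s)])
```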
Progress with better target identification has been made with a machine-learning method applied to predicting the druggability of PPI. Sugaya and Ikeda used a supervised machine-learning method based on a support vector machine. They selected 69 different attributes that describe PPIs, including data on the structure of the proteins, drug and chemical interactions, and functional information. The attributes were validated using 30 well-established druggable PPIs from a total of 1,295 test PPIs. The method identified the druggable PPIs with a sensitivity of 82% and a specificity of 79%. This approach could prove useful for the triage of potential PPI targets.
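A minimal sketch of this kind of supervised classification, using scikit-learn; the feature matrix is random placeholder data standing in for the 69 PPI attributes, so the printed numbers will not reproduce Sugaya and Ikeda’s results:

```python
# SVM classification of PPI 'druggability', in the spirit of Sugaya
# and Ikeda. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(1295, 69))             # 1,295 PPIs x 69 attributes
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # stand-in druggability labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")
```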
Databases of PPI are also appearing, as data is collected and collated. Raphael Bourgeas and colleagues have recently described a publicly available database that stores structural information about PPI with known inhibitors, and which can be used by any investigator to assess a PPI for its potential as a druggable target. “A web-based application has also been developed that predicts the occurrence of protein interaction hot spots. This enables a skilled computational chemist to look at the structure around the hot spot and offer an opinion on the druggability of the site,” reports Colas.
Observations and future directions
Recent times have been challenging for the biotechnology sector, making potential investors cautious. Although there are signs that confidence is returning, there is widespread recognition that drug discovery has to evolve to become more cost-effective. The next generation of biotechnology companies is likely to be more ‘virtual’ in nature, with less internal infrastructure. This model provides added value and a lower risk for investors.
Larger pharmaceutical companies are also changing their drug discovery paradigm. Most are reducing the size and scope of their internal research capabilities and are moving towards establishing internal ‘biotech-like’ structures within their organisations. Greater collaboration with smaller biotech companies and academia is already being enacted, as recognition grows that lateral thinking and ingenuity will be important in making the drug discovery process of the future more sustainable.
“HTS has changed the face of drug discovery and recent technical advances have alleviated many bottlenecks in drug discovery processes,” concludes Clemens Möller from Discovery Alliances, Evotec AG, Hamburg, Germany. He predicts that we are now set to see the implementation of complex, more physiologically relevant readouts that will replace the traditional, target-oriented approaches in drug discovery.
“To this end, the use of stem cells in HTS and advances in understanding systems biology, as well as using direct biophysical readouts and HCS, are emerging. At present, it seems too early to assess the impact of these and other novel technologies and strategies – such as open innovation approaches, or new industry-academic partnership models – on pharmaceutical productivity; however, we are certainly set to see exciting times in drug discovery as more programmes advance,” he says. DDW
—
This article originally featured in the DDW Fall 2010 Issue
—
Dr Trevor Perrior is Director of Research at Domainex, a UK-based biotech that offers drug research services. Before joining Domainex he held a number of senior R&D roles at ICI, Zeneca, AstraZeneca and Celltech, working in the UK, USA and Switzerland. Trevor has led teams that have delivered several development candidates across a number of therapeutic areas.
References
1 Adams, CP and Brantner, VV. Spending on new drug development. Health Economics 2010 19:130-141. http://onlinelibrary.wiley.com/doi/10.1002/hec.1454/abstract.
2 Keyhani, S, Diener-West, M and Powe, N. Trends in drug development time and price. AcademyHealth Annual Research Meeting, Boston, MA, 2005; abstract no. 3676.
3 Colas, P. High-throughput screening assays to discover small-molecule inhibitors of protein interactions. Current Drug Discovery Technologies 2008 5:190-199.
4 Kool, J, Lingeman, H, Niessen, W and Irth, H. High throughput screening methodologies classified for major drug target classes according to target signalling pathways. Combinatorial Chemistry and High Throughput Screening 2010 13:548-561.
5 Zhu, Z and Cuozzo, J. High-throughput affinity-based technologies for small-molecule drug discovery. Journal of Biomolecular Screening 2009 14:1157-1164.
6 Makela, A and Oker-Blom, C. The Baculovirus Display Technology – an evolving instrument for molecular screening and drug delivery. Combinatorial Chemistry and High Throughput Screening 2008 11:86-98.