The Timetable of Invention and its Relevance to Drug Discovery

By Dr William Bains

The most critical question for any investor in new technology should not be whether the technology will work. Technologies nearly always work in the end, provided they do not break fundamental laws of physics and are not based on false discoveries to start with (like ‘Polywater’ and Cold Fusion).

Enough time, investment and dedicated science will take almost any idea to realisation in the healthcare market, and if the market is willing to pay the price it will then make a profit.

The critical question is: how long will it take?

Globally averaged drug discovery times are typically 10-12 years. For a radically new therapeutic approach this can be even longer – HER2/neu was discovered to be cancer-associated more than 15 years before the approval of Herceptin (1981 for ERB, 1982 for NEU). By coincidence, this is not much shorter than the life of the patents filed on the original target concept, if they were filed, which renders the economics of pursuing anything really new rather questionable.

There are two approaches to getting round this. One is the ‘low-risk’ approach – do medicine discovery without drug discovery (me-too compounds, reprofiling, reformulation and so on). But a more radical approach is to seek a route to NDA which uses technology inherently far more likely to succeed. Technologies such as siRNA, systems biology or cell-based therapeutics are claimed to be such approaches – siRNA uses a natural, ubiquitous, selective mechanism for regulating gene expression that can be developed straight from the DNA sequence, and so has a huge head start over small-molecule NCEs.

Claims such as these (which are hard to argue against when propounded by Lasker and Nobel prize winners) are typical of a torrent of such technologies that have been tried over the second half of the last century. In this article I wish to question whether any of them have a chance of beating the odds, and being myself over half a century old I will use history rather than current science as my guide.

Take an example. In the mid-1970s, two groups of researchers in Cambridge, UK, came up with two techniques which promised to revolutionise biomedical research. Considered expert opinion at the time pronounced that both would lead to rapid advances in our understanding of health and disease, and thence to new therapies. Phrases like ‘fundamental change’ and ‘breakthrough in understanding’ were bandied about by institute heads and granting bodies.

Within 10 years one had outstripped even the most optimistic projections, while the other was mired in setbacks and being discounted as a failure. The two methods were Fred Sanger’s dideoxy sequencing technology and Kohler and Milstein’s monoclonal antibody technique. The differences between the two make an instructive history for those interested in the real future of today’s cutting-edge science, and in identifying when, and how, to invest in new technology.

What ‘held monoclonals back’? Today it is obvious that the enthusiasts of the late 1970s overlooked many major problems, but at the time the predictions (by sober and serious experts, not by the popular press or New Scientist) were that therapies would be in use ‘within 10 years’.

The timecourse of what actually happened is well known. From initial description in 1975, there was a rapid surge in academic research by early adopters of the technology, most of whom ended up raising monoclonals to their plastic tubes or to BSA, but who learned in the process, optimising the first generation methods over the next six to seven years.

Two to five years after 1975 the first companies exploiting monoclonals broke cover (or companies previously specialising in other things switched to the new technology), lured by the ‘drugs in a decade’ claim of five years before. By 1980 papers were starting to appear in significant numbers, and the publication rate went exponential over the next five years. In line with predictions, the first clinical trials started in 1984/5.

And then things started to go wrong. From 1985 onwards unexpected effects were observed in initial, small clinical trials – lack of efficacy, HAMA (human anti-mouse antibody) responses, side-effects. How could an exquisitely specific, nanomolar-potent agent fail to be safe and effective? This was found out in the first high-profile Phase II failures in the early 1990s, which led to new forms of monoclonals (ie, new chemistry) to get round some of the problems – chimaeric, humanised and then human – and to monoclonal-based therapeutics such as fusion proteins and PEGylated molecules.

Over the same time it was realised that making a milligramme of protein in mouse ascites and making 50kg to GMP were problems of different orders, and manufacturing skill became limiting, as it has remained to a greater or lesser extent ever since. With one exception (OKT3 in 1986), reliable product launches only started in the mid-1990s, 20 years after Kohler and Milstein’s original paper (and, had they filed one, their original patent).

This timetable is laid out in Figure 1.

Figure 1 The timetable for development of monoclonal antibodies

Company formation after the first wave of enthusiasm in the 1980s has not been tracked, as by 1988 the fashion for setting up ‘a monoclonal antibody company’ had faded, and the technology was increasingly seen as just one plank of a platform, not the whole stage. When Antisoma was founded in 1988, it was a cancer therapeutics company that happened to use monoclonals as a cool technology, not a Mab company.

Monoclonals are not unique. Many other ‘breakthrough’ technologies, which were going to revolutionise therapeutics and put new drugs on the market ‘within 10 years’, have almost identical timescales. Antisense was invented in ~1982; antisense companies were set up in the mid-1980s; papers climbed in the 1990s; clinical trials started in 1991 and ran into problems in the early 1990s (aptamer effects, PK); new chemistry was invented; and high-profile failures soured the field in the mid-1990s. With one exception (Vitravene in 1999 in Europe), steady product launches have yet to be achieved.

Gene therapy was ‘invented’ in the mid-1980s with the first controllable gene expression systems in mammalian cells (a development itself partly driven by production needs for antibody and protein therapeutics), and genetic treatment of all sorts of diseases was ‘only a decade away’. Gene therapy companies burst on to the scene at the end of the 1980s, publications rocketed in the early 1990s, lack of efficacy became apparent in the mid-1990s, new chemistry was tried (new viral vectors, lipofectin-like materials, the ever-present liposome) in the early to mid-1990s, and high-profile clinical failures put everyone off the field in the later 1990s (the death of Jesse Gelsinger in 1999).

With one exception (Shenzhen SiBiono’s anti-cancer therapeutic, launched in 2003 in China), there has yet to be a steady flow of products. Genome-scale sequencing was first seriously proposed and costed around 1987, the genome itself appeared in 2001 (PoT), and the first wave of products which could genuinely be said to arise from genomics programmes (as opposed to products in-licensed from conventional discovery by genomics companies) is in the clinic, planned to launch in XXXX.

Is this a problem peculiar to these appallingly complex, ill-defined biological molecules? High-Throughput Screening (HTS) technologies have followed a similar trajectory. The approach was embraced enthusiastically in the late 1970s, with a rash of specialist companies springing up in the early 1980s and a semi-academic literature, with specialist conferences such as SBS, arising soon after; problems with industrialisation (robotics, software, reagent supply) became the principal concerns by the mid-1980s; and widespread disillusionment set in during the early 1990s, when it became ‘well known’ that HTS was failing compared to the then-sexy technologies of structure-based drug design (SBDD) and X-ray-based methods. With the exception of a few molecules in the mid-1990s (Indinavir, Tirofiban), products that could be reliably traced back to screening hits only started appearing at the turn of the century.

For comparison, these timetables are shown in Figure 2.

Figure 2 Comparative timetables for radical drug discovery technologies

(Such comparisons are always moot when the date of ‘invention’ is hard to define, as with gene therapy or HTS – these are based on a ‘working estimate’.) Also shown is where siRNA is on the timeline today, which predicts that the first, outlier clinical products will be launched in three to five years’ time, and reliable application of the technology in 2015 to 2020. Interestingly, the first reports of ‘unwanted’ effects from siRNA are starting to appear, effects which only a year ago informed commentators said were almost impossible because siRNA was a natural system of exquisite selectivity. This is pretty much on schedule.

This is not new. It took 14 years to get penicillin into general therapeutic use, and this was substantially accelerated by wartime demand for the drug, which drove the solution of manufacturing problems using uneconomic levels of resource. (Politico-military drivers have this effect, from getting men to the Moon to stockpiling smallpox virus. When the army asks for something, it rarely asks ‘what’s the price?’)

DNA sequencing technology suffered the reverse problem: expert predictions that a whole mammalian genome’s worth of DNA might be sequenced by 2010 fell short of the mark by orders of magnitude. The reason is obvious – DNA sequencing is a technology which, once it has been got to work, continues to work. Making it work is an end in itself. Incremental improvements can then spread the technology from the expert to the general user, and then to the robot.

For therapeutic monoclonals, the technology of generation and selection, itself complex and challenging, was only the start of a process which had to prove the therapeutic concept in animals and in man, and then develop the surrounding technological infrastructure to deliver it as a product. This included literal drug delivery, but also manufacture, and opportunity identification, which is not at all obvious when the real-world characteristics of the technology are not known.

I have divided this process into three overarching steps – Proof of Relevance (PoR – can your idea/process/target/biology do anything that people outside your own specialist discipline want), Proof of Concept (PoC – does it actually work in making new drugs) and Proof of Technology (PoT – can it be turned from a one-off into an industrial process). The impact of each can be identified by considering technologies which only have to go through PoR to attain success.

These are research technologies which ‘only’ have to work to identify NCEs or validate targets, and do not have to be applied to humans or scaled up to deliver bulk API to the doctor. Examples are two-hybrid screens, Zebrafish and Drosophila model organisms, image-based high-content screening approaches and others. In nearly all cases, PoR takes between seven and 10 years.

Enthusiasts for the Zebrafish were talking in 1995 of the value of this tiny, pipettable vertebrate as an industrial tool, but only in the last year or so have potential pharmaceutical company partners routinely accepted that the data generated is valuable to them, and is not ‘just Zebrafish’ work which can be automatically assumed to need repetition in a ‘proper’ model. SBDD as a tool started to be discussed in depth in the late 1970s (I remember being taken to admire the Evans and Sutherland workstation in the early 1980s), but its routine role in drug discovery was only accepted when Abbott’s RT inhibitors and Merck’s carbonic anhydrase agent showed biological success.

Many other models and analytical tools suffer this decadal time-lag before acceptance, including ‘genomics’ itself. There are many reasons for this, but they all relate to the complexity of biology, and the consequent belief that there is no easy answer to anything.

Why is PoC different from PoT? Because PoC requires that one product struggle through development to show that it can work, usually in niche, high-price markets, after an intense, Darwinian process by which many others have fallen by the wayside (OKT3 vs Centocor, Xoma anti-sepsis products, etc). PoT requires that this success be scaled up and made reliable. PoC says it can be done. PoT delivers it routinely.

If PoR takes seven years, PoC takes seven years, and PoT takes seven years, then we would expect the whole process to take two decades, with an initial harbinger of eventual success appearing around year 14 when the one product example that ‘makes it’, and thus proves PoC in man, is launched. This matches the timescale for HTS, antisense, gene therapy, monoclonals, SBDD, and a range of other technologies not discussed here.
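
In terms of rough arithmetic (the seven-year figures are working estimates, not measured constants):

\[
T_{\text{total}} = T_{\text{PoR}} + T_{\text{PoC}} + T_{\text{PoT}} \approx 7 + 7 + 7 = 21 \text{ years}, \qquad
T_{\text{harbinger}} \approx T_{\text{PoR}} + T_{\text{PoC}} = 14 \text{ years}.
\]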

If we formally model this process (which is easy to do in the abstract, although hard to implement in specific examples), we arrive at a model like Figure 3.

Figure 3 Model of technology development

Problems come in classes, the top levels of which coincide with the three broad Proofs above. (In reality, there are multiple layers of class within class.) We can ‘see into’ the box that contains today’s class of problems, and of course into the boxes containing yesterday’s classes of solved problems. But tomorrow’s classes are closed boxes.

How fast we solve today’s problems is a function of how much knowledge we have about them, which itself is a function of how many prior problems in this class we have solved. Thus the rate of solution increases with time. Our expectations of this are unrealistically optimistic, because we underestimate the number of barriers before us within the class of problems we are presently solving, for two reasons.
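
To make the shape of this argument concrete, here is a minimal, hypothetical sketch in Python – it is not the model behind Figure 3 itself, and the problem counts, base rate and learning rate are invented purely for illustration. Each Proof is treated as a class of hidden problems, the rate of solution within a class rises with the number of problems already solved in that class, and a forecast made inside one class sees nothing of the classes still ahead.

```python
# A minimal, hypothetical sketch of the Figure 3 idea (not the author's
# actual model): problems come in successive classes (PoR, PoC, PoT),
# the rate of solving problems within a class rises with the number
# already solved, and a forecast made inside a class ignores the classes
# still hidden ahead. All numbers below are assumptions for illustration.

CLASSES = {"PoR": 16, "PoC": 16, "PoT": 16}  # hidden problems per class (assumed)
BASE_RATE = 1.0   # problems solved per year at the start of each class (assumed)
LEARNING = 0.25   # fractional speed-up per problem already solved in the class (assumed)

def years_to_clear(n_problems: int) -> float:
    """Years needed to solve n problems when the (k+1)-th problem is
    solved at a rate of BASE_RATE * (1 + LEARNING * k) problems per year."""
    return sum(1.0 / (BASE_RATE * (1 + LEARNING * k)) for k in range(n_problems))

elapsed = 0.0
for name, hidden in CLASSES.items():
    duration = years_to_clear(hidden)
    # A forecast made on entering this class can only 'see' this class's
    # problems, so it predicts products as soon as the class is cleared.
    naive_forecast = elapsed + duration
    elapsed += duration
    print(f"{name}: ~{duration:.1f} years; forecast on entry predicts products at year {naive_forecast:.1f}")

print(f"Reliable products actually arrive (end of PoT) at year {elapsed:.1f}")
```

With these assumed numbers each class takes roughly seven years and the whole process a little over two decades, while the forecast made on entering PoR – products in about seven years – reproduces the familiar ‘drugs in a decade’ optimism.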

Firstly, we consistently underestimate how hard it was to solve yesterday’s problems – the solutions have been integrated into our knowledge set, and so we extrapolate incorrectly to today’s problems. Secondly, and more importantly, we systematically underestimate the number of technical barriers that have been overcome, by attributing failures (and subsequent work to overcome them) to causes other than the inherent properties of the technological proposition.

The failures of other technologies are because the technology is useless, but failures of yours are because it was implemented improperly, it was managed/sold badly, or (the ever-favourite) the customer was too stupid to realise how great it was. Thus company comments in the 1990-92 period blamed the failure of sepsis antibodies on unreasonable FDA requirements, incorrect patient selection by clinicians, or the blindness of medical statisticians, and not on what is now the obvious cause: that both understanding the disease biology and choosing the right antibody were far harder problems than had been realised.

So our projections of how fast we will solve the present class of problems are usually over-optimistic. But they are not nearly as over-optimistic as our projections for the further future. Intuitively we tend to assume that other classes of problems do not exist, or (more reasonably, but still incorrectly) that because today’s problems are really hard, they must be the hardest, and tomorrow’s will be easier. “If I can only crack the XXX problem it will all be plain sailing” is a phrase that should strike terror into the long-term planner.

For example, in 1975 reviews of the future of antibodies as therapeutics focused on problems we knew about (antibody affinity, selectivity, finding antigens). Knowledge of these problems allowed people with genuine, deep expertise to say that, when they were solved, products would appear, usually in five to 10 years’ time (ie, the PoT timeframe).

The problems not then being tackled were either assumed not to exist (HAMA) or to be trivial (production costs – in the early 1980s it was thought that recombinant proteins would cost less than $1,000/kg, based on a simplistic calculation of lab-scale costs of cell culture; only when it became apparent that chromatographic purification media alone would cost far more than this did these costs start to be revised upwards).

Lovallo and Kahneman (1) have studied this problem in the general context of management over-confidence, and why it exaggerates our inherent tendency to minimise the possibility of unseen problems. When a new idea is described, a component of the description is always advocacy of the idea, suggesting that the future classes of problems are trivial or non-existent. But the assumptions built into that advocacy subsequently become the assumptions in analysis. Advocacy is good – without it the miserable nay-sayers would rule the world. But it should be recognised for what it is, not mistaken for objective analysis.

There are two implications from this for people hypnotised by the latest technology pitch. Firstly, the structure in Figure 3 can be used as a predictive tool, indicating when a technology will really make it, as opposed to when current forecasts suggest that it will. If one’s concern is products – therapeutics or tools – this is of value. Secondly, one can still make money from radical technology, but only if you are realistic about what it is going to do, and invest in the appropriate proposition.

In particular, if investment is in stock rather than businesses, investing at a time when the structure in Figure 3 suggests that your exit will be during a wave of success and enthusiasm, rather than a wave of bad news and failures, is probably a good idea.

My previous article in DDW (2) suggested that Big Pharma should get into older technology that can be applied in industrial modes with industrial reliability, and leave such hairy technologies as siRNA therapeutics, new animal models and (the current trend) genetically personalised medicines to Biotech. Figure 3 confirms that this is a sensible, conservative approach, given that we start our PoC phase afresh with each new target and chemical class. Biotech can take the ‘punt’ on products for niche, wild and way-out applications, which, if they are lucky enough to be the 1-in-10 that get to PoC, will make them rich. But be aware of the chances of success, and of the rocky road along the way. DDW

This article originally featured in the DDW Summer 2006 Issue

Acknowledgement
I am grateful to John Proudfoot (Boehringer Ingelheim) for his analysis of drugs discovered using HTS. The ideas and conclusions in this article should not, however, be attributed to him.

Dr William Bains is an academic and entrepreneur in the life sciences. He has worked in basic and applied life science research, technical consultancy and venture capital, has started three UK-based life science companies and sits on the boards of several others. He is visiting faculty at Cambridge University, where he teaches company creation.

References

1 Lovallo, D and Kahneman, D. Delusions of success: how optimism undermines executives’ decisions. Harvard Business Review, July 2003: 57-63.

2 Bains, W. Failure rates in drug discovery and development: are we getting any better? Drug Discovery World 2004 (Fall): 9-18.

*Note: This paper is a description of a work in progress on the historical and predictive fate of therapeutic technologies. Comments are welcome, particularly examples or counter-examples to the ideas presented.
