Modelling and simulation in drug development: promise and reality
Modelling and simulation in drug development is not new. What is new is the vision for moving from a descriptive role (what happened) to a predictive and therefore decision-making role. While the vision is attractive, important hurdles, both scientific and practical, must be overcome.
Modelling and Simulation is a term used to describe a collection of techniques that attempt to reproduce relevant parts of a system in a computer program, and then use that program to ask questions. It was pointed out at a recent advisory committee meeting (1) that we have been using various models of drugs in animals (including humans) in drug development for a long time. However, it has been proposed that the role of Modelling and Simulation (M&S) should be expanded from an essentially descriptive, hypothesis-testing role to a predictive one.
We could ask questions such as: which target(s) should I focus on as the best opportunity to intervene in a given disease? Of a group of compounds, which looks the most promising with regard to efficacy and safety? Which of the possible trial designs (doses, populations, etc) is most likely to be informative about a compound?
The goal of asking these questions using simulation is presumably to be smarter about developing drugs. I'd suggest that we already answer these questions using models, albeit relatively simple and informal ones, derived from careful examination of data rather than formal summary of the data in mathematical equations.
It seems clear that we have done this with some success. Therefore, the question becomes whether we can answer them better or faster using computer models than with the models of our drugs that we already carry in our heads. The case for more formal modelling and simulation is that the quantity of data we currently collect on drugs precludes anyone from drawing conclusions about the totality of it without a computer model to summarise those data so they can be extrapolated.
The value of modelling and simulation comes from the degree to which we can extrapolate the models. Clearly, no one would question that a model that could reliably extrapolate from in vitro data to market share would be of enormous value and justify whatever resource was needed to develop it. But that is not the sort of extrapolation that the current technology is able to deliver; such a model would be of such poor resolution that it would not be valuable. At the other end of the spectrum, no one would bother using simulation to address questions that (real) data already exist to answer. That model, while probably of very good resolution, would not be valuable, since there is no extrapolation. Somewhere in between lies whatever value M&S may bring to drug development. There is a negative relationship between the magnitude of the extrapolation in a model and its likely accuracy. Careful consideration of this balance is key to using M&S effectively.
Types of models
Models come in two general categories, empirical and theoretical. These have different uses, and may be employed in different phases of drug development. Empirical models are based on raw data. Typically, a model is fit to data (from humans or animals) using nonlinear regression to find the best values for the parameters of the model. The structure of the model is then changed (additional compartments, lag times etc), and the model refit to (hopefully) find the model that best represents the data.
This model is then extrapolated in some way (eg different doses, longer duration or different species) to make specific predictions about dose, duration or species that are outside the domain of the existing data. Typical software for this modelling and simulation includes NONMEM® (http://c255.ucsf.edu/nonmem0.html), Pharsight WinNonMix® and Trial Simulator® (www.pharsight.com) and InnaPhase Kinetica® (www.innaphase.com). The models tend to be fairly simple, as complex models are difficult to fit to data, and they are therefore a relatively poor representation of the physiology. The best application of these models is in designing human clinical trials, based on existing human or animal data.
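As a sketch of what such an empirical fit looks like in practice, the following Python fragment fits a one-compartment pharmacokinetic model to concentration-time data by nonlinear regression and then extrapolates it to an unstudied time point. All data and parameter values are hypothetical, and the code is illustrative only; it is not taken from any of the packages named above.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, ka, ke, V, dose=100.0):
    """One-compartment model with first-order absorption (bioavailability assumed 1)."""
    return (dose * ka / (V * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

# Synthetic, noise-free concentration-time data (hypothetical parameter values).
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])   # hours post-dose
conc = one_compartment(t, 1.2, 0.15, 25.0)            # mg/L

# Nonlinear regression: recover the parameter values that best fit the data.
params, _ = curve_fit(one_compartment, t, conc, p0=[1.0, 0.1, 20.0])

# Extrapolate the fitted model beyond the observed domain (eg a later time point).
pred_48h = one_compartment(48.0, *params)
```

In real use the data would be noisy observations, and the structural model itself (compartments, lag times) would be varied and refit, as described above.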
Theoretical models are based on a set of assumptions about the biology and pharmacology, rather than observed data. The assumptions in turn are based on a compilation of observed and experimental data. These models can be very complex, attempting to mathematically reproduce the knowledge of experts and the known data on how a biological system works. Thus, they are intended to represent, to the degree possible, what is known about the physiology and pharmacology of the system.
These complex models in general cannot be fit to data using nonlinear regression as is done for the empirical models. Examples include Physiome models of tissues and organs (www.physiome.com), models for obesity, asthma and HIV from Entelos (www.entelos.com), and GastroPlus® from Simulations Plus (www.simulations-plus.com), which models the dissolution and absorption of drugs.
Among these, some applications tend to be more ambitious in that they attempt to predict clinical outcomes, while others are more interested in representing the basic physiology. As with all models, the degree of extrapolation is inversely related to the confidence one has in those extrapolations. These models are most useful for examining targets and target combinations as well as making early predictions about the effects of compounds in intact animals.
Is there a need or is this a solution looking for a problem?
There are plenty of anecdotes about ‘failed’ clinical trials, and how retrospectively it was perfectly clear why this trial design was doomed to failure. Indeed, a good bit of the work done to date in clinical trial simulation falls into the category of Monday morning quarterbacking. Retrospective clarity, however, brings little value to drug development programs.
It should also be noted that if we select ‘failed’ clinical trials to evaluate simulation, regression to the mean might result in a bias suggesting it is useful. We would like to know if simulation is useful for the typical trial, not a ‘failed’ trial. There are currently no objective, prospective data on how well this works. The ideal solution would be to randomly assign 100 development programs to use simulation or not, and to develop tools to evaluate the efficacy of each program. However, a realistic solution might be to assess those programs in which simulation resulted in a change in the program.
Those programs in which the simulation did not result in a change cannot be used to assess the value. While these data are being generated, we may be able to gain some insight from simple questions such as: can we get the answer correct? It is likely that this is a minimum criterion for the simulations to be useful – that the results of the simulation agree (to some undetermined degree) with actual trial results. We have suggested a criterion for assessing the agreement between simulated trials and actual trials (2).
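The published criterion (2) is not reproduced here, but one generic form such a congruence check could take is to ask whether the actual trial result falls inside the central interval of many replicate simulated trials. A minimal sketch, with entirely hypothetical numbers:

```python
import numpy as np

def congruent(simulated_outcomes, actual_outcome, coverage=0.90):
    """Generic congruence check (not the published criterion): does the actual
    trial result fall inside the central interval of replicate simulated trials?"""
    lo, hi = np.quantile(simulated_outcomes, [(1 - coverage) / 2, (1 + coverage) / 2])
    return bool(lo <= actual_outcome <= hi)

# Hypothetical example: 1,000 simulated trial mean responses vs one actual result.
rng = np.random.default_rng(0)
sims = rng.normal(loc=8.0, scale=2.0, size=1000)
ok = congruent(sims, 7.1)
```

Agreement on a single endpoint is of course a weak standard; a practical criterion would need to consider multiple endpoints and the chosen coverage level.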
The only data currently in the literature are one abstract with a single example (3) and a small case series, also published as an abstract (4). These abstracts suggest that this technology is still quite new. A manuscript published by the same group gave a detailed comparison of a single simulation with actual clinical trial results (5). This was a retrospective simulation, blinded to the actual trial results. It seems clear that had this study been done prior to the actual trial, the results of the simulation would have been very misleading in dose selection, with the extrapolated dose-response relationship being very different from the one observed in the actual trial.
The business case for modelling and simulation
A number of groups have suggested that M&S can speed development of compounds and/or reduce the cost of developing a given compound. There is as yet no evidence to support this. More likely than an acceleration of the process is the opportunity to choose better compounds to start with, or to make fewer mistakes with those that are chosen, resulting in more informative labels. Because of these (as yet potential) business values, GlaxoSmithKline has made a significant commitment to modelling and simulation in support of drug development decisions.
The remit of the modelling and simulation group includes support of dosing decisions for first-time-in-human studies, support of dosing decisions for phase III, and support of internal decision making, such as prioritising projects and compounds. Since this is a highly technical field, a fairly small, dedicated team has been established to perform, or at least support, this work. A decision was made to do this work internally.
The primary reason behind this decision was a commitment made by the modelling and simulation group that modelling and simulation will not result in delays to decisions. Doing this work in-house permits ready availability of data, identification of questions and rapid turnaround of analyses and presentations to the project team. A typical turnaround time, from availability of data to decisions based on those data, is two to three weeks; the target is to deliver modelling and simulation results within that timeframe.
It will be rare that modelling and simulation can accelerate the drug development process. We have examined whether different trial designs can establish a dose-response relationship more quickly than can be done with traditional trial designs. However, regulatory precedent (and often guidance) frequently dictates the duration and even the design of pivotal studies.
Will the answer be correct?
As noted above, the existing literature suggests that frequently it will not be correct. Without further elaboration, we can say that our in-house experience is a good deal more encouraging.
Will this be required someday?
The FDA has established a Modelling and Simulation Working Group. On November 16, 2000, the working group presented to the Advisory Committee on Pharmaceutical Sciences (transcript at http://www.fda.gov/ohrms/dockets/ac/00/transcripts/3657t2.pdf). The purpose of the presentation was to discuss with the Advisory Committee the regulatory experience with modelling and simulation and what should be the next steps for the working group. Clearly this area is of great interest to the FDA.
However, the discussion at the Advisory Committee meeting was of a very general nature, about whether a guidance would be helpful and what such a guidance might contain, not whether this would become an expected part of a drug development program. In the meantime, a ‘Best Practices’ document has been published in draft form by the Center for Drug Development Science (www.georgetown.edu/research/cdds/) that can be useful in understanding some current views on modelling and simulation.
The basic process of using simulation for clinical trial design is shown in the figure. It is an iterative process of developing and extrapolating models, then using those models to design trials. Once additional data are available (from the trial), the model is further refined and extrapolated. Effective use of modelling and simulation requires both a credible answer and delivery of that answer on a timeline that does not delay the project.

Typically, this means delivery of the answer within three weeks of the time the data become available, simultaneously with the traditional statistical results. Experience has suggested that this is possible, although difficult with existing technology, if experienced users do the analysis and extensive preparations are made. Our experience suggests that doing this work in-house is most effective.
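The design step of this iterative process can be illustrated by simulating many replicates of a candidate trial under an assumed dose-response model and counting how often the design reaches the right conclusion. The sketch below uses a hypothetical Emax model and a deliberately crude success criterion; every number in it is an illustrative assumption, not a real trial.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(doses, n_per_arm, emax=10.0, ed50=50.0, sd=6.0):
    """Simulate one parallel-group trial under an assumed Emax dose-response.
    All parameter values here are hypothetical."""
    return {d: emax * d / (ed50 + d) + rng.normal(0, sd, n_per_arm) for d in doses}

def detects_dose_response(responses, threshold=2.0):
    """Deliberately crude success criterion: the top-dose mean beats the
    lowest-dose mean by more than a fixed threshold."""
    means = {d: r.mean() for d, r in responses.items()}
    return means[max(means)] - means[min(means)] > threshold

# Replicate one candidate design many times to estimate its chance of success;
# competing designs (other doses, arm sizes) would be compared the same way.
n_success = sum(detects_dose_response(simulate_trial([0, 25, 100], 40))
                for _ in range(500))
```

In practice the assumed model would itself come from a prior fit to data, the success criterion would be the trial's actual statistical analysis, and sensitivity analyses would vary the model assumptions.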
All analyses are done to address a specific question or questions. We do not develop models simply to predict the outcome of trials. In addition, we must have a commitment by the project team that results and recommendations made will be seriously considered in decision making. Again, we don’t develop models just to see if we can do it. Much of the work done to date has been for ‘demonstration’ purposes.
We feel we are ready to move beyond the need to demonstrate it, although we still need to validate the results. To this end, we plan to collect data on the ‘congruence’ of actual trial results with simulated trials. This will serve both as a feedback and learning tool for the M&S group and as a demonstration to project teams and management of whether or not we can do what we claim.
The need for experienced users makes a team dedicated to modelling and simulation essential. A typical project pharmacokineticist will have the opportunity to do too few analyses per year to acquire the skills necessary to perform this work rapidly and effectively without a great deal of support. At the same time, the modelling and simulation group should actively support the development of M&S skills among the project pharmacokineticists, for several reasons.
The most important reason is that these people are a key link to the projects; an understanding of what M&S has to offer the project will permit them to endorse the technology to the team, as well as build confidence and visibility. Second, as the skills of the project pharmacokineticists improve, the M&S group can concentrate on more sophisticated techniques, and on developing new techniques to bring value to the projects, and do so on the required timelines.
However, even a dedicated team of specialists will be challenged to deliver results routinely in two to three weeks. Analysis of the time required to produce a final report in M&S suggests that the bulk of the time is spent on model development, that is, the time-consuming process of trying different models to find the ‘best’ (or near-best) model to describe the data. Once the model is available, the actual simulations (and the required sensitivity analysis) are relatively fast. We see a need to develop high-throughput model development techniques.
We are developing methods that use distributed computing and search algorithms to automate the process. Within a project called Virtual Cluster Services, we plan to use hundreds of existing computers to efficiently sort through a large number of candidate models using a technique called a genetic algorithm. This is discussed in a recent abstract from the American Society for Clinical Pharmacology and Therapeutics meeting (6). Interested parties may contact the author regarding acquiring this software.
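The genetic-algorithm idea can be illustrated on a toy problem: encode candidate model terms as bits, score each genome by an AIC-like fitness from a quick fit, and evolve the population by selection, crossover and mutation. Everything below is a hypothetical stand-in; a real system would score NONMEM model fits, distributed across many machines, rather than the simple least-squares fits used here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for model-structure search: each genome switches candidate
# terms on or off; fitness is a negative AIC from a least-squares fit.
x = np.linspace(0, 4, 40)
y = 2.0 + 3.0 * x + rng.normal(0, 0.3, x.size)   # data from: intercept + slope

TERMS = [lambda x: np.ones_like(x), lambda x: x,
         lambda x: x ** 2, lambda x: np.exp(-x)]

def fitness(genome):
    cols = [f(x) for f, g in zip(TERMS, genome) if g]
    if not cols:
        return -np.inf                            # empty model: worst possible
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -(x.size * np.log(rss / x.size) + 2 * X.shape[1])   # negative AIC

def evolve(pop_size=20, generations=30):
    pop = rng.integers(0, 2, (pop_size, len(TERMS)))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]   # keep the fitter half
        # Crossover: splice random parent pairs at a random cut point.
        idx = rng.integers(0, len(parents), (pop_size, 2))
        cut = rng.integers(1, len(TERMS), pop_size)
        pop = np.array([np.concatenate([parents[i][:c], parents[j][c:]])
                        for (i, j), c in zip(idx, cut)])
        # Mutation: occasionally flip one random bit per genome.
        flip = rng.random(pop_size) < 0.2
        pos = rng.integers(0, len(TERMS), pop_size)
        pop[flip, pos[flip]] ^= 1
    return max(pop, key=fitness)

best = evolve()   # bit vector of the best model structure found
```

Because each genome can be scored independently, this kind of search parallelises naturally across a cluster, which is what makes high-throughput model development plausible.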
Modelling and simulation in drug development is not new. What is new is the vision for moving from a descriptive role (what happened) to a predictive and therefore decision-making role. The predictive role for simulation is not a mature science; indeed, the data available in the literature suggest that results are at best inconsistent. This is likely a result more of overambitious attempts than of an inherent flaw in the technology. The next few years will likely be instructive regarding how far we can extrapolate these models and still have confidence in the results.
Dr Mark Sale received his MD degree from Ohio State University and completed a residency in Internal Medicine at Indiana University. After that he was a fellow in Clinical Pharmacology at Stanford University. From 1992 to 1998 he was Assistant Professor of Medicine and Pharmacology at Georgetown University. At present he is Senior Clinical Program Head for Modelling and Simulation at GlaxoSmithKline.
1 Department of Health and Human Services, Advisory Committee for Pharmaceutical Sciences Meeting, Nov 16, 2000 (statement of Larry Lesko, director, Office of Clinical Pharmacology and Biopharmaceutics). Transcript available at http://www.fda.gov/ohrms/dockets/ac/00/transcripts/3657t2.pdf
2 Mudd, PN, Hale, M, Shen, Y, O’Connor-Semmes, Sale, M. A method to assess congruence of a clinical trial simulation and a phase II clinical study. Clin Pharmacol Ther 2000;67(2):185 (abstract).
3 Jang, IJ, Ko, HC, Peck, C. PK/PD modelling of moxonidine and simulation of a phase III trial. Clin Pharmacol Ther 2000;67(2):125 (abstract).
4 Ko, HC, Jang, IJ, Li, H, Bies, RR, Bobburu, JVS, Peck, CC. Simulation of clinical trials: lessons learned. Clin Pharmacol Ther 2000;67(2):162 (abstract).
5 Kimko, HC, Reele, SSB, Holford, HG, Peck, CC. Prediction of the outcome of a phase 3 clinical trial of an antischizophrenic agent (quetiapine fumarate) by simulation with a population pharmacokinetic and pharmacodynamic model. Clin Pharmacol Ther 2000;68:568-77.
6 Sale, ME. Automated, machine learning-based model building in NONMEM. ASCPT meeting, March 2001, accepted (abstract).