The promise of pharmacogenetics and pharmacogenomics
Within the next 10 years the benefits of pharmacogenetics and pharmacogenomics will inevitably outweigh the disadvantages. But what are the commercial and legal implications for the pharmaceutical industry, especially for companies with lead candidates ready to enter development?
The twenty-first century began with two great biomedical events: the publication of the first draft of the complete human genome sequence and the high-density mapping of the genetic variations within it. These two achievements will transform the way in which we target and treat human disease.
As scientists unravel the series of molecular events that occur when disease strikes, they will be able to locate new gene targets that can be used to develop innovative medicines and services with greater efficacy and fewer side effects than many of today’s treatments.
And as they identify variations in the sequence of DNA between individuals, they will be able to determine why some people become ill in the first place, and why some people respond positively to a particular medication while others suffer adverse effects or do not respond at all.
Pharmacogenetics (the correlation of the DNA sequence of genes to a drug response) and pharmacogenomics (the study of the pattern of expression of genes involved in a drug response in a defined environment) are still in their infancy. But they promise to revolutionise the way in which drugs are researched, developed, marketed and prescribed. Indeed, both the pace at which we are pushing back the boundaries of ignorance and the speed at which the necessary technologies are evolving suggest that they will do so within the next 10 years.
Yet science and technology are not the only considerations; the social, political and philosophical climate is equally important. For the more we learn about the secrets of our species, the greater the danger of a backlash from those who fear the consequences of ‘meddling’ with the code that makes them who they are.
Filling in the gaps
On June 26, 2000, the scientists working on the Human Genome Project announced that they had finished reading a ‘draft’ of the manual for making a human being. The next task is to identify the single nucleotide polymorphisms (SNPs) that play a large part in making one human being genetically different from another, and here the SNP Consortium has already made significant headway. In August, the consortium reported that it had mapped nearly 300,000 SNPs – double its original target – and that it expected to identify 750,000 by December.
Thanks to these two initiatives, we know that the DNA in the human genome is made up of about three billion nucleotides, or chemical letters, which code for all the macromolecules needed to build and sustain a human being. We also know that about 99.9% of the letters are the same in all human beings, and that one in every 1,000 nucleotides differs from one person to another. Those three million SNPs account for variations in height, eye colour and other such visible characteristics. More importantly for medicine, they also account for variations in susceptibility to disease and in the way individuals respond to therapy.
But though we have identified the letters and words that comprise the genetic alphabet, we do not yet know very much about what they mean. We do not know the connections between sites of genetic variation and specific disease conditions or biochemical pathways. In short, we are a long way from being able to read the whole story, let alone rewrite any of the sentences or paragraphs.
Seeing the light
The rate at which we are advancing suggests that we shall not remain in the dark for long. The first draft of the human genome was finished in only a decade. Work on producing a high-density SNP map is also well ahead of schedule.
Of course this does not mean the task will be easy. Most diseases are polygenic, with responses that are multi-factorial and variable in penetrance – a consequence of the particular alleles of the genes that are expressed. The behaviour of both candidate and metabolising genes also depends on environmental factors such as age, sex and diet. And genes do not operate in a binary ‘on-off’ fashion; they function on a sliding scale. In real people with real illnesses, then, the story becomes very complicated indeed.
Nevertheless, the research that is currently taking place will eventually enable us to understand and predict the molecular drama that unfolds when disease occurs. That knowledge will, in turn, produce a radical change in the way the pharmaceutical industry operates.
Treating patients properly
Under the current model for making and selling drugs, pharmaceutical companies aim to produce a blockbuster that serves the entire patient population. But the variation in individual genotypes means that many drugs work for only 60% of that population at best. Beta-blockers, for example, do not work for between 15% and 35% of the patients for whom they are prescribed; tricyclic antidepressants do not work for between 20% and 50%; and interferons do not work for between 30% and 70%.
Worse still, many people not only fail to respond to a particular treatment, they actually suffer unpleasant or serious side effects. One well-known US meta-analysis estimates that in 1994 more than two million hospitalised patients suffered serious adverse reactions to drugs that had been appropriately prescribed and administered. Over 100,000 died as a result – suggesting that adverse drug reactions are between the fourth and sixth leading cause of death in the US (1).
In other words, a typical blockbuster drug that generates revenues of $1 billion a year does so because it is prescribed across 100% of the patient population – not because it works for 100%. Pharmacogenetics and pharmacogenomics will turn this situation on its head. A drug that is designed using the principles of pharmacogenetics would be used to treat only that percentage of the population whose genotypes showed they would respond to the medication. But it would be efficacious for all of that sub-population.
The key question for the industry is what impact this will have on revenues. The obvious assumption is that a drug prescribed for just 60% of the patient population would generate just 60% of the revenues its predecessor might have generated, reducing income from a $1 billion a year blockbuster to $600 million. But this is naïve. A drug that is guaranteed to work for everyone for whom it is prescribed is more likely to command a premium price. So, although overall revenues may be less than they would have been with a traditional blockbuster, they are unlikely to fall in line with the proportion of the population for whom the drug actually works.
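The arithmetic can be sketched very simply. The Python fragment below uses the $1 billion blockbuster and 60% responder rate quoted above; the 40% price premium is purely an illustrative assumption, not a forecast.

    # Back-of-envelope revenue comparison: traditional blockbuster versus a
    # genotype-targeted drug. All figures are illustrative, not forecasts.
    blockbuster_revenue = 1_000_000_000   # $1bn a year, prescribed across the whole population
    responder_fraction = 0.60             # share of patients whose genotype predicts a response

    # Naive view: revenue simply scales with the treatable sub-population.
    naive_revenue = blockbuster_revenue * responder_fraction

    # More plausible view: a drug guaranteed to work commands a premium price.
    assumed_price_premium = 1.40          # hypothetical 40% premium, for illustration only
    targeted_revenue = blockbuster_revenue * responder_fraction * assumed_price_premium

    print(f"Naive estimate:     ${naive_revenue / 1e6:.0f}m a year")
    print(f"With price premium: ${targeted_revenue / 1e6:.0f}m a year")

On these assumptions the naive estimate of $600 million rises to $840 million, which is why revenues are unlikely to fall in simple proportion to the responder population.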
Getting the right response
If pharmacogenomics reduces revenues per drug, however, it will also reduce costs – by making clinical development a much more accurate process. Since the results of clinical trials are rarely unequivocal, pharmaceutical companies currently have to run numerous tests involving large patient populations in order to get a statistically meaningful result.
In Phase I, they typically establish the maximum tolerated dose of a new drug by giving it to between 25 and 50 healthy volunteers. In Phase II, they test the efficacy, safety and dosage range of that new drug on several hundred patients. And in Phase III, they verify the data they have already obtained by testing the drug on between 5,000 and 10,000 patients. In all, a new drug typically gets tested on between 5,500 and 10,500 people.
This is a very expensive way of doing things. In 1997, Lehman Brothers calculated that total costs per approved drug had reached an average of $608 million (2). Clinical development accounted for about $263 million. But pharmacogenetics and pharmacogenomics will enable the industry to adopt a totally different – and much cheaper – approach.
Within the next few years, it will be possible to correlate clinical outcomes retrospectively with the genotypes of a subset of genes selected because focused pharmacogenetic studies suggest they are the most relevant. Within another few years, it will be possible to sequence the genomes of entire clinical populations and correlate genetic variations with different drug reactions. And by the year 2010, as our understanding of the interaction between drugs and dynamic biological systems advances, it will be possible to test a hypothesis on trial patients recruited because of their particular genetic profiles.
The consequences of this change will be twofold. It will alter the aim of each phase of clinical testing. Phase I will be used to establish proof of concept; Phase II to segment responders, non-responders and adverse responders; and Phase III to refine the results from testing the drug on responders. It will also reduce the number of patients required to run those tests.
Phase I will still involve some 25-50 volunteers. But Phase II will be based on the number of patients needed to produce a pharmacogenomic profile – a number that is likely to vary from 400 to 2,000, depending on the complexity of the disease and the network of genes involved in the response to treatment. Phase III will then consist of numerous tests on much smaller patient groups chosen because they have a genotype that suggests they will respond favourably.
Together with genome-wide scanning to identify the cluster of genes on which the clinical research should focus, this approach should cut the number of patients required for clinical trials quite dramatically. Indeed, we estimate that it might ultimately be possible to conduct all three phases using between 2,500 and 3,500 patients – at least 50% fewer than the number required today. That would, in turn, halve the cost of clinical development, saving about $130 million per drug.
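As a rough check on that claim, the sketch below reruns the arithmetic using only the figures quoted above, together with the simplifying assumption that clinical spend scales roughly in line with the number of patients enrolled.

    # Rough comparison of patient numbers and clinical costs under the current
    # model and the pharmacogenomic model described above. The only assumption
    # beyond the figures in the text is that spend scales with patient numbers.
    clinical_cost_today = 263_000_000     # clinical development cost per approved drug (Lehman Brothers, 1997)
    patients_today = (5_500, 10_500)      # all three phases, current model
    patients_pgx = (2_500, 3_500)         # estimated total under the pharmacogenomic model

    reductions = [1 - pgx / today for pgx, today in zip(patients_pgx, patients_today)]
    print(f"Patient reduction: {min(reductions):.0%} to {max(reductions):.0%}")

    # At a reduction of at least 50%, proportional savings come to roughly half
    # of today's clinical development bill.
    print(f"Implied saving:    roughly ${clinical_cost_today * 0.5 / 1e6:.0f}m per drug")

The reduction works out at between roughly 55% and 67%, so the estimate of a saving of about $130 million per drug is, if anything, conservative.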
Bar coding for all
Clearly, however, developing drugs for patient populations with a particular genotype also has profound implications for the practice of medicine at the point of care. For a start, there is little point in tailoring drugs to specific genotypes unless the DNA of every member of the population has already been sequenced or can be determined.
The power to genotype large populations is currently beyond our reach, since the available technology is too costly, too slow and too inaccurate. But a wide variety of genotyping platforms are now being developed and research conducted by PricewaterhouseCoopers shows the industry is confident that the problem will be resolved. Most of the companies we approached in a recent survey believed that within the next five years they would use genotyping in at least 50% of clinical trials (3). If the current rate of progress is sustained, there seems little reason to doubt that universal human ‘bar coding’ will be possible within another decade.
Thus, within our lifetimes, every patient in the West – if not the world as a whole – could be equipped with a swipe card that contains details of his individual genome. His doctor would then check the card against the range of drugs available for treating the illness from which he is suffering and prescribe the drug that is best for his genotype.
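A minimal sketch of what that point-of-care matching step might look like is given below. The gene names, variants and drug names are invented for illustration; a real system would rest on validated pharmacogenetic associations and far richer clinical context.

    # Minimal sketch of genotype-guided prescribing. Markers, variants and drug
    # names are hypothetical; this illustrates the matching step only.
    from typing import Optional

    # Hypothetical labelling data: the marker variants that predict a response.
    DRUG_RESPONSE_MARKERS = {
        "drug_A": {"GENE1": "T/T"},
        "drug_B": {"GENE1": "C/T", "GENE2": "A/A"},
    }

    def choose_drug(patient_genotype: dict, candidates: list) -> Optional[str]:
        """Return the first candidate drug whose response markers all match the
        patient's genotype, or None if no candidate is predicted to work."""
        for drug in candidates:
            required = DRUG_RESPONSE_MARKERS.get(drug, {})
            if required and all(patient_genotype.get(gene) == variant
                                for gene, variant in required.items()):
                return drug
        return None

    # The genotype read from a patient's 'swipe card', and the drugs licensed
    # for the condition being treated.
    patient = {"GENE1": "C/T", "GENE2": "A/A"}
    print(choose_drug(patient, ["drug_A", "drug_B"]))   # -> drug_B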
Understanding text and context
We are obviously some way from realising this vision – and the power to sequence entire populations is not the only challenge the scientific community faces. It will be no easy job, for example, to manage and analyse the vast quantities of data that widespread genotyping generates.
Moreover, even when we can do these things, we still have to understand the ‘phenotype’ – the complex, dynamic interplay of gene and protein networks with environmental factors, which determines how we respond to drugs. Some factors, like age and sex, are easy to identify. But dietary patterns are much more difficult. Many drugs also behave quite differently when administered with other medications from the way in which they behave when administered as monotherapy. One-off variations in habit – such as the patient who swallows an aspirin because he has run out of the Cox-2 inhibitor he normally takes – are quite impossible to predict.
The growing diversity of the human race is another issue. Man’s family history is relatively short, and he has not had time to build up the variety that is found in other primates. His history has also been restricted by location. But the geographic, social and political freedoms many people now enjoy have removed the reproductive constraints that characterised earlier generations. This will inevitably increase the variability of the gene pool, as previously segregated populations that had diverged genetically begin to mix. Whether that will be reflected in the range of biochemical responses to disease and drugs remains to be seen.
Making Frankenstein pharmaceuticals?
Nevertheless, if anything stands in the way of the pharmaceutical industry’s ability to exploit pharmacogenetics, it seems increasingly evident that it will be neither science nor technology. The biggest obstacles are likely to be social, ethical and political.
We have already seen what public opposition can do, with the European furore over genetically modified foods. It is all too easy to imagine how much more hostility the illusory spectre of ‘Frankenstein pharmaceuticals’ might evoke. Popular misconceptions about pharmacogenomics are one problem. A second is concern about the use to which genotyping information may be put – a legitimate anxiety, given the UK Government’s recent decision to allow the use of genetic tests for Huntington’s chorea in setting health insurance premiums (4).
The decision paves the way for the use of genetic tests to identify people predisposed to other conditions with a hereditary component, such as breast cancer and Alzheimer’s disease. But critics argue that it could result in discrimination against the ‘genetically disadvantaged’.
The public is not alone in its concerns. Both the American Medical Association and the British Medical Association are currently struggling to draft a set of guidelines that could require general practitioners to provide genetic counselling for any patient who provides a full family history. Since taking such a history is a normal part of any medical consultation, doctors will either have to give every patient genetic counselling – placing a massive burden on already over-stretched resources and raising the prospect of even bigger malpractice liabilities – or dispense with a basic diagnostic tool.
The dilemma facing the medical profession is primarily ethical, but it also has political and economic dimensions – and here we can expect to see substantial international divergence. Iceland has already embraced large-scale biomedical research with the decision to create a healthcare database that contains genotypic and genealogical data on the majority of its citizens. However, countries with bigger populations or stronger religious affiliations may balk at adopting this course.
That said, the economic argument in favour of pharmacogenomics is very powerful. It will ultimately ensure that we pay only for drugs that demonstrably work on the people for whom they are prescribed, an argument that may well seem compelling to the many nations with ageing populations and rising healthcare bills.
Conclusion
Despite the social and ethical difficulties, then, it seems probable that the benefits of pharmacogenomics will so outweigh the disadvantages that the technology wins the day. But what does this mean for the pharmaceutical industry? First, any company with a lead candidate just about to enter development will launch the final product in a world that expects tailored medicines. That alone should give pause for thought.
Second, any company that does not use pharmacogenomics could eventually find itself embroiled in legal proceedings. In 1976, some former employees who had contracted vibration white finger disease sued the UK National Coal Board. The High Court ruled that the board had been negligent in failing to protect them from a condition whose cause had been identified three years before. How would it rule in the case of a patient who has suffered serious damage from taking a drug that pharmacogenomics could have shown would cause such a reaction? The question is not hypothetical; in another 10 years we shall have the skills to perform such tests. DDW
—
This article originally featured in the DDW Winter 2000 Issue
—
Dr Tim Peakman completed his PhD thesis on the regulation of gene expression in anaerobic bacteria and also has an MBA. He worked at Wellcome Biotech and the Wellcome Foundation on the cloning, humanisation and expression of monoclonal antibodies for the treatment of autoimmune disorders and HIV, before leading various projects on the therapeutic potential of ion channels in epilepsy and pain. When Wellcome merged with Glaxo, he assumed responsibility for co-ordinating the group’s ion channel effort. In 1998, he joined the Pharma R&D practice of PricewaterhouseCoopers and is now leading the discovery solutions programme.
Dr Steve Arlington has more than 10 years’ experience in pharmaceutical research and development, both as a project team leader and as a research group manager. His work has resulted in the development and launch of a number of world class drugs and diagnostic tests. As a partner of PricewaterhouseCoopers and global leader of the Pharma R&D practice, with total responsibility for the European pharmaceutical business, Dr Arlington has acquired a further 15 years’ experience working extensively with company boards and senior management in the areas of R&D and e-business strategy.
References
1 Lazarou, J, Pomeranz, BH and Corey, PN. (1998) Incidence of adverse drug reactions in hospitalized patients: a meta-analysis of prospective studies. JAMA 279, 1200-1205.
2 Lehman Brothers. (December 1997) Pharmaceutical Company Valuation.
3 PricewaterhouseCoopers. (2000) Genotyping Technology Products User Requirements Survey, conducted on behalf of The SNP Consortium and available at http://snp.cshl.org/news/user_survey.pdf
4 On 13 October 2000, the UK Government approved the use of genetic tests to identify people with certain hereditary illnesses. The decision was reported in all the national newspapers.