DDW Editor Reece Armstrong speaks to Dr Darrell Green, Lecturer in RNA Biology, Biomedical Research Centre, Norwich Medical School, University of East Anglia, about his work using next generation sequencing (NGS) and the areas the technology is impacting within drug discovery and development.
RA: What areas of drug discovery and development are NGS technologies impacting most?
DG: I think the most impact is being made in personalised medicine. There are two main concepts underlying this impact. First, we’ve known for a few years now that people can have variable responses to the same medication and that these responses are largely based on an individual’s genetic makeup. We can use NGS to better understand so-called “pharmacogenetics” so that we can predict how an individual may respond to a given treatment. For example, perhaps they need a higher or lower than average dose of a specific medicine, or we could prescribe a completely different medicine that is tailored to the individual. The second concept is largely in oncology. Two different people could have the same type of cancer, but their tumours’ genetics could be ever so slightly different, yet different enough that the treatment response is completely different. For example, we know that p53 structural variants are the main cause of osteosarcoma, but each of those two individual patients might have additional mutations that dictate treatment response. Say one patient has a MYC mutation and doesn’t do so well on treatment, but another patient with a BRCA mutation responds pretty well to PARP inhibitors. NGS is becoming invaluable for treating individual cancer patients based on their tumour’s genetic makeup.
RA: Could you discuss some of the work you’re doing with NGS technologies?
DG: Our lab mainly utilises NGS for investigating small RNAs such as microRNAs and tRNA fragments. Small RNA sequencing or “sRNA-seq” protocols are different to messenger RNA sequencing or “mRNA-seq”. For mRNA studies, it is more straightforward to capitalise on the poly-A tail and use this feature to isolate your mRNAs for sequencing. For sRNA studies, the poly-A tail doesn’t exist on your molecules-of-interest. Instead, we have to ligate adapters to each end of the sRNA before isolation and sequencing. However, the ligation step is biased and inefficient because the enzymes that ligate the adapters, which are DNA and RNA sequences, preferentially ligate to some sRNAs over others. This means that many sRNAs in your sample are missed and never sequenced. The lab I completed my PhD in designed a new type of adapter, called the High Definition (HD) adapter, which significantly reduced the ligation bias; we found many more types of sRNAs, which led on to the discovery of a potential new cancer treatment. Our current work, in my own lab now, is focused on using HD adapters in single cell sRNA sequencing projects, for example, in circulating tumour cells.
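To illustrate the adapter idea in the simplest possible terms, here is a hypothetical sketch of 3′ adapter trimming, the downstream step that recovers the sRNA insert from each sequenced read. The adapter sequence and reads below are invented for illustration, not taken from any real protocol; production pipelines use dedicated tools such as Cutadapt that also handle mismatches and partial adapter matches.

```python
# Hypothetical sketch: removing a ligated 3' adapter from small RNA reads.
# ADAPTER_3P and the reads are illustrative, not a real adapter design.
ADAPTER_3P = "TGGAATTCTC"  # invented 3' adapter prefix

def trim_adapter(read: str, adapter: str = ADAPTER_3P) -> str:
    """Return the read with the 3' adapter (and everything after it) removed.

    Exact-match only; real trimmers tolerate sequencing errors and
    partial adapter occurrences at the read end.
    """
    idx = read.find(adapter)
    return read[:idx] if idx != -1 else read

reads = [
    "TGAGGTAGTAGGTTGTATAGTT" + ADAPTER_3P + "GGG",  # sRNA insert + adapter
    "ACGTACGTACGT",                                  # no adapter found
]
trimmed = [trim_adapter(r) for r in reads]
print(trimmed)
```

The point is only that, because sRNAs lack a poly-A tail, every sequenced read carries ligated adapter sequence that must be located and removed before the sRNA itself can be identified and counted.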
RA: Are there any disease areas NGS technologies are having a particular impact in?
DG: As you’ve probably guessed from my previous answers, the most obvious impact to me is in cancer research and oncology. However, NGS will have an impact in any genetic disease. I started my career in a molecular genetics diagnostics laboratory in the NHS and I remember NGS just starting to be evaluated for routine use on every sample that came into the lab, rather than us having to examine single mutations in single genes. Sometimes we would see the same patient coming through the lab at a later date, but this time to examine a different location in a gene (or even a different gene entirely!). If NGS can be used to screen everything at once, in one go, then it will be far more informative of an individual’s condition and will save significant time, money and resources over repeating work for the same patient. Plus, this applies to any genetic disorder, not just cancer.
RA: How accessible have NGS technologies become for life sciences researchers?
DG: With the expansion of commercial providers, literally all life science researchers now have easy access to NGS. I’m too young to fully appreciate what it was like before NGS, but I do remember as an undergraduate student having to make multiple libraries that took weeks and months to prepare for direct or Sanger sequencing – compared to NGS today, it was painful! In my first research post, the lab had just done their first ever NGS experiment and were comparing the data to microarray data, and I remember it being a big deal at the time. They had to use an international collaborator to get it done and it was expensive. We were fortunate working at the Norwich Research Park that one of the park partners was The Genome Analysis Centre – now the Earlham Institute – which had invested money into purchasing several Illumina machines, and so the centre offered NGS to other labs on the research park. NGS rapidly became ‘the norm’ to us but was not so accessible to other researchers or collaborators, who still had to rely on microarrays. In the space of about three years in the mid-2010s, I witnessed an absolute boom of commercial providers across the UK and Europe who could perform NGS at relatively inexpensive cost. Microarrays quickly fell by the wayside. Today, literally any researcher, new or experienced, can search Google for a provider, get in touch and set up an NGS experiment for one or two thousand pounds. NGS has become the norm for all. It’s the data analysis afterwards that requires a bit more expertise.
RA: Are there any barriers to using NGS platforms such as cost or training restrictions?
DG: I don’t personally think there are any real cost or training restrictions. If you are using a commercial provider for NGS, they will take care of everything for you. The cost has come down massively and grant reviewers, in my experience, are pretty good at acknowledging what the costs are and how much you need for your planned experiments. The only barriers per se arise if you want to start performing NGS yourself. Working in academia, however, I do not really see the point of this, because the sequencing machines and their run costs are too expensive to use on an individual-experiment basis. NGS is only really cost effective if you are running lots of sequencing regularly, in which case you would be provided with training on how to use the systems, likely at point of purchase. I’m really excited about some new NGS platforms coming out now where all of my above points are negated, because the sequencing devices literally plug into the USB port of your computer and you can perform NGS at your desk! When these platforms are improved even further so that they match the sequencing power of the current standard machines, that will be cool, futuristic and will remove all barriers.
RA: How important are bioinformatics and data analysis in helping teams analyse sequencing data?
DG: A lot of what I have talked about already is largely focused on NGS and getting it done in the first place, which is the easier part. The real technicalities, difficulties and barriers come into play when it comes to analysing the data. Commercial providers for NGS do offer some bioinformatics services for an additional cost but, in my opinion – and at the risk of offending providers – these services are very basic and should only really be used as a last resort, perhaps if you are generating some preliminary data without the appropriate research staff in place. Commercial NGS providers use basic software packages, similar to the ones freely available online and advertised towards wet lab researchers, for all projects, using the same parameters for all the data. Anybody working in NGS will tell you that not all data are the same; each experiment and each project requires custom analysis. Using these standard packages and services means the user will miss a ton of useful outputs (i.e. discoveries!) and potentially even get some things wrong. It is critical that a properly trained bioinformatician with coding skills and expertise is employed (or identified as a collaborator) to analyse NGS datasets and get the most out of them. This is the real bottleneck with NGS studies and perhaps the most misunderstood element when it comes to grant review. We are very fortunate and grateful in our lab to receive bioinformatics support from Children with Cancer UK, who haven’t funded just one hypothesis-driven project for one individual, but instead have recognised that one specialist individual can work on several different NGS projects. I hope other charities and funders start to do this so the bioinformatics bottleneck is reduced.
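To make the one-size-fits-all parameter point concrete, here is a hypothetical sketch of how a default read-length filter, tuned for typical microRNAs, could silently discard an entire sRNA class such as the tRNA fragments mentioned earlier. All names, lengths and cut-offs are illustrative assumptions, not values from any real pipeline.

```python
# Hypothetical sketch: a fixed read-length filter (as a generic pipeline
# default might apply) versus one tuned per project. Lengths are invented.
read_lengths = {
    "miR-21-like": 22,   # typical microRNA length (illustrative)
    "tRF-like": 32,      # tRNA fragments can run longer (illustrative)
    "degradation": 15,   # short degradation product (illustrative)
}

def keep(lengths: dict, lo: int, hi: int) -> set:
    """Return the names of reads whose length falls within [lo, hi]."""
    return {name for name, n in lengths.items() if lo <= n <= hi}

default = keep(read_lengths, 18, 26)  # microRNA-centric default window
custom = keep(read_lengths, 16, 36)   # window tuned to also capture tRFs
print(default)  # the tRF-like read is filtered out by the default window
print(custom)
```

The same logic applies to any fixed parameter – alignment stringency, count thresholds, normalisation choices – which is why a bioinformatician tailoring the analysis per project recovers outputs a standard service would never surface.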
RA: As large amounts of data are generated from NGS, will tools such as AI become important in helping analyse those data faster and more efficiently?
DG: I’m not entirely sure, at present at least, how AI could replace what we are doing in NGS. Analysing data faster and more efficiently isn’t really limited by whether it’s performed by a human or AI. That depends on server speed and RAM, etc, which is out of the control of both human and AI. Thinking out loud, I guess an argument could be made that AI could replace the bioinformatician, for example, by deciding how to custom-build the analysis for a unique dataset. But even then, complex back-and-forth discussions are had between the wet lab researcher and the bioinformatician that mean the data analysis changes slightly. Could a wet lab researcher have that conversation with AI on an ad hoc, untrained and untaught basis? I’m not so sure. AI certainly has a role to play in other areas such as pathology, but that’s another conversation.
RA: Can NGS improve the chances of a drug making it to market?
DG: For sure! This largely relates to the first question, when I talked about pharmacogenetics and revealing and targeting specific mutations in an individual’s tumour. An interesting case study to answer this question is primary bone cancer. This disease has not seen treatment or survival improvement since the 1970s. Each patient is treated with exactly the same medicines and literally no new standard of care drugs have passed clinical trial in the last 50 years. Here’s where it gets interesting; if you go through the historical clinical trial data with a fine-toothed comb, you will see that some drugs actually worked really well for some patients on the trial. But the new drug was ultimately deemed a failure and was not pursued. Why is that? As NGS technologies have improved and become more widely used in bone cancer research, NGS has revealed that this type of cancer has multiple subtypes, distinct enough to be considered separate diseases. Patients enrolled onto the previous trials were a mixed bag of diseases, which is why only ~5% had a good outcome. However, ~95% did not have a good outcome and so the trial failed. If NGS had been available at the time – which it will be going forward – trials could have been designed in two new ways. Either the drug is targeted towards a specific NGS profile, in which case that previous ~5% becomes ~100% if they are the specific participants enrolled; or the drug is tested in all participants, the ~5% are profiled afterwards and the drug is pursued for that particular cohort. Using this approach, NGS will unpick which drugs work in which patients, ultimately improving the chances of a drug making it to market, even for that 5%.
DDW Volume 24 – Issue 4, Fall 2023
Darrell Green is a Lecturer and Group Leader in RNA Biology at Norwich Medical School, University of East Anglia. He trained in molecular genetics at Addenbrooke’s Hospital in Cambridge before obtaining his PhD in Medicine at UEA. His research combines genetics, cell and molecular biology with next generation sequencing and bioinformatics to study the role of microRNAs in various biological processes, with a particular interest in childhood bone cancer.