To advance these therapies, pharmaceutical companies are increasing the number of compound combination experiments during target validation, lead identification and lead optimisation.

Data sets from these experiments, however, tend to be very large and complex as the number of combinations to be explored increases exponentially with the number of compounds under investigation. Extracting the most promising drug combinations from these experiments in the shortest amount of time presents a major challenge in the drug discovery process.

AstraZeneca and Genedata have taken on this challenge and developed the Compound Synergy Extension to Genedata Screener, which provides an effective industrial-scale screening platform for the analysis of experimental data from compound synergy screens. This article will examine the high-throughput capabilities of the Compound Synergy Extension implemented within Oncology iMed at AstraZeneca. It will detail a novel workflow with high-speed acoustic combination dosing and imaging instrumentation, which enables researchers to:

• Produce final results from raw data in less than 120 minutes for studies with more than 100 384-well plates.
• Reduce time-consuming manual data processing to a minimum.
• Interactively inspect raw data and check the quality of normalised values.
• Review in-depth intermediate and final results using combination-specific displays such as concentration matrices, curve shift displays and isobolograms.
• Automatically fit mono-therapeutic and combination-based dose-response curves.
• Display, quantify and compare combination effects based on different interaction models (eg Loewe, Bliss or HSA models, or combination indices).

Complexity of combination screens
While the experimental principle and underlying biological models were developed in the early 20th century, only recent advances in automation and assay technologies have allowed for systematic screening of compound combinations. In such screens, combinations of compounds are examined on microtitre plates much as in single-compound screens, but with two or more substances per well. Experimental complexity increases further when the concentrations of the individual substances are varied for the same combination, and when the interaction of a known set of substances is explored across a wide range of cell lines. This can quickly lead to rather large and complex data sets.

For example, examining 50 different substances in pairwise combinations requires roughly 1,250 combinations to be tested for a single cell line (the number of combinations is c = n(n-1)/2 ≈ n²/2, where n is the number of substances). Furthermore, assume each combination needs to be tested at eight different concentrations per compound plus a zero dose, plus 16 controls. This yields a test panel of 96 wells for a single combination. Finally, considering that a typical screen is performed in triplicate and sometimes involves 15 different cell lines, 15 x 3 x 1,250 x 96 = 5,400,000 wells need to be screened. On 384-well plates, this corresponds to roughly 14,000 screening plates, far more than a full deck of a standard single-compound high-throughput screen. Table 1 shows typical demands of high-throughput combination campaigns.
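The combinatorics above can be verified in a few lines. The sketch below uses the figures from the text (50 substances, an 8x8 concentration matrix plus the two mono-therapy series and 16 controls per combination, three replicates, 15 cell lines); note that the pairwise count used in the text is the ~n²/2 approximation, while the exact count is n(n-1)/2:

```python
from math import ceil, comb

n_compounds = 50
pairs_exact = comb(n_compounds, 2)    # exact n(n-1)/2 = 1,225
pairs_approx = n_compounds ** 2 // 2  # the ~n^2/2 estimate used in the text = 1,250

# 8x8 combination matrix + zero-dose (mono-therapy) series for each compound + controls
panel_wells = 8 * 8 + 8 + 8 + 16      # = 96 wells per combination

replicates, cell_lines = 3, 15
total_wells = cell_lines * replicates * pairs_approx * panel_wells  # 5,400,000
plates_384 = ceil(total_wells / 384)  # ~14,063 plates if every well is used
```

Even modest changes to the design (a ninth concentration, a sixteenth cell line) multiply through this arithmetic, which is why plate counts escalate so quickly.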

Looking at the practical aspects of such combination experiments, there are two major challenges:

• The whole drug discovery process is set up to examine single substances. Looking at combinations of two or more substances is not yet an established process and, in particular, lacks support in data management workflows.

• Analysis of drug combination experiments is complex, especially for larger screens. It requires scalable data analysis software that supports automation yet remains flexible.

Challenging data management
In a typical compound combination experiment, each well carries two substances at different concentrations. While many robotics and new acoustic dispensing systems can easily handle combination samples, the storage of the related information is often not supported by the existing IT environment. In particular, most plate registration systems can only manage single-substance pipetting schemes. To support combinations, existing plate registration systems must be extended – or a new workflow specific to compound combinations must be established. A similar issue arises at the end of the analysis workflow: the result for a given combination is associated with two or more substances, not one. However, the established data warehouses in pharmaceutical research can only store results for a single substance. While in some instances an extension might be possible to store results from combination screens, querying such databases for relevant information remains difficult and convoluted, as the concept of tying results to compound combinations is complex and typically not supported by the query infrastructure.

Challenging data analysis
There is a clear trend to scale up compound combination experiments, resulting in huge and complex data sets.

In addition to the sheer data volume, the data analysis procedure is multi-stepped and complex. It includes the following steps:

1. Normalising for cell growth.
2. Correcting positional artefacts.
3. Fitting dose-response curves to the mono-therapeutic dose series.
4. Fitting interaction models to estimate interaction effects. (Note: often different interaction models are explored, so this and the following steps are carried out multiple times.)
5. Analysing the differences between observed and estimated interaction effects.
6. Obtaining statistical significance scores for the estimated interaction effects.

Each of these steps requires parameterisation and a review of intermediate results. Data analysis can therefore quickly become extremely time-consuming, particularly without appropriate software support. Matters are complicated further by the incompleteness of the experimental data set: sometimes only six concentrations per combination are available, sometimes 10, and the number of replicates and cell lines can also vary. This requires the data analysis solution to be not just fast, but also flexible enough to address these issues.

AstraZeneca’s point of view
To state that data analysis in compound combination experiments is challenging is an understatement. The size and complexity of the data sets, combined with the challenges in data management and data analysis described above, have created a major bottleneck for compound combination screening. An innovative software solution for data management and data analysis is required. Otherwise the return on investment in combination screening experiments will be limited and the success rate of finding much-needed innovative compound combinations will be low.

This bottleneck motivated AstraZeneca and Genedata to initiate their strategic collaboration to develop an extension to the Genedata Screener platform. Additionally, AstraZeneca saw the need to go beyond the visualisation provided by simple heat maps. It needed insights across cell lines in a combination work package and richer visualisation to enable a project point of view, not merely an individual-experiment point of view. Identifying effective and disease-specific combinations of cancer therapies through in vitro cell-based assays requires industrial-scale screening.

AstraZeneca collaborated with Genedata in extending the Genedata Screener platform to provide end-to-end data analysis of compound combination experiments. In a single software environment, the AstraZeneca Oncology iMed Unit has a complete processing workflow for compound combination experiments, from small-scale studies in drug projects to high-throughput combination screening in a centralised team. It enables researchers to import raw data together with compound logistics information, analyse thousands of combination pairs in a single experiment and compare results for the same combinations across different cell lines. The team examines the profile of each specific combination across the cell panel studies, together with genetic information for the cell lines; a combination hypothesis is then generated for each combination displaying either strong synergy or ‘selective’ synergy across the panel, to recommend further work. This second phase of analysis is where the users benefit from a cross-cell-panel view of the overall synergy score profile.

Basic components of the analysis platform for compound combination experiments
Raw data: parsing and normalisation
Data from a TTP LabTech Acumen laser scanning cytometer is imported via the Genedata Screener Parser infrastructure.

Dual compound registration
Full registration of dual substance IDs and concentrations. Cell line identities are registered within each experiment set-up.

Robust response surface determination
The Combination Index and Synergy Models are calculated from the individual per cent activity values of all replicates, rather than from averaged replicate measurements as in the typical procedure, making the results more robust.

Selection of mathematical reference models
Choice of Loewe, Bliss Independence and Highest Single Agent (HSA) models for additive or zero interaction, allowing classification and quantification of additive, synergistic and antagonistic interactions.
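The three reference models differ in how they define the "no interaction" baseline. A minimal sketch, with purely hypothetical doses and effects: Bliss treats the agents as acting independently, HSA takes the better single agent as the baseline, and Loewe additivity can be expressed as a combination index (CI = dA/DA + dB/DB in the Chou-Talalay form, where DA and DB are the single-agent doses producing the same effect level as the combination):

```python
def bliss_reference(fa, fb):
    """Bliss independence: expected combined fractional effect of independent agents."""
    return fa + fb - fa * fb

def hsa_reference(fa, fb):
    """Highest Single Agent: baseline is the stronger of the two single-agent effects."""
    return max(fa, fb)

def loewe_combination_index(d_a, d_b, D_a, D_b):
    """Loewe additivity as a combination index.

    d_a, d_b: doses of A and B used together
    D_a, D_b: single-agent doses of A and B giving the same effect as the combination
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    return d_a / D_a + d_b / D_b

# Hypothetical: each compound at a quarter of its equi-effective single-agent dose,
# yet the pair already reaches that effect level -> CI = 0.5, ie synergy
ci = loewe_combination_index(d_a=0.25, d_b=0.5, D_a=1.0, D_b=2.0)
```

Because the models answer slightly different questions, a combination can score as synergistic under one reference and merely additive under another, which is why the platform reports all of them side by side.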

Comprehensive set of result types and displays
Result values for Combination Measurements (data, fits and residuals), Combination Index, Synergy Models (models and excess), and Isobolograms.

How AstraZeneca uses the platform
Using the data analysis results from the Genedata Screener Compound Synergy Extension, AstraZeneca is able to interpret combination profiles and enrich their data mart for improved research capabilities:

Visualisation and interpretation of combination profiles
Within disease panel – patient segmentation: Profiling synergy scores within a disease setting enables identification of highly synergistic combinations mediating tumour cell killing within a disease sub-segment.

Across tissue type – disease positioning: Profiling synergy scores across cells of different tumour tissue origin indicates preferential response to specific combination(s), directing disease positioning options.

Data mart
For each cell line screened, all combination scores based on different reference models, curve-fit parameters and graphical representations are exported to the on-line Data Mart. A variety of commercial and proprietary visualisation tools are used to retrieve relevant data from different cell panels or tissue types.

This streamlined Compound Synergy Extension platform handles large data sets and enables the flexible quantification of synergistic combination effects, including:

• High-throughput workflow for rapid evaluation of a large number of drug combinations across many disease cell panels.
• Use of live/dead phenotypic cell-based assays focused on identifying combinations that enhance tumour cell kill and potentially translate into better tumour regression in in vivo studies and in the clinic.
• Multi-layer data reduction, rapid combination curve fitting and data analysis, synergy quantification and data export – all handled in a fraction of the time required by manual methods.
• Export of calculated data and graphical output to a centralised Data Mart, which facilitates functional standardisation of drug combination analysis.

The Compound Synergy Extension platform provides a high degree of automation, flexible and robust curve fitting, efficiency and integration with the AstraZeneca screening data infrastructure, yet it encourages interactive QC and data review at all steps. It allows simultaneous scoring of results using various published mathematical models (Loewe, Bliss, HSA), visualisation approaches and novel, reliable methods for determining response surfaces, resulting in a fast, accurate and robust assessment of synergistic effects. Using this platform, AstraZeneca has already realised quantifiable improvements in the quality of synergy scores. Scores are more robust, enabling further refinement of experiments, and will have a far-reaching effect on the data analysis of compound combination experiments. These enhanced capabilities will help to identify compelling new drug combination therapy options with novel targeted agents and Standard of Care.


Dr Oliver Leven heads Genedata Screener Professional Services. He has worked with leading global pharmaceutical companies in deployment of High-Throughput, High-Content, Ion Channel and Time-resolved data analysis screening projects. Dr Leven holds a PhD in Bioinformatics from the University of Cologne.

Dr Eric Tang is Associate Principal Scientist at AstraZeneca Oncology iMed Unit in Alderley Park, UK and a committee member of the European Laboratory Robotics Interest Group. He leads the pre-clinical drug combination initiative and Advanced Micro Physiology System for ‘3D’ cultures in the Oncology innovative medicine unit within AstraZeneca and has pioneered high throughput combination profiling since 2005.