Optimising Well Results In Label-Free Analysis

By Dr Stephan Heyse, Dr Oliver Leven and Dr Jon Tupy

Bearing in mind the cost and time required to conduct a screening campaign, it is of paramount importance that screening laboratory scientists process and analyse screen data as carefully and consistently as possible. This paper argues that data analysis software that is modular, process-based and supports visualisation and flexible result generation is essential.

Today, screening data analysis is a standard procedure in research organisations. Newer technologies, however, such as high-content screening (HCS) or label-free analysis create serious challenges for scientists in screening labs. Oftentimes, these technologies are not integrated with the established data analysis platform, resulting in increased manual processes and error-prone work.

Further, they require data preprocessing to obtain per-well results, often executed in instrument-specific software packages. Typically provided by the instrument vendor, these packages limit interactive analysis to single plates or to vendor-specific analysis options. These issues prevent screening scientists from efficiently delivering accurate, comparable and well-documented results, even for smaller studies spanning multiple plates. As a result, scientists devote a disproportionately large amount of time to meeting their data analysis standards.

This article will:

1) Review technologies for compound screening and related well results
2) Outline typical data analysis steps
3) Highlight data analysis pitfalls
4) Introduce software concepts designed to avoid such pitfalls
5) Detail a use case that applies these concepts.

Well results overview

The objective of a screening campaign is to identify active samples in a particular assay and set of conditions. Achieving this goal is closely tied to the technology upon which the assay is based. For plate-based screening, each well of a microtiter plate contains an individual experiment. Due to the serial character of such experiments, wells are often grouped according to identical experimental conditions; some wells contain references to assess and normalise the activity in other wells. Determining results for individual wells is accomplished in one of the following ways:

1. Direct quantification of the well result

Assay types using endpoint measurements include absorption, steady-state fluorescence, luminescence and detection of radio-labelled species. These detection modalities generate a single measurement, or only a few measurements, per well. Data from such assays represent an end result per well, sometimes after an additional calculation (eg for ratiometric assays). Evaluation of endpoint-measurement assays is typically accomplished by considering a single result per well.
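
As a simple illustration, a ratiometric well result can be derived by dividing one channel's reading by the other for each well. The following minimal sketch (Python with NumPy) shows this per-well calculation; the channel names and values are invented for illustration and not tied to any specific reader:

```python
# Minimal sketch: one ratiometric result per well from two endpoint readings.
# Channel names and values are illustrative assumptions.
import numpy as np

channel_a = np.array([5120.0, 4875.0, 6010.0])  # eg emission channel 1, one value per well
channel_b = np.array([2560.0, 2438.0, 2003.0])  # eg emission channel 2, one value per well

well_result = channel_a / channel_b             # the single ratiometric result per well
print(well_result)
```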

2. Time-resolved quantification of the well result

Time-resolved measurements such as label-free technologies (eg Surface Plasmon Resonance and wave-guiding techniques, ionic current, time-resolved fluorescence, etc) are used in assays that detect a time-dependent outcome. Usually one or multiple readouts are recorded over time, resulting in one or multiple traces per well.

Often these traces are aggregated to single results (eg peak) or multiple results (eg baseline, peak). Multiple results may require a further calculation (eg peak minus baseline). Often, more complex calculations are performed to further interpret the data, such as fitting a slope, determining the area under the curve, or applying other experiment-specific heuristics.
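
As an illustration of such trace aggregation, the sketch below (Python with NumPy) reduces one well's time trace to a baseline, a peak, a baseline-corrected response, a fitted slope and the area under the curve. The trace, the baseline window and the result names are assumptions chosen for illustration:

```python
# Minimal sketch: aggregate one well's time trace to a few scalar results.
import numpy as np

def aggregate_trace(t, y, baseline_window=slice(0, 10)):
    """Reduce a single time trace to baseline, peak, response, slope and AUC."""
    baseline = np.median(y[baseline_window])          # robust estimate of the initial state
    peak = y.max()
    slope = np.polyfit(t, y, 1)[0]                    # overall linear trend across the trace
    dy = y - baseline
    auc = np.sum((dy[1:] + dy[:-1]) / 2.0 * np.diff(t))  # trapezoidal area under the corrected curve
    return {"baseline": baseline, "peak": peak,
            "response": peak - baseline, "slope": slope, "auc": auc}

t = np.linspace(0, 120, 241)                          # time in seconds
y = 100 + 40 * np.exp(-((t - 60) ** 2) / 200)         # synthetic trace with a peak at t = 60 s
print(aggregate_trace(t, y))
```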

Analysis methods for time-resolved assay technologies are often the most limiting. They require researchers to choose pre-defined result types via reader-specific configurations before they actually execute the assay and see the resulting data. Time-resolved data therefore benefit the most from a flexible analysis workflow that supports both time-trace visualisation and on-the-fly adjustment of analysis outcomes.

3. Quantification of results at the individual cell level

High-content technologies (HCS and imaging flow cytometry) quantify phenotypic changes in biological objects (mainly cells) by numerical analysis of images. For each object, multiple results (parameters, features) are calculated, such as cell and nucleus size, cell roundness and nuclear intensity. Aggregating such object-level data to well results (eg average nuclear intensity or percentage of active cells) creates multiple results per well, each of which may indicate the activity of the compound or allow a scientist to assess the quality of the well result.
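
A minimal sketch of this aggregation step is shown below (Python with pandas); the column names and the activity threshold are illustrative assumptions, not a vendor-defined schema:

```python
# Minimal sketch: aggregate object-level (per-cell) features to well-level results.
import pandas as pd

cells = pd.DataFrame({
    "well":              ["A01", "A01", "A01", "A02", "A02"],
    "nuclear_intensity": [180.0, 220.0, 750.0, 640.0, 710.0],
    "cell_area":         [310.0, 295.0, 330.0, 305.0, 320.0],
})

ACTIVE_THRESHOLD = 500.0  # assumed cut-off defining an 'active' cell

well_results = cells.groupby("well").agg(
    cell_count=("nuclear_intensity", "size"),
    mean_nuclear_intensity=("nuclear_intensity", "mean"),
    pct_active=("nuclear_intensity",
                lambda s: 100.0 * (s > ACTIVE_THRESHOLD).mean()),
)
print(well_results)
```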

This perspective on experimental technologies uncovers similarities and differences across the palette of new and established screening technologies. These insights enable the establishment of systematic processing rules and data analysis patterns.

Standard steps in screening data analysis

Most screening is done on microtiter plates with dedicated control wells for normalisation and plate quality assessment. Data analysis forms a string of individual yet interdependent operations. Whether each step applies depends on the actual experimental technology and the specific scientific question to be answered. The following outlines a common, basic analysis workflow with a default set of analysis requirements.

1. For high-content assays, cell- or object-level measurements must first be summarised to yield well results. This is typically done through pre-processing on the instrument or during the image analysis process. It consists of per-cell (per-object) normalisation (eg intensity divided by area); selection of cells matching defined quality criteria; and summarising the selected cells into well results using simple calculations (mean, sum, count) or more complex ones (median, KS score, standard deviation).

2. For experimental technologies that generate only per-well results, per-well normalisation is often calculated by the instrument software. Examples include baseline correction for time-series experiments and background subtraction for fluorescence experiments.

3. The next processing step is plate-wise normalisation, using either one (n-fold) or two (%effect or %inhibition) reference well groups. This allows across-plate and across-assay comparison of results. Normalisation to a calibration curve is often used if the effect-signal relationship is not linear. This calibration requires a curve fit to the results for a series of standard wells spanning the desired concentration range, which is followed by a translation of the measured well signal into concentration.

4. To reduce systematic effects caused by experimental conditions (eg edge effects due to temperature or oxygen level shifts within a plate), pattern detection and correction algorithms may be applied.

5. Wells not meeting the quality control guidelines (eg due to elevated baselines or a high background signal indicating auto-fluorescence) are invalidated, flagged or masked. Stretches of failed wells, resulting from experimental issues such as blocked pipettes, can be identified by visual inspection or automatic algorithms. They are masked as well.

6. Final result determination depends on the screener’s objective. For single-concentration compound experiments, the results may be sorted and active compounds selected by a threshold. Multiple results from different wells carrying the same compound are subject to replicate analysis and sometimes also condensed to a single value. For dose-dependent experiments, dose-response curves are fitted to the well measurements, yielding the compound potency. For siRNA experiments, the technical replicates (identical siRNA in different wells) and biological replicates (different siRNAs targeting the same gene) must be condensed into a gene ranking based on activity and likelihood. A minimal code sketch of plate-wise normalisation (step 3) and dose-response fitting (step 6) follows this list.
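
The sketch below illustrates plate-wise %inhibition normalisation against high and low control wells (step 3) and a four-parameter logistic dose-response fit (step 6), written in Python with NumPy and SciPy. Control values, concentrations and responses are synthetic assumptions rather than data from a real screen:

```python
# Minimal sketch: plate-wise %inhibition normalisation and a four-parameter
# logistic dose-response fit. All numbers are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def percent_inhibition(raw, high_ctrl, low_ctrl):
    """Map raw signals so that high controls -> 0% and low controls -> 100% inhibition."""
    mu_high, mu_low = np.mean(high_ctrl), np.mean(low_ctrl)
    return 100.0 * (mu_high - raw) / (mu_high - mu_low)

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic curve, increasing with concentration."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Step 3: normalise sample wells of one plate against its control wells
high_ctrl = np.array([9800.0, 10250.0, 9950.0])   # neutral (high-signal) controls
low_ctrl = np.array([1100.0, 980.0, 1050.0])      # fully inhibited (low-signal) controls
raw_wells = np.array([9600.0, 5200.0, 1500.0])
inhibition_wells = percent_inhibition(raw_wells, high_ctrl, low_ctrl)

# Step 6: fit a dose-response curve to normalised results for one compound
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # concentration in uM
inhibition = np.array([2.0, 5.0, 12.0, 30.0, 55.0, 78.0, 92.0, 97.0])  # %inhibition
params, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 1.0, 1.0])
bottom, top, ec50, hill = params
print(f"IC50 ~ {ec50:.2f} uM, Hill slope ~ {hill:.2f}")
```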

Common pitfalls

In most screening experiments, data analysis is a sequence of individual, interdependent steps as detailed in the previous section. These form a chain of calculations with defined intermediate results. A screening-specific software solution automates data management and specific scientific tests to quantify result validity, thereby minimising errors. When dedicated analysis software is absent, a number of issues may arise due to fragmented analysis using multiple applications, general purpose software, or both.

Underestimation of process and process artifacts in data analysis

Some users and organisations face a very fragmented workflow, built around an instrument vendor’s software packages, Excel, and a corporate database for all relevant results. Depending on the type of experiment and the level of data analysis needed, some or even all of the following challenges are a daily experience:

No overview of the complete data set

As the instrument software allows individual, interactive processing of results only for a single plate, and as a complete assay can only be analysed in batch mode, the complete data set cannot be reviewed with a sufficient level of detail. Only a graphical overview of all well results for the plates screened (heat map/trellis plot) allows scientists to discover operational issues. These range from random yet frequent pipetting errors, which produce a typical ‘high-low’ pattern in adjoining wells, to systematic deviations between plates from different runs (Figure 1).

Figure 1 Screenshot of Genedata Screener Assay Analyzer module

Manual input of formulas by copy and paste

Excel is used for many analyses and is often simply the glue between different stages of the data analysis, so manual editing of formulas in cells is the norm. Even when formulas are initially checked for correctness, copying them to other sheets risks copy-and-paste errors: in the best case references are lost; in the worst case numerical results change without any error message. Such errors are not easily detected and frequently go unnoticed. Even if the analysis is error-free, certifying the results requires a significant investment of time in double-checking.

Multiple interdependent calculation steps spread over different spreadsheets

In spreadsheet-based calculations, the result of one formula is often the input for the next processing step. Users cannot view formula and result side by side, which increases the time spent on error searching and sanity checks. Clearly, the time needed to set up and monitor spreadsheet-based calculation workflows increases disproportionately with the number of steps. In fact, it is not uncommon for a single calculation to be spread across five different spreadsheets.

Disconnect between ‘raw data’ and results

Many complex screening technologies produce raw data containing sets of values per well, not single numbers (eg multiple time-dependent measurements in FLIPR and label-free experiments, or results of image analysis from cells in high content screening). It is important for users to be able to go back to the actual image or time-series data trace to validate the numerical findings of an interesting well result.

Without this ‘phenotypic quality control’, additional work is required to search through raw data in a different software system, on another computer, in another room or, worst of all, on paper. Ideally, scientists need a visual review of source data combined with the ability to change analysis methods to accommodate any new details they see in the data.

Fragmented workflow

Manually moving a data set from one software system to another (eg from the instrument software to a screening data analysis package or a gene-ranking application) requires significant effort. When users detect a data processing error, they must go back to the instrument software and repeat the data export, import and processing.

Often, this prevents optimal analysis, as these loops are executed two to three times to deliver acceptable results. The additional cycle times are burdensome for users. In an ideal workflow, users can easily return to the initial calculations and see the effect of their changes on the end results.

Over-engineering the data analysis pipeline

In many environments, the need to automate the various parts of data processing has been addressed by the introduction of implicit (script-operated) or explicit (graphical) workflow support tools. While such tools offer flexibility to IT-savvy and statistically-trained users, they often impose a rigid roadblock for day-to-day data analysis.

This means that the individual data processing pipeline is tailored to a specific experiment and must be adapted for every new experiment including slightly changed conditions (eg different plate format or changed raw data formats). Such adaptations require an IT or statistics skill set, and thus create delays and overhead in recruiting support for every change. With a dedicated organisational effort, it is possible to create a highly-specific toolbox that addresses most processing and analysis requirements for a work group.

While these support tools may do the job over the short term, this high investment in time and manpower usually cannot be maintained over longer periods of time. New analysis requirements, support for new assay technologies and upgrading versions of the underlying workflow support tools all add to the fragility of a custom-built system.

Software to improve screening data analysis

Obtaining significant value from a complex, resource-intensive process such as compound screening with label-free technologies requires advanced data analysis software. Such software combines modularity and flexibility with user-accessible control and customisation of results to accommodate a wide variety of analysis objectives. Moreover, when a single parameter is changed in such a unified system, downstream processes can be updated automatically and in sequence.

The system interface must provide useful visualisation and screening-specific metrics and results. When assessing a software system for screening data analysis, the following functionality checklist could help to ensure the adoption of a solution that optimises well results from label-free technologies.

Modularity

A single system must support: raw data import; well-result calculation (from multiple results per well, as in ratiometric assays, but also in HCS and time-series experiments); normalisation; calculation of plate quality-control parameters (Z’ and S/B); and data correction and compound result calculation for single-concentration and dose-response curve experiments.

Software modules that work together as a whole allow customisation of specific workflow aspects while maintaining interdependence of the complete workflow. Each system building block can be independently accessed and configured by users, but all are part of a larger system that supports automation and data integrity.
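
To make the plate quality-control parameters mentioned above concrete, the following sketch computes Z’ (Z-prime) and signal-to-background (S/B) from a plate’s positive and negative control wells (Python with NumPy; the control values are synthetic assumptions):

```python
# Minimal sketch: Z' and S/B for one plate, computed from its control wells.
import numpy as np

pos_ctrl = np.array([9800.0, 10250.0, 9950.0, 10100.0])  # high-signal controls
neg_ctrl = np.array([1100.0, 980.0, 1050.0, 1010.0])     # background controls

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (np.std(pos, ddof=1) + np.std(neg, ddof=1)) / abs(pos.mean() - neg.mean())

def signal_to_background(pos, neg):
    """Ratio of mean signal to mean background."""
    return pos.mean() / neg.mean()

print(f"Z' = {z_prime(pos_ctrl, neg_ctrl):.2f}, "
      f"S/B = {signal_to_background(pos_ctrl, neg_ctrl):.1f}")
```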

Staged processing

Staged processing ensures that well-result calculations are consistent for all wells before plate quality control, which in turn is a prerequisite for dose-response curve fitting. Intermediate results should be maintained in a defined state (stored and/or versioned) and remain accessible to the user at all times, and downstream results should be updated if early-stage data is further manipulated.
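
One way to picture staged processing is a chain of cached stages where editing an upstream stage invalidates everything downstream. The sketch below is an illustrative pattern only, assuming a simple three-stage chain; it does not describe any particular product’s architecture:

```python
# Minimal sketch of staged processing: ordered stages with cached results;
# editing stage i invalidates the cached results of stages i..end.
from typing import Any, Callable, List

class Pipeline:
    def __init__(self, stages: List[Callable[[Any], Any]]):
        self.stages = stages
        self.cache: List[Any] = [None] * len(stages)

    def run(self, raw: Any) -> Any:
        value = raw
        for i, stage in enumerate(self.stages):
            if self.cache[i] is None:            # recompute only what was invalidated
                self.cache[i] = stage(value)
            value = self.cache[i]
        return value

    def edit_stage(self, index: int, new_stage: Callable[[Any], Any]) -> None:
        self.stages[index] = new_stage
        for i in range(index, len(self.stages)):  # downstream results must be redone
            self.cache[i] = None

# Example chain: baseline correction -> plate normalisation -> ranking for hit selection
pipe = Pipeline([
    lambda raw: [x - 100.0 for x in raw],             # assumed baseline offset
    lambda vals: [100.0 * v / 9000.0 for v in vals],  # assumed control-based scaling
    lambda vals: sorted(vals, reverse=True),
])
wells = [9100.0, 5200.0, 3100.0]
print(pipe.run(wells))
pipe.edit_stage(0, lambda raw: [x - 150.0 for x in raw])  # change an early-stage rule...
print(pipe.run(wells))                                    # ...downstream results update
```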

Visualisation of data and results

The human eye can detect small nuances and changes where generic algorithms fail. Seeing all plates or all dose-response curves of an experiment provides new insights and data familiarity. For numeric or derived results, visualisation provides access to the underlying source data, be it high-content images or traces from time-dependent experiments.

Iterative workflow support

As mentioned above, it should be possible to alter early results and see downstream results automatically updated. A system must ensure speedy processing: when users must wait minutes for the results of a calculation or hours for results to be saved, this drastically reduces the ability to explore and understand experimental data.

Use case for interactive visualisation and analysis of label-free data

The following data analysis steps are typical for a full time-resolved, label-free experiment. They include detecting, importing, visualising and aggregating well data captured as time series. An initial step in time-series data analysis is baseline calculation: the definition and verification of an initial stretch of measurements free of any other experimental influences.

Ideally, several such measurements are aggregated by median, yielding a robust measure of the initial state of the experiment. Here, visualisation of the well traces is extremely useful, both as an initial visual check on experimental outcome, and as confirmation that the correct time period has been used to capture the baseline measurement (Figure 2).

Figure 2 Compound scoring with access to the full time traces
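
A minimal sketch of this baseline step, including a simple verification that the chosen window is stable, might look as follows (Python with NumPy; the window length, tolerance and synthetic trace are assumptions):

```python
# Minimal sketch: baseline as the median of an assumed pre-addition window,
# plus a simple check that the window is actually stable.
import numpy as np

def baseline_with_check(trace, window=slice(0, 20), max_rel_spread=0.05):
    segment = np.asarray(trace)[window]
    baseline = np.median(segment)                               # robust initial-state estimate
    rel_spread = (segment.max() - segment.min()) / abs(baseline)
    return baseline, rel_spread <= max_rel_spread               # (value, window looks stable?)

rng = np.random.default_rng(0)
trace = 1000.0 + rng.normal(0.0, 5.0, size=200)                 # synthetic flat baseline with noise
baseline, stable = baseline_with_check(trace)
print(f"baseline = {baseline:.1f}, stable window: {stable}")
```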

A subsequent step analyses the full experimental data set for samples that show activity in the assay. For time-series data this involves peak finding and quantification, using methods such as maximum value, time of maximum, slope and area under the curve. Note that there is also interest in results that allow data quality assessment; the baseline measurement above is one example.

For the time-series example, the software must support calculation and display of multiple outputs such as baseline, maximum value, value at a given time, etc. Systematic visualisation is a prerequisite. A typical dual-addition FLIPR assay requires collecting the maximum value over the second half of the time series, corresponding to the addition of the test compound (Figure 3).

Figure 3 A dual-addition FLIPR experiment designed to detect the loss of agonist signal in the second half of the time series experiment
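
A minimal sketch of this dual-addition scoring step is shown below (Python with NumPy); the second-addition time, baseline window and trace are assumptions chosen for illustration:

```python
# Minimal sketch: score a dual-addition trace by the maximum response after
# the second addition, relative to the pre-addition baseline.
import numpy as np

def second_addition_response(t, y, second_addition_time, baseline_window=slice(0, 10)):
    t, y = np.asarray(t), np.asarray(y)
    baseline = np.median(y[baseline_window])
    second_half = y[t >= second_addition_time]
    return second_half.max() - baseline            # peak response to the test compound

t = np.linspace(0, 300, 601)                       # seconds; second addition assumed at t = 150 s
y = 100 + 60 * np.exp(-((t - 180) ** 2) / 400)     # synthetic response after the second addition
print(second_addition_response(t, y, second_addition_time=150.0))
```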

However, visualisation of per-well traces across all assay data may reveal additional biological mechanisms – and the need for additional types of data aggregation to detect these other response categories in high throughput (Figure 3).

Conclusion – Optimising Well Results In Label-Free Analysis

Considering the expense and time required to conduct a screening campaign, it is incumbent upon screening lab scientists to process and analyse screen data as carefully and consistently as possible. Easy-to-use software that is modular, process-based and supports visualisation and flexible result generation is essential. DDW

This article originally featured in the DDW Spring 2012 Issue

Dr Stephan Heyse leads the Genedata Screener business unit. He earned his PhD in Biophysics from the Swiss Federal Institute of Technology. His thesis work focused on developing and applying optical waveguides for label-free measurement of cell signalling events.

Dr Oliver Leven heads Genedata Screener Professional Services. He has conducted consulting and deployment projects in the areas of High-Content and Time-resolved Screening. Dr Leven holds a PhD in Chemistry from the University of Köln.

Dr Jon Tupy is Head of Genedata US Professional Services. His PhD was in Biochemistry and Biophysics from the University of California, San Francisco, followed by a fellowship in Bioinformatics at the Berkeley Drosophila Genome Project.
