As the demand for assay data continues to grow and enabling technology allows us to generate huge datasets at impressive speeds, assay design is as critical as ever, says Brian Kirk, Vice President, Business Development, BioDot.
While assay scaling has obvious benefits with respect to data analysis and providing answers to complex questions, scaling a high throughput assay is only valuable if you can control your process. Therefore, thoughtful and deliberate planning is critical for the successful implementation of these programs.
Time is valuable
Workflow design can significantly impact process time, and process time in turn can affect the reliability of the resulting data. For example, while loading samples and PCR primers with the same liquid handling robot may save the technologist's time, it may inadvertently introduce assay-to-assay variability in dwell time. This occurs when one step is slower than the next: transferring 1,536 samples to the PCR plate can take hours, while broadcast dispensing the primers to 1,536 wells takes only a few minutes. To balance time from step to step, it may be necessary to build dedicated robots that are configured differently to match the cycle time of each step.
Overall throughput may also suffer if any one piece of automation is asked to perform too many steps. If the batch capacity of the automation is less than the daily needs, multiple runs will be required. Similarly, if one system is responsible for more than one step in the workflow, it can create unnecessary bottlenecks.
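The bottleneck logic above can be sketched in a few lines. The step names and cycle times below are illustrative assumptions, not figures from the article; only the sample-transfer and primer-dispense contrast is drawn from the text.

```python
# Sketch: overall line throughput is set by the slowest step (the bottleneck).
# Step names and cycle times here are illustrative assumptions.
step_cycle_times_min = {
    "sample_transfer": 120,   # minutes to transfer 1,536 samples (hours-scale step)
    "primer_dispense": 5,     # minutes to broadcast-dispense primers to 1,536 wells
    "thermocycle": 90,        # minutes per qPCR run (assumed)
}

# The slowest step gates the whole workflow.
bottleneck_step = max(step_cycle_times_min, key=step_cycle_times_min.get)

# Plates completed in an 8-hour shift, limited by the bottleneck.
plates_per_shift = (8 * 60) // step_cycle_times_min[bottleneck_step]

print(bottleneck_step, plates_per_shift)  # sample_transfer 4
```

Dedicating faster hardware to the slow step (or splitting it across dedicated robots) raises the whole line's throughput, which is the design point the text is making.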
Addressing the true bottlenecks
We are seeing these strategies play out on a global scale as we watch laboratories scale their qPCR workflows to meet COVID-19 testing demand. The thermocycle process takes one to two hours and is the key bottleneck in qPCR. In order to address this issue, organisations are looking to formats that allow more samples to be processed per thermocycle by moving from traditional 96- and 384-well formats to significantly higher densities.
10 samples per plate: approximately 50 results per shift*
100 samples per plate: approximately 500 results per shift*
1,000 samples per plate: approximately 5,000 results per shift*
[* per thermocycler]
This lesson can be applied to all assay designs. If a particular step is long and difficult to change (hybridisations, incubations, etc.), look to miniaturise the assay, enabling higher-density formats that expand the inherent capacity of hardware that may be difficult, expensive or otherwise impractical to expand in a linear fashion. If we look back at the qPCR example, a thermocycler processing 50 samples per shift means that a laboratory will need to find space for, validate, then operate 50 thermocyclers for two shifts to process 5,000 COVID-19 assays on a daily basis.
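The instrument-count arithmetic above can be sketched as a small helper. The assumption of five thermocycle runs per shift is inferred from the approximately 50 results per shift figure for 10-sample plates; it is not stated explicitly in the text.

```python
import math

def thermocyclers_needed(daily_target, samples_per_plate, runs_per_shift, shifts_per_day):
    """Instruments required to hit a daily sample target.

    runs_per_shift is an assumed operating parameter, not a figure
    from the article.
    """
    per_cycler_per_day = samples_per_plate * runs_per_shift * shifts_per_day
    return math.ceil(daily_target / per_cycler_per_day)

# 10-sample plates, ~5 runs/shift (≈50 results/shift), 2 shifts → 5,000/day
print(thermocyclers_needed(5000, 10, 5, 2))    # 50 instruments

# 1,000-sample plates: the same demand fits on a single instrument
print(thermocyclers_needed(5000, 1000, 5, 2))  # 1 instrument
```

This is the miniaturisation payoff in the text: raising plate density by 100× collapses a 50-instrument fleet to one.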
Accounting for dead volumes of expensive, critical reagents is often overlooked and can diminish the economic impact of the overall program. A key reason for scaling a specific assay and leveraging assay miniaturisation is to drive down assay cost, and the move from microliter volumes to nanoliter volumes can be profound. However, if the dispensing platform being used for scale-up requires a large dead volume, the true consumption may not result in the expected reagent reductions. For example, if the dispensing technology can deliver 200 nL droplets in order to build an assay, yet requires a 20 µL dead volume, the total volume per assay is as follows:
(Dispense Volume × Total Number of Dispenses + Dead Volume) / Total Number of Dispenses
Using the above formula, you will note that dispensing ubiquitous or semi-ubiquitous solutions to many targets can amortize the dead volume cost over the number of total dispenses.
If you are dispensing to 100 targets at 200 nL per target with a 20 µL dead volume:
(0.200 µL × 100 dispenses + 20 µL) / 100 dispenses = 0.4 µL per assay
If you are dispensing to 1,000 targets at 200 nL per target with a 20 µL dead volume:
(0.200 µL × 1,000 dispenses + 20 µL) / 1,000 dispenses = 0.22 µL per assay
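The amortisation formula can be written as a small helper that reproduces both worked examples (the function name is illustrative):

```python
def effective_volume_per_assay(dispense_nl, n_dispenses, dead_volume_ul):
    """Effective reagent consumed per assay in µL, amortising the fixed
    dead volume over the total number of dispenses, per the formula:
    (dispense volume × dispenses + dead volume) / dispenses."""
    dispense_ul = dispense_nl / 1000.0  # convert nL to µL
    return (dispense_ul * n_dispenses + dead_volume_ul) / n_dispenses

# 200 nL dispenses to 100 targets with a 20 µL dead volume
print(effective_volume_per_assay(200, 100, 20))    # ≈ 0.4 µL per assay

# 200 nL dispenses to 1,000 targets with a 20 µL dead volume
print(effective_volume_per_assay(200, 1000, 20))   # ≈ 0.22 µL per assay
```

Note that the effective per-assay volume approaches the nominal 0.2 µL dispense only as the dispense count grows, which is the amortisation effect the text describes.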
This illustrates that large dead volumes need to be considered with respect to the way the dispensing technology will be used in your assay.
Considering tolerances, CV, Cp/Pp and Cpk/Ppk for critical inputs
The precision and reliability of your automation in a highly multiplexed assay is critical to assay design. It is important to consider the volume CV of the dispensing platform, whether the CVs vary as the programmed volume changes from 10 nL to 100 nL to 1,000 nL, and whether you have a reliable method for measuring CVs. Often, a liquid handling technology will have different CVs at different dispense volumes, so this must be considered when settling on a final assay volume. If you have well understood CVs and process limits, you can predict whether you will have the process control needed to build a robust process.
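As a rough illustration of how CVs and process limits feed a capability prediction, the standard Cp/Cpk definitions can be applied to a dispense-volume spec. The CV and spec limits below are hypothetical, not values from the article.

```python
def process_capability(mean, sigma, lsl, usl):
    """Standard process-capability indices:
    Cp  = (USL - LSL) / (6 * sigma)          -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)  -- accounts for centring
    """
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical 200 nL dispense with a 4% CV, so sigma = 8 nL,
# against an assumed spec of 200 ± 30 nL.
mean_nl = 200.0
sigma_nl = 0.04 * mean_nl  # CV × mean = 8 nL
cp, cpk = process_capability(mean_nl, sigma_nl, lsl=170.0, usl=230.0)
print(cp, cpk)  # 1.25 1.25 -- a centred process has Cp == Cpk
```

A Cpk of 1.33 or higher is a common benchmark for a capable process; if the CV grows at smaller programmed volumes, the same spec limits can push Cpk below that threshold, which is why volume-dependent CVs matter when settling on a final assay volume.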
The trend towards miniaturisation and scale is inevitable. However, in order to meet this demand in an environment with ever compressing timeframes, we must continue to build robust assays through methodical and disciplined design principles. If we can manage to achieve both scalable and robust designs, we will create powerful methods that will drive profound discovery and, ultimately, improve patient outcomes.
Volume 21, Issue 4 – Fall 2020
Brian Kirk led the development and commercialisation effort to disrupt the FISH and cytogenetics markets with the patented FISHArray technology. He now works with groups in life science (high-throughput screening, genomics, proteomics, etc), leveraging BioDot’s expertise in nanolitre and picolitre dispensing to build next generation tools in these areas. Kirk has over 15 years of experience designing, developing, and marketing high-throughput manufacturing systems for many of the world’s largest diagnostic companies.