Markus Gershater, CSO and Co-Founder of Synthace, discusses how technologies such as AI and automation are helping advance biotech and drug discovery.
On June 5th, 1833, Charles Babbage presented a small working part of his “Difference Engine No. 1” to a group of interested onlookers. Sitting in the audience was Ada Lovelace, a tack-sharp 17-year-old mathematician. Fascinated by what she saw, she met Babbage afterward and formed a friendship of incredible importance: he would go on to design what is now regarded as the first computer, and she would write what is now regarded as the first computer program for that very same machine.
Both having radical ideas held back by the limitations of their contemporary technological environment, Babbage and Lovelace were ahead of their time. The thing about good ideas, though, is that they tend to stick around. A little over a century later, Alan Turing would crack Enigma and the modern age of computing would begin.
Much like computing in the gap between its 19th-century conception and its 20th-century realisation, the life sciences are in a similar limbo. But not for long. Many of the ideas, technologies and methodologies that life science needs to undergo complete transformation are already here. On their own, they are powerful. But brought together and used in concert, they will be revolutionary.
And it’s going to happen sooner than you think.
Earlier this year, I shared a stage with Dr. Helena Peilot Sjögren, Associate Principal Scientist in Discovery Biology at AstraZeneca, as we made a joint presentation at SLAS. I listened as she told the audience how they had begun running “experiments that were previously impossible.” She later went on to give some examples (which I’ll return to later) that confirmed a hunch I’ve had for some time: we are on the verge of a new way of doing science. I predict that, by the end of this decade, the way we do experiments will be entirely transformed.
A new era for drug discovery: 3 ways forward
To date, digital tools in the lab environment have been biased towards record-keeping or limited operational execution. There are design tools we use before entering the lab, automation tools we use while we are in the lab, and still others, most notably the now-ubiquitous electronic lab notebook, that we use after the fact.
To speed the discovery of new drug targets and new potential molecules, and to help us analyse large, complex, multidimensional datasets ever more effectively, we must move toward digital tools that combine the before, during, and after of the experiment lifecycle, enabling the doing of science, not just the recording of it.
There are three ideas whose convergence will buoy the transition to this united model: Design of Experiments (DOE), “Lab Automation 2.0”, and artificial intelligence and machine learning (AI/ML). The time is ripe to bring them all together and move towards cohesive digital tools that enable the revolution I’ve already glimpsed in the work done at AstraZeneca.
Before we discuss how to bring them all together, let’s consider each in turn.
Making Design of Experiments (DOE) a reality
Multifactorial experiments are vital in the study of biology because they help us understand the interactions that are fundamental features of all living systems. Without understanding them, we are lost. While DOE will not transform all of discovery on its own, it allows us to reach far deeper into the complexities of biology than we could otherwise.
DOE is not a new concept: it has been around since the 1930s, yet its application in the life sciences is still limited. There are four main reasons why:
- Awareness and understanding: some don’t believe it will work, some don’t grasp its full potential, and some don’t understand it at all
- Situational inertia: it is difficult to get started with creating DOE designs and with the automation equipment needed to execute them
- Learning curve: it’s a lot to learn, particularly when it comes to the details of experimental design, data structuring, and analysis
- Seeing is believing: until you use it, it’s hard to understand how much it will change your science and your work
The fundamental change needed to overcome every one of these barriers is abstraction. Just as we use programming languages instead of binary code and currency instead of barter, abstraction is both a common and vital part of our lives. A scientist who must translate the complexity of DOE by hand, manually code machine instructions, and manually aggregate and reformat experimental data will get frustrated and stuck. I know I have.
Instead, the scientist with the right layer of abstraction between their ideas and the nitty gritty of execution and data aggregation will have more room to breathe and focus on the problem. These are fundamental tasks that must be handed off to software so that it can do the heavy lifting for us. It’s high time this happened in the life sciences.
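To make the idea concrete, here is a minimal sketch in Python of what that layer of abstraction might look like: the scientist declares factors and levels, and software expands them into a full-factorial design that downstream layers can execute. The factor names and levels are hypothetical, and real DOE tooling would offer far richer designs (fractional factorials, response-surface designs, and so on).

```python
# Minimal sketch: declare factors and levels, let software expand them into
# a full-factorial design. Factor names and levels are hypothetical.
from itertools import product

factors = {
    "ph": [6.5, 7.0, 7.5],       # buffer pH
    "mgcl2_mM": [1, 5, 10],      # MgCl2 concentration (mM)
    "enzyme_nM": [10, 50],       # enzyme concentration (nM)
}

# Every combination of every factor level: 3 x 3 x 2 = 18 runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"run {i:02d}: {run}")
```

The value is not in these few lines of code but in the handover they represent: once a design exists as structured data, software can generate the machine instructions and join results back to conditions without anyone reformatting spreadsheets by hand.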
Delivering “Lab Automation 2.0” environments
We’ve had the capacity for high-throughput automation for some time now and, while higher throughput is always welcome, there’s only so much value it can drive. It’s time to move to Lab Automation 2.0:
- Full traceability
- Full reproducibility
- Higher quality and consistency
- Increased walk-away time
- Capable of handling vast complexity
- Substantially improved usability
With this, any scientist will be able to design and run experiments that will work on any number of different machines, in any location, without writing a single line of code. Automation engineers will then be able to operate at a higher conceptual level, enabling and optimising the work of their teams at scale.
Right now, we are limited by the software that we use (or don’t). Creating automated methods and protocols requires a significant time investment from specialist automation engineers, often writing custom code, often while blocking machine time. This limits the use of automation to wherever that investment makes sense.
Getting to Lab Automation 2.0 is another area where software will be vital. To make this a reality, we need services that can dynamically reprogram machine instructions and integrate with multiple classes of machine at the same time.
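As an illustration only, and not a description of any particular product, such a service might resemble the sketch below: the experiment is declared once as device-agnostic steps, and each machine class supplies a driver that turns those steps into its own instructions. All class, method, and device names here are hypothetical.

```python
# Hypothetical sketch of a device-agnostic execution layer: one abstract
# interface, multiple machine-class drivers, one shared experiment design.
from abc import ABC, abstractmethod

class LiquidHandler(ABC):
    """Common interface any machine class must implement."""
    @abstractmethod
    def transfer(self, source: str, dest: str, volume_nl: float) -> None: ...

class AcousticDispenserDriver(LiquidHandler):
    def transfer(self, source, dest, volume_nl):
        print(f"[acoustic dispenser] {volume_nl} nl: {source} -> {dest}")

class PipettingRobotDriver(LiquidHandler):
    def transfer(self, source, dest, volume_nl):
        print(f"[pipetting robot] {volume_nl} nl: {source} -> {dest}")

def execute(design: list, device: LiquidHandler) -> None:
    """Translate abstract design steps into instructions for any device."""
    for step in design:
        device.transfer(step["source"], step["dest"], step["volume_nl"])

design = [
    {"source": "buffer_A", "dest": "plate1:A1", "volume_nl": 2500},
    {"source": "enzyme", "dest": "plate1:A1", "volume_nl": 250},
]

# The same design runs unchanged on either class of machine.
execute(design, AcousticDispenserDriver())
execute(design, PipettingRobotDriver())
```

The design choice that matters is the separation: scientists describe intent once, and translation into vendor-specific instructions becomes a solved, reusable problem rather than a per-protocol coding project.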
Colliding datasets with AI/ML technology
The buzz around AI/ML is remarkably strong and, without a doubt, it will be transformational in bringing new insight to biology. But for all the fanfare, we have yet to see the full realisation of its potential. The work of biology, and the data and metadata it produces, is difficult to represent in code and difficult to digitise. Until we can do that, AI/ML remains a pipe dream, the preserve of “big tech.” The volume and quality of the data we can provide to these tools determine the likelihood of uncovering anything interesting, so this should be another priority.
However, with Lab Automation 2.0 and with the volumes and quality of data generated by methodologies like DOE in such an environment, this will finally become possible. We may even begin to map entire biological landscapes overnight, using the resulting data and metadata to predict future outcomes. There will likely come a time in this decade when AI can predict the best possible experiment design before we even step into the lab.
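As a purely illustrative sketch of that loop, using synthetic data, hypothetical factors, and a standard off-the-shelf model rather than anything vendor-specific, a surrogate model fitted to structured DOE results can already score candidate conditions and suggest where to look next:

```python
# Illustrative only: fit a Gaussian process to (synthetic) DOE results and
# pick the next condition by a simple upper-confidence-bound rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Each row: (pH, MgCl2 mM) from a previous round; y: measured assay signal.
X = np.array([[6.5, 1], [6.5, 10], [7.0, 5], [7.5, 1], [7.5, 10]], dtype=float)
y = np.array([0.42, 0.55, 0.81, 0.60, 0.48])

model = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# Score a grid of candidate conditions; favour high predicted signal plus
# high uncertainty, so the model both exploits and explores.
ph, mg = np.meshgrid(np.linspace(6.5, 7.5, 21), np.linspace(1, 10, 19))
candidates = np.column_stack([ph.ravel(), mg.ravel()])
mean, std = model.predict(candidates, return_std=True)
best = candidates[np.argmax(mean + std)]
print(f"suggested next condition: pH={best[0]:.2f}, MgCl2={best[1]:.1f} mM")
```

None of this is exotic; what has been missing is the upstream machinery that produces data and metadata clean and structured enough for models like this to be worth running.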
Should this come to pass, the upshot will be scientific breakthroughs that defy belief by today’s standards.
Assay development meets high dimensional experimentation (HDE)
Before I discuss how we can hasten the convergence of these technologies, I want to return to the examples from AstraZeneca that I alluded to earlier. Particularly interesting is how they have combined and deployed both DOE and Lab Automation 2.0 across their assay development teams. They are now working with a new capability that we have taken to calling “high dimensional experimentation,” or HDE for short.
We know that “one factor at a time” (OFAT) is terrible for studying biology because of its inability to explore interactions. Classic DOE also has its limitations, favouring an efficient design iterated many times to explore a defined design space, an inherently cautious approach. With HDE, we blow this caution out of the water. When we have access to (and integration with) equipment that can run 1,536-well experiments and dispense volumes as low as 2.5 nl, we can explore an entire design space in a single shot.
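To give a feel for the scale involved, and with hypothetical factors and a deliberately simple coded design, a 7-factor, 2-level full factorial occupies only 128 wells; replicated four times it still fills barely a third of a 1,536-well plate:

```python
# Illustrative sketch: lay a 7-factor, 2-level full factorial (replicated 4x)
# onto a 1,536-well plate (32 rows x 48 columns). Factors are hypothetical.
from itertools import product

factors = ["ph", "mgcl2", "dtt", "enzyme", "substrate", "detergent", "glycerol"]
design = list(product((-1, +1), repeat=len(factors)))   # 2**7 = 128 combinations
runs = design * 4                                       # 4 replicates -> 512 wells

def well_name(index: int, n_cols: int = 48) -> str:
    """Map a run index to a 1,536-well coordinate, e.g. 0 -> 'A1'."""
    row, col = divmod(index, n_cols)
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    label = letters[row] if row < 26 else "A" + letters[row - 26]
    return f"{label}{col + 1}"

plate_map = {well_name(i): dict(zip(factors, run)) for i, run in enumerate(runs)}
print(len(plate_map), "wells assigned; A1 =", plate_map["A1"])
```

Run on equipment dispensing at nanolitre volumes, a layout like this turns an entire design space into a single plate’s worth of work.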

Using HDE, one AstraZeneca team found a way to run an assay with 50% less of a highly expensive reagent. Another team doubled their assay window by running a 1,536-run experiment investigating 7 factors. Another completed a full buffer screen with 9 factors and 75 conditions, where each of those conditions required a 16-point dilution curve to investigate (approximately 1,200 runs). These were teams of world-class scientists who had long been working with the very best facilities modern technology has to offer and yet, with HDE, they were able to unlock these “previously impossible” experiments.
Used consistently, the potential impact of HDE on assay development is huge:
- Higher assay quality
- Lower assay cost
- Faster time-to-insight
This is an early indication of what’s to come. But how do we take things further still, through the rest of this decade?
Tying it all together: the experiment of the future
“The adjacent possible” is a term first coined in the mid-nineties by the theoretical biologist Stuart Kauffman. It’s a concept applicable to any given system where the present “actual” of the system expands into nearby elements that are not yet part of it. Like rooms in a house, says Steven Johnson as he explores the same concept in his book Where Good Ideas Come From, we may only open a door to a room that is next to the one we currently stand in, never a door to a room on the other side of the building.
DOE, lab automation, and AI/ML are all ideas that have remained, to some degree, stubbornly adjacent to the current “system” of the life sciences. To realise their full potential, we must find a way to fully absorb and bring them all together. I believe the missing ingredient here is the framework we use to think about this problem. Whether we’re talking about the software we want to deploy or the equipment we want to install, much is made of “the lab of the future” as our industry’s panacea. We create a subtle but profound shift if we instead switch to thinking about the experiment of the future.
Rather than considering each tangible or intangible part of a lab on its own, this shift asks us to examine the many assumptions about how we should work in the first place. When we think in terms of the experimental record, we need an ELN. When we think in terms of sample management, we need a LIMS. When we think of the experiment itself, we stop thinking about the people, processes, equipment, data, and methodologies as separate problems to be solved in isolation. We ask instead what combination of these we need to do the best possible science.
Is there a way to enable and control the entire experiment lifecycle from end to end? Is there a way to enable DOE, Lab Automation 2.0, and AI/ML with a single unifying standard? Is there a way to elevate the scientist so they can spend more time on what matters most, applying more of their individual talents to today’s most difficult problems with the full power of modern computing? I believe there is, and it’s something that the team at Synthace is hard at work on as we speak.
“Imagination,” said Ada Lovelace, “is the Discovering Faculty. It is that which penetrates into the unseen worlds around us, the worlds of Science.” For us to unlock a new era of discovery, we must find ways to unlock the imaginations of the scientists working at the front line of R&D and give them the tools they need to run the kinds of experiments they can only imagine right now.
If we do that, we usher in a new era of biological insight.