You think you need an ELN… but are you asking the right questions?


Asks Matt Clifford, Director, Research and Innovation Strategy, IDBS.

When it comes to technology transformations, sometimes finding the right solution comes from asking the wrong question. One ‘wrong question’ that I hear a lot is: “Is it time for our lab to invest in a new electronic lab notebook (ELN)?”

In many labs, a standard ELN represents the most frustrating part of a scientist’s day: toiling as a data entry clerk to provide data for other departments, without easy access to that data themselves. There’s even a name for the combined toll of under-supported products, poor integrations and bad user experience: frustrated ELN users call it the “tax on science”.

That tax has a real cost in terms of drug discovery. Research team leaders are being asked to find more and better drug targets and to take on more projects throughout the research and development process. New tools and technologies can shorten project times, improve understanding of diseases and experimental space, and reduce the cost of failure – but only if labs have high-quality, well-structured data and scientists have time for higher level analysis.

New technologies implemented poorly to meet demands for better data can mean even more tedium for scientists, without clear benefits for enhanced capabilities or scientists’ day-to-day work. If your lab is hoping for a new ELN alone to fix the ‘tax on science’, there is a good chance that the problem – and the solution – actually lie further upstream.

Instead of a new ELN, you may need an integrated data management and insights platform – and a bigger vision for what your data can do.

ELNs then and now

A few decades ago, ELNs were at the forefront of the digital migration for pharmaceutical labs. At first, they replaced important paper-based processes: labs needed to capture data to manage experiments, comply with regulations, and – pressingly from a business perspective – file patents.

For decades, the patent process in the United States depended on being first to invent. To prove that they made discoveries first, it was crucial for labs to accurately record who did what, and when. Intellectual property management became a key driver in the transition from analogue record keeping to a ‘paper on glass’ approach: replacing paper-based records with electronic equivalents.

This transition improved intellectual property management for early drug discovery. It also introduced new opportunities to validate and improve data quality, creating wins for regulatory and experimental goals. And, of course, ‘paper on glass’ brought labs into the twenty-first century by eliminating reams and reams of paper.

Over the last few decades, however, the laboratory information management landscape has changed. When they were first implemented, for example, ELNs were often the system of record for storing intellectual property; that function now falls to a variety of laboratory information management systems.

In 2013, US patent law also changed. Labs no longer needed to be first to invent, but first to file. Suddenly, it became much more important to submit applications quickly than to pinpoint when an experiment was completed.

Data is now the lifeblood of many research organisations, supporting predictive models that improve target and disease understanding and aligning real-world evidence with clinical outcomes.

Well-structured data has become a huge competitive advantage for fast, accurate analysis and insight – this is especially true in the context of machine learning (ML). But all too often, ELNs are a dumping ground for unstructured data and results that lack meaningful context.

While standalone ELNs can still speed up drug discovery, if they don’t form part of a more comprehensive data strategy and ecosystem they can also get in the way, by enabling outdated practices that need a much deeper overhaul.

What ELNs can and can’t solve

Alone, a basic ELN is the functional equivalent of an untidy sock drawer. You pick up your socks, you throw them in the drawer, and now you have a clean floor. That’s good! In this metaphor, that’s the regulatory and compliance goals of data management achieved.

But tidying up is one thing; looking sharp is another. Instead of hiding your socks, you’ll need to visualise them, pair them up, and work out which pairs go with which outfits. That project requires not just clarity about the socks but also context about the rest of the closet, what’s in the hamper, and what the cool kids are wearing these days.

A messy sock drawer on its own won’t help much; neither will a marginally better ELN. Just like outfit planning requires putting socks in context, goals like faster drug discovery and development, better-characterised disease targets, or empowering scientific talent require well-structured, contextualised data.

A basic ELN is ultimately a data collection tool rather than a research system or a data management strategy – it may be the pain point, but it is not the solution.

Consider, for a moment, a hypothetical cell line development lab. It’s a modern organisation with good hardware, multiple bioreactors and imaging systems, one shared cell viability analyser, and a team of 50 people growing new cell lines over six months and then running parallel experiments over a 14-day window.

Say this lab has a good ELN: it is already using a paper on glass approach. Scientists enter some experimental data and outcomes in the ELN, usually at the end of the process. Throughout the project, however, they make key decisions and consolidate thousands of data points using Excel workbooks saved on local drives and file shares, in email inboxes, Teams sites and shared folders. With this approach, generating a final cell line development report could take six to eight weeks and tens of thousands of resource hours. Tracking a decision back to data can involve more work than redoing the experiments.

Moving away from offline processes can take weeks off compliance timelines. But at the end of the day, this lab is still silo-ing valuable data points in separate notebooks, unconnected to workflows, instruments, or other experiments.

An internal analysis at a top 10 pharmaceutical company showed that in preclinical studies alone, this kind of ‘dark data’ was costing its business $20 million a year in delays, compliance challenges, and missed operational insights. In addition, the opportunity to reuse that data to drive better modelling, and therefore better outcomes, was missed entirely.

Parallelised lab processes, like those performed with automation, are now being coupled with digital approaches such as Digital Design of Experiment (DoE), Digital Twins, and ML. These approaches require high-quality data and experimental metadata to drive insight. When only unstructured outcomes are captured and the data lifecycle from lab to analysis is not controlled, a huge manual data effort is required.
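To make the difference concrete, here is a minimal sketch of an unstructured capture versus a structured one. All names (cell line, instrument and bioreactor IDs, field names) are hypothetical illustrations, not from any specific platform:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Unstructured capture: just a number pasted into a notebook cell,
# with no link to the experiment, instrument or workflow.
unstructured_result = 0.82  # viability fraction, context lost

# Structured capture: the same value tied to its experimental context,
# so DoE or ML pipelines can consume it without manual curation.
@dataclass
class ViabilityReading:
    cell_line: str       # which line was sampled (hypothetical ID)
    bioreactor_id: str   # which vessel it came from
    instrument_id: str   # which analyser produced the value
    day: int             # day within the 14-day parallel run
    viability: float     # the measured value itself
    captured_at: str     # ISO-8601 timestamp of the measurement

reading = ViabilityReading(
    cell_line="CL-0042",
    bioreactor_id="BR-07",
    instrument_id="VIA-01",
    day=6,
    viability=0.82,
    captured_at=datetime(2023, 6, 2, 9, 30, tzinfo=timezone.utc).isoformat(),
)

record = asdict(reading)  # plain dict, ready for a structured data backbone
```

The point is not the specific schema but that context travels with the value: the second form can be queried, aggregated and audited; the first cannot.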

In addition to hurting the business, this kind of ad-hoc approach wastes scientists’ time. Scientists must still manually enter data into the ELN, manually run calculations in Excel, and manually compile metrics into dashboards for weekly standup meetings.

To slice data differently, they may even have to submit a help ticket to another department. In our hypothetical cell culture lab, that might look like bench scientists emailing and chasing on Friday morning, asking for reports due by Monday, including Friday’s runs. Those reports could take hours to pull from disconnected systems and poorly structured datasets.

The alternative: an integrated biopharma research and development system

A new basic ELN may help this lab with compliance and reducing paper waste. Out of the box, though, it won’t connect useful data points. Instead, what most labs need is a new approach that can capture data at the source with all the context of the experimental workflow and the outcomes included, so scientists can derive new insights.

If our cell culture lab adopts an integrated data management and insights platform, their data collection process will look much different. Now, data points pull directly from the cell viability analyser and the bioreactor into a cloud-based data architecture. Context about the experiment, the instrument, and the workflow is automatically captured to create a richer and better-described data set leading to more reusable data and repeatable outcomes.

An integrated platform can take care of analytics steps as well. Routine calculations and data transformations can be built into processes and repeated automatically. An IC50 curve, a multi-parameter regression, or custom equations can all be pre-configured, so scientists don’t need to spend time writing Excel formulas or code, or waiting for supporting informatics teams to complete the work.
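As an illustration of the kind of routine calculation a platform can pre-configure, the sketch below fits a four-parameter logistic (Hill) curve to simulated dose-response data to recover an IC50. The data and parameter values are invented for the example; this is a generic fit, not any vendor’s implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (inhibition)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Simulated dose-response series (concentrations in nM, response in %).
conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
true_params = (5.0, 95.0, 120.0, 1.2)  # bottom, top, IC50, Hill slope
rng = np.random.default_rng(0)
response = four_pl(conc, *true_params) + rng.normal(0.0, 1.0, conc.size)

# The fit itself: the step scientists would otherwise script by hand.
popt, _ = curve_fit(four_pl, conc, response,
                    p0=(0.0, 100.0, 100.0, 1.0), maxfev=10000)
bottom, top, ic50, hill = popt
print(f"Fitted IC50 = {ic50:.0f} nM")
```

Once such a calculation is configured and validated, every run of the workflow gets the same analysis applied the same way, which is exactly what makes results comparable across experiments.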

In addition, integrated platforms can incorporate the ability to trace cell culture genealogies or reagent preparations back to previous experiments, so variability can be traced to its source. ELN functionality is seamlessly integrated with electronic workflow execution. One lab found that implementing digital workflows and automating work meant they could generate cell line development reports 50% faster.

Scientists would rather do science than manual data entry. This is finally within reach, once an integrated platform is set up, as repeatable processes can be configured to provide data and insights, alongside instrument integrations, bringing labs closer to the ideal scenario: clean, ready-to-use data that just arrives.

Now, because data is stored in a centralised data backbone with analytics built in, scientists can quickly access and understand their own data freely; data is democratised and managed according to FAIR principles: findable, accessible, interoperable and reusable. As a bonus, fewer manual steps also mean reduced risk of data integrity issues, which, if uncovered at the last minute, can delay regulatory submissions by as much as six months.

Integrated digital platforms can also manage structured method execution workflows that flag deviations as they occur – for example, preventing the use of uncalibrated equipment. This approach has helped some labs see a 20-30% efficiency increase.
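A deviation check of this kind can be very simple in principle. The sketch below assumes a hypothetical calibration registry and validity window; a real method-execution engine would query an instrument management system, but the gating logic is the same:

```python
from datetime import date

# Hypothetical calibration registry: instrument ID -> last calibration date.
CALIBRATION_REGISTRY = {
    "VIA-01": date(2023, 5, 20),   # cell viability analyser, recent
    "BR-07":  date(2022, 11, 2),   # bioreactor probe, long overdue
}

CALIBRATION_VALID_DAYS = 180  # assumed validity window for this sketch

def check_calibration(instrument_id: str, run_date: date) -> list[str]:
    """Return a list of deviations (empty if the instrument may be used)."""
    calibrated = CALIBRATION_REGISTRY.get(instrument_id)
    if calibrated is None:
        return [f"{instrument_id}: no calibration record"]
    age = (run_date - calibrated).days
    if age > CALIBRATION_VALID_DAYS:
        overdue = age - CALIBRATION_VALID_DAYS
        return [f"{instrument_id}: calibration expired {overdue} days ago"]
    return []

# The workflow blocks the step before bad data enters the record.
deviations = check_calibration("BR-07", date(2023, 6, 2))
if deviations:
    print("Step blocked:", deviations)
```

The value is in *when* the check runs: at execution time, before the measurement, rather than weeks later during report assembly.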

The business case for change

For some labs, this data management vision may sound pie in the sky. Soon though, it will be a basic requirement for staying competitive. A modern BioPharma Lifecycle Management platform (BPLM) shouldn’t just address scientists’ pain points or compliance requirements: it should address core business needs.

Recently, a leader at a leading pharma organisation told me: “The scientists of today will be replaced with the scientists of tomorrow who are using machine learning.” Labs that do not have easy access to well-structured data and built-in insights are positioning themselves to fall behind.

The amount of data that we can now collect and analyse from each experiment is staggering. We have immense opportunities to develop better target understanding, enable the discovery of new and more effective drugs – but only if we can harness that data.

To get a sense of the scale of the problem, imagine an astrophysicist sitting down to look at a scatterplot of four billion stars. A human can’t see and analyse patterns in that volume of data or use it to make predictions.

Biologists face the same challenges, in different contexts: instead of stars, imagine the many ways our individual genomes interact with our environments over time. To make predictions, scientists need tools to handle this complexity, make suggestions, and surface patterns. Digital tools, like machine learning and artificial intelligence, can do all that – if the data coming in is meaningful.

A BioPharma Lifecycle Management platform automates data entry and transformations not just to please scientists and quality assurance teams, but also to power analytics. It puts data in context and provides access across the organisation to link each new experiment to central ontologies. These investments are already unlocking new insights into diseases and pathways.

Designing the lab of the future

It’s one thing to decide that your lab needs more than a new ELN; it’s another to figure out what’s next. For some labs, building an effective research platform did in fact start with an ELN.

Often, these labs were on the cutting edge decades ago. They quickly adopted new, cloud-based tools as they became available. First an ELN, then a laboratory information management system, followed by analytics, data warehousing, a data backbone and integrations framework, and dashboards. They have since been connecting disparate capabilities from different providers.

For a long time, custom approaches like these were the only solution. But now, more turnkey options are also available. Centralised BioPharma Lifecycle Management platforms can now consolidate these functions into flexible systems that can grow and change as labs do and solve business challenges faster.

When deciding to build or buy, labs need to think both short and long term. It is important to have a clear long-term goal for how data will be used – preferably one that centres around the concrete data analysis needs that will drive the business forward.

Building an ecosystem from a myriad of digital building blocks creates distracting challenges that are not core to a pharma company’s strategy. When solutions are available commercially, avoid using specialist teams to reinvent the wheel internally. Even if you can build it yourself, be pragmatic: don’t hold out for a perfect tomorrow at the expense of a better today.

To support adoption, it’s also important to pay attention to the short-term needs of users. If the initial goal was to address complaints about data entry in the ELN, ensure that any system you implement comes with obvious, immediate data entry wins for scientists. Modern systems come with built-in analytics that allow you to track adoption and change management impacts.

If our hypothetical cell culture lab keeps the data architecture that is already in place, it will miss out on the value of digital approaches that inform scientists, reduce work and drive better outcomes. Adopting an integrated BioPharma Lifecycle Management platform will allow the lab to leverage data in ways that would not have seemed possible a decade ago, streamlining each step in the discovery and development journey. That efficiency is especially crucial in today’s first-to-file patent environment.

Maybe your team has been asking whether it’s finally time for a new ELN. Or maybe you are looking for a better laboratory information management system, a new dashboard, or better data quality for compliance. Whatever it is, I would advise you to keep an open mind: the best answer may require a more holistic approach, and you may still need to explore a few more ‘wrong questions’ before the best solution becomes clear.

DDW Volume 24 – Issue 3, Summer 2023

About the author:

Matthew Clifford is Director of Research and Innovation at IDBS. With over 20 years’ experience working within and alongside the pharma industry, including eight years in informatics roles in a top 20 pharma supporting discovery and preclinical organisations, he has a proven background in industry trends and solutions.
