The multi-year process of drug discovery and development creates huge silos of data and information. Extracting and combining information can be a significant overhead, resulting in potential for delay and error. Iperion’s Duncan van Rijsbergen provides practical recommendations to help companies create high quality, consistent data that may be shared across systems.
Pharmaceutical companies are increasingly focused on the need for digital transformation and on speeding time to market. The reality is that basic issues, such as getting up-to-date and consistent data to flow between functions and systems, are stymieing those ambitions. Too many companies still store data in documents and extract it again at a later date for development or regulatory purposes, with no clear oversight of all the data.
Regulatory systems contain data on products and their licenses. There is also procedural data, recording interactions with the authorities about a license, from the initial application through to post-authorisation changes. Expert functions, from pharmacovigilance to clinical teams, generate the basic data that feeds the regulatory dossier supporting the license. Usually, there is no direct communication between regulatory systems and the expert functions' systems.
Typically, during the discovery process a high number of potentially active compounds are identified and developed that will not make it to market, due to factors ranging from safety concerns to dose regimens. It is crucial for companies to identify these compounds as early as possible in the process, to minimize the money spent on non-viable development.
Phase III trials, in particular, are hugely expensive, so there is an urgent need to understand a drug's viability before they begin. Yet collating data from the various expert functions, from biochemistry to pharmacovigilance, still routinely depends on conversations between experts. From a management and financial perspective, there is little data-driven, systematic overview.
Sharing data with the regulators
During the development process, regulatory authorities require regular submissions, from justification for animal trials through to permission for clinical trials. It is key for companies to be able to automate the process of pulling together the necessary data. The ideal would be to press a button and generate a report on the current development status for submission.
For example, collated data produced over many years of tests and trials will determine the final drug indication in the marketed product information, which might exclude use of the drug for certain groups such as pregnant women or people taking other drugs. All the data, from every trial, potentially years apart, needs to contribute to this. A data-first starting point is key. If companies store clean and consistent data, rather than documents, they will be in a much better position to automate processes and share this data efficiently with regulatory bodies. Yet companies continue to struggle with basic data quality issues.
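As a minimal sketch of what a data-first approach enables, the snippet below derives a contraindicated-groups summary directly from structured trial records rather than from documents. The record schema and field values are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    # Hypothetical, highly simplified record of one trial outcome
    study_id: str
    phase: int
    population: str   # e.g. "adults", "pregnant women"
    outcome: str      # e.g. "no concern", "contraindicated"

def contraindicated_groups(trials):
    """Collect every population flagged as contraindicated across all trials."""
    return sorted({t.population for t in trials if t.outcome == "contraindicated"})

trials = [
    TrialRecord("ST-001", 1, "adults", "no concern"),
    TrialRecord("ST-014", 2, "pregnant women", "contraindicated"),
    TrialRecord("ST-022", 3, "pregnant women", "contraindicated"),
]
print(contraindicated_groups(trials))  # -> ['pregnant women']
```

With documents, the same answer would require re-reading and re-extracting each trial report; with structured records it is one query, repeatable at the press of a button.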
Data quality issues
First, there is the compliance issue, where licenses must accurately reflect activity relating to clinical trials or manufacturing. In a regulated environment, compliance failure could lead to product recall, license suspension or fines. Datasets in operational settings may not align with datasets shared with the authorities. While the data is essentially the same, the way it is presented may not align exactly across the two systems. The granularity of the data, or how it is worded or linked, might be slightly different.
Secondly, there are issues tracking changes in data over time. Drugs that are produced over many years will experience changes in, for example, composition or manufacture. These must be reflected both in regulatory systems and in the company's operational systems. There is a need not only to change the data but also to keep it in sync. That synchronisation becomes much more difficult if the process is long-winded, with multiple steps in which the data changes form repeatedly, going from structured to document and back to structured again, with manual copying along the way.
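The kind of drift described above can be detected automatically once both datasets are structured. The sketch below, using hypothetical field names and a crude normalisation rule, flags products whose records differ between an operational and a regulatory system:

```python
def normalise(record):
    """Reduce a record to a comparable form: lowercase, stripped strings.
    Real alignment rules (units, controlled terms) would be richer."""
    return {k: str(v).strip().lower() for k, v in record.items()}

def out_of_sync(operational, regulatory, key="product_id"):
    """Return the IDs of products whose normalised records still differ."""
    reg_by_id = {normalise(r)[key]: normalise(r) for r in regulatory}
    diffs = []
    for rec in operational:
        norm = normalise(rec)
        if reg_by_id.get(norm[key]) != norm:
            diffs.append(norm[key])
    return diffs

ops = [{"product_id": "P1", "strength": "10 mg"}]
reg = [{"product_id": "P1", "strength": "10mg"}]
print(out_of_sync(ops, reg))  # -> ['p1']  ("10 mg" vs "10mg" differ)
```

Even this toy example shows why granularity and wording matter: the two systems hold the same fact, but a naive comparison calls them different until a shared normalisation rule is agreed.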
Ideally, the synching process should be integrated with the regulatory process. That way, when the company introduces improvements to the product, testing data can be shared with the regulator much more quickly, accelerating the time it takes to get product enhancements to market. Reducing manual processes also lowers the potential for human error and reduces costs.
Commonly, compliance has become a goal in itself. Ideally, though, compliance should be effortless, a by-product of a company's activities, not the focus of them. When data is aligned and kept in sync automatically, through a properly aligned process, compliance becomes secondary; it will just happen by itself. The benefits of more effective and efficient drug discovery can then take centre stage.
Here are five practical action points to help get companies started on their data quality journey:
1 Communicate with all the stakeholders involved in the process. Together, identify the use cases for data flow continuity and agree on how best to measure the benefits of automating data integration. Getting everyone's buy-in and developing solutions collaboratively drives transparency and improves trust among functions. This approach enables people within a fairly long process chain to see how their data depends on their predecessors and affects their successors. It provides confidence that predecessors have done things correctly and that successors get data they can work with.
2 Develop a shared vocabulary to talk about data held in common across functions. Presenting product data across the organisation in a way that everybody understands, with a common language, builds trust and drives operational excellence and innovation.
3 Standardise data descriptions. Once use cases have been identified and a common vocabulary agreed, consider how best to standardise data relating to complex products. Drug discovery processes can take many years. It is vital to avoid proprietary data descriptions that may be outdated and unusable a decade later. CDISC Standards are now required for regulatory submissions of clinical data to the US FDA and PMDA in Japan. The IDMP model is a valiant effort to find a common way to describe product data.
The quality and consistency of individual data is also key to data standardization initiatives, such as the US FDA’s drive to standardise Pharmaceutical Quality CMC (PQ-CMC) data elements for electronic submission. The more widely accepted a product model is, the easier it is to share with external parties. This includes regulators, and also partners such as labs, manufacturers and research organisations.
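One practical way to enforce a shared vocabulary is to validate product data against controlled term lists before it crosses a system boundary. The sketch below is loosely inspired by IDMP-style product descriptions; the term lists and field names are invented for illustration, as real vocabularies would come from the agreed standard:

```python
# Hypothetical controlled vocabularies; in practice these would be
# drawn from an agreed standard such as the IDMP term sets.
DOSE_FORMS = {"tablet", "capsule", "oral solution"}
ROUTES = {"oral", "intravenous", "topical"}

def validate_product(product):
    """Return the fields whose values fall outside the shared vocabulary."""
    errors = []
    if product.get("dose_form") not in DOSE_FORMS:
        errors.append("dose_form")
    if product.get("route") not in ROUTES:
        errors.append("route")
    return errors

print(validate_product({"dose_form": "tablet", "route": "by mouth"}))
# -> ['route']  ("by mouth" is not the agreed term; "oral" is)
```

Catching the non-standard term at the point of entry is far cheaper than reconciling free-text descriptions a decade into a product's life.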
4 Ensure processes are properly aligned. There needs to be a robust process for capturing and sharing changes over time, and for making sure that systems stay in sync with as little time lag as possible. Focus on bottlenecks. There may be one process in an operational setting and another in the regulatory setting. Where do they meet? Where does the data get exchanged, and how could that be improved?
5 Identify suitable technological solutions. The initial focus should not be on finding the right software, but on the system architecture and on how and where to connect systems. One approach could be to build a bridge between two systems, a point-to-point connection. The issue then is maintaining the link and upgrading functionality in two discrete systems that talk to each other. A better option is a looser coupling, and this is where the common language model comes in. It is important not to take a static approach, asking only how to solve the problem now, but also to consider how to maintain the solution and innovate over time. This is not about individual systems but about a system of systems.
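The loose-coupling idea can be sketched as a pair of adapters around a shared canonical model: each system translates to and from the common representation, so no system needs to know another's internal shape. All field names here are hypothetical:

```python
# Loose coupling via a canonical model: systems never talk to each
# other directly; each one only maps to/from the shared representation.

def from_operational(rec):
    """Adapter: operational system's record shape -> canonical model."""
    return {"product_id": rec["prodId"], "strength": rec["doseStrength"]}

def to_regulatory(canonical):
    """Adapter: canonical model -> regulatory system's record shape."""
    return {"ProductIdentifier": canonical["product_id"],
            "Strength": canonical["strength"]}

op_record = {"prodId": "P1", "doseStrength": "10 mg"}
reg_record = to_regulatory(from_operational(op_record))
print(reg_record)  # -> {'ProductIdentifier': 'P1', 'Strength': '10 mg'}
```

The design pay-off is maintainability: replacing either system, or adding a third, means writing one new adapter against the canonical model rather than rebuilding every point-to-point bridge.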
The core business of a pharma company is to get the best medicines to its patients. Data processing should be a hygiene factor. Ensuring data quality and integration won’t in itself generate innovation but it will provide a platform on which to innovate. A consistent vocabulary is key to supporting effective data communications and getting drugs to market more effectively.
Insight in drug discovery and development
Technology, systems and software can do much to connect, extract and collate data, providing a clear overview of the discovery and development process. But technology is only effective when it is based on a good understanding of the data within the business. In many drug discovery processes, information and data are hidden in documents or even shared in an unstructured way between experts. Identifying this crucial data, bringing it out blinking into the light of day and standardising how it is described, is the first step towards creating automated, interoperable data flows. These are the building blocks that will create an increasingly accessible single view of development, driving more cost-effective innovation.
Volume 21, Issue 4 – Fall 2020
Duncan van Rijsbergen is Associate Director Regulatory Affairs at Iperion, a globally-operating life sciences consultancy firm which is paving the way to digital healthcare, by supporting standardisation and ensuring the right technology, systems and processes are in place to enable insightful business decision-making and innovation.