Implementing and Operating Large Complex Capital Projects in a Research Environment
UK Biobank is a major UK study that will recruit 500,000 volunteers aged between 40 and 69 to provide a resource for studying the roles of genetic factors, environment and lifestyle in health and disease.
As part of the study, it will collect biological samples from these individuals, process them and store them at ultra-low temperatures for a period of at least 30 years.
This article addresses the approaches used to design, build and commission the sample processing and archiving facilities. It describes how lessons from manufacturing industry have been successfully transferred to an industrialised, high throughput bio-processing factory. Lessons from implementation of automation in other industries are also considered.
Finally, an understanding of the project lifecycle and the transition to full operation and the consequent impact on management and leadership are discussed in the overall context of the project.
UK Biobank (www.ukbiobank.ac.uk) is a large national study in the United Kingdom established to determine the role of genetic factors, environmental exposures and lifestyle in the causes of major diseases of middle and late age. It will recruit 500,000 study participants aged between 40 and 69 over the next three and a half years to local assessment centres where lifestyle information, physical measurements and samples of blood and urine will be collected.
This resource will be linked to the participant’s health record for longitudinal follow up to enable the establishment of large disease cohorts and matched control groups. The detailed protocol (manuscript in preparation) for the collection, processing and archiving of such a large number of biological samples was developed through a thorough review of current knowledge, and wide consultation and peer review in the scientific community, followed by extensive piloting to ensure that the proposed procedures were fit for purpose.
It details the samples to be collected, the preliminary processing and storage temperatures, the transport of samples to a central processing facility, and the processing, aliquoting and storage of each sample. Using the evolving protocol as a starting point, the processes, technology, data structures and information systems and facilities were designed, tested and implemented in parallel over a two-year period to ensure a robust, cost-effective sample handling capacity.
The UK Biobank sample handling and processing protocol – understanding the operational challenge
As UK Biobank is a prospective, longitudinal study, it cannot state precisely the complete range of analyses that will be performed on the material it stores. Consequently, while there is no perfect solution and there are a great number of potential solutions, development of the protocol for the collection of biological samples in UK Biobank was led, within the constraints of a finite budget, by a number of key principles.
In particular, the aim is to collect samples that allow the widest possible range of assays that could plausibly be envisaged for the future, and to avoid collection, processing or storage approaches that would inherently preclude such assays (ie ‘future proof’ the collection as far as possible given current knowledge and available resources). In order to conserve the resource, only those assays that require fresh material would be done on all samples prior to storage (eg haematology).
As briefly described, participants in the UK Biobank study will attend an assessment centre conveniently located near their home. At the end of the visit, samples of blood and urine will be collected into a variety of vacutainer sample collection tubes (sterile tubes under a slight vacuum into which blood is drawn via a venepuncture apparatus) (Figure 1).
Following minimal processing at the assessment centre, the samples are shipped overnight to a central processing facility where they are processed and archived according to Table 1.
UK Biobank will operate up to six concurrent assessment centres each aiming to process 110 people per day for three and a half years. This produces 4,620 vacutainers per day that in turn generate 19,800 1ml sample aliquots. Over the course of the project, 15 million aliquots of biological sample will be produced that need to be stored at ultra-low temperature for at least 30 years and that can be retrieved with complete accuracy and reassociated with a specific data record.
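The throughput figures above can be checked with simple arithmetic. The per-participant tube and aliquot counts, and the implied operating days per year, are inferred from the quoted numbers rather than stated in the article:

```python
# Back-of-envelope check of the quoted UK Biobank throughput figures.
# Tube and aliquot counts per participant are implied, not stated.

CENTRES = 6
PARTICIPANTS_PER_CENTRE_PER_DAY = 110
TUBES_PER_PARTICIPANT = 7        # implied: 4,620 / 660
ALIQUOTS_PER_PARTICIPANT = 30    # implied: 19,800 / 660

participants_per_day = CENTRES * PARTICIPANTS_PER_CENTRE_PER_DAY
tubes_per_day = participants_per_day * TUBES_PER_PARTICIPANT
aliquots_per_day = participants_per_day * ALIQUOTS_PER_PARTICIPANT

print(participants_per_day)  # 660
print(tubes_per_day)         # 4620
print(aliquots_per_day)      # 19800

# 15 million aliquots over 3.5 years implies roughly this many operating
# days per year (an assumption, not a figure given in the article):
operating_days_per_year = 15_000_000 / aliquots_per_day / 3.5
print(round(operating_days_per_year))  # 216
```

The implied ~216 operating days per year is consistent with a five-day week less holidays, which suggests the quoted totals are internally coherent.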
Therefore, at the start of the design process, the goal was agreed that UK Biobank must establish a reliable, cost-effective capacity to continually process, store and retrieve samples and data in a form suitable for future scientific research. This was an important statement of aims because it sets goals not just for throughput but also quality of samples and data and sustainability within a finite budget during both the set-up phase and the maintenance phase within the context of the overall objectives of the study.
Importantly, it also moves the goals to a clear functional target and avoids an obsession with automation or throughput per se without understanding the aims of the capability or its impact on the broader project (see later).
Tackling the challenge – learning from successful industries and avoiding the mistakes of others
Even at the early stages of the development of the protocol, it was clear that the sample processing and archiving would be a high throughput, industrialised process established in a centralised facility.
Consequently, in planning the design and implementation of the UK Biobank facility, we took two approaches; the first was to learn from comparable projects where the expected gains and successes were either not realised as quickly as had been anticipated or were not achieved at all.
We focused particularly on the implementation of automation in the high throughput screening area in the pharmaceutical industry because of parallels of scale and process and the activities occurring at the interface of research and operations:
Piecemeal implementation of technology as a lights-out operation
The rise of Japanese manufacturing in the post-war era led in the West to huge investment in automated processes in a number of industries, especially car manufacture. Executives from Western countries were given tours around factories that, if not lights-out operations, were very highly automated. Pressed steel body parts and components entered a line of robotic arms and a car appeared at the other end to be finally checked and finished by humans.
The answer then seemed obvious: implement robotic processes to improve quality, increase throughput and reduce headcount. Subsequent investment, however, failed to produce the expected gains. The organisations had failed to address implementation of technology as part of the overall requirement: organisation (human and structural), the complete process (not just the part being automated), technology, supply chain and facilities, process and product quality, and information systems and management.
Without this, bottlenecks inevitably occurred – usually in downstream processes that were incapable of handling the new higher throughput or upstream where supply could not match the numerical or quality demands of systems designed for high utilisation – and overall productivity was either unaffected or reduced. These effects were observed in the first industrial revolution in the 19th century but the approach has been repeated in many industries since, including the pharmaceutical industry where very high throughput screening platforms have been installed in research environments without adapting the broader systems, processes and organisation.
Focus on high utilisation and nominal installed capacity
When automated processes are planned and implemented, there is often a focus on nominal installed capacity, high throughput and high asset utilisation instead of a focus on producing the optimum amount of a quality product and how it contributes to the overall success of the organisation.
Many Directors of Screening in large pharmaceutical companies have stood up at conferences and said that they can screen 100,000 compounds a day and therefore have an effectively limitless screening capacity. However, while it is true that the screening platforms can run at this throughput, the average utilisation of screening automation in the large pharmaceutical sector is about 5% (1).
Therefore, the realised capacity is not the nominal installed one but is a product of the capacity and reliability of the entire technology platform and process and the impact of real world conditions such as availability of staff and resources and the extended supply chain of inputs and outputs. The parallel here is owning a Ferrari motor car with a top speed of 180mph and expecting to drive across London during rush hour in 20 minutes. The constraints of the traffic flows and congestion will increase the actual time to closer to three hours and the realised speed will be about 15mph!
Reactive processes with little control of the supply chain – the Forrester effect
As noted above, the realised capacity of a system is a product of upstream and downstream factors. One of the most important of these is the supply chain of inputs required to maintain the process. While there have been undoubted improvements in high throughput screening, the processes of assembling compounds, stable assays, reagents, available screening platforms and resources still tend to be rather like planetary alignment – when everything is ready, the screen will start.
The process is reactive with little control over the number or quality of inputs. This problem has been well characterised in other industries by Forrester; the entire supply starts to get out of synchronisation and becomes increasingly chaotic and hard to control. Instead, the focus should be on forward visibility and planning and aligning the supply chain with the overall process goals to achieve better and more cost-effective asset utilisation with a subsequent contribution to overall productivity (3,6,7).
Lack of standardisation of process or quality
One of the foundations of achieving an industrialised process is standardisation of parts and process with a focus on quality. This leads to a common misunderstanding that industrialisation kills research and innovation. In fact they are two important parts of the one process. However, a clear delineation between research and operations is required so that products or assays in development are designed and tested with proper tolerances and performance characteristics before they are transferred to an operational environment with large capital costs.
The inventive aspect (identification of the target and assay configuration) has been done previously in research. The actual large scale assay should be a standardised operation. There is little point transferring an assay to a large screening system if factors such as stability of assay, tolerances, signal to noise ratios and standardised and repeatable operating processes have not been established. By the time the assay reaches this point, any laboratory anywhere in the world should be able to run it and achieve the same data as any other.
The car industry again provides a parallel; new car models will be designed in the creative and marketing department. Following formal engineering design and testing, the model will be transferred to production engineering where the exact standardised processes, supply chain, technology and organisation for manufacture are established. Often, this process will require a design change and an iterative process occurs between the designers and the engineers.
When all of the production, engineering and design elements have been agreed, only then will the new model be transferred to a large scale production environment – it is hard to imagine Ford using their major production lines to make individual model variants for wind tunnel testing. This aspect of delineating research and operations has been a critical part of the UK Biobank study.
Achieving the correct balance of staff
The research element and the operational element of running a high throughput process are separate parts of the same overall effort. They do, however, require different kinds of skills and techniques and different kinds of people and culture. The identification of a disease target and its subsequent development into a reliable and useful assay is a scientific, research-based endeavour. It requires trained scientific staff applying empirical processes with an appropriate style of management (see later).
The high throughput environment is operational (with high capital and opportunity costs) and requires technical staff with a focus on clearly defined targets of data and product quality and throughput. The culture and management of this part of the process should be quite different to the research environment. It is a mistake to run the operational, high throughput processes with post-doctoral scientists because it is not a research project.
Focus on point optimisation
It is natural to try to improve processes to either make them easier or better. However, this is most often manifest in point optimisation; in other words optimisation of just the part of the process that the individual or group is involved in rather than understanding its impact on the entire process.
The effect may well be to improve that part of the process but have very little effect on overall elapsed process time or productivity (often a new bottleneck is created in another part of the process – for example, in the early days of high throughput screening, so many potential lead candidates were produced that they could not be processed through the lead optimisation cycle).
In addition, point optimisation is rarely the most effective approach to process improvement because, in complex processes with more than four or five major steps, the biggest scope for productivity gains lies in the inter-process elements (ie the parts between the major process elements where product components wait for the next stage of the process).
To exemplify this point, it is possible to order a personalised computer over the internet and have it delivered within five days. The actual build time (ie when things are being physically assembled) is about 20 minutes but, excluding postage, the elapsed time (including inter-process time) is about three days. Clearly, improving the process time by 20% will have far less impact than a 2% reduction in inter-process times.
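The computer-build example can be made concrete. The 20-minute build and three-day elapsed times come from the text; the savings follow arithmetically:

```python
# Why a small cut in inter-process (waiting) time beats a large cut in
# hands-on build time, using the PC-build figures from the text.

build_minutes = 20
elapsed_minutes = 3 * 24 * 60                    # ~3 days total elapsed time
interprocess_minutes = elapsed_minutes - build_minutes

saved_by_20pct_build = 0.20 * build_minutes               # 4 minutes
saved_by_2pct_interprocess = 0.02 * interprocess_minutes  # ~86 minutes

print(saved_by_20pct_build)               # 4.0
print(round(saved_by_2pct_interprocess))  # 86
```

A 2% improvement in waiting time saves over 20 times as many minutes as a 20% improvement in the build step itself.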
On-line process development and testing of beta-technology
Even when an operational process is established it is wrong to assume that new technology cannot be integrated to improve it. However, this should never be done without careful testing and modelling of the impact on the entire process. Once this has been done, a careful integration plan (involving off-line technology integration, systems adaptation, training, testing etc) should be implemented to avoid affecting the operational productivity.
Any technology that is integrated should be reliable and well characterised in the same or similar processes. Development, testing and implementation of so-called beta-technology (in other words, leading edge technology that relies on new or untested components or science) in high throughput operational processes significantly increases the risk of throughput or quality issues.
In-house, DIY approach
Many organisations, because of their culture and the skills available, will tend to develop most aspects of complex projects de novo using only in-house skills. Often, because they believe that no-one understands their organisation, systems and processes as well as they do, they reasonably assume that they are best placed to adapt and modify them. It also appears cheaper to do it this way because the opportunity costs of full-time staff are often not considered in overall capital project budgets.
This is the DIY approach that inevitably leads to delay and increased costs. Anecdotally, one very large pharmaceutical company which underwent a merger several years ago still has not completed the standardisation of its compound libraries and supporting systems. Instead, the senior project board should identify those aspects that really can only be done in-house based on a careful tendering and procurement process. However, as a default, organisations should try to use, adapt and modify the numerous off-the-shelf solutions for the various elements of the project (the scale of the task of adaptation should not be underestimated).
These solutions are tested, standardised and validated: importantly they are supported by large organisations with standardised documentation and maintenance and upgrade schedules whose success depends solely upon the success of their products. When these products are purchased, only a small proportion of the development price is included. This outsourcing approach should also include judicious use of external specialists, particularly in the mechanical, electrical and facilities engineering environments.
As well as learning from previous experiences of other companies, UK Biobank identified early that it should adopt a fully industrialised approach to the implementation of the overall processing and archiving capacity. Over recent years there have been a number of papers published (2,3,4) on transferring lessons from manufacturing engineering to the implementation of large scale automation projects in the pharmaceutical and life science area.
Archer (2) was the first to describe adopting an industrialised approach building on his engineering background and work implementing automated processes in a number of industries. These approaches have been developed and extended for the life sciences and have been adopted here with great success. The overall methodology is summarised in Figure 2.
The key part of this approach is that the project is science-led. However, once the first version of the protocol had been agreed, its development was run in parallel with the development of the infrastructure (technology, facilities, process and systems) and where scientific requirements could not be delivered in budget or with robust available technology, they were adapted and alternative protocols suggested.
While the infrastructure was being designed and developed, the protocol was rigorously tested in a series of pilots to determine that it was fit for purpose (manuscript in preparation). This overall process took about two years. Clearly, if we had waited until these data were available, there would have been an 18 month hiatus while the infrastructure was specified, procured and implemented.
However, by using principles from manufacturing engineering and by careful planning and testing, we were able to cope with the inherent uncertainty of an evolving protocol and still deliver a facility with the required capacity with minimal risk and a low level of initial investment. The major capital commitments were made only when the final decisions on the protocol were clear.
Starting the process – elements of the industrialisation process
When industrialisation is discussed in the context of a biological process, the challenge is often made that it is fine for making washers but not for something that is inherently unpredictable. This is a major misunderstanding – if you are making washers then the process becomes very simple. Modern manufacturing design and implementation approaches have developed to cope with that exact uncertainty in products and process.
For example, Dell will build a PC production plant not knowing what models and technology it will be manufacturing in two years’ time. It will not, however, commit to high volume manufacture of a product that is in itself variable from batch to batch. Again there is a parallel with high throughput screening. If the inputs to, or product from, a process are variable from unit to unit then it still should be in the research or production engineering phase. This is not the same as a high degree of variability in different products or different technologies in a robust and established process.
Adopting an industrialisation methodology can be broadly split into two areas. The first is the intelligent application of a variety of techniques and tools to the system under consideration. More important than this, though, is the recognition of the distinct differences between the project and operation phases. These differences affect every aspect of the way the programme is run and delivered and should be the first thing addressed once senior approval for a capital project has been received.
Establishing the correct team and style of management
It is first important to really understand the difference between a project and an operation. Turner and Cochrane (8) define a project as “an endeavour in which human, material and financial resources are organised in a novel way, to undertake a unique scope of work of given specification, within constraints of cost and time so as to achieve unitary beneficial change through the delivery of quantitative and qualitative objectives”.
The key point of this definition is that a project has not been done before in exactly this way and delivers tangible outputs. It has a definite start and stop point whereas an operation is a repeatable, and repeated, process and has no inherent end.
Adopting the right approach to the project – the goals and methods matrix
Turner and Cochrane recognised that it is too simplistic to define a project as a single undertaking (8). These authors identified four different types of project based on how well understood the goals of the project were and the methods for achieving them at its start (Figure 3).
The authors describe in great detail how the different types of project should be managed and staffed, and this methodology has been applied to the delivery of the UK Biobank project. Two critical points from this work should be particularly noted. The first is that projects naturally evolve from one type to another, typically moving towards a type 1 project.
This means that different leadership, team structures, management and control styles are needed – expecting a leading research scientist to manage a type 1 project is both a waste of their abilities and counterproductive to the project. Similarly, there is little point insisting on rigorous project management approaches with a type 4 project – research cannot be managed this way – but it does not mean that scientists can work without any review or supervision.
In contrast though, a product development project should not be run like a research effort. The second important point is to recognise that it is wrong to move projects as quickly as possible from one type to another – rather, the key is to recognise when this should be done and to manage the transition of staff and methodology appropriately.
In the early days of protocol development, UK Biobank was a type 4 project: the methods and goals were still to be defined and it had the iterative characteristics described by Turner and Cochrane. The protocol was developed by broad consultation with the academic community. Not surprisingly given the huge expertise involved, the challenge was to refine this to a scope that could be delivered within budget and this required a consultative leadership approach.
As the protocol began to coalesce, it moved to a type 2 project that enabled the start of the infrastructure development (the first transition on Figure 2). Although the foundations of the industrialised approach had been laid during the type 4 phase, it was here that the majority of the manufacturing engineering methodology was applied. The project remained type 2 for about two years; as it transitioned to a type 1 project and then to a full operation, the management and control styles have changed again especially around financial control and milestone delivery.
It should also be recognised that a programme may consist of several projects at different stages of this lifecycle. For example, even though UK Biobank will be in a full operational phase recruiting participants to the study, there will be type 4 projects ongoing addressing participant follow-up and study enhancements.
Applying engineering principles to the design of the sample processing and archiving facility
Understand the system in the first order
Any complex project requires understanding of the system and its goals and how they will deliver the organisation strategy before investment is committed. The key point here is the system: it is not just the technology. It refers to the interaction of process, people, systems and IT, organisation, the supply chain, technology and facilities. The system as described works within real world constraints (eg budgets) and against a defined quality target for all of its elements.
The first step was to describe the system and process in steady state at the first order – in other words assume that no factors were limiting and that the flow of materials through the system was constant within defined limits of variation. There are many methodologies for doing this; we used IDEF (9) to define the extended system because it works particularly well for science-based processes.
While this process had its limitations, it described the throughput of often complex processes in a logical series of steps with associated inputs, outputs, decision points, information flows and performance characteristics at steady state and peak performance. The main problem with first order approaches is that the interactions in dynamic, inter-linked processes involving more than four or five discrete steps quickly become non-intuitive and thus identifying bottlenecks can become hard.
Design the process in the first order
Having understood the system and its constraints and performance requirements, the new processes and supporting infrastructure to deliver the UK Biobank sample handling protocol were designed in the steady state. This description of the process parameters was the basis for the tendering and commissioning of the major hardware elements in the sample processing facility. It included manpower, information and resource flows but critically focused on net realisable throughput (NRT) rather than nominal installed capacity.
NRT in UK Biobank was defined as a combination of process repeatability and tolerances, reliability of each of the discrete components (as defined by mean time to failure and mean time to repair data), peak and net throughput achievable and the downtime required for set up and maintenance. We were also able to specify that ideally all major process components should function in redundant decoupled parallel cell configurations because of the impact of quality and reliability issues on overall process performance.
In a redundant, parallel cell-based process the system keeps running as long as any cell does, so overall availability remains close to that of a single cell; in a serial system of single units, the reliability of the system is the product of the reliabilities of its parts. The final consideration in the design of the infrastructure for the protocol was the complexity effect described by Archer (2) (Figure 4).
He noted that above a certain number of operations per unit time, there is a shift to a more chaotic and complex operating surface. This change is discontinuous – in other words a manual process can only be pushed and scaled so far before it becomes too complex. The break points are usually in the data systems and logistics. Simply adding more of something does not reduce the complexity.
Conversely, Archer noted a hysteresis effect in shifting between these operating surfaces. On this basis, it was decided that the entire processing capacity should be automated and fully integrated with the UK Biobank LIMS and other data systems.
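As a rough numerical sketch of why decoupled parallel cells were specified (the figures below are illustrative, not UK Biobank data):

```python
# Illustrative numbers only: how MTTF/MTTR translate into availability, and
# why decoupled parallel cells beat a serial chain of single units.

def availability(mttf_hours, mttr_hours):
    """Steady-state availability of one unit."""
    return mttf_hours / (mttf_hours + mttr_hours)

a = availability(950.0, 50.0)   # one unit is up 95% of the time

# Serial chain of 4 single units: all must work, so availabilities multiply.
serial = a ** 4

# 4 redundant parallel cells: the process stops only if every cell is down.
parallel = 1 - (1 - a) ** 4

print(round(a, 2))       # 0.95
print(round(serial, 4))  # 0.8145
print(round(parallel, 8))
```

Four units in series lose nearly a fifth of their availability, while four redundant cells are effectively always available; this is the quantitative case for the parallel cell configuration.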
Design the process in the dynamic state
Having framed the new process in the steady state, all manufacturing operations will use simulation to test it under real world conditions. For continuous processes (as opposed to batch processes), the best method we have found is to use discrete event simulation (DES). This is a computer-based simulation approach that models the entire system and the interactions between the various elements within it.
There are two important elements about discrete event simulation: first it is a linked system, in other words, a change in one part of the system will affect other parts of the system which in turn will affect further parts of the system. This in turn will feedback on to the overall performance of the process. Second, it is based on real constraints and business rules which prevent situations being measured that could not, in reality, occur.
Discrete event simulation can be used in two important ways: the first is to build a specific system capacity (a defined throughput at defined quality at defined cost); in other words to ask, what total resources do we need to deliver a specific capacity. It allows a number of combinations of resources to be assembled under different working patterns with real world loadings to determine the best fit. Typically, resources are assembled and the overall throughput examined. The model then allows empirical de-bottlenecking of the system by shuffling system elements before any investment is committed.
The second use of DES is in scenario development to fully test the system to ensure it will deliver the installed capacity under any likely conditions. This is achieved by identifying all the variables within the system and then creating scenarios where these are varied. This allows the optimum design of the system as well as providing the ability to model the impact of new technologies. Having specified the system, we were able to use this as a detailed basis for tendering and procurement.
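The mechanics of a discrete event simulation can be sketched in a few lines. The model below is a toy single-station queue with assumed arrival and processing times, not the UK Biobank model, but it shows the event-queue approach and how a bottleneck reveals itself before any investment is committed:

```python
import heapq
import random

# Toy discrete event simulation (illustrative only, not the UK Biobank model):
# samples arrive at a single processing station during an 8-hour shift and the
# station clears the backlog afterwards. Arrival and processing times are
# assumed values chosen to create a visible bottleneck.

random.seed(1)
SHIFT_MIN = 8 * 60          # minutes in the shift
PROCESS_MIN = 2.0           # assumed minutes to process one sample
MEAN_ARRIVAL_MIN = 1.5      # assumed mean inter-arrival time (faster than service)

# Pre-generate arrival events on a priority queue keyed by time.
events = []
t = 0.0
while True:
    t += random.expovariate(1.0 / MEAN_ARRIVAL_MIN)
    if t >= SHIFT_MIN:
        break
    heapq.heappush(events, (t, "arrive"))

queue = 0            # samples waiting
server_busy = False
done = 0             # samples completed
max_queue = 0        # worst backlog observed

while events:
    now, kind = heapq.heappop(events)
    if kind == "arrive":
        if server_busy:
            queue += 1
            max_queue = max(max_queue, queue)
        else:
            server_busy = True
            heapq.heappush(events, (now + PROCESS_MIN, "finish"))
    else:  # "finish"
        done += 1
        if queue:
            queue -= 1
            heapq.heappush(events, (now + PROCESS_MIN, "finish"))
        else:
            server_busy = False

print(f"completed={done}, worst backlog={max_queue}")
```

Because the assumed arrival rate exceeds the service rate, the backlog grows through the shift; this is exactly the kind of bottleneck that empirical de-bottlenecking in a DES model is meant to expose, by shuffling resources until the backlog stabilises.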
However, this was just the start of the design and testing approach – the final result was much improved from a technical aspect but still retained the specified function. Note, that by this stage we had moved from a type 4 project to a type 2 project so that we were able to start to identify milestones and implement project management methodology. Although the precise allocation of budget for the whole UK Biobank project could not be fully described, budget and time limits could be set for this part of it.
Design and prototype – build in quality
Having tendered and selected key providers for the large process components, the process of design and build began. Rather than make major investments up front (while the details and content of the sample handling protocol were still being defined and tested) we took an approach common in manufacturing engineering involving a small design study (addressing major risks and possible solutions) and then a prototyping study (testing the key technologies in small-scale prototypes).
There are many robust methodologies that can be transferred from this discipline to improve the design and reduce risk to a project. However, the following approaches were taken in the design and prototyping phases:
Design phase – Failure Modes Effect Analysis (FMEA):
FMEA is a conceptually simple approach that asks a series of ‘what if?’ questions to identify those parts of a process that can fail and what the impact would be. Failure modes are rated by likelihood and impact and the design modified to deal with those risks that could seriously affect overall process capacity. Often, the solution is simply to build in redundancy, include alarm states or in-built quality checks. In some cases, it leads to a significant design change.
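The rating step can be sketched as follows; the failure modes and their likelihood/impact scores (1 to 10) are hypothetical examples, not UK Biobank's actual FMEA entries:

```python
# Sketch of the FMEA rating step: score each failure mode for likelihood and
# impact, then rank by risk priority. Modes and scores are hypothetical.

failure_modes = [
    ("compressor failure in -80C store", 6, 10),
    ("wrong tube loaded on platform", 4, 5),
    ("barcode misread", 3, 4),
]

# Risk priority = likelihood x impact; address the highest-scoring modes first.
ranked = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
for name, rpn in ranked:
    print(f"{rpn:3d}  {name}")
```

The design effort then concentrates on the top of the list, whether through redundancy, alarm states, in-built quality checks or, as in the archive example that follows, a fundamental design change.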
In the early stages of the design of an automated working archive for the long term storage of biological samples, the initial plans proposed a mechanically cooled -80°C compartment. It soon became clear that this was a high risk strategy because of the cost and performance capabilities of commercially available compressors, the cost of the power to run them, the risk of failure and the concomitant facilities costs to provide power and baffle the high noise levels.
Working with the supplier engineer team in a design study, a new cooling system based on liquid nitrogen with no moving parts was proposed. The risk and cost of running the store was hugely reduced and the risk of catastrophic failure eliminated (see also Table 2).
Prototyping phase – Taguchi method:
The Taguchi method was used after FMEA as an offline approach, focusing in more depth on specific elements of the process. It takes the critical elements identified in the FMEA and uses a variety of experimental approaches to optimise them – improving performance and reliability and reducing variability. This approach was used in the UK Biobank study to address one of the key technical risks in the automated fractionation of blood from the vacutainer tubes.
The blood in the tubes is highly variable in terms of viscosity, turbidity and volume. The challenge was to be able to detect the different fractions and the interfaces between them and convert this to machine operating instructions. Following extensive testing, a solution involving a high resolution image produced in different lighting conditions was used in combination with a specific image processing software algorithm. The solution was optimised experimentally and signed off as part of the overall prototyping study.
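A Taguchi-style comparison of experimental settings typically scores each factor combination by a signal-to-noise (S/N) ratio, preferring settings that are both accurate and consistent. The sketch below shows the "larger-is-better" S/N formula applied to hypothetical interface-detection accuracy data; the factor names and figures are invented for illustration.

```python
# Taguchi-style robustness comparison (hypothetical data). Trials vary two
# factors (lighting condition, image-processing algorithm); the setting with
# the highest "larger-is-better" signal-to-noise ratio,
#   S/N = -10 * log10(mean(1 / y_i^2)),
# is the most robust: high accuracy with low variability.
import math

def sn_larger_is_better(results):
    return -10 * math.log10(sum(1 / y**2 for y in results) / len(results))

# Repeated interface-detection accuracy (%) per factor combination
trials = {
    ("bright-field", "edge-detect"): [92.0, 90.5, 93.1],
    ("back-lit",     "edge-detect"): [97.8, 98.2, 97.5],
    ("back-lit",     "threshold"):   [95.0, 96.1, 94.4],
}

best = max(trials, key=lambda k: sn_larger_is_better(trials[k]))
print("Most robust setting:", best)
```

Penalising variability as well as mean performance is the point of the S/N ratio: a setting that is occasionally excellent but erratic scores worse than one that is consistently good.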
Prototyping phase – Building in quality – Poka-Yoke:
Poka-Yoke is a simple approach to build quality into the system. It uses foolproof devices or measures built into the process to detect defects, shifting the emphasis from reducing the overall defect rate to eliminating defects altogether. In UK Biobank we have configured the devices to detect three types of defects:
1 Contact defects (are we using the right things?). An example of a contact defect detector is the simple inclusion of the first three digits of the bar code on the sample collection tube that identifies the tube type to the automation platform. If an incorrect tube is inadvertently placed on an automation platform, the tube is not processed but held at 4°C and an operator alert is raised. In this way, processing of correct blood collection tubes continues.
2 Constant number defects (have we done the right things to the inputs?). To ensure samples are processed correctly, each processing step is logged into the operating systems of the automation and ultimately into the LIMS. The next stage of sample processing cannot occur if the previous specified steps have not been carried out.
3 Performance sequence defects (have all the required steps been completed correctly?). To ensure the processing of blood samples has been carried out completely, each step is logged against a process check list. Outputs of discrete processing steps are checked and logged. For example, to detect blockages in pipetting needles, all aliquot tubes are automatically weighed at the end of a processing run to ensure that they contain the appropriate amount of sample.
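The three checks above can be sketched in code. The Python below is purely illustrative: the barcode prefixes, step names and weight tolerance are invented examples, not UK Biobank's actual configuration.

```python
# Sketch of the three Poka-Yoke defect checks (all values hypothetical).

TUBE_TYPES = {"368": "EDTA", "367": "serum"}        # barcode prefix -> tube type
REQUIRED_STEPS = ["centrifuge", "fractionate", "aliquot"]

def contact_check(barcode: str, expected_type: str) -> bool:
    """Contact defect: is this the right tube for this platform?"""
    return TUBE_TYPES.get(barcode[:3]) == expected_type

def sequence_check(logged_steps: list[str]) -> bool:
    """Constant number defect: were the specified prior steps done, in order?"""
    return logged_steps == REQUIRED_STEPS[:len(logged_steps)]

def weight_check(weight_g: float, target_g: float, tol_g: float = 0.05) -> bool:
    """Performance sequence defect: does the aliquot hold the expected amount
    (e.g. catching a blocked pipetting needle)?"""
    return abs(weight_g - target_g) <= tol_g

# Example: a wrong tube is held rather than processed, and an alert raised
if not contact_check("3671234567", "EDTA"):
    print("ALERT: unexpected tube type - holding sample at 4C")
```

Each check is deliberately trivial to evaluate; the value of Poka-Yoke lies in wiring such checks into the process so a defect cannot pass silently, not in the sophistication of any individual test.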
There are a great many other methodologies that can be used in these types of project but they should be used appropriately and judiciously; application of these approaches for the sake of it should be avoided. Table 2 shows the benefit of these approaches to just the automation aspect of UK Biobank.
Design and build the facilities
When manufacturing organisations commission a new process and technology, they will often do so in low cost, new or renovated fit-for-purpose facilities. By specifying and testing the hardware and systems for an operation in the way described, the facilities, with the necessary layout and services, can be designed, built and modified in parallel.
UK Biobank was keen to avoid ‘retro-fitting’ the process and infrastructure around an existing building on a ‘high prestige’ site. Rather, it specified a modular, flexible, light industrial unit on a low rent, brownfield site and, based on the utilities needs for the facility, included the necessary construction upgrades to the base plan before ground was broken. Because of the specialist nature of the operation, and a lack of knowledge of the construction industry, we made extensive use of specialist designers, engineers and architects to complete the building to our specification.
While this incurred an up-front cost, the investment was more than recovered in the tendering process and project management aspect of the service and the final construction was specified precisely for our needs in terms of access, services, layout and security.
Implementation – transition to a type 1 project
The final stage of the project prior to full operation was implementation and commissioning. Implementation refers to the carefully managed completion of all of the various parts of the overall project. This is quite distinct from commissioning – the process of validating and integrating all of the elements into a working facility capable of operating at the required capacity. This latter phase is often overlooked but it can take as long as the implementation phase.
By this stage, the project requires tight financial and time control with formal change control procedures. Statutory obligations, such as a full health and safety programme for the operation, training and, in the case of UK Biobank, requirements of the Human Tissue Act, should be planned and implemented during this project phase. The other key area implemented in UK Biobank ready for full operation was a formal quality programme (in this case ISO9001:2000) that addresses all aspects of the organisation and its activities.
It requires that standard operating procedures for all aspects of the operation that affect output quality are written and that staff are trained and competent to carry them out. Because of the time, documentation and process required to implement and achieve ISO accreditation, trying to do so in a fully operating facility should be avoided.
Operating an industrialised process
Having completed all of the commissioning, the project phase is ended and the facility becomes operational. This term carries with it significant change across the whole organisation. By Turner and Cochrane’s8 definition, an operation is a continuous exercise that is well understood, predictable and measurable. In UK Biobank, we have modified the staffing structure and mix (with more technical staff to run the processing and archiving facilities), especially in the blood processing facility where cellular teams will operate and maintain the automation platforms.
No new process, system or technology will be implemented without thorough off-line testing and a formal change control and implementation procedure. Because the sample processing approach was designed, tested and built in the context of the whole UK Biobank process (including the recruitment and processing of participants at the assessment centres) we will have clear visibility of the supply chain for a period of at least two months – this allows a two-way process of scheduling downtime for training and maintenance with no loss of productivity.
A management information system will be implemented to provide a small number of key performance indicators to the management team to enable regular monitoring of the operation and this will be managed as part of formal financial management and reporting requirements. However, as with all things involving a human element, the most difficult part of the transition from project to operation is that of culture.
Many of the staff on a project may have been associated with it for a long time and may find the change from the project/research culture to a professional operational culture, driven much more by targets against budgets, difficult. It can be hard to measure and describe an organisation’s culture; however, small things often give significant insights. A visitor to Dell or Intel or Ford would be unlikely to find their major production automation platforms named after characters from science fiction films.
Ultimately though, unless the cultural aspects are managed effectively and the existing people are carried with the operation as new people are brought in, the operation may fail to produce the required results. Above all, this requires good leadership and managers ignore this at great risk. DDW
This article originally featured in the DDW Winter 2006 Issue
Dr Tim Peakman is Executive Director for UK Biobank (having joined as Director of Operations in April 2004) and has overall responsibility for the day-to-day running of the organisation. Prior to this Dr Peakman was a consultant at PricewaterhouseCoopers, where he advised discovery organisations in the pharmaceutical and biotechnology industries on a variety of projects addressing productivity of early drug discovery pipelines. His particular interest was the effective planning and implementation of automation in complex drug discovery processes. He has written a number of papers on transferring process and methodology from industrialisation in other industries and on implementing supply chain disciplines to high throughput screening. Directly in the pharmaceutical industry, Tim worked on humanising monoclonal antibodies for the treatment of HIV and autoimmune disease. Later his research focused on the molecular events leading to epilepsy and pain. He completed his doctoral studies on bacterial anaerobic gene regulation at the University of Birmingham in 1988.
1 Beggs, M, Emerick, V and Archer, JR (2005). High Throughput Screening. In Industrialisation of Drug Discovery – From Target Selection Through Lead Optimization. Ed. J.S. Handen. 103-136.
2 Archer, JR (1999). Faculty or Factory? Why Industrialized Drug Discovery is Inevitable. J. Biomol. Screen. 4:235-237.
3 Peakman, TC et al (2003). Harnessing the power of discovery in large pharmaceutical organizations: closing the productivity gap. Drug Discov. Today. 8:203-211.
4 Peakman, TC (2005). Industrialization, not automation. In Industrialisation of Drug Discovery – From Target Selection Through Lead Optimization. Ed. J.S. Handen. 31-56.
5 Briggs, A. A Social History of England, 2nd edition, London. Penguin, 1987.
6 Franks, S (1999). Business modelling and collaborative planning: the key to ever increasing productivity in the new millennium, part one. Eur. Pharm. Rev. Spring, 67-72.
7 Franks, S (1999). Business modelling and collaborative planning: the key to ever increasing productivity in the new millennium, part two. Eur. Pharm. Rev. Spring, 70-75.
8 Turner, JR and Cochrane, RA (1993). Goals-and-methods matrix: coping with projects with ill defined goals and/or methods of achieving them. Int. J. Project Manage. 11:93-102.
9 For a good overview of IDEF: http://www2.isye.gatech.edu/~lfm/8851/IDEF_V4.ppt.