Part 2: Emergent intelligence, a new paradigm for drug discovery.
(You can read part 1 of disruptive approaches to accelerate drug discovery and development linked here.)
The staggering clinical trial failure rate of experimental drugs indicates that, despite huge investments in novel technologies over the years, productivity gains in the pharmaceutical industry have remained elusive. It is therefore generally agreed that the current biopharma model is unsustainable and that disruptive approaches are needed to remedy the status quo. The biopharmaceutical industry is looking towards artificial intelligence (AI) to speed up drug discovery, cut R&D costs, decrease failure rates in drug trials and, eventually, create improved medicines. At last count (May 2018), there were about 81 startups and 19 pharma companies using AI for drug discovery (2).
Human civilisation is on a never-ending quest to make everything efficient, and creative waste is inevitably generated in the name of efficiency. Many of our everyday technologies and modern-day hi-tech inventions can be traced back to the bygone era of the 1930s. Many Silicon Valley start-ups built on connecting buyers and sellers of goods and services far more cheaply and efficiently derived their inventive energy from time-tested mechanical systems. The unintended consequence of all the efficiency that platform companies strove to achieve is short-term efficiency gained at the expense of our ability to become more efficient in the long run (3).
To overcome this disparity, human intuition must work together with technological efficiency. Skilled researchers can use search terms and arguments to unlock the full power of a search engine, finding what they are looking for by using ‘search intuition’ and becoming ‘information athletes’. Such information athleticism, based on robust algorithms, can unearth connections between disparate subjects that no one has imagined (3).
We evoke media theorist Steven Johnson’s provocative, engaging and surprising examples of feedback, self-organisation and adaptive learning in influencing the evolution of ‘emerging systems’. The power of self-organisation ushers in a revolution every bit as significant as the introduction of electricity. In explaining why the whole is sometimes smarter than the sum of its parts, Steven Johnson places ‘self-organisation’ on the front lines of this exciting upheaval in science and thought (1).
Let us think about this further. Every once in a while, within a complex technical challenge, some experiment or observation may yield a surprising, transformational result. It may be good news, perhaps in the form of a known drug-like entity showing promise in treating a challenging phenotype, or the finding could be a grave disappointment, such as problematic toxic side-effects.
Whenever the unexpected happens, our instinctive response is to seek clarity. What caused the surprise? This can certainly lead to new experiments, but wise researchers typically turn first to prior data and careful review of existing literature. In many cases, this informed re-examination of prior records will suggest a very plausible and rational explanation.
One might then ask, if information was already in place to rationalise the surprise, why did we not foresee it?
Perhaps hindsight is 20/20. Human brains are better equipped to interpolate a known observation back to a set of prior contributing causes than to extrapolate complex conditions ahead to an eventual consequence. We can easily imagine that striking a glass vase will produce a mess of shards, but if we encounter only the shards without the proper context, it is harder to visualise the original vase that they once formed. By analogy, a mess of non-contextual data rarely shouts out ‘imminent drug failure’, but once a drug has failed, the shards of prior evidence can be seen everywhere.
That seems self-evident, but is it also artificially self-limiting? Let us consider the following:
1) All information that once defined the vase remains contained in the mess of shards. Similarly, it is often true that key evidence to suggest a surprising biomedical observation was already known, just not put in the right context.
2) There are savants who can sift through a mess of glass shards and posit the original shape and pattern of the originating vase. Analogously, for many surprising research discoveries, there later prove to be people who, quite verifiably, can say: “I told you so.”
3) Finally, many AI algorithms demonstrate pattern recognition powers that resemble or exceed those of human savants; various classes of algorithms, such as Deep Learning (DL) and Analysis of Emergent Behavior (AEB), have a capacity for assimilating the relationships between disparate shards and plausibly unifying such information toward important real-world conclusions.
Knowing this, why must immensely capital-intensive and labour-intensive drug design projects still rely so heavily on luck and surprise?
Part of the answer is cultural. Chief Scientific Officers and programme directors may be comfortable making decisions based on flawed and incomplete data that they understand, but are often reticent about making comparable decisions based on complex reasoning whose subtleties exceed normal human cognition. Thus, when it comes to advice from savants and arcane algorithms, pharma execs tend to be more skeptical than the average sports gambler. One wonders who is more likely to hit the big payoff?
That natural bias, however, is beginning to give way in the face of success. As has been discussed in a pair of recent editorials (4,5), emerging algorithms have demonstrable capacity for assimilating multimodal/multisource data via associations that almost mimic human intuition. Just as the vase is defined by shards of many sizes and shapes, DL can extract information from diverse sources (data, metadata, annotations, accompanying graphics, text mining, etc). Just as the shattering process may have scattered the shards in directions that barely reflect the original vase, DL recognises that the placement of no individual shard dictates the vase, but rather seeks to find ways to fit those shards together into a harmonious, self-consistent original form.
In practice, DL (6) and AEB (7) are very useful additions to the biotech arsenal, yet DL in particular has already surfaced as the primary factor underlying the success of Benevolent AI – a company that has identified 24 drug candidates in only four years, advancing two promising prospects for treatment of the perennially intractable Alzheimer’s Disease (8). Detailed technical reviews of DL and AEB are available elsewhere (6,7), but for the purposes of this article it is important to focus on conditions by which such methods may best excel. Specifically, algorithmic capacity for discriminative logic that mimics human intuition requires access to extensive data and information in the following classes:
Comprehensive annotated databases of bioactive chemical entities are essential for using DL and AEB to discover new drug scaffolds and optimise known ones. Such resources are available and growing, though they are not without flaws. For example, mineable structural representations such as SMILES strings and fingerprints contain chemical ambiguities that can cloud predictions. Moreover, bioactivity data in available databases are sparser than they need be, largely due to proprietary restrictions on many assays.
The latter is unfortunate; while corporate entities remain justifiably hesitant to unmask promising leads, there may be immense value in reporting data on mid- to low-quality hits that have long since fallen out of consideration; such data can be very productively applied to enhancing models for toxicity and off-target effects, plus intuiting new target SAR. Furthermore, valuable insight can be extracted merely from knowing the inactives from screens, with implications for lead search efficiency and accurate prediction of drug safety.
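To illustrate how such fingerprint representations are typically compared (a generic sketch, not tied to any particular database — the fingerprints below are hypothetical), the standard similarity measure is the Tanimoto coefficient over the set of on-bits:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two bit fingerprints,
    represented here as sets of on-bit indices."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Hypothetical fingerprints for three compounds (on-bit indices only)
compound   = {3, 17, 42, 101, 256}
analogue   = {3, 17, 42, 101, 300}   # differs in one structural feature
unrelated  = {5, 88, 199}            # different scaffold entirely

print(round(tanimoto(compound, analogue), 2))   # close analogue: high score
print(round(tanimoto(compound, unrelated), 2))  # no shared bits: zero
```

Real chemoinformatics toolkits compute such fingerprints from structures automatically; the ambiguity noted above arises because the same molecule can map to multiple non-canonical SMILES strings or bit collisions.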
Molecular interrogation data (OMICS), microscopy and many forms of sub-organismal physiological data provide a wealth of insight suitable for algorithmic use in discovering biomedical implications far broader than anticipated in the original study. Again, the power of computational predictions is magnified with greater data sharing, although some bioanalyses produce prohibitively large data sets. In the future, this hurdle may be overcome through sandbox experiments that help to strategically filter large data entries down to key features of demonstrable value to physiology and pharmacology.
Anonymised data, meta data and unstructured annotations are immensely valuable to medical insight when combined with chemical and biological data. A protocol for ensuring that all studies can be stripped of identifying information and made centrally available would be a tremendous boon to medicine, with potential benefits extending far beyond the original motivation in a given clinical study.
Collectively, scientific publications, e-books, patents, funding proposals, clinical reports, blogs and social media comprise a wild blend of fact, informed speculation, rigorously attained errata, unverified information and unverifiable conjecture. Nonetheless, modern data modelling has demonstrated that, when mined properly, careful delineation of known truth versus conjecture is not crucial (9). The mere fact that a given chemical, for example, is being debated as a potential cancer therapeutic (proven or not) may inform discussions of whether the broader chemotype family may have prospective cytotoxicity.
That said, the predictive capacity of text mining grows most quickly through expanded access to the most reliable information sources, such as high-impact journals. Thus, even if top publishers feel a need to maintain pay-per-human-view access to their content, there are synergistic benefits to opening their full (non-quarantined) electronic holdings to text mining. Mining access improves scientific AI inferences for all, and such access tends to flag relevant papers for actual human (ie paid) consumption.
AI in drug discovery
Recent expectations are that AI approaches can play an important role in the future of drug discovery, particularly in increasing productivity and R&D innovation. For instance, AI can help alleviate the numbers barrier in drug development. According to Mullard (2017), there are an estimated 10^60 possible compounds with drug-like characteristics, which makes characterising the properties of each chemical an enormous undertaking (10). The enormity of the data set makes drug discovery even more expensive and protracted; thus, researchers are hoping to reverse this trend by combining the lessons learned from previous drug discovery projects with the vast amounts of experimental data that have already been produced by the scientific community to drive AI-powered drug design.
It is expected that AI will enable predictions of molecular interactions and dynamics, resulting in:
(i) focused sets of compounds for screening; or
(ii) new uses for previously tested compounds in treating diseases; or
(iii) the creation of therapeutics targeted towards patients harbouring specific molecular markers, such as harmful mutations that confer selective phenotypic advantages; or
(iv) the targeting of vulnerabilities of pathogens, which often have background mutations that reduce their viability.
(v) It is also expected that AI will tackle difficult drug targets. Some of this thinking is based on preliminary data with well-characterised targets such as HER2 (11) and RAS (12).
Limitations in AI methodologies
The term ‘AI’ is used as an all-encompassing umbrella that covers everything from machine learning all the way to network architectures such as Deep Learning (DL). All AI methodologies use combinations of tools such as search, mathematical optimisation, neural networks, probability and economics. The techniques used include Linear Regression, K-means, Decision Trees, Random Forest, PCA, SVM and, finally, Artificial Neural Networks (ANN), which give rise to DL. Developing machines approaching human-level intelligence is among the long-term goals of AI, termed Artificial General Intelligence (AGI). The promise of DL, however, is that it is more than just a collection of multiple layers of ANN and has the capacity to evolve into emergent intelligence, a property displayed by human intelligence.
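As a minimal illustration of one of the listed techniques, K-means clustering can be written in a few lines of pure Python (toy one-dimensional data; real applications use library implementations over high-dimensional chemical or biological descriptors):

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: alternate assignment of points to the
    nearest centroid with recomputation of the centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated clusters of hypothetical assay readouts
data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(kmeans_1d(data))  # centroids settle near the two cluster means
```

On this toy data the algorithm converges to centroids at the means of the two groups regardless of the random initialisation.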
Clearly, AI is viewed as a transformative technology that can serve as a catalyst for a new approach to drug development – but it is by no means a ‘silver bullet’ for drug discovery and development. The proverbial Achilles heel of AI is the amount and the quality of data needed for constructing AI training sets. All AI approaches, including AGI and DL require lots of data (big data) and are not immune to the basic computing rule – ‘garbage in, garbage out’ – which means that neural networks trained on flawed data can be highly error-prone.
AI is reliant on three things: high-quality data, high-quantity data and data that are relevant to the research question being asked. These data must already be known. AI is great at distilling value from large amounts of information; by itself, however, it struggles to provide value when information is sparse. This can be readily seen in our inability to model rare orphan diseases, for which little information relevant to the research question is known. AI applications require the creation of training data, which in turn requires human experts to interpret large amounts of data and build complex views of the problem. Since getting the right input data for the right job is expensive, real-world data are frequently replaced with machine-created data that tend to be error-prone.
The best approach for constructing these training sets (supervised learning) requires human experts who can interpret large amounts of data to build complex views of the problem and intention spaces to be solved (13). Adding complexity to this task, these experts would need to consider how changes in environments affect emergent causalities and behaviour (14). Misrepresentation of these cause-effect relationships in training set construction will create cascading negative consequences (15). Considering these complexities, it is not surprising that most examples of emergent systems analysis use models that are hardly ever found in real life and are not scalable (15,16).
SystaMedic Inc’s innovative Emergent Intelligence (EI) technology
Emergent Behavior Analysis (EBA)
We believe that AI’s success will depend on its ability to model the behaviour of biological systems, which inherently display emergent properties (17). Emergence refers to the ability of low-level components of a system or community to self-organise into a higher-level system of sophistication and awareness (1). Emergent behaviour cannot, however, be computed by summing up the workings of single cells and isolated molecular circuits (1). Biological systems are made up of integrated networks of organelles, which form cell networks, which in turn form organ networks, and so on (18-20). To regulate systems behaviour, these interconnected network layers are in constant dialogue, which, in turn, makes and breaks connections between network layers (21).
This paper introduces a cutting-edge approach for the Analysis of Emergent Behavior of complex biological systems. This novel methodology identifies, amongst an infinite number of combinations of protein interactions, the network connectivity that is important for regulating information flows through the networks of networks controlling the body’s response to diseases and drug treatments.
Overcoming the barriers of probabilistic data analytics, EBA’s information theory-based approach is especially useful for drug repositioning, the bioprospecting of plants, herbs and traditional medicines, the identification of efficacy and safety biomarkers, and the design of ‘smart and targeted’ clinical trials and toxicological studies. Moreover, EBA has been shown to be a powerful technology for developing targeted product profiles, competitive analysis of drugs and therapeutic areas, and the prediction of drug-drug interactions. We describe its use as a cognition enhancement approach for identifying novel agents that can modulate the resistance of bacteria to antibiotics.
Analysis of emergent behaviour in complex biological systems
Emergent behaviour is any behaviour of a system that is not a property of any of the components of that system; that is, a property that emerges from interactions among the components. For example, flocking is not the behaviour of an individual bird (Figure 2).
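The flocking example can be sketched in a few lines of code: agents that only average the headings of their immediate neighbours nonetheless converge on a global, flock-wide alignment that no individual rule specifies. This is a deliberately minimal sketch (real flocking models add cohesion and separation terms):

```python
import math, random

def step(headings, radius=1):
    """One alignment step: each agent adopts the mean heading of its
    neighbourhood (itself plus `radius` agents on each side of a ring)."""
    n = len(headings)
    new = []
    for i in range(n):
        xs = ys = 0.0
        for d in range(-radius, radius + 1):
            h = headings[(i + d) % n]
            xs += math.cos(h)
            ys += math.sin(h)
        new.append(math.atan2(ys, xs))  # circular mean of neighbourhood
    return new

def order(headings):
    """Order parameter in [0, 1]: 1 means a perfectly aligned flock."""
    n = len(headings)
    x = sum(math.cos(h) for h in headings) / n
    y = sum(math.sin(h) for h in headings) / n
    return math.hypot(x, y)

rng = random.Random(42)
# Initial headings scattered over roughly two radians
headings = [rng.uniform(-1.0, 1.0) for _ in range(20)]
before = order(headings)
for _ in range(100):
    headings = step(headings)
after = order(headings)
print(f"alignment before: {before:.2f}, after: {after:.2f}")
```

No agent ever sees the whole flock, yet the order parameter climbs towards 1: the alignment is a property of the system, not of any component.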
Likewise, Emergent Intelligence (EI) is a global property and surpasses the scope of traditional AI, in that AI is a probabilistic approach requiring large numbers of comparable observations as training sets. Since biological systems are non-linear, such comparable observations are generally lacking, and AI cannot compensate for their absence.
While the emergent properties of AGI have been studied for nearly half a century, very few methods for their identification and analysis exist (15). Most of the methods developed using these strategies resort to oversimplification and are not scalable (15). Compute- and data-intensive DL methods approach this problem by using modular building blocks such as fully-connected layers, convolutional layers and recurrent layers, which are often combined in task-specific ways (18).
However, the components of biological network layers are not always known, and even more ambiguity arises in defining the connections between components, since the edges in these networks are inducible in a cause- and environment-specific manner. Thus, conventional approaches to building DL training sets may be neither physically nor economically feasible.
In contrast, the methodology introduced in this paper uses an unsupervised learning approach and examines emergent systems behaviour by determining the routing of information flows induced by pharmacological agents through molecular networks with overlapping topologies, without making assumptions about network links (22). Using disease or physiological networks as topological constraints, this methodology identifies the router-level connectivity linking molecular processes across network layers without making any a priori assumptions about network connectivity or module functions. Enabling this methodology are techniques used in communication network technology, topological data analysis and big data analytics (23,24).
This unsupervised learning strategy provides classifications of pharmacological probes and links between molecular processes associated with system-wide pharmacology observations. Yielding interdependent cause and effect classifications, this methodology provides self-validating results. Working with information characteristics instead of content interpretation, this methodology addresses key problems encountered in emergent system analysis.
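The routing idea can be illustrated with a deliberately tiny sketch — not the actual EBA implementation, and with a hypothetical interaction graph and modules invented for the example. Given the proteins a stimulus targets, we compute what fraction of each overlapping process module is reachable through directed interactions, yielding a per-stimulus ‘reachability profile’ that can be compared across stimuli:

```python
from collections import deque

def reachable(graph, start):
    """Nodes reachable from `start` by breadth-first search
    over a directed graph given as an adjacency dict."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def reachability_profile(graph, stimulus_targets, subnetworks):
    """Fraction of each sub-network reachable from a stimulus's targets -
    a toy analogue of per-stimulus network reachability information."""
    hit = set()
    for t in stimulus_targets:
        hit |= reachable(graph, t)
    return [len(hit & sub) / len(sub) for sub in subnetworks]

# Hypothetical toy interaction network and two overlapping process modules
graph = {"A": ["B"], "B": ["C", "D"], "C": [], "D": ["E"], "E": []}
modules = [{"A", "B", "C"}, {"C", "D", "E"}]

print(reachability_profile(graph, {"A"}, modules))  # stimulus hitting A
print(reachability_profile(graph, {"D"}, modules))  # stimulus hitting D
```

Two stimuli that induce similar profiles are routing information through the same modules — the basis for the cross-linking comparisons described later in this article.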
Why does causal emergence matter? Experience shows that universal reductionism is false when it comes to thinking about causation, and that a higher-scale reality sometimes has more causal influence (and associated information) than whatever underlies it.
Emergent Behavior Analysis (EBA)
SystaMedic Inc’s EBA platform is a novel EI technology to determine data relationships by comparing directions of stimuli-induced information flows in dynamic interaction-network systems. This cognition enhancement technology uses information characteristics, requires fewer data points, is insensitive to noise and reduces system complexity (25,26).
Physiology and pathology are regulated by information flows
Starting at the body’s smallest scale, the routing of information through the body’s network systems is mediated by proteins, which act as environmental sensors (27). Detecting changes in local and distant environments (causes), proteins transmit the information received (effects) by directly or indirectly affecting the properties of neighbouring proteins (network nodes), which, in turn, affect the properties of other proteins, resulting in protein-protein interaction networks that distribute information throughout the body (28) (Figure 3).
The body’s ability to adapt to changes in environments relies on the capacity to instantly change the connectivity between sub-networks (topology) conducting information flows across all scales of the body (29). In this context, the term network connectivity refers to the transfer of information from one network node to another, and the term network node refers to a connection point or redistribution point (eg, protein, process, function, organ, tissue, cell type, etc) for the propagation of information. Likewise, the term information transfer refers to the relative gain or loss of information experienced at various system levels when a set of input signals (regardless of their nature or origin) changes.
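As a toy illustration of ‘relative gain or loss of information’ — not the authors’ actual formalism, and with made-up distributions — one can compare the Shannon entropy of a node’s state distribution before and after an input signal changes:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical state distributions of a signalling node: the input
# signal sharpens the distribution, committing the node to one state.
before = [0.25, 0.25, 0.25, 0.25]   # maximally uncertain: 2 bits
after  = [0.85, 0.05, 0.05, 0.05]   # largely committed to one state

gain = entropy(before) - entropy(after)
print(f"information gained: {gain:.2f} bits")
```

A positive difference means the signal reduced uncertainty at that node (information gained); a negative difference would mean information was lost.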
Directions of information flows in these dynamic network systems depend on the topologies of sub-network systems adapting to the characteristics of input signals (30). Affecting this non-linear regulatory scheme are gender differences, age, gene variance, environment and many other conditions (31). While this information processing machinery effortlessly instructs the body how to respond to heterogeneous simultaneous input signals, the plasticity of this non-linear complex system creates formidable challenges for data analytics. An illustration of the power of this disruptive methodology is provided by the application of EBA technology to identifying drug candidates capable of reversing the antibiotic resistance of bacteria.
Antibiotic resistance – an imminent health threat
Antibiotic resistance emerges naturally, but misuse of antibiotics in humans and animals is accelerating the process. A growing number of infections – such as pneumonia, tuberculosis, gonorrhoea and salmonellosis – are becoming harder to treat as the antibiotics used against them become less effective, presenting a key limitation in the treatment of various life-threatening infections.
The World Health Organization (WHO) has compiled a list of antibiotic-resistant bacteria (including Escherichia coli, Klebsiella pneumoniae and Staphylococcus aureus) to address this global health problem. Innovative strategies to mitigate the crisis of antimicrobial drug resistance include the identification of new therapies, chemo-sensitising modulators, as well as approved drugs that can be repurposed for an alternate therapeutic indication (drug repurposing).
Application of EBA platform for discovery of drugs targeting multi-drug resistant bacteria
We have used EBA to identify:
- Shared mechanisms of drug resistance to multiple antibiotics across a broad range of bacterial strains.
- Mode of action (including biomarkers) that can be used for targeting multi-drug resistance either as reversing agents or as direct-acting antibacterials.
- Substances capable of targeting drug resistant bacterial phenotypes.
The first step was to construct a protein interaction network that links the pharmacologies of a broad range of antibiotics to molecular mechanisms involved in drug resistance of bacteria listed as prime candidates for the global health crisis by WHO.
The second step was to partition the primary network (described in Figure 4), consisting of 290 proteins, with all known Gene Ontology molecular process networks.
This analysis resulted in 558 overlapping, cause-effect-constrained sub-networks of varying sizes and topologies. These fragments were then used in high-volume data transformation to identify the information densities (network reachability information) of >22,000 drugs, natural products, herbs, bacteria and antibiotics (stimuli).
Hierarchical clustering of this network reachability information identified stimuli-induced routes of information transfer between the 558 sub-networks and groups of substances inducing similar cross-linking patterns of the 558 molecular processes (sub-networks). Phenotypes containing 17 drug-resistant strains listed by the WHO are shown in Figure 5.
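The clustering step above can be sketched with a toy single-linkage agglomerative procedure over hypothetical reachability profiles. The real analysis operates on 558 sub-networks and >22,000 stimuli; the names, vectors and cluster count below are illustrative only:

```python
def dist(u, v):
    """Euclidean distance between two reachability profiles."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def single_linkage(profiles, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two
    clusters whose closest members are nearest (single linkage)."""
    clusters = [[name] for name in profiles]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return [sorted(c) for c in clusters]

# Hypothetical per-stimulus reachability profiles over three sub-networks
profiles = {
    "drug_X":  [0.9, 0.1, 0.0],
    "herb_Y":  [0.8, 0.2, 0.1],
    "drug_Z":  [0.1, 0.0, 0.9],
    "toxin_W": [0.0, 0.1, 0.8],
}
print(single_linkage(profiles, 2))
```

Stimuli that route information through the same sub-networks end up in the same cluster (the analogue of a shared phenotype), even though no mechanistic labels were supplied.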
The cross-linking of 558 molecular processes involves modification of protein-protein interactions. Consequently, substances grouped in specific phenotypes modulate similar protein-protein interactions. Stimuli residing in well-defined phenotypes, such as Phenotype A, modulate similar protein-protein interactions and are hence anticipated to interact with each other. Examples of molecular processes modulated by substances in Phenotype A include chemotaxis, DNA replication, regulation of cell death, etc.
The network prediction of whether stimuli (bacteria, herbs, drugs) in Phenotype A are indeed capable of targeting bacteria was validated by review of published articles. A summary of these findings is listed in Table 1.
As shown, 10 out of 15 substances in Phenotype A have direct antibacterial activities, five have been shown to modify bacterial resistance and one of these, 5-androstenediol, has been shown to modify bacterial virulence.
Of significance for drug repositioning is the interaction of polidocanol with antibiotics. Polidocanol has been shown to reduce the minimum inhibitory concentration (MIC) of methicillin, oxacillin, penicillin G and ampicillin against drug-resistant Staphylococci. The authors suggest that the inhibition by polidocanol involves more general resistance mechanisms since it was not inhibitory as a single agent for Staphylococci and did not inhibit beta-lactamase activity (32).
EBA-based identification of herbs sharing Phenotype A characteristics suggests that these plants are prime candidates for bioprospecting and discovery of potentially novel anti-bacterial agents to overcome multi-drug resistance.
Experimental validation of EBA findings for seeking drug repositioning candidates was pursued via in vitro studies measuring the increased susceptibility of E. coli, S. aureus and MRSA strains to known antibiotics in the presence of compounds identified by EBA screening. Our preliminary findings (data not shown) indicate that in situ screening by EBA has identified several marketed drugs that can be readily positioned as modulators of bacterial multi-drug resistance. Efforts are ongoing to confirm these findings in vivo.
Conclusion - Disruptive Approaches To Accelerate Drug Discovery and Development (Part 2)
As the stagnation of medical and pharmacological technology practice verges toward social and economic crisis, greater attention is being paid to a mindset that facilitates access to a tremendous volume of available scientific and clinical data, as well as advanced algorithms that can exploit that information. This two-part series of papers initiates discussions about the complexities facing drug discovery and development.
In Part I, we articulated, among other disruptive innovation platforms, the significance and importance of the ‘Core Model’, an economic and organisational paradigm for drug discovery and development that was elucidated through the case study of the development of the anti-cancer drug bortezomib (33). In the current pharmaceutical and health care scenarios, drug leads that may otherwise languish in the laboratory could be fully capitalised upon via ‘Core Model’ approaches to make it to the patient, saving time, labour and capital.
Here in Part II, we introduce a novel EI technology, namely EBA, a cognition enhancement technology that determines data relationships by comparing directions of stimuli-induced information flows (cause-effect relationships) in dynamic interaction-network systems. In doing so, EBA translates vast amounts of data into actionable insights. This information theory-based methodology uses information characteristics and, unlike AI approaches, requires fewer data points, is insensitive to noise and reduces system complexity.
The inherent logic of applying the EBA methodology to the analysis of life science data is based on the premise that the behaviour of biological systems is founded on emergent properties. We propose an innovative network science that addresses the general need for improved applications and advanced algorithms in translational chemical biology.
Also discussed is the capability of the EBA platform for identifying substances (botanicals and small molecules) that can modulate resistance of many bacterial strains (gram positive and gram negative) to a wide array of antibiotics. The utility of this approach is demonstrated in the discovery of drug combinations that promise to overcome the menacing resistance of deadly bacteria against multiple antibiotics. Notably, this information theory-based topological data analysis methodology reduces systems complexity, data noise and scaling issues which are key problems in life science’s Big Data analytics.
The ability to ascertain complex data relationships without the need for supercomputers or human supervision in creating training sets makes this technology particularly useful for deployment in devices, robotics and mobile applications. Thus, SystaMedic Inc’s EBA platform has the potential to become a key technology for identifying efficacious, well-tolerated and differentiated medicines, and is applicable to all stages of drug discovery and development, including clinical studies, competitive analysis and due diligence. Moreover, the use of our EBA platform can be extended beyond the life sciences to industries requiring rapid feedback on the impact of environmental or physiological changes in emergent systems.
We reiterate that the survival of the pharmaceutical industry and of the world’s healthcare systems will ultimately depend on innovation, on a better understanding of disease, on the efficient development of novel drugs and on preventive measures coupled to an efficient use of our limited societal resources (33).
We thank our many colleagues who have influenced us in innumerable ways over the years; we have been the beneficiaries of their collective wisdom. DDW
Dr Anton Fliri is the founder of SystaMedic Inc. He received his PhD in organic chemistry from the University of Innsbruck and advanced his education as a postdoctoral fellow in Nobel Prize-winning labs at Harvard and ETH Zurich. He also obtained a JD degree from the University of Hartford, specialising in international intellectual property. In his 30-year tenure at Pfizer he made contributions in the areas of cancer, neuroscience and pattern recognition technology. After leaving Pfizer, he developed analytical tools for analysing system-wide cause-effect relationships, which led to the creation of SystaMedic Inc.
Dr Palaniyandi Manivasakam is a co-founder of SystaMedic Inc. He holds a PhD in Molecular Genetics from the University of Alberta, Canada. He completed his post-doctoral training in the cancer biology department at Harvard. His expertise includes genetics, pharmacogenomics, high-throughput screening and big data analytics, which he has applied to various stages of drug discovery and development. He has authored more than 41 publications and has 37 published and eight issued patents. His entrepreneurial experience includes co-founding, developing and merging a biotech company, IndUS Pharmaceuticals. Presently, he is leading new intellectual property development and business strategies at SystaMedic Inc.
Dr Shama Kajiji is a co-founder of SystaMedic Inc. She received her PhD in Pharmacology and Experimental Pathology (Cancer Biology) from Brown University and completed her post-doctoral training in Immunology at Scripps Clinic and Research Foundation, La Jolla, CA, where she discovered the alpha 6 beta 4 integrin. She also earned an Executive MBA from the University of Rhode Island and is a graduate of the Strategic Leadership Programs offered by Harvard and Stanford Universities. Shama spent ~20 years at Pfizer in various positions of responsibility, including Global Head of the Attrition Analysis Office. She holds multiple patents, and her successful R&D contributions include Tarceva, a marketed drug, and Tremelimumab, a monoclonal antibody. In addition to Pfizer, Shama has contributed to the management and analysis of the global R&D pipelines of Merck Research Labs and Janssen Pharmaceuticals. As CEO of SystaMedic Inc, she is leading the strategic planning and operational objectives of the company.
Dr Gerald Lushington is a co-founder, Chief Scientific Officer and Executive Vice-President of TheraPeptics, LLC – a biotech company focused on designing peptide-based therapeutics for neurodegenerative disorders, microbial infections and cancer. With more than 180 publications and five US and international patents, he applies simulation, visualisation and data-analysis techniques to extract insight from in vitro and in vivo studies in the health and life sciences. He has developed and licensed technology for commercial development to Centaur Animal Health, Inc and supports R&D and commercialisation efforts for a diverse array of clients in academia, government and the pharmaceutical and biotechnology industries. Dr Lushington is an adjunct professor in the Department of Food, Nutrition, Dietetics and Health at Kansas State University. He serves as Editor-in-Chief of the journal Combinatorial Chemistry & High Throughput Screening, Bioinformatics Editor for WebMedCentral and is on the editorial boards of Current Bioactive Compounds, Current Enzymology and the Annals of Biotechnology.
Dr Rathnam Chaguturu is the Innovation Czar, Founder & CEO of iDD Partners (Princeton Junction, NJ, USA), a non-profit think-tank focused on pharmaceutical innovation, and, most recently, was Deputy Site Head of the Center for Advanced Drug Research, SRI International. He has more than 35 years of experience in academia and industry, managing new lead discovery projects and forging collaborative partnerships with academia, disease foundations, non-profits and government agencies. He is the Founding President of the International Chemical Biology Society, a Founding Member of the Society for Biomolecular Sciences and Editor-in-Chief Emeritus of the journal Combinatorial Chemistry & High Throughput Screening. Rathnam passionately advocates the need for innovation and entrepreneurship and the virtues of collaborative partnerships in addressing the pharmaceutical innovation crisis, and warns forcefully of the threat of scientific misconduct in the biomedical sciences. He received his PhD with an award-winning thesis from Sri Venkateswara University, Tirupati, India.
1 Johnson, S. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. Scribner, New York, 2004.
3 Tenner, E. The Efficiency Paradox: What Big Data Can't Do. Knopf, New York, 2018.
4 Lushington, G. Combinatorial Chemistry & High Throughput Screening, Volume 21, Number 1, pp. 3-4 (March 2018).
5 Lushington, G. Combinatorial Chemistry & High Throughput Screening, Volume 21, Number 4, in press (2018).
6 Chen, H, Engkvist, O, Wang, Y, Olivecrona, M, Blaschke, T. Drug Discovery Today, in press (2018).
7 Gho, YS, Lee, C. Molecular BioSystems, Volume 13, Number 7, pp.1291-1296 (2017).
9 Erlandsson, B-E, Akay, A, Dragomir, A. IEEE Pulse. https://pulse.embs.org/november-2015/mining-social-media-big-data-for-health/ (2015).
10 Mullard, A. FDA approvals for the first 6 months of 2017. Nature Reviews Drug Discovery, Volume 16, p.519 (2017).
11 Mullin, E (2018). Stopping breast cancer with help from artificial intelligence. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/602721/stopping-breast-cancer-with-help-from-ai/.
12 ascr-discovery.science.doe.gov (2018).
13 Gosak, M et al. Network science of biological systems at different scales: a review. Physics of life reviews (2017).
14 Szabo, C, Teo, YM, Chengleput, GK. Understanding complex systems: using interaction as a measure of emergence, Proceedings of the 2014 Winter Simulation Conference, December 07-10 (2014).
15 Szabo, C and Birdsey, L. Toward the Automated Detection of Emergent Behavior. Emergent Behavior in Complex Systems Engineering: A Modeling and Simulation Approach, pp.228-261 (2018).
16 Tanaka, H et al. Boolean modeling of mammalian cell cycle and cancer pathways. alife-robotics.co.jp (2017).
17 Vidunas, R. Delegated causality of complex systems. arXiv preprint arXiv:1707.08905 (2017).
18 Deritei, D et al. Principles of dynamical modularity in biological regulatory networks. Scientific Reports volume 6, Article number: 21957 (2016).
19 Gosak, M et al. Network science of biological systems at different scales: a review. Physics of life reviews (2017).
20 Craig, J. Complex diseases: Research and applications. Nature Education 1, 184 (2008).
21 Suderman, R et al. Fundamental trade-offs between information flow in single cells and cellular populations. Proceedings of the National Academy of Sciences 114.22 (2017): 5755-5760.
22 Chamberlin, W. Networks, emergence, iteration and evolution. Emergence: Complexity and Organization. 2009 Dec 31 [last modified: 2016 Dec 4]. Edition 1.
23 Luo, J and Magee, CL (2011). Detecting evolving patterns of self organizing networks by flow hierarchy measurement. Complexity, 16(6), pp.53-61.
24 Pottie, GJ and Kaiser, WJ (2000). Wireless integrated network sensors. Communications of the ACM, 43(5), pp.51-58.
25 Vigo, R. Representational information: A new general notion and measure of information. Inf. Sci. 2011, 181, 4847-4859.
26 Snášel, V et al (2017) Geometrical and topological approaches to Big Data, Future Generation Computer Systems Volume 67, 286-296.
27 Geiger, B, Spatz, JP and Bershadsky, AD (2009). Environmental sensing through focal adhesions. Nature reviews Molecular cell biology, 10(1), p.21.
28 Navlakha, S and Bar-Joseph, Z (2015). Distributed information processing in biological and computational systems. Communications of the ACM, 58(1), pp.94-102.
29 Sachs, K, Perez, O, Pe’er, D, Lauffenburger, DA and Nolan, GP (2005). Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721), pp.523-529.
30 Rowland, MA, Greenbaum, JM and Deeds, EJ (2017). Crosstalk and the evolvability of intracellular communication. Nature communications, 8, p.16009.
31 Shklarsh, A, Ariel, G, Schneidman, E and Ben-Jacob, E. Smart swarms of bacteria-inspired agents with performance adaptable interactions. PLoS Comput. Biol. 7, 9 (Sept. 2011), e1002177.
32 Bruns, W et al (1985). Suppression of intrinsic resistance to penicillins in Staphylococcus aureus by polidocanol, a dodecyl polyethyleneoxid ether. Antimicrobial agents and chemotherapy, 27(4), pp.632-639.
33 Sánchez-Serrano, I, Pfeifer, T and Chaguturu, R (2018). Disruptive Approaches to Accelerate Drug Discovery and Development. Part 1. Tools, Technologies and The Core Model. Drug Discovery World, Spring, 39-52.