The Federation of American Scientists (FAS) has published five policy recommendations to address the challenges of artificial intelligence (AI) in the life sciences.
The Bio X AI policy recommendations address the need for oversight of biodesign AI tools, biosecurity screening of synthetic DNA, and guidance on biosecurity practices for automated laboratories.
FAS hopes that these recommendations will help inform policy development on these topics, including the work of the National Security Commission on Emerging Biotechnologies.
FAS said: “AI is likely to yield tremendous advances in our basic understanding of biological systems, as well as significant benefits for health, agriculture, and the broader bioeconomy. However, AI tools, if misused or developed irresponsibly, can also pose risks to biosecurity. The landscape of biosecurity risks related to AI is complex and rapidly changing, and understanding the range of issues requires diverse perspectives and expertise.”
Significant effort has gone into establishing frameworks to evaluate and reduce risks from ‘foundation’ AI models (i.e., large models designed to be used for many different purposes), including the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the AI EO).
However, specific regulations will be needed to cover biodesign tools: more specialised AI models that are trained on biological data and provide insight into biological systems.
- Oliver Crook, a postdoctoral researcher at the University of Oxford and a machine learning expert, calls on the US government to ensure responsible development of biodesign tools by instituting a framework for checklist-based institutional oversight of these tools.
- Richard Moulange, AI-Biosecurity Fellow, and Sophie Rose, Senior Biosecurity Policy Advisor, both at the Centre for Long-Term Resilience, build on the Executive Order on AI with recommendations for establishing standards for evaluating the biosecurity risks posed by biodesign tools.
- Samuel Curtis, an AI Governance Associate at The Future Society, takes a more open-science approach, with a recommendation to expand infrastructure for cloud-based computational resources internationally to promote critical advances in biodesign tools while establishing norms for responsible development.
- Shrestha Rath, a scientist and biosecurity researcher, focuses on biosecurity screening of synthetic DNA, which the Executive Order on AI highlights as a key safeguard, and offers recommendations for how to improve screening methods to better prepare for designs produced using AI.
- Tessa Alexanian, a biosecurity and bioweapons expert, calls for the US government to issue guidance on biosecurity practices for automated laboratories, sometimes called ‘cloud labs,’ that can generate organisms and other biological agents.
FAS concludes: “Each of these recommendations represents an opportunity for the US government to reduce risks related to AI, solidify the US as a global leader in AI governance, and ensure a safer and more secure future.”