The U.S. Food and Drug Administration (FDA) is bringing generative AI into its operations. On Monday, the FDA announced the launch of Elsa, a generative artificial intelligence (AI) tool designed to help employees, from scientific reviewers to investigators, work more efficiently.
“Following a very successful pilot program with FDA’s scientific reviewers, I set an aggressive timeline to scale AI agency-wide by June 30,” said FDA Commissioner Marty Makary, M.D., M.P.H. “Today’s rollout of Elsa is ahead of schedule and under budget, thanks to the collaboration of our in-house experts across the centers.”
According to the media release, Elsa, built within a high-security GovCloud environment, offers a secure platform for FDA employees to access internal documents while ensuring all information remains within the agency.
READ: The Food and Drug Administration (FDA) unveils AI for scientific review (May 9, 2025)
“Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee,” said FDA Chief AI Officer Jeremy Walsh. He added that as the agency learns how employees use the tool, the development team will be able to add capabilities and grow with the needs of employees and the agency.
The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.
Elsa uses generative AI to help FDA staff by summarizing adverse event reports, reviewing clinical trial protocols, generating database code, and speeding up evaluations of drugs and medical devices. It also helps prioritize inspection targets, improving safety monitoring.
Built on Amazon Web Services’ GovCloud, Elsa handles sensitive government data securely and does not use proprietary manufacturer information. Already in use, the tool is expected to be fully integrated into FDA workflows by June 30. The initiative reflects the FDA’s commitment to adopting advanced technologies, improving efficiency, and strengthening public health protections through smarter, faster regulatory decision-making.
Generative AI refers to a type of artificial intelligence that can create new content such as text, images, music, or videos based on the data it has learned from. Unlike traditional AI, which typically analyzes or classifies existing information, generative AI produces original outputs that mimic human creativity. It uses advanced models like neural networks and deep learning to understand patterns and generate realistic, meaningful content. Examples include AI chatbots, image generators, and language models like GPT.
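The core idea described above — learn statistical patterns from existing data, then sample new content from those patterns — can be illustrated with a deliberately tiny sketch. The toy bigram model below is an assumption for illustration only: real systems like GPT use large neural networks, not word-count tables, and the corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Learn a pattern table: for each word, which words followed it in training."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length, seed=0):
    """Produce new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    output = [start_word]
    for _ in range(length - 1):
        followers = model.get(output[-1])
        if not followers:  # dead end: no observed continuation
            break
        output.append(rng.choice(followers))
    return " ".join(output)

# Hypothetical training text, chosen so words have multiple continuations.
corpus = ("the model reviews the report and the model flags the report "
          "for review and the reviewer reads the report")
model = train_bigram_model(corpus)
print(generate(model, "the", 8))
```

Even this crude model shows the distinction drawn above: it does not classify existing sentences, it emits new word sequences that mimic the style of what it was trained on. Scaling the same idea from word-pair counts to deep neural networks trained on vast datasets is, loosely, what modern generative AI does.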
Generative AI is transforming industries by enabling automation, enhancing creativity, and creating new possibilities in entertainment, design, and communication.
The downsides of federal agencies adopting AI include risks such as bias, errors, and a lack of transparency in automated decisions, which can erode public trust. AI systems may reinforce existing inequalities if not carefully monitored, and large-scale data collection and analysis raise privacy concerns.
Additionally, overreliance on AI might reduce human oversight, leading to mistakes in critical areas. There are also challenges in ensuring cybersecurity against AI-targeted attacks. Implementing AI requires significant investment, training, and continuous updates, which can strain resources. Finally, ethical dilemmas about accountability and control may emerge, making it essential to balance innovation with responsible governance.