Two Indian American researchers and an Indian researcher — Surya Ganguli, Surbhi Goel and Krishna Pillutla — are among 28 scholars studying AI’s potential to dramatically benefit humankind, set to receive $18 million in Schmidt Sciences AI2050 fellowships.
The researchers will pursue efforts to solve challenging problems in AI by building AI scientists, designing safer and more trustworthy AI models and improving the ability of AI to pursue biological and medical research, according to a media release.
The AI2050 program funds researchers to pursue projects to help AI create immense benefits for humanity by 2050. Twenty-one early career fellows and seven senior fellows will receive funding over the next three years. This marks the fourth cohort of the program, which now has 99 fellows across eight countries and 42 institutions.
“AI is underhyped, especially when it comes to its potential to benefit humanity,” said Eric Schmidt, who co-founded Schmidt Sciences with his wife Wendy. “The AI2050 fellowship was established to turn that potential into reality—by supporting the people and ideas shaping a healthier, more resilient and more secure world.”
“The AI2050 fellows are ambitious yet collaborative researchers who focus on AI innovation and the opportunities and challenges in our AI2050 motivating question,” said James Manyika, co-chair of AI2050 and a senior vice president at Google. “This technology can and will bring about an epochal shift in our society—and the AI2050 fellows are shaping that change so it is a benefit for all people.”
In addition to the financial award, AI2050 scholars attend an annual gathering to share findings, learn from experts in the field and network.
Launched in 2022, AI2050 also offers funding to support exceptional computational needs, enabling fellows to accelerate their research and overcome limitations related to hardware access.
Surya Ganguli, Associate Professor, Stanford University, has been chosen as a Senior Fellow. He will develop analytic theories that reveal the mechanisms by which large language and generative models create, reason, and learn, drawing on first principles from neuroscience and AI to build a scientific foundation for explainable and trustworthy intelligence.
Dr. Ganguli triple majored in physics, mathematics, and EECS at MIT, completed a PhD in string theory at Berkeley, and a postdoc in theoretical neuroscience at UCSF.
He has also been a visiting researcher at both Google and Meta AI, and a venture partner at a16z. His research spans the fields of AI, physics, and neuroscience, focusing on understanding and improving how both biological and artificial neural networks learn striking emergent computations.
Surbhi Goel, Assistant Professor, University of Pennsylvania, has been chosen as an Early Career Fellow.
AI designed to converse and collaborate with people promises immense societal benefits, from medicine to education. Yet the black-box nature of these systems leads to unpredictable and often harmful errors, undermining the trust essential for their widespread and safe adoption.
Goel’s project addresses this trust deficit through theoretically grounded approaches that seek to understand why these systems fail during conversations, predict those failures, and enable the system to use these signals to verifiably avoid risky decisions. The goal is to build a future where AI is safe by design.
Earlier, she was a postdoctoral researcher at Microsoft Research NYC, and received her PhD in Computer Science from the University of Texas at Austin, where her thesis received the Bert Kay Dissertation award.
Her research interests lie at the intersection of theoretical computer science and machine learning, with a focus on developing theoretical foundations for safe, reliable, and trustworthy AI.
Krishna Pillutla, Assistant Professor, Indian Institute of Technology, Madras, has also been chosen as an Early Career Fellow.
Pillutla imagines a future where cutting-edge foundation model-based AI can deliver high utility at scale in sensitive domains like healthcare and finance, while offering provable protection for the privacy-sensitive data powering these models.
To this end, he will develop computationally efficient (multi-modal) fine-tuning, inference, and reasoning approaches such that contextually privacy-sensitive information in each of these settings provably cannot be leaked.
This research ultimately paves the way for more ethical and trustworthy AI in critical applications that benefit society.
Pillutla’s research focuses on developing privacy-preserving and robust AI, with applications advancing the public good. His work has been recognized with an Outstanding Paper Award at NeurIPS and a J.P. Morgan Ph.D. fellowship.
He earned his PhD, master’s, and bachelor’s degrees respectively from the University of Washington, Carnegie Mellon University, and IIT Bombay. Pillutla has also held research positions at Google Research and Meta AI (FAIR).