CognitiveLab, a Bangalore-based research group, has become one of the recipients of Meta’s 2024 Llama Impact Grants, which recognize the lab’s work in multilingual AI. The group plans to use the grant for the Nayana Project, which aims to expand access to AI across more than 22 languages and reach over three billion people.
Llama Impact Grants support open-source initiatives that use Meta’s Llama models to address social challenges. CognitiveLab’s Nayana is a multilingual, multimodal language model that integrates Llama for document and image processing, with support for low-resource Indic languages. The model spans text, vision, and speech, and has outperformed existing OCR benchmarks in ten Indian languages.
CognitiveLab founder Shashi Kumar said, “The Llama Impact Grant enables us to supercharge our efforts with Nayana — expanding language coverage, enhancing multimodal capabilities, and building high-quality training datasets for low-resource language communities. Open-source and Llama have empowered us to build world-class systems like Ambari and Nayana with minimal resources.”

According to Shivnath Thukral, vice president and head of public policy at Meta India, open-source AI is a powerful tool that can help bridge the digital divide, especially in a country as diverse as India. “With the 2024 Llama Impact Grant, we’re proud to support the Nayana project. Its work embodies the spirit of open innovation—making advanced AI usable for billions of people,” he said.
The Llama Impact Grant program was launched in 2023 to support projects built on Meta’s open-source LLMs — Llama 2, Llama 3, and Llama 4. The Llama family has been widely adopted by global research and development communities, with over one billion downloads and more than 85,000 derivative models.
CognitiveLab plans to use the funding to expand Nayana’s language and multimodal coverage, improve its Indic tokenizer, and develop new datasets for speech, text, and image processing. The lab also aims to release tools for deployment in low-resource environments and collaborate with the community to set new benchmarks for multilingual AI.