AI continues to dominate the startup landscape as several new Y Combinator-backed ventures emerged on Friday. Among them is Stellon Labs, a company developing “super-tiny” frontier models that can run on virtually any device—without the need for a GPU.
Founded by Carnegie Mellon University AI researchers Rohan Joshi and Divom Gupta, who previously trained state-of-the-art codec avatar models at Meta Research, Stellon Labs is tackling a key challenge in AI: the massive compute and memory demands of today’s foundation models. Such requirements make most models inaccessible for edge devices like smartphones, laptops, robots, and embedded systems.
Stellon Labs aims to change that by creating compact, high-performance models for speech, language, and video intelligence—built specifically to run efficiently on everyday hardware, without sacrificing quality.
Another notable Y Combinator-backed AI startup is Frizzle, which leverages AI to grade handwritten math assignments and worksheets. Founded by Abhay Gupta and Shyam Sai, the platform has already helped teachers grade thousands of worksheets, saving hundreds of hours in manual effort.
Frizzle aims to ease the burden on overworked teachers by automating assignment grading, freeing up time for them to focus on supporting students. The startup claims to provide accurate grading, actionable insights, and student-friendly feedback.
Gupta previously worked as a product manager at Coinbase, where he helped generate $50 million in incremental revenue. Sai was a machine learning engineer at Microsoft, holds a patent in LLM applications, and co-founded the Midwest Math Circle, which has served hundreds of students for over a decade.
Lastly, there’s OnDeck, an AI startup focused on video analysis. Founded by Alexander Dungate and Sepand Dyanatkar, the company aims to address the challenges of building computer vision models, which typically require months of engineering effort for data collection, training, and deployment. Traditional models often struggle to generalize across different camera setups, workflows, and environments, and obtaining sufficient training data for specific tasks can be nearly impossible.
OnDeck tackles this problem with vision-language models (VLMs). Its vision engine can generalize across tasks without the need for training data and has already analyzed thousands of hours of footage across autonomous surface vehicles, robotics research, security systems, offshore oil and gas monitoring, and more.