Mira Murati’s Thinking Machines Lab, backed by $2 billion in seed funding and powered by a team of former OpenAI researchers, has revealed its first major project: developing AI models capable of delivering consistent, reproducible responses. The announcement, shared in a blog post on Wednesday, marks the lab’s debut in a field that has drawn intense global interest.
In its blog post titled “Defeating Nondeterminism in LLM Inference,” Thinking Machines Lab explores why AI models often produce unpredictable outputs. As the post explains, if you ask ChatGPT the same question multiple times, you’re likely to receive a range of different answers. While much of the AI community has treated this unpredictability as a given, with current models widely regarded as non-deterministic systems, Murati’s team believes it’s a challenge that can, in fact, be solved.
Thinking Machines Lab researcher Horace He points to GPU kernels, the tiny programs that run inside Nvidia chips during inference, as a key source of unpredictability in AI models. He explains that the way these kernels are combined during processing introduces randomness, but that with tighter control over this orchestration layer, models could be steered toward more deterministic behavior.
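The mechanism is easier to see with a toy example. Floating-point addition is not associative, so a kernel that accumulates the same numbers in a different order can produce a slightly different result. The short Python sketch below is illustrative only, not code from the lab’s post, and reproduces that order-dependence on the CPU.

```python
# Illustrative sketch (not from Thinking Machines Lab's post): floating-point
# addition is not associative, so summing the same values in a different order
# can yield slightly different results -- the kind of order-dependence that
# GPU kernels can introduce during inference.
import random

random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

ordered_sum = sum(values)        # accumulate left to right

shuffled = values[:]             # same numbers, different accumulation order
random.shuffle(shuffled)
reordered_sum = sum(shuffled)

print(ordered_sum == reordered_sum)      # usually False
print(abs(ordered_sum - reordered_sum))  # tiny, but nonzero, discrepancy
```

Tiny numerical differences like this can, over the course of a long generation, tip a model toward different token choices.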
Horace He explains that making AI responses reproducible isn’t just useful for companies and scientists; it could also make reinforcement learning (RL) work better. RL trains AI models by rewarding them for correct answers, but if the responses keep varying, the training data becomes noisy. More consistent answers would make the whole RL process “smoother,” He says. According to The Information, Thinking Machines Lab has also told investors it plans to use RL to fine-tune AI models for business needs.
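To see why that matters for training, the toy comparison below (hypothetical, not from the article or the lab’s post) grades the same prompt many times: a model that answers identically on every run yields a constant reward, while one whose answers vary produces a noisier reward estimate for the RL update.

```python
# Toy comparison (hypothetical): a reward signal computed from repeated rollouts
# is constant when the model answers deterministically, but noisy when the same
# prompt produces varying answers.
import random
import statistics

random.seed(0)

def deterministic_reward() -> float:
    # A model that always returns the same correct answer is graded 1 every run.
    return 1.0

def nondeterministic_reward(p_correct: float = 0.7) -> float:
    # A model whose answer varies run to run is only sometimes graded correct.
    return 1.0 if random.random() < p_correct else 0.0

runs = 1_000
det = [deterministic_reward() for _ in range(runs)]
nondet = [nondeterministic_reward() for _ in range(runs)]

print("variance, deterministic answers:   ", statistics.pvariance(det))     # 0.0
print("variance, nondeterministic answers:", statistics.pvariance(nondet))  # ~0.21
```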
Thinking Machines Lab has emphasized that it intends to regularly share blog posts, code, and research updates as a way to “benefit the public, but also improve our own research culture.” The latest article is the first entry in its new blog series, Connectionism, and appears to reflect that promise. While OpenAI also pledged openness in its early days, it has grown more guarded as it scaled. Whether Murati’s lab maintains its commitment to transparency remains to be seen.
The blog post provides a rare window into one of Silicon Valley’s most closely watched AI startups. Although it stops short of outlining the full direction of its technology, it makes clear that Thinking Machines Lab is taking on some of the toughest challenges at the edge of AI research. The bigger question now is whether the lab can turn that research into real products that live up to its $12 billion valuation.
Murati, who previously served as OpenAI’s chief technology officer, said in July that Thinking Machines Lab’s first product will be launched in the coming months and will be “useful for researchers and startups developing custom models.” For now, details remain under wraps, and it is not yet certain whether the upcoming release will build on this work to deliver more reproducible responses.

