Anthropic, the AI startup founded by ex-OpenAI research executives, is in late-stage talks to raise as much as $2 billion at a $60 billion valuation, CNBC confirmed Tuesday.
The funding round is reportedly being led by Lightspeed Venture Partners, an anonymous source told The Wall Street Journal, which first broke the news. The report comes just months after Amazon’s $4 billion investment in the AI startup.
In 2024, Anthropic was valued at $18 billion, with investors such as Menlo Ventures, Amazon and Alphabet backing the startup. The company’s annualized revenue is reportedly about $875 million, coming chiefly from enterprise sales. Lightspeed, another existing investor in Anthropic, declined to comment on the current round.
Fueled by massive funding rounds from Anthropic and xAI, AI startups accounted for nearly half of the venture capital dollars raised in the U.S. last year, according to PitchBook data.
Best known for its AI chatbot Claude, Anthropic is an AI safety and research company founded in 2021 by former OpenAI researchers, including siblings Daniela and Dario Amodei. The company’s mission is to develop AI systems that are safe, interpretable, and aligned with human values.
Its focus is on addressing the potential risks posed by advanced AI, particularly artificial general intelligence (AGI), which could perform tasks across various domains autonomously.
A key area of Anthropic’s work is AI alignment—ensuring that AI systems’ goals align with human intentions and priorities. As AI systems become more advanced, it becomes increasingly difficult to predict their behavior, making it crucial to develop methods to control and guide them effectively.
Anthropic’s work is part of the broader AI safety community, which includes other organizations like OpenAI and DeepMind. While the specifics of their approaches may differ, all aim to ensure that AI is developed in a way that benefits humanity and minimizes risks.
By focusing on AI alignment, interpretability, and safety, Anthropic strives to help guide the future of AI in a direction that prioritizes ethical considerations and societal well-being.