Anthropic, the prominent generative AI startup co-founded by OpenAI veterans, has raised $450 million in a Series C funding round led by Spark Capital.
Anthropic wouldn’t disclose the valuation the round conferred. But The Information reported in early March that the company was seeking to raise capital at a valuation of more than $4.1 billion, and it wouldn’t be surprising if the final figure landed in that ballpark.
Notably, tech giants including Google (Anthropic’s preferred cloud provider), Salesforce (via its Salesforce Ventures wing) and Zoom (via Zoom Ventures) participated in the financing, alongside Sound Ventures and other undisclosed VC firms. The roster would seem to signal strong belief in the promise of Anthropic’s technology, which uses AI to perform a wide range of conversational and text-processing tasks.
“We are thrilled that these leading investors and technology companies are supporting Anthropic’s mission: AI research and products that put safety at the frontier,” CEO Dario Amodei said in a statement. “The systems we are building are being designed to provide reliable AI services that can positively impact businesses and consumers now and in the future.”
To wit, Zoom recently announced a partnership with Anthropic to “build customer-facing AI products focused on reliability, productivity and safety,” following a similar tie-up with Google. Anthropic claims to have more than a dozen customers across industries including healthcare, HR and education.
Perhaps not coincidentally, the Series C also comes after Spark Capital’s hiring of Fraser Kelton, the former head of product at OpenAI, as a venture partner. Spark was an early investor in Anthropic. But the VC firm has redoubled its efforts to seek out early-stage AI startups, particularly in the generative AI space, which remains red hot.
“All of us at Spark are excited to partner with Dario and the entire Anthropic team on their mission to build reliable and honest AI systems,” Yasmin Razavi, a general partner at Spark Capital who joined Anthropic’s board of directors in connection with the Series C, said in a press release. “Anthropic has assembled a world-class technical team that is dedicated to building safe and capable AI systems. The overwhelmingly positive response to Anthropic’s products and research hints at AI’s broader potential for unlocking a new paradigm of flourishing in our societies.”
With the new $450 million tranche, Anthropic’s war chest stands at a whopping $1.45 billion. That puts it near the top of the list of best-funded AI startups, eclipsed only by OpenAI, which has raised more than $11.3 billion to date, according to Crunchbase. Competitor Inflection AI, a startup building an AI-powered personal assistant, has secured $225 million, while another Anthropic rival, Adept, has raised around $415 million.
Amodei, the former VP of research at OpenAI, launched Anthropic in 2021 as a public benefit corporation, taking with him a number of OpenAI employees, including OpenAI’s former policy lead Jack Clark. Amodei split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus.
Anthropic now competes with OpenAI as well as startups like Cohere and AI21 Labs, all of which are developing and productizing their own text-generating — and in some cases image-generating — AI systems. But it has grander ambitions.
As ProWellTech previously reported, Anthropic plans to — as it describes in a pitch deck to investors — create a “next-gen algorithm for AI self-teaching.” Such an algorithm could be used to build virtual assistants that can answer emails, perform research and generate art, books and more, some of which we’ve already gotten a taste of with the likes of GPT-4 and other large language models.
The next-gen algorithm would be the successor to Claude, Anthropic’s chatbot, which remains in preview but is available through an API and can be instructed to perform a range of tasks, including searching across documents, summarizing, writing, coding and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude, released in March, is “much less likely to produce harmful outputs,” “easier to converse with” and “[far] more steerable” than the alternatives.
Why’s Claude superior in Anthropic’s view? In the pitch deck, Anthropic argues that its technique for training AI, called “constitutional AI,” makes the behavior of systems both easier to understand and simpler to adjust as needed by imbuing systems with “values” defined by a “constitution.” Constitutional AI basically seeks to provide a way to align AI with human intentions, allowing systems to respond to questions and perform tasks using a simple set of guiding principles.
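To make the idea concrete, here is a highly simplified sketch of the draft-critique-revise loop that constitutional AI is built around. The principles below are illustrative paraphrases, not Anthropic’s actual constitution, and `generate` is a stand-in for a real language model call, not a real API:

```python
from typing import Callable

# Illustrative principles only -- not Anthropic's actual constitution.
CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most helpful and honest.",
]

def constitutional_revision(generate: Callable[[str], str], prompt: str) -> str:
    """One pass of draft -> critique -> revise against each principle.

    In the real technique, the revised drafts are later used as
    training data so the model internalizes the principles.
    """
    draft = generate(prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against a principle...
        critique = generate(f"Critique per '{principle}':\n{draft}")
        # ...then to rewrite the draft so it addresses the critique.
        draft = generate(f"Revise to address:\n{critique}\n\nOriginal:\n{draft}")
    return draft

# Toy stand-in "model" that just tags its output, to show the data flow:
toy_model = lambda prompt: f"<reply to {len(prompt)} chars>"
print(constitutional_revision(toy_model, "Explain how locks work."))
```

The point of the structure is that the “values” live in a short, human-readable list of principles rather than being implicit in the training data, which is what Anthropic means by behavior that is easier to understand and adjust.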
In its quest for generative AI superiority, Anthropic recently expanded Claude’s context window — essentially, the model’s “memory” — from 9,000 tokens to 100,000 tokens, with “tokens” representing fragments of words. With perhaps the largest context window of any publicly available AI model, Claude can converse relatively coherently for hours, even days, rather than minutes, and can digest and analyze hundreds of pages of documents.
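A rough back-of-envelope conversion shows what that expansion means in practice. The figures below rely on common heuristics — roughly 0.75 words per token and about 500 words per dense page — which vary by tokenizer and text, so treat the results as illustrative:

```python
# Back-of-envelope: what a context window of a given token count holds.
# Assumes ~0.75 words per token and ~500 words per dense page -- common
# rough heuristics, not figures from Anthropic.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

def context_capacity(tokens: int) -> dict:
    words = tokens * WORDS_PER_TOKEN
    return {"words": int(words), "pages": round(words / WORDS_PER_PAGE)}

print(context_capacity(9_000))    # {'words': 6750, 'pages': 14}
print(context_capacity(100_000))  # {'words': 75000, 'pages': 150}
```

Under those assumptions, the jump from 9,000 to 100,000 tokens is the difference between holding a long memo in memory and holding a short book.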
That progress doesn’t come cheap.
Anthropic estimates that its next-gen model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the largest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation. But Anthropic implies (in the deck) that it relies on clusters with “tens of thousands of GPUs” and that it’ll require roughly a billion dollars in spending over the next 18 months.
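To get a feel for why the bill is so large, here is a hedged back-of-envelope calculation of training time for 10^25 FLOPs. The per-GPU throughput and utilization figures are assumptions chosen for illustration — Anthropic discloses only the FLOPs estimate and the “tens of thousands of GPUs” scale:

```python
# Back-of-envelope: how long 1e25 floating point operations would take
# on a hypothetical GPU cluster. Per-GPU throughput and utilization are
# illustrative assumptions, not figures from Anthropic's deck.
TOTAL_FLOPS = 1e25
GPU_SUSTAINED_FLOPS = 150e12  # ~150 TFLOP/s per accelerator (assumed)
UTILIZATION = 0.4             # real training rarely sustains peak throughput

def training_days(num_gpus: int) -> float:
    effective_flops_per_sec = num_gpus * GPU_SUSTAINED_FLOPS * UTILIZATION
    return TOTAL_FLOPS / effective_flops_per_sec / 86_400  # seconds -> days

for n in (10_000, 30_000):
    print(f"{n} GPUs: ~{training_days(n):.0f} days")
```

Under these assumptions, even a 30,000-GPU cluster would run for about two months straight — which is why the compute budget, not the payroll, dominates the projected spending.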
In point of fact, Anthropic aims to raise as much as $5 billion over the next two years.
“With our Series C funding, we hope to grow our product offerings, support businesses that will responsibly deploy Claude in the market, and further AI safety research,” the company wrote in a press release this morning. “Our team is focused on AI alignment techniques that allow AI systems to better handle adversarial conversations, follow precise instructions and generally be more transparent about their behaviors and limitations.”