Anthropic, the AI startup co-founded by ex-OpenAI employees, today announced the launch of its first consumer-facing premium subscription plan, Claude Pro, for Claude 2 — Anthropic’s AI-powered, text-analyzing chatbot.
For the monthly price of $20 in the U.S., or £18 in the U.K., customers get “5x more usage” than the free Claude 2 tier provides, the ability to send “many more” messages, priority access to Claude 2 during high-traffic periods and early access to new features.
Claude Pro is priced the same as ChatGPT Plus, OpenAI's paid plan for ChatGPT — a Claude 2 rival.
“Since launching in July, users tell us they’ve chosen Claude as their day-to-day AI assistant for its longer context windows, faster outputs, complex reasoning capabilities and more,” Anthropic wrote in a blog post. “Many also shared that they would value more file uploads and conversations over longer periods … We’re grateful for your support as we strive to build helpful, honest, and harmless systems that fuel productivity and inspire creativity.”
Anthropic says that Claude Pro users can expect to send at least 100 messages every eight hours, after which the message limit resets. (That's versus the 50-message-per-three-hour limit imposed on subscribers to ChatGPT Plus.) Why's there a limit? Constrained compute capacity, Anthropic explains in a support document:
“A model as capable as Claude 2 takes a lot of powerful computers to run, especially when responding to large attachments and long conversations. We set these limits to ensure Claude can be made available to many people to try for free, while allowing power users to integrate Claude into their daily workflows.”
To Anthropic's point, chatbots like Claude 2 are indeed expensive to host. At one point, it was reportedly costing OpenAI $700,000 a day — or around $21 million a month — to run ChatGPT.
The message limit gets used up faster with longer conversations — notably with big attachments. For example, if a Claude Pro subscriber uploads a copy of “The Great Gatsby,” they’d only be able to send around 20 subsequent messages within the next eight-hour window. That’s because Claude 2 “re-reads” the entire conversation — including attachments — every time it receives a message.
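To see why a big attachment eats the budget so quickly, here's a rough back-of-the-envelope model of re-reading costs. The numbers are purely illustrative assumptions — `token_budget` and `tokens_per_turn` are made up for the sketch, not Anthropic's actual quota model:

```python
# Illustrative sketch: if usage is metered by total tokens processed per
# window, and the model re-reads the whole conversation (attachment
# included) on every turn, a large attachment multiplies each message's
# cost. All constants below are assumptions, not Anthropic's real limits.

def estimated_messages(attachment_tokens: int,
                       tokens_per_turn: int = 500,
                       token_budget: int = 2_000_000) -> int:
    """Estimate how many messages fit in a fixed per-window token budget."""
    total, messages = 0, 0
    while True:
        # Each new message re-processes the attachment plus every prior turn.
        cost = attachment_tokens + (messages + 1) * tokens_per_turn
        if total + cost > token_budget:
            return messages
        total += cost
        messages += 1

print(estimated_messages(0))        # short chat, no attachment
print(estimated_messages(100_000))  # roughly a novel-length attachment
```

Under these assumed numbers, an empty conversation allows dozens of messages while a ~100,000-token attachment leaves room for only about 20 — the same order of magnitude as the "Great Gatsby" example above.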
As we’ve reported previously, Anthropic’s ultimate ambition is to create a “next-gen algorithm for AI self-teaching,” as it describes it in a recent pitch deck to investors. Such an algorithm could be used to build virtual assistants that can answer emails, perform research and generate art, books and more — some of which we’ve already gotten a taste of with the likes of GPT-4 and other large language models.
Anthropic, which launched in 2021 and is led by former OpenAI VP of research Dario Amodei, has to date raised $1.45 billion at a valuation in the single-digit billions. While that might sound like a lot, it's far short of what the company estimates it'll need — $5 billion over the next two years — to create its envisioned AI.
Most of the cash will go toward compute. Anthropic implies in the deck that it relies on clusters with "tens of thousands of GPUs" to train its models, and that it'll need to spend roughly a billion dollars on infrastructure in the next 18 months alone.