Anthropic is catching up with OpenAI
Welcome to Cautious Optimism, a newsletter on tech, business and power.
Tuesday. We’re breaking from our usual form today to allow for two longer riffs. Back to our regular format tomorrow. Onwards! — Alex
All about Amazon
Amazon has big plans for warehouse automation. Here’s a look at its ‘Proteus’ robot that can work alongside humans (it looks like an industrial Roomba); this article from 2021 discusses the company’s plans to spend $40 million on a new “350,000-square-foot robotics innovation hub,” and the same publication covered robotic automation of human gigs at Amazon back in 2019.
It’s little surprise, then, that Amazon’s investments and knowledge from its earlier work are paying off now. The New York Times got its hands on a trove of internal Amazon documents, according to which the company expects robotic automation to let it “avoid hiring more than 160,000 people in the United States it would otherwise need by 2027,” and result in savings of about 30 cents on every item that “Amazon picks, packs and delivers to customers.” And since the company delivers billions upon billions of packages a year, those savings will add up quickly.
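For a rough sense of scale, here's a back-of-the-envelope sketch: the per-item figure comes from the reported documents, but the annual package volume is my own round-number assumption, not from the Times.

```python
# Back-of-the-envelope math on Amazon's reported per-item savings.
savings_per_item = 0.30            # dollars per item, per the NYT-reported documents
packages_per_year = 6_000_000_000  # assumption: a round ~6B deliveries a year

annual_savings = savings_per_item * packages_per_year
print(f"~${annual_savings / 1e9:.1f}B in savings per year")  # ~$1.8B in savings per year
```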
Amazon execs reportedly also told their board that its investments in robotics “would allow the company to continue to avoid adding to its U.S. work force in the coming years.”
None of this is shocking. Amazon is a famously frugal, hard-charging workplace. But once your gig can be automated, it doesn't look like hard work earns you much in the realm of job security, whether you're a warehouse worker suffering above-average injury rates, a delivery worker driving unsafely to meet quotas, or even a software engineer.
Amazon’s recent return-to-office push cost it senior staff, and efforts to automate work using homegrown AI tools appear to be — as everywhere else — only partially successful. Layoffs have trimmed overall staffing levels, too.
The result? A more brittle company with less institutional knowledge, one that cannot get back on its feet as quickly as it would like, especially during a global crisis like an AWS outage.
The view that a company can cut too deeply, automate too much, or even put too much pressure on its staff is off-limits in tech these days. The balance of power skews more towards employers than labor, and employers are pressing their advantage. The downside is that you might wind up with an organization that's been hollowed out.
Or as the Duckbill Group's chief cloud economist, Corey Quinn, put it in The Register yesterday: "Today is when the Amazon brain drain finally sent AWS down the spout."
Nothing is going to save the warehouse workers. As soon as their work can be automated, they are out. It won't be a single event, but expect Amazon to slow hiring as it expands its robotic fleets. Normally, I wouldn't complain about technology-driven job turnover; that's how we keep our economy growing over time. But as someone who pays bills, cares for children, and has seen endless patriotism-soaked advertisements for hands-on Amazon jobs, I'm finding it tough to get excited about the company's staffing plans.
Anthropic is catching up with OpenAI
Anthropic co-founder Jack Clark gave an interesting talk recently (the speech is also on his personal blog) about how he went from being a journalist to working in AI: first he was impressed by the pace of technological change he saw in his reporting, and later he hopped into the trenches and co-founded his own company.
Clark spoke about the pace of AI improvement and how his natural skepticism eventually dimmed after “being hit again and again in the head with the phenomenon of wild new capabilities emerging as a consequence of computational scale.”
I think the ‘CEO as part-time blogger’ concept we’re seeing in the AI field should be encouraged and expanded to other parts of the technology world.
Clark is at once an AI bull (he sees a path to AGI) and an AI worrier. He is concerned about differing incentives between machine brains and their human counterparts (does a synthetic mind care if it damages itself, or you, in pursuit of its stated goal?) as we progress towards AI models improving themselves.
His solution is to keep building, naturally, but he also thinks AI industry members should share their concerns and listen:
For us to truly understand what the policy solutions look like, we need to spend a bit less time talking about the specifics of the technology and trying to convince people of our particular views of how it might go wrong – self-improving AI, autonomous systems, cyberweapons, bioweapons, etc. – and more time listening to people and understanding their concerns about the technology. There must be more listening to labor groups, social groups, and religious leaders. The rest of the world will surely want—and deserves—a vote over this.
If you have a stake in society today apart from tech, I presume that Clark’s views will resonate. If you, instead, are a pure-technology bull, maybe they won’t.
They sure didn't for former venture capitalist, current AI and crypto lead for the U.S. government, and Russia apologist David Sacks, who tweeted the following in response:
Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.
I’m sure you’ve caught up with the ensuing drama and arguments both against and in favor of Anthropic (if not, here’s a good summary). Reid Hoffman, another member of the PayPal mafia, chimed in a bit later, arguing that Anthropic is “one of the good guys,” and that it is important to back the good guys, “especially in AI.” And then he and Sacks got into it as well.
But I think everyone is missing that, beneath the shouting and complaining, Anthropic stands a good chance of winning.
Anthropic started the year with annualized revenue of around $1 billion, which rose to $1.4 billion in March, and $3 billion by May. The company reached run-rate revenue of $5 billion in August and is closing in on $7 billion run-rate today. Its target of $9 billion this year is not out of the question.
Pulling revenue from $1 billion to $7 billion in a single year is so impressive a feat that I can’t summon many — if any — comparable examples.
OpenAI gets a lot of press as it scours the world for loose bricks of million-dollar bills to build out a compute footprint that could challenge a sci-fi author’s imagination. But is it growing as quickly?
The ChatGPT maker went from run-rate revenue of $5.5 billion in December 2024 to $10 billion in June of 2025. By July, its run-rate was close to $12 billion, and in October, we heard that the company had reached $13 billion.
I don’t think OpenAI saw its growth slow in the middle of the year; it’s more likely that we’re getting irregular, episodic reports that make its growth appear lumpier than it is.
And: OpenAI is going to turn on ads soon. When it does, the company could see its revenue growth accelerate.
OpenAI's growth is insanely good. Yet it's a lot less than Anthropic's own growth this year: 136% versus 600%. Anthropic may just catch OpenAI.
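Those percentages fall straight out of the reported run-rate figures; here's the quick math, with the caveat that the two windows aren't identical (roughly December 2024 through October for OpenAI, January through today for Anthropic):

```python
# Growth implied by the reported run-rate revenue figures (in billions of dollars).
def growth_pct(start: float, end: float) -> float:
    return (end - start) / start * 100

print(f"OpenAI:    {growth_pct(5.5, 13.0):.0f}%")  # $5.5B (Dec 2024) -> $13B (Oct): ~136%
print(f"Anthropic: {growth_pct(1.0, 7.0):.0f}%")   # $1B (Jan) -> $7B (today): 600%
```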
Surprised that the far less famous company could come out on top? Don’t be. OpenAI reportedly makes the bulk of its revenue from consumer subscriptions, which it has to support by floating a lot of expensive free users. Anthropic, in contrast, makes most of its cash from selling access to its models via an API, which requires fewer free users and makes for potentially easier upsells.
Both companies sell to consumers and enterprises, but their revenue mixes sit at opposite ends of the customer spectrum: OpenAI skews consumer, Anthropic skews enterprise. And Anthropic's approach appears to be rocking out.
Hence the anger at the company. A lot of the folks annoyed at Clark and his essay urging people to listen to the unwashed masses as if they actually fucking matter are also supporters of xAI, Musk’s competing entity. Now, I don’t mind friends supporting friends, but seeing a government official publicly attack and belittle a national jewel like Anthropic is gross, unfair, and an action not entirely free of sour grapes, I’d bet.
How funny would it be if the company that becomes the global AI leader is run by thoughtful softies who care about others, aren’t allergic to a little AI regulation, and want to think out loud about risk? I mean, pretty damn funny.
Glue, a company that Sacks helped found recently, offers Anthropic models to its customers, and uses Anthropic’s Model Context Protocol (MCP), too.
In closing, this isn't a minor fracas. It's battlefield haze spiraling up from different, competing perspectives. Remember when Marc Andreessen wrote this in his famous Techno-Optimist Manifesto?
We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.
Clark is on the other side of that wager. Whichever perspective wins out could set the tone for decades to come.
I am less worried than Clark about AI’s downsides, but I’m also unwilling to believe that any attempt to build with a modest societal framework in mind is akin to killing people. Alas.