Everyone is excited about Claude Code!
Or: The dramatic acceleration of human potential and a contemporaneous democratization of AI
Welcome to Cautious Optimism, a newsletter on tech, business and power.
📈 Trending Up: Making memory … processing heavy crude? … Chinese stimulus? … car sales from Ford, GM … growth in India? … the cinema in China … crypto ETF optionality … massive corporate AI investments …
📉 Trending Down: China-Japan relations … Amazon’s AI ambitions … public trust of AI … cable valuations … free prediction markets (more) … competence … consumer privacy … startup lawyers … poor airline internet connectivity …
Things That Matter
Chips, chips, chips: Fresh off its deal to snap up (purchase access to?) Groq’s inference technology, Nvidia announced six chips that comprise its next-generation silicon line. Dubbed “Vera Rubin,” the cohort includes a CPU, a GPU (natch), a GPU interconnect switch, a networking card, a data processing unit, and an Ethernet switch. The company is pitching the chip collection as “one AI supercomputer.”
AMD had its own list of chip announcements, including a look at its upcoming ‘Helios’ computing rack, which is expected to deliver 2.9 exaFLOPS of FP4 compute for AI inference.
Nvidia claims that the Vera Rubin setup will handle mixture-of-experts model inference at “up to 10x lower cost per token” than its current Blackwell stack.
Prediction: Nvidia will sell a lot of Vera Rubin chips.
Evidence: Nvidia wrangled praise for Rubin from the heads of OpenAI, Meta, Microsoft, Alphabet, AWS, Oracle, and others. Just deciding on the order of the quotes must have been a pain.
Didn’t Nvidia recently roll out the Blackwell line of chips? Didn’t they bring more compute capacity to market? Well, Jensen Huang thinks that “Rubin arrives at exactly the right moment, as AI computing demand for both training and inference is going through the roof.”
Alright, how quickly is AI demand growing? It’s hard to say. While we know that Google is forced to argue over how to allocate compute, and that the cloud majors spent their most recent earnings calls shouting that they were still compute-constrained, there are competing signals, too. OpenRouter data points to a flattening of token-processing demand since November, though the holiday period could be the key factor behind the slowdown in growth on the AI API platform. And Ramp’s AI index shows business adoption of artificial intelligence flattening since late summer.
Once again, all eyes turn to the upcoming earnings cycle and what we’ll hear from the AI infra majors. As before, investors will be listening for trailing and projected capex, revenue derived from historical AI spend, and prognostications concerning compute needs.
We will eventually find ourselves in a chip glut; Nvidia is betting that by shipping new chips yearly, it can keep AI compute providers on an upgrade cycle as they compete on price, all while the chip company enjoys its impressive margins.
Venezuela, Greenland, and the current mess: POTUS has made it clear that he considers himself to be the ultimate authority in Venezuela, and that he expects American companies to enter and rebuild its oil industry, possibly with taxpayer backing. The undergirding argument for the takeover is not to help produce a more democratic nation, but instead to leverage American military power to protect American economic interests.
The concept that might makes right is the tone du jour for both the second Trump administration and much of the tech-right, as discussed. Problems abound with the perspective, but the same logic being used to justify the Venezuelan operation is now being pointed at Greenland, which is part of Denmark (it’s complicated), and thus part of a European nation that is also a founding member of NATO.
The list of the United States’ friends will thin dramatically if our allies and defense-pact co-members find themselves staring down the barrel of our guns. That said, what a bonanza opportunity for European technology companies.
Annoyed that US tech companies and products have so much European market share? Well, here’s a wedge for ya!
I do not expect calls for European digital sovereignty to rewrite the global technology market, but that doesn’t mean that they are hollow, either.
Self-driving is getting incredibly competitive: Apart from chips, Nvidia also announced a new “family of open AI models” dubbed Alpamayo, which aims to deliver vision-language-action (VLA) models to cars so that they can enjoy “humanlike thinking” while self-driving. Among other self-driving announcements, the chip giant also disclosed that Mercedes will be the first partner for its NVIDIA DRIVE AV software. The Verge got to test out the self-driving partnership and found it impressive.
And while the United States benefits from Waymo, Zoox, and Tesla battling for domestic self-driving market share, London is set to feature dueling self-driving companies this year (with an added US-China competitive twist). Slowly, then all at once: self-driving is being sorted out by several large companies simultaneously, implying that we should continue to see rapid progress in safety and, in turn, commercial availability. Viva!
Everyone is excited about Claude Code
Claude Code is the most important piece of AI technology on the market because it delivers on the core promise of AI: Dramatic acceleration of human potential and a contemporaneous democratization of opportunity.
Anthropic’s agentic coding tool Claude Code is having a moment. It seems that over the holiday break, lots of folks started to tinker with the service and found it to be incredibly powerful, and accessible even to the less technically minded.
So many people on X are talking about Claude Code that some founders are even joking about the prevalence of the posts: “Why is my feed only about Claude Code?” asked Fintool CEO Nicolas Bustamante.
Perhaps because people like Google Principal Engineer Jaana Dogan are saying things like this:
I’m not joking and this isn’t funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned... I gave Claude Code a description of the problem, it generated what we built last year in an hour.
Claude Code is proving so delightful to use that Opendoor’s Chief Growth Officer Morgan Brown quipped that his new “default” perspective is “every minute Claude code is not working is [a] missed opportunity.”
Sure, Claude Code is a coding tool. It competes with OpenAI’s Codex, with Cursor, and with a host of other startups that offer overlapping tools to help developers write, test, and improve code. But Claude Code is not just a coding service; it’s an agentic frame around Claude, which means that the same system can be used for other work, work it can take on because Claude Code was designed to ingest lots of information and execute tasks on behalf of its user.
That’s why we’re seeing things like this, and this, and this, and this crop up.
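To make that agentic framing concrete, here is a minimal sketch of pointing Claude Code at a non-coding chore from a script. It assumes the claude CLI is installed and authenticated and that its non-interactive print mode (claude -p) is available; the ./meeting-notes folder and the prompt are purely illustrative, not something from Anthropic’s docs.

```python
import subprocess

# Illustrative only: ask Claude Code, via its non-interactive print mode,
# to read local notes and summarize them. Assumes the `claude` CLI is
# installed and logged in; the folder name is hypothetical.
prompt = (
    "Read the markdown files in ./meeting-notes and summarize "
    "the key decisions in a short bulleted list."
)

result = subprocess.run(
    ["claude", "-p", prompt],  # -p / --print returns the response and exits
    capture_output=True,
    text=True,
    check=True,
)

# In print mode, the agent's final answer arrives on stdout.
print(result.stdout)
```

How much the agent is allowed to touch depends on Claude Code’s permission settings, but the shape of the thing is the point: the same frame that writes code can be handed almost any task you can describe.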
It’s cool that folks are using Claude Code as a tool in its own right, and that more people than ever have the ability to get the most out of humanity’s growing computational footprint. But I am even more excited about Claude Code turning non-developers into individuals capable of shipping usable code: from podcasters building speech-recognition applications, to people using Claude Code to keep plants alive, to bespoke bird feeders that can detect what sort of avian visitor has arrived.
We’ve already discovered that app-builders like Lovable have a huge audience; people want to create. But it appears that Anthropic’s team has cooked up a tool in Claude Code that is sufficiently within reach of the average intelligent person to afford them superpowers, while also accelerating the technically savvy to untold heights. That’s an insanely impressive double act.
It’s no accident that Anthropic’s 2025 revenue growth was not only mind-bendingly impressive, scaling from around $1 billion in ARR to a projected $9 billion by year-end, but also enough to bring it into revenue-scale competition with OpenAI, which is now perhaps just over twice Anthropic’s size.
Perhaps every product competing with Claude Code is at, or will reach, power parity. But today, at least, Claude Code is the most important piece of AI technology on the market because it delivers on the core promise of AI: Dramatic acceleration of human potential and a contemporaneous democratization of opportunity.

Love how this frames Claude Code's real achievement as closing the gap between intent and execution. The podcaster building speech recognition and the bird feeder example capture something bigger than just "better dev tools." It's collapsing that friction layer where technical capacity used to gate experimentation. I've seen this shift locally, where non-engineers on teams suddenly prototype solutions instead of waiting weeks for eng cycles. The acceleration isn't just velocity, it's permission to try.
Dude I’m hyped for 2026 now.