Welcome to Cautious Optimism, a newsletter on tech, business, and power.
Monday! Our economic calendar for the week is here. Yesterday I went on This Week in Tech, which was a bop. Inflation data on Wednesday and Thursday will set the tone for the stock market. Oracle earnings will matter, too, given its place inside the larger AI economy. Which, incidentally, takes up nearly all our column-inches today. To work! — Alex
📈 Trending Up: EU defense startups and spending … self-driving … satellite connectivity … sure, why not … only if you cheat … SPACs? … SPACs?? … censorship … good ideas … Robinhood and AppLovin …
📉 Trending Down: Comity … US-South Korean relations … competence … cinema … Databricks’ IPO timeline … Congress … free markets … India’s TFR …
European momentum
We linked above to an FT article noting that European defensetech startups are raising mountains of cash, a dramatic change from a few years ago, when the category was all but dead. That’s a good thing, as Europe faces Russian aggression against a backdrop of fading American support.
But that’s not the only bit of European tech news worth paying attention to. ASML, the Dutch company known around the world for its chipmaking gear (it holds a near-monopoly on state-of-the-art lithography equipment), will invest around $1.5 billion (USD equivalent) into Mistral’s upcoming $2 billion round. Yes, a Dutch tech company is about to lead a massive round into a French AI company.
Or more precisely, we’re seeing one of Europe’s leading technology lights — ASML is worth more than $300 billion — pour its own wealth into the only current hope for the bloc to have its own AI champion. That’s hardly the move of a continent that has given up playing for a seat at the future table.
IPO Watch
Klarna: Expected to price Tuesday and trade Wednesday (F-1/A filing)
Gemini: Expected to price Thursday and trade Friday (S-1/A filing)
Via: Expected to price Thursday and trade Friday (S-1/A filing)
Netskope dropped its first IPO price range today (more here). StubHub did the same (more here).
Get hype, it’s going to be a banner week for tech liquidity.
Everyone is worried about AI (again)
When did DeepSeek release its R1 reasoning model, replete with claims of low-cost training? January. Since then, we’ve gone through several cycles of worry and ebullience regarding AI performance and investment. We’re in another. It’s been quite the year.
Market sentiment regarding AI is at a low ebb once again, this time fueled not by an open-ish Chinese reasoning model, but by the scale of investment into AI technologies and a growing gap between expense (capex and opex) and return.
While the WSJ’s Greg Ip is worried about how AI-derived online information could lead to AI models training on their own output, leading to an intelligence collapse (not a new worry, but one to keep in mind), this morning, let’s focus on what The Atlantic and The Economist have on offer.
The Atlantic is worried that AI models are not having business-level impact but are having business-scale costs:
When researchers at MIT recently tracked the results of 300 publicly disclosed AI initiatives, they found that 95 percent of projects failed to deliver any boost to profits. A March report from McKinsey & Company found that 71 percent of companies reported using generative AI, and more than 80 percent of them reported that the technology had no “tangible impact” on earnings. […] AI appears to be propping up something like the entire U.S. economy. More than half of the growth of the S&P 500 since 2023 has come from just seven companies: Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla.
That last point is underscored by The Economist’s concerns regarding the sheer expense of AI infrastructure buildout:
Will AI really become godlike? Perhaps, but a recent report by UBS, a bank, finds that revenue generation to date “has been disappointing”. By our reckoning, the total revenue from the tech accruing to the West’s leading AI firms is currently $50bn a year. Although such revenues are growing fast, they are still less than 2% of the $2.9trn investment in new data centres globally that Morgan Stanley, another bank, forecasts between 2025 and 2028—a figure which excludes energy costs
To sum: AI has yet to have a major impact on business operations, but the market is rewarding leading AI players with massive valuation gains as they divert more and more of their cash flows towards large, depreciating data center buildouts. The worry is that all that spend ahead of revenue will result in a massive capacity glut, eventually causing the value of major tech companies to fall, leaving the market in a painful spot and our collective investment accounts in shambles.
Should we worry? A little, but it’s too early to make the call on whether the AI bulls or AI doomers are more correct.
The recent MIT study reporting that amongst firms that tried to implement “embedded or task-specific genAI” solutions, just “20 percent reached pilot stage and just 5 percent reached production” has set the current tone for AI sentiment (our coverage here). However, the same study found that “[g]eneric LLM chatbots appear to show high pilot-to-implementation rates (~83%),” with more companies giving them a shot, perhaps thanks to “enterprise users report[ing] consistently positive experiences with consumer-grade tools like ChatGPT and Copilot.”
The gist? AI products built atop third-party models aimed at specific corporate tasks are failing to live up to expectations today, while more general-purpose AI tools are seeing broad uptake and enjoying positive reviews. Call me too kind, but I don’t see that mix of datapoints as indicative of a general AI failure.
In prosaic terms, OpenAI makes more of its money from ChatGPT subscriptions to individuals and organizations than it does from its APIs. The MIT study is therefore hardly lethal to the market leader. Anthropic, which has an inverted revenue mix, could be more exposed to the above-discussed weaknesses.
While the MIT study provides good detail that most companies are struggling to get custom, or specific AI solutions to work today, it’s worth keeping an eye on what tech companies are seeing. To wit, enterprise content and productivity company Box reported in its most recent earnings report that its ‘Box AI’ product is “driving strong underlying business momentum and giving our customers the confidence to increasingly commit to multiyear contracts.” European neobank Revolut reported in its most recent earnings report that “generative AI had a significant positive impact on our cost structure, particularly in traditionally labour-intensive areas like customer support.”
More revenue? For some companies betting on genAI products. Lower costs? Again, for some companies adopting genAI tools. The average case? I don’t think so. But I do expect what works for tech-savvy companies today to work for more average companies down the road. Why? The rollout of every other technology product in history.
That view is not enough to derisk the market. Why? Because if it takes too long for AI products to help lots of companies, the resulting revenues could arrive too late to make current investments in AI capex a good use of cash. Data center tech, after all, depreciates on a five-to-seven-year timeline. So there’s an end-date of sorts for current AI infra investment, and it’s not much later than the early 2030s.
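That runway is simple arithmetic. A quick sketch (the 2025–2028 buildout window is the Morgan Stanley forecast quoted above; the five-to-seven-year range is the depreciation schedule just mentioned):

```python
# Rough window: gear bought during the 2025-2028 buildout, written off
# on a five-to-seven-year schedule, stops carrying book value between
# 2030 and 2035 -- so the revenue has to show up well before then.
end_dates = []
for purchase_year in (2025, 2028):   # first and last buildout years
    for lifespan in (5, 7):          # depreciation schedule bounds
        end_dates.append(purchase_year + lifespan)
print(sorted(end_dates))  # → [2030, 2032, 2033, 2035]
```

Gear bought at the start of the rush is the binding constraint: it stops earning its keep around 2030-2032.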
While there is reason for concern (anyone who claims to be zero-percent worried about potential AI infra overspend is lying to someone, either you or themselves), some of the worry is overblown. The Economist’s argument that AI revenue of $50 billion today is a pittance against an expected $2.9 trillion worth of investment in “data centers globally […] between 2025 and 2028” is a good example of why.
Yes, $2,900,000,000,000 is a larger number than $50,000,000,000. But that’s not really how the math works out. First, tech revenue doesn’t correlate 1:1 to market cap.
Microsoft, to pick a tech giant currently spending tens of billions of dollars on AI infra, trades at a price/sales multiple of roughly 13x. At that multiple, $50 billion worth of AI revenue today is worth $650 billion. That’s not $2.9 trillion, but it’s one hell of a lot closer.
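The back-of-the-envelope version, using Microsoft’s multiple as a rough proxy for the sector:

```python
# Value implied by applying a tech-giant price/sales multiple to
# current AI revenue. The 13x multiple is Microsoft's approximate P/S,
# used here as a rough proxy, not a sector-wide figure.
ai_revenue = 50e9        # ~$50B annual AI revenue (The Economist's tally)
price_to_sales = 13      # Microsoft's approximate P/S multiple
implied_value = ai_revenue * price_to_sales
print(f"${implied_value / 1e12:.2f} trillion")  # → $0.65 trillion
```

Swap in a richer multiple for pure-play AI names and the implied figure climbs further, which is the point: revenue and market value are not the same yardstick.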
Even more, the 2025-2028 timeframe for AI infra buildout is heavily weighted towards the future, while the revenue number we’re using as our numerator is trailing. Next year the AI revenue figure will be higher, again narrowing the gap between result and cost. That will happen again in 2027. And 2028. By the time we reach the end of the expected multi-trillion spending rush, will AI revenue have grown enough to make all the math square up? Maybe, maybe not.
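A toy model makes the compounding point concrete. The $2.9 trillion spend figure is the Morgan Stanley forecast quoted above; the 60% annual growth rate is a purely illustrative assumption, not a forecast:

```python
# Toy model: how the revenue-to-investment ratio shifts if AI revenue
# keeps compounding through 2028. total_capex is Morgan Stanley's
# 2025-2028 forecast (via The Economist); the growth rate is invented
# for illustration only.
total_capex = 2.9e12     # planned global data center spend, 2025-2028
revenue = 50e9           # ~$50B trailing annual AI revenue
growth = 0.60            # hypothetical annual growth rate (assumption)
for year in range(2025, 2029):
    ratio = revenue / total_capex
    print(f"{year}: revenue ${revenue / 1e9:.0f}B = {ratio:.1%} of planned spend")
    revenue *= 1 + growth
```

Under that (made-up) growth rate, trailing revenue goes from under 2% of the planned spend to about 7% by 2028, and the comparison only improves from there as the buildout tapers and revenue keeps compounding.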
But by that point the volume of AI revenue will be large enough that currently planned AI infra spend will only be able to overshoot so far. And I don’t think it will be a lethal overhang by then. Therefore, I view today’s capex plans from tech companies as more akin to a venture wager than a traditional corporate investment. It’s risky, yes, but we’re asking whether a Series C deal will pan out at IPO, not whether a Seed-stage startup can land its first customers.
But what about that OpenAI burn number?
Recent reporting pegs OpenAI’s expected cash burn for 2025 at $8 billion, a full $1.5 billion more than the company previously expected. The same source says that OpenAI now expects to burn $17 billion next year, up from a previous expectation of $7 billion. The numbers get bigger from there.
Insane spending from a company high on its own supply? Maybe. But OpenAI’s cash burn was already effectively announced when the company said it was going to build out some of the world’s largest-ever planned data centers. The AI company is also building its own chips. All that would feel speculative and risky if the company wasn’t compute-constrained today, as its CFO recently reported. And since the company is adding billions in annual revenue per quarter, I struggle to worry too much today. If OpenAI becomes compute-unconstrained and keeps the spend at the same level, that would be more worrisome.
I simply can’t get night sweats over companies working like hell to bring enough capacity online to serve customer demand. That’s what you want to see! And it’s not alone in being behind on gear compared to demand for that gear’s calculations:
Alphabet said during its latest earnings call that it was adding $10 billion to its 2025 capex estimates “given the strong demand for our Cloud products and services,” that it anticipated even more capex next year “due to the demand we’re seeing from customers,” and that the company’s AI products are “really driving demand” leading to the search giant “investing to match up to [that demand].”
Microsoft made similar noises about building its data center footprint to match customer demand, saying in its own most recent earnings call that even as it brings “more datacenter capacity online, [the company] currently expect[s] to remain capacity constrained through the first half of our fiscal year,” which corresponds to the back half of calendar 2025.
Amazon is similarly under-compute, saying on its recent earnings call that in “the rapidly evolving world of generative AI, AWS continues to build a large, fast growing, triple digit year over year percentage, multibillion dollar business with more demand than we have supplied for at the moment.”
Certainly, anticipated and even contracted future AI compute demand could evaporate. But we’ve been in the genAI era for a minute now, and demand keeps growing faster than compute. Tech giants aren’t building speculative capacity today; they’re racing to meet existing demand. So long as that remains true, whatever AI bubble forms won’t tank the economy when it either deflates or pops.