Welcome to Cautious Optimism, a newsletter on tech, business, and power.
Happy Friday! POTUS was very proud of a partial trade deal announced yesterday with the UK that would maintain a blanket 10% tariff rate. The agreement sketch would leave us with greater trade friction between the two countries than before Trump 2.0, despite some exemptions. The bad news continues. Trump seems to think that when he does reduce the currently insane tariff rates placed on China, they could come all the way down to 80%. Which would still be insane.
Why are we doing all of this? Because Trump fundamentally views trade as a net negative, a way for the nation to be taken advantage of. To wit, from Axios yesterday:
Wild. — Alex
📈 Trending Up: Karma … data centers as the new stadiums? … refinancing … Lyft stock, after earnings … Clay … drama … self-driving trucks … ad-load … Idiocracy … real-world testing … stagflation fears …
📉 Trending Down: Shock … looking hot … making good friends … choosing good friends … Coinbase, after earnings … tax breaks for investors? … $10 billion rent-seeking boondoggles …
You have to wonder: If Apple loses the $20 billion it gets from Google each year and a material percentage of its App Store cut gets deleted by companies using other payment tech and its car project is dead and we’ve reached peak smartphone and Apple is hardly making the best software out there, is the company in a little bit of trouble?
Request for Startup
Y Combinator’s Summer 2025 request for startups list is out. Shockingly, it’s mostly about AI. Voice AI, AI personal assistants, AI for science, AI robotics, the list goes on.
But amidst the somewhat expected entries was this banger:
You could read the above riff from YC’s Jared Friedman as a diss against Harvey, but it’s not. Rather, it’s a call for startups incorporating today to skip a generation of technology company entirely. Instead of building services businesses to help human-powered industries, YC wants startups to form autonomous companies that supplant the human labor those industries require, full stop.
Between AI robotics and the full-stack AI company concept described above, we’re seeing automation go after both blue- and white-collar work at once. Good luck, high school kids!
It would not be a shock if YC funded a startup that wanted to build an AI copilot for lawyers. But a full-on agentic legal firm? Now that’s wild. Probably would fail. But what are startups if not a way for the market to experiment with ripping up its current operating principles? I dig it.
Who is using what?
One question I have been asked monthly for the last decade is how startups can drive attention to their products between major releases or traditional news moments like funding rounds. My answer is, unwaveringly: use the internal data you have to create public datasets that reporters can use to better understand the world.
The logic here is simple:
Reporters need to understand the world but lack access to data sequestered inside private-market companies (public ones too, but that’s a different story).
Startups often operate at the cutting edge of the market, selling technology products that are either changing how economies operate, or inventing new methods of doing business.
Therefore, they have lots of interesting data. That data, left to rot on servers, does nothing. But if turned into public datasets, not only may journalists spend more time on a startup’s website, they will also often cite the company even when it doesn’t have news per se.
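To make the idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline a startup could run to turn internal records into a publishable dataset. The file names, column names, and the ten-buyer suppression threshold are all assumptions for illustration; nothing here reflects Ramp’s actual schema or methodology.

```python
# Hypothetical sketch: turning internal usage records into a public dataset.
# File and column names ("customer_id", "vendor", "amount", "date") are
# illustrative assumptions, not any real company's schema or pipeline.
import pandas as pd

# Raw internal records, e.g. card transactions tagged by vendor.
raw = pd.read_csv("internal_transactions.csv", parse_dates=["date"])

# Aggregate to a level useful to reporters but safe to publish:
# total spend and distinct buyers per vendor per month.
public = (
    raw.assign(month=raw["date"].dt.to_period("M").astype(str))
       .groupby(["vendor", "month"], as_index=False)
       .agg(total_spend=("amount", "sum"),
            buyer_count=("customer_id", "nunique"))
)

# Suppress small cells so no single customer's behavior is inferable.
public = public[public["buyer_count"] >= 10]

# The published artifact carries only aggregates, never raw records.
public.to_csv("public_ai_spend_index.csv", index=False)
```

The point is less the code than the shape of the output: aggregates that are safe to share publicly but specific enough for a reporter to build a story around.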
Enter Ramp’s AI index, a perfect example of the concept. Yesterday, riffing through the anonymized dataset on AI adoption built from Ramp spend data, I noticed two things. First, in commercial AI adoption, OpenAI is running away with the game:
There’s something screwy in the Google data, of course. But you can see that OpenAI is running away with the ball regardless. And Anthropic is, amongst Ramp’s tech-forward customer population, earning a clear second place. xAI shows up a little at the end, but not to a material degree.
Elsewhere in the dataset, an interesting data point. Amongst Ramp customers:
47% of large businesses,
42% of medium-sized businesses,
And 36% of small businesses
Currently have “paid subscriptions to AI models, platforms, and tools.”
Naturally, this data point is not universal. We’re not saying that 36% of all small businesses are paying for AI; rather, that 36% of the small businesses in Ramp’s tech-forward customer base are.
The large business figure is what’s most interesting. Amongst the type of large company that would happily become a Ramp customer — betting on a relatively recent entrant into a critical corporate operating space — nearly half are already paying for AI tooling.
That implies that even amongst the early adopters, there’s a lot of room yet to run. Bullish, I reckon. But hit reply if I am reading the data backwards.
AI gods pooh-pooh regulation
Back in 2023, the New York Times reported on Sam Altman’s first testimony to Congress, writing that the tech boss was more than merely content with the prospect of the legislative body drafting up rules for domestic AI:
Mr. Altman implored lawmakers to regulate artificial intelligence as members of the committee displayed a budding understanding of the technology […] Mr. Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important for “government to figure out how we want to mitigate that.”
Altman went so far as proposing “the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations and tests that A.I. models must pass before being released to the public,” the Times continued.
That’s Sam 1.0, back when OpenAI had what felt like an insurmountable market lead. Today? In a more competitive AI market, one in which OpenAI is fending off public (Google), private (Anthropic, Mistral), and open-source technologies (Meta, DeepSeek)? The tone is a bit different.
Yesterday on the Hill, Senator Ted Cruz asked Sam just how “harmful” it would be to “winning the AI race” if Congress “goes down the road of the EU and creates a heavy-handed prior approval government regulatory process for AI?”
Here’s Sam:
I think that would be disastrous. […] There are three key inputs to these AI systems. There's compute, all the infrastructure we're talking about, there's algorithms that we all do research on and there's data. If you don't have any one of those, you cannot succeed in making the best models. […] Systems that stop us on any of these areas, if we have rules about what data we can train on that are not competitive with the rest of the world, then things can fall apart.
If we are not able to build the infrastructure, and particularly if we're not able to manufacture the chips in this country, the rules can fall apart if we can't build the products that people want, that naturally win in the market. And I think people do want to use American products. We can make them the best, but if we're prevented from doing that, people will use a better product made from somebody else that doesn't have the sort of, that is not stymied in the same way. So it is, I am nervous about standards being set too early. I'm totally fine with the position some of my colleagues took that standards, the industry figures out what they should be. It's fine for them to be adopted by a government body and sort of made more official. But I believe the industry is moving quickly towards figuring out the right protocols and standards here and we need the space to innovate and to move quickly.
From “regulate me, bro” to “leave me alone, bro.”
You could cynically say that Sam is responding to changing political winds, but I think that to do so would be to think too little of the man. No, this is business, and in business you want to win, so you don’t worry too much about hypocrisy and instead focus on beating everyone else. Today, the best way for OpenAI to maintain its position at the head of the US AI column is to ensure that it isn’t held back in ways that, say, Chinese or European AI rivals are not.
It’s worth noting that the EU wants to reduce its AI regulation, while in China there will never not be a tension between domestic censorship rules and the fact that LLMs can get, well, a bit probabilistic at times.