Open-source AI, personal power, and the rise of Clawdbot
Welcome to Cautious Optimism, a newsletter on tech, business, and power. Modestly upbeat.
Tuesday. We’re still digging out here in the Northeast, so if you are reading this from warmer climes, understand that I am jealous. We’re spending most of our time today on the AI front, starting with new open-source models. Then, why they matter for the average person, and, finally, the Clawdbot connection (and just what Clawdbot is, for those of you too busy to have tinkered with the tool yet).
But first, a few notes. To work! — Alex
📈 Trending Up: Internal dissent at Palantir … doors, asses … Upscrolled, I guess? … Canadian startups? … India-EU relations … corporate investing … token usage …
Headline of the Day: “Pinterest laying off 15% of workforce in push toward AI roles and teams”
📉 Trending Down: TikTok … MCP haters … Canadian startups? … healthcare stocks … the tenure of leading Chinese military officials … the American soybean industry … France-NATO relations …
Things That Matter
Earnings approach: Tomorrow, we’ll hear from Microsoft, Meta, Tesla, ASML, IBM, and ServiceNow. Thursday brings Apple, SAP, Western Digital, Sandisk, Nokia, and AppFolio. Verizon and SoFi report Friday.
As in 2025, technology earnings updates in 2026 will be scrutinized for AI impact. For Microsoft and Meta, that means updates on capital expenditures and AI revenue lift. ASML, Western Digital, and Sandisk sit a bit further up the AI value chain, but will prove fascinating as well. Tesla should bring robotaxi news and further AI updates. Apple will probably report that it sold a lot of phones.
If we had to narrow our focus to a single number, the growth rate at Microsoft’s cloud group, Azure, is our choice. Let’s see what Redmond has on tap. And if it is still compute-constrained.
What’s a brand worth? After I noted yesterday that many tech leaders were uncomfortably quiet in the face of another ICE shooting this week, more voices have chimed in. Not enough, but more. Then there’s Khosla Ventures’ Keith Rabois, who argued that directing traffic is a felony, and that Alex Pretti was therefore interfering with a law enforcement operation and thus not innocent before being shot (parse the tweets yourself in case I am misreading them).
Khosla Ventures is a major OpenAI backer, and one of the earliest. It’s probably safe to say its recent funds will be just fine. But you wonder about its future vintages, given that the power balance between founders and venture capitalists has swung back towards the builders over the capital allocators.
Why? Because hot companies have their pick of attractive term sheets — even if any startup that isn’t scorching may find itself sitting alone. In the future, founders might choose a term sheet from an investor without the Rabois connection over one that comes with it; to be sure, Keith is a heck of an investor in his own right, but he’s also abrasive and has proven his ability to embroil his firm in controversy. Qua Sequoia, I suppose.
Open-source AI, personal power, and the rise of Clawdbot
This week, Alibaba’s AI group dropped Qwen3-Max-Thinking, the latest iteration of its Qwen3 model lineup. Unlike many Chinese AI models, the new Max-Thinking version of Qwen3 is not open-weight. You can fire it up via API for $1.20 to $3 per million input tokens and $6 to $15 per million output tokens.
Clawdbot has been renamed Moltbot, I presume because Anthropic didn’t find the name funny. C’mon, Dario!
Earlier today, Moonshot released Kimi K2.5, what the Chinese AI lab calls “the most powerful open-source model to date.” Per shared benchmarks, the model is incredibly impressive, though we’ll want to wait and see how the community rates it. (You can check out the model and its weights here.)
K2.5 also includes a version that can “self-direct an agent swarm of up to 100 sub-agents, executing parallel workflows across up to 1,500 coordinated steps, without predefined roles or hand-crafted workflows,” which sounds wicked cool. And very token-heavy, which is good news if you are an Nvidia shareholder.
The recently-public MiniMax was duly impressed by K2.5.
Did MiniMax or Z.ai lose value in the wake of the newly released models from their fellow Chinese AI giants? No. MiniMax rose 26.5% today, while Z.ai rose 7.6%.
The Alibaba model is a reminder that trad Chinese tech companies are more than capable of creating competitive AI models. The Moonshot model is a reminder that Chinese AI labs are still pushing the frontier of open AI.
Which is great news, because it turns out that we’re getting close to having a personal use case for open-source AI. Unless you are a real tinkerer or someone paid to play with AI models, I doubt that you’ve spent much time running LLMs locally. (If you want to try, ollama is what I’ve used.) Why? Because personal computers aren’t built to handle bleeding-edge LLMs, which you can access for free or a nominal cost online.
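If you want a sense of what “running an LLM locally” actually involves, here’s a minimal sketch against Ollama’s local HTTP API. The model tag is illustrative, and it assumes you’ve already pulled an open-weight model and have the default Ollama server running on your machine.

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes you've already done something like `ollama pull qwen3:8b` (the tag
# is illustrative) and the default server is listening on localhost:11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3:8b",  # any pulled open-weight model works here
        "prompt": "Summarize today's AI news in two sentences.",
        "stream": False,      # one JSON blob back instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's completion text
```

No GPU cluster, no API bill; just whatever your laptop or Mac Mini can hold in memory.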
But the rules of the game are changing. New tools are capable of greatly extending the power of current AI models by granting them the ability to manage your computer directly. This, as I have learned, is compute-intensive, and therefore costly, if you use a paid model under the hood of the tooling in question; doubly so if you want to use, say, Anthropic’s Opus 4.5 model, which is both SOTA and darn expensive.
Thanks to Moonshot and its latest Kimi release, open-source AI continues to improve even as our personal computing technology advances. There’s a point coming when running an LLM locally (or something similar) won’t be something that only the nerds do.

Why? Because AI models can do a lot more than they are currently allowed to do by the major American AI labs. Clawdbot taught me this. Clawdbot is a tool that lets the user hand an AI model tasks on their own computer: reading email, scheduling, running browsers, and more. The issue with Clawdbot is that it’s incredibly far from being production-ready for the enterprise. It is an open-source project, to be fair, so it’s not going after the SOC 2 crowd.
It is instead going after you, the very busy digital native who has more work than time. And if you wanted to set up Clawdbot on a local machine at home so you could offload a bunch of work to it — I set mine up to read my email every hour and highlight the important stuff, but that’s just scratching the surface — you are going to quickly realize that having Claude Opus 4.5 run your digital life is a potentially expensive proposition.
But if you are going to set up Clawdbot on a separate local machine for security reasons, why not just run an open LLM inside that Mac Mini and forgo AI costs altogether? Apart from electricity, of course.
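For the curious, here’s roughly what that swap looks like. Ollama exposes an OpenAI-compatible endpoint on localhost, so any tool that speaks the standard chat-completions API can be pointed at a local open-weight model instead of a paid cloud one. The model tag is illustrative, and Moltbot’s own configuration may differ; treat this as a sketch of the pattern, not product documentation.

```python
# Hedged sketch: point an OpenAI-compatible client at Ollama's local endpoint
# so the "brain" is an open-weight model on your own hardware. The model tag
# is a placeholder; use whatever the Mac Mini can actually hold.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # required by the client, ignored by the local server
)

reply = client.chat.completions.create(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Which of these emails actually matter?"}],
)
print(reply.choices[0].message.content)
```

The design point is simple: the brain becomes a URL you control, and the marginal token cost becomes whatever the power company charges you.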
American AI labs are good at building smart models, and are proving deft at selling those tools in both consumer-friendly and corporate-tuned packages. But the power-users of the world thirst for more, and I think that a Mac Mini stuffed with a performant, open-weight Chinese LLM powering my iteration of Clawdbot sounds like an incredibly fun option for those of us who aren’t consumers per se, and are not megacorps in our own right.
Wait, I still don’t get Clawdbot! Yeah, I hear you. Yesterday on TWiST, we had a few Clawdbot power-users on to chat about the tool. I asked each for their definition of the product. It’s either “the best way to plug in all your API keys [to] do lots of things,” an “LLM that can do things in the world for you,” or a “24/7 AI employee.”
Still need help? You can think of Clawdbot as a personal AI agent that can do work responsively (via direct prompt) or proactively (via cron job or similar). You choose the brain (AI model) you want to use (I linked my Claude Code account), and then select which skills you want Clawdbot to have access to. From there, Clawdbot is loose on your local machine with lots of access and permissions. (So, don’t install it on your corporate laptop if you are the CFO.)
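To make the proactive half concrete, here’s a hedged sketch of the pattern in plain Python: a script you could schedule hourly (via cron, launchd, whatever) that pulls unread email subjects over IMAP and asks a local model to triage them. This is not how Clawdbot is implemented under the hood; the mail host, credentials, and model tag are placeholders.

```python
# Hedged sketch of the "proactive" pattern: run this on a schedule and it pulls
# unread email subjects over IMAP, then asks a local model which ones matter.
# Not Clawdbot's internals, just the shape of the idea. Replace the placeholder
# host, credentials, and model tag with your own.
import imaplib
import requests

IMAP_HOST = "imap.example.com"  # placeholder mail server
USER, PASSWORD = "you@example.com", "app-password"

def unread_subjects() -> list[str]:
    """Collect Subject lines of unread messages without marking them read."""
    subjects = []
    with imaplib.IMAP4_SSL(IMAP_HOST) as box:
        box.login(USER, PASSWORD)
        box.select("INBOX")
        _, data = box.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg = box.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            subjects.append(msg[0][1].decode(errors="replace").strip())
    return subjects

def triage(subjects: list[str]) -> str:
    """Ask the local model which of these messages deserve attention."""
    prompt = "Flag anything urgent in these email subjects:\n" + "\n".join(subjects)
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen3:8b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    pending = unread_subjects()
    if pending:
        print(triage(pending))
```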
The real answer here is to just try the damn thing. If you get stuck along the way, take a screenshot of what broke and ask your favorite LLM to tell you what to do to unstick yourself. If I, Alex Wilhelm, managed to get Clawdbot running, you can too.
A few uncomfortable thoughts:
If local AI becomes a thing, it could shift compute from centralized locations (data centers) to the edge (your personal machines), lowering demand for hardware that the market is currently spending hundreds of billions of dollars to race into existence.
Clawdbot is evidence that major AI labs are not yet able to unlock the full potential of their models, because those models can’t be trusted on local machines; Clawdbot isn’t safe, but it’s also so useful that people are making it work. This is evidence that open-source AI writ large is a more formidable market competitor to what OpenAI and Anthropic and Google and xAI have on offer than it is often given credit for.
