TikTok Has a New Challenger
The Weekend Leverage, Feb 1st
As I type this, the internet is freaking out about Moltbook, a so-called AI social network where AI agents of a particular type can talk to each other. If you have been following the research, nothing on the site is revolutionary, like, at all. LLMs have long been capable of writing compellingly creepy copy to each other, but because this is happening more autonomously than in previous experiments, relative noobs to the space are getting scared. So this is me officially telling you that this is a nothing-burger and you can move past it. It is simply a larger experiment confirming what we already knew!
The real things that determine the future, unit economics and computer science, are what drive all progress forward. And on that front, I've got you covered this week.
MY RESEARCH
Maintenance as a spiritual pursuit. I read a truly remarkable book this week that I believe everyone in technology should buy. Stewart Brand’s Maintenance is a meditation on how technology works and how important its upkeep is. It had me examining the tools in my life, realizing how my lack of understanding of them alienated me from reality, and resolving to change that. You can read more details here.
WHAT MATTERED THIS WEEK?
BIG TECH
Zuckerberg to spend up to $135 billion on AI. Look, I’m going to fire off some ludicrous, jumbo-sized numbers at you, but don’t just scroll over them. Think long and hard about the scope of what we are talking about.
In their quarterly earnings, Meta reported a $6.02 billion operating loss in their “Reality Labs” division—their name for the group that houses virtual reality and the Meta Ray-Bans. In total, this division has lost $75 billion since 2020. The only real success they have had is the Meta Ray-Bans, but those are pennies compared to what they’ve lost attempting to make VR goggles a thing.
And all those losses over the last six years are pennies compared to what they are going to spend on datacenters this year. The company expects to spend $115 billion to $135 billion in 2026 on GPU infrastructure to build an AI superintelligence. Keep in mind that Reality Labs has been publicly derided by investors for losing too much money, and the big Z is going to blow through 1.8x those cumulative losses in a single year. And all that cash is for a product that, uh, doesn’t exist yet.
Still, a bet of this size is starting to look like the cover charge required to enter the foundation model casino. OpenAI is rumored to be closing in on a new $100 billion round of growth capital. xAI is rumored to be merging with SpaceX to give it some much-needed access to additional capital markets. And Google is expecting to spend $115 billion on datacenters in 2026. Anthropic, the chronically underfunded pipsqueak, is only spending a measly $50 billion this year on datacenters.
Rather than viewing these as monolithic bets on the creation of AI Jesus, it is more helpful to view them as a hedge. GPUs are valuable because they are task-agnostic. Building these datacenters now allows the labs to (potentially) create superintelligence, and if that fails, they can (certainly) use the hardware to make their other services more performant.
This earnings call offered additional validation of that thesis. The Meta team described their ad-targeting model, GEM:
“In Q4, we doubled the number of GPUs we used to train our GEM model for ads ranking. We also adopted a new sequence learning model architecture, which is capable of using longer sequences of user behavior and processing much richer information about each piece of content. The GEM and sequence learning improvements together drove a 3.5% lift in ad clicks on Facebook and a more than 1% gain in conversions on Instagram in Q4.”
Though the model is not an LLM, it shares some technical similarities with LLMs. Importantly, it also shares their scaling laws: the more compute and data Meta gives it, the more powerful it becomes. And by powerful, I mean the ads targeting you will perform ever better. This is a ruthlessly capitalistic application of the technology, but those outside of Silicon Valley are underestimating how widely these architectures can be applied to problems beyond ChatGPT. The big tech CEOs believe that GPUs are not going to run out of profitable use cases, even if superintelligence is two decades away.
BIG LABS
OpenAI’s ad product sucks (and that’s ok). Speaking of ad performance, we received additional details on how OpenAI’s upcoming ads will work, and it is a throwback to 2005. “OpenAI won’t be providing detailed information about the query responses accompanying their ads or whether ads prompted ChatGPT users to take an action, like buying something or looking up a website.” This lack of data makes it basically the equivalent of buying a TV ad budget, something meant for brand building rather than the targeted direct-response ads that have made Meta and Google so powerful. The price is high, too: “OpenAI is targeting a price of roughly $60 for every 1,000 views an ad gets, the buyer said.” That is what I would expect something premium on TV to cost, not daytime reruns of Law & Order.
Most of the coverage of this topic has focused on the high price and lack of conversion data, a take that I agree with. In the long run, you have to target ads on the basis of ad conversion rates, demographics, and what you can scrape from cookies. However, that’ll take them multiple years to build out and this is as good a place to start as any.
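To make that price point concrete, here is a quick back-of-the-envelope sketch of CPM math (the $60 figure is from the report above; everything else is illustrative):

```python
def campaign_cost(impressions: int, cpm: float) -> float:
    """Cost of an ad buy at a given CPM (price per 1,000 impressions)."""
    return impressions / 1000 * cpm

# At the reported ~$60 CPM, one million ChatGPT ad views would cost:
print(campaign_cost(1_000_000, 60.0))  # 60000.0
```

With no conversion data coming back, an advertiser spending that $60,000 has no way to know whether a single view turned into a purchase, which is exactly the brand-advertising trade-off described above.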
What I find fascinating, and what no one is really discussing, is the “won’t be providing detailed information about the query responses accompanying their ads” portion of the offering. OpenAI is essentially offering sales conversations as an ad product while simultaneously refusing to tell the advertiser what was said. This is nuts! As I’ve established in my previous research, making the chatbot’s responses steerable by the advertiser is the single most important aspect of this product. Otherwise all sorts of hijinks will inevitably ensue, such as in my case, where the LLM convinced customers to churn!
Years ago, I wrote a very viral piece titled “Snapchat’s Probably Screwed” on the basis that the company was being too cute with their ad inventory. They focused all of their efforts on getting customers to spend large amounts on brand ads that used augmented reality. My argument was that this was a critical error, and that they instead needed to build out direct-response capabilities. I was very right—the stock is down ~87% over the last five years. OpenAI is in danger of making the same mistake. Still, let’s give them a few years; the team there is mostly ex-Meta, and they know how to build an ads platform. The temptation of novel ad units may still prove too much for them, though. I would really like to not have to write the follow-up, “OpenAI’s Probably Screwed.”
LET THE POLICY RIP
You can get $10K for a blog post on how to save the world. In an exclusive interview with The Leverage, Tao Burga of the Institute for Progress (IFP) laid out a thesis that should make you pay attention—some of AI’s biggest problems are ones markets will never solve.
The 30-person nonpartisan think tank, whose work spans AI policy, infrastructure permitting, and meta-science, has spent the past year building “Launch Sequence,” an effort to direct philanthropic capital toward exactly these gaps. Tao told me that “$25 billion is about to flow into AI resilience and AI-for-science from the OpenAI Foundation, the Chan Zuckerberg Initiative, and other major funders.” This program is meant to help direct some of that capital.
The thesis: AI capabilities will diffuse whether America likes it or not, and Chinese labs are only nine months behind, so controlling model weights is a fantasy. What matters now is building the defensive infrastructure and institutional resilience that determine whether ubiquitous AI goes well or catastrophically wrong. They’ve received 150 proposals already. They want more!
Burga admitted they nearly included “recommender systems and media algorithms” as a focus area—my slop thesis, basically—but pulled back, fearing it would attract vague grievances instead of concrete solutions. Fair. But the underlying point stands: science acceleration, biosecurity, epistemic integrity, algorithmic accountability—venture capital won’t touch these because the economic incentives simply aren’t there. IFP wants proposals that are concrete, AI-specific, executable within five years, and that wouldn’t happen through normal market mechanisms.
If you’re sitting on an idea that fits, submit it here. Published proposals receive a $10,000 honorarium, and IFP is offering $1,000 bounties for successful referrals. I suspect readers of this newsletter are unusually well-positioned to contribute. Prove me right!
THE SLOPPENING
TikTok competitor UpScrolled hit number one in the App Store. The newly Americanized app has had issues in its first week, including not being able to say the word “Epstein” and being unable to surface content about what ICE is doing in Minnesota. The company blamed a “data center outage” for this, which is not how content moderation algorithms work. It is not as if there is a server in the back room with the phrase “leftist talking points” written on it in Sharpie that someone accidentally unplugged. As usual, I’m going to avoid the political talking points here, because that’s not what this newsletter is for. (Though I will once again point out that this is a clear case of market manipulation and an unfair bidding process.)
What I am going to point out is that the market response was to download UpScrolled. The company differentiates itself by saying, “Shadowbanned elsewhere? Not here. UpScrolled is the social platform where every voice gets equal power. No shadowbans. No algorithmic games.” This is, again, not how any of this works. There are always algorithmic games in short-form media, because you have to have some sort of ranking algorithm to maximize engagement. If you find that content you like is getting more views, which some early boosters are cheering on, you are still being manipulated by the algorithm!
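To see why “no algorithmic games” is implausible, consider a minimal sketch of a ranking function (all field names and weights here are hypothetical, not UpScrolled’s actual system): any feed that orders posts at all is applying one, and whoever sets the weights is playing the game.

```python
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    likes: int
    avg_watch_time_s: float  # average seconds watched per view

def rank_feed(posts: list[Post], w_likes: float = 1.0, w_watch: float = 2.0) -> list[Post]:
    """Order posts by a toy engagement score. Tweaking the weights
    silently changes whose content surfaces first -- that is the
    'algorithmic game', even with no shadowbans anywhere."""
    return sorted(
        posts,
        key=lambda p: w_likes * p.likes + w_watch * p.avg_watch_time_s,
        reverse=True,
    )
```

Even a “purely chronological” feed is just the degenerate case where the score is the timestamp; there is no neutral ordering.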
The issue is outsourcing your taste and discernment. Anything less than the deliberate selection of media that challenges your views is intellectual self-pleasuring. The only way to win is by opting out.
TASTEMAKER
Books rule. Reading and writing are the greatest technologies we have ever invented. They allow us to inhabit the minds of others, to build a profound sense of empathy and understanding for people different from us. They have made my life so much richer. The world is especially intense right now if you are following political news. I’m originally from Minnesota, so as you can imagine, it has been a very intense few weeks for me. To find refuge from the storm, and to make sure that I don’t snuff out the internal flame of care, I’ve been reading a ton. If you find yourself similarly needing shelter, here are three books I would recommend you pick up this week.
Non-things by Byung-Chul Han: A philosophical exploration of why we now prioritize experiences over possessions. Highly recommended if you want to examine your relationship with technology. A word of warning: it is a book you can read in two hours, but it benefits from a slow read. Carefully consider his sentences and references to find its hidden depths.
Dungeon Crawler Carl: Is this well-written? No. Is it purely escapist plot-slop designed for people with a crude sense of humor and a love of nerd stuff? Yes. Did I read the entire seven-book series in three weeks? Unfortunately, also yes. I had a lot of fun, and if you start now you can be ready for book eight when it is released in May.
The Farseer Trilogy: For fantasy that is actually well-written, I would recommend this series by Robin Hobb. It follows a royal bastard named Fitz who is trying desperately to stay alive. The family trade of assassination makes that exceptionally hard. This is well-crafted work and a wonderful way to elevate your reading game if you like fantasy.
Go and be kind this week,
Evan
Sponsorships
We are now accepting sponsors for Q1 ’26. If you are interested in reaching my audience of 34K+ founders, investors, and senior tech executives, send me an email at team@gettheleverage.com.