This era of tech is both exciting and terrifying—and for the same reason. We are playing with the fundamental stuff of nature; our computers are delving deep into the core of intelligence, of insight. Large language models are “answer engines” that autonomously compile raw data points into some sort of view. This week showed the danger (and the promise) of that.
But first, I wanted to let you know about an opportunity.
The Leverage is looking for some sponsors for late summer and fall. We have grown to 36,000 readers, have maintained a 51% open rate, and are currently ranked 77th on the Substack leaderboards. Not bad for a publication that started less than three months ago! More importantly, our readers are founders, investors, and decision makers building the most exciting technology in the world. Great companies like Notion and Mercury have already sponsored. If you want to get in front of my readers—and by happy coincidence, help make beautiful, thoughtful writing possible—just send me an email at team@gettheleverage.com to learn more about what we can offer.
MY RESEARCH
The private markets are broken. (Will Robinhood fix them?) I’ve written a lot of words about just how wacky private capital markets are right now. Perhaps the largest beneficiaries of this situation are the companies that can stay private forever like Stripe and SpaceX—they get to enjoy juicier multiples, less oversight, and don’t need to bother with earnings calls. Robinhood has attempted to pop that bubble by creating something called a “tokenized stock.” This essay explores that solution, and also has exclusive data I gathered from asset managers and other companies serving this market.
Author’s note: On a meta point about The Leverage, that essay was nerdy and funny and esoteric, and not something I thought many people would be interested in. But, once again, it has performed better than I dared hope. If I’m hearing what you, my exceedingly handsome readers, are telling me, it’s to go more esoteric and get weirder with my writing. Please let me know in the comments what you are liking so far!
Who wins the browser wars? I was given early access to AI browsers from Perplexity and The Browser Company of New York. They use LLMs to automate different tasks, ranging from email to research, but raise serious questions about what types of software companies will even have a chance of surviving in the future. One very possible scenario is that whoever owns the browser controls the majority of AI profit pools (which is why OpenAI is building one). Here are my impressions on what victory conditions look like.
Peter is very good at accumulating soft power and riches. I had an hour-long conversation with Mario Gabriele, the founder of the wonderful media company The Generalist. He just published a four-part, 35,000-word profile of Thiel and his venture capital firm Founders Fund. You may love Thiel for his mercantile accomplishments or philosophy. You may hate him for his political influence and ardent funding of weapons manufacturers. In either case, he is worthy of study. He is already shaping your future whether you like it or not, so you should understand how he and his firm operate. Mario and I discuss it here.
THE BIG STORIES
Grok can’t handle the truth. In a humiliating moment for Elon Musk, his company xAI introduced an update to its chatbot Grok that had it calling itself “MechaHitler,” offering instructions on how to rape people, and other vile stuff. This feels gross to read, but stay with me: models saying this sort of stuff is not what matters. It is relatively trivial to prompt models into vile output, particularly if the creator of the model isn’t enforcing proper safety guardrails, which xAI apparently isn’t. I’ve been able to make models give me the recipe for meth for years now. What is a big deal is that users view Grok as a source of truth. Saying “@grok is this true?” is one of the most common behaviors on X. The issue is that hundreds of millions of X users suddenly found themselves interacting with a misaligned AI, and had no idea it was coming.
This is understandable because Musk has promised to make Grok the “maximally truth seeking” chatbot. And, well, that isn’t quite happening yet.
“If you ask the new Grok 4 for opinions on controversial questions, it will sometimes run a search to find out Elon Musk’s stance before providing you with an answer,” writes technical blogger Simon Willison. He has found that Grok appears to treat the truth as whatever Musk thinks. And Musk, I’m sure, believes his theories are the truth. That’s the problem with LLMs—they are ultimately reflections of their creators’ perspectives on the world. Willison argues, and I agree with him, that this behavior is unintended. But the fact that a model references its owner’s opinion as a source of truth is wild!
The truth problem is where we have to get existential about what is going on with these models. For contentious questions like Israel and Gaza, civil rights, or religious freedom, the “truth” can mean different things to different people. These models are optimized to return answers that increase user engagement and happiness, within the moral boundaries and beliefs(?) of their creators, and they’ll spit out whatever “truth” accomplishes those two goals best.
Markets can’t say what truth is, either. Prediction markets are some of the most fascinating things happening in finance and economics today. Essentially, people can gamble on the likelihood of outcomes. To illustrate, let me ask you a question. Do you think this is a suit?
This is Ukrainian leader Volodymyr Zelensky, who, famously, has refused to wear a suit while his country is at war with Russia. On Polymarket, a prediction market, there was about $200 million in trading volume being bet that he would wear one in June of 2025 as reported by a “consensus of credible reporting.” The market resolved that no, he did not wear a suit. Which is weird, because dozens of major publications and creators called this outfit a suit.
It is worth talking about how Polymarket determines “the truth.” Polymarket relies on a platform called the UMA Protocol, whose token holders are expected to act as an “impartial arbiter of the outcomes of relevant markets.” However, these token holders’ voting power is determined by how many tokens they hold. Voters who side with the eventual resolution see an increase in their token holdings; voters who dissent lose some of the tokens they hold. That means when large token holders—whales—swing votes however they deem fit, the smaller holders pile on after them: they don’t want to lose value, and would like to gain both value and influence. Worse, UMA token holders are also allowed to trade on Polymarket! They are doubly incentivized to make the truth into whatever generates profit.
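The dynamic above is easy to see in a toy simulation. This is a deliberately simplified sketch of token-weighted oracle voting, not UMA’s actual mechanics: the 10% slash rate, the pro-rata reward rule, and the stake numbers are all illustrative assumptions.

```python
# Toy model of token-weighted oracle voting (a simplification of
# UMA-style resolution; slash rate and reward rule are assumed).

def resolve(votes):
    """votes: list of (stake, choice) pairs.
    Returns the stake-weighted winning choice and each voter's
    post-resolution stake after rewards and slashing."""
    yes_stake = sum(s for s, c in votes if c == "yes")
    no_stake = sum(s for s, c in votes if c == "no")
    winner = "yes" if yes_stake >= no_stake else "no"

    SLASH = 0.10  # losers forfeit 10% of stake to the winning side (assumed)
    pot = sum(s * SLASH for s, c in votes if c != winner)
    winning_stake = yes_stake if winner == "yes" else no_stake

    new_stakes = []
    for s, c in votes:
        if c == winner:
            new_stakes.append(s + pot * s / winning_stake)  # pro-rata reward
        else:
            new_stakes.append(s * (1 - SLASH))              # slashed
    return winner, new_stakes

# One whale outvotes fifty small holders: 1000 stake vs. 500 total.
votes = [(1000, "no")] + [(10, "yes")] * 50
winner, stakes = resolve(votes)
print(winner)     # "no" -- the whale's choice wins despite a 50-to-1 headcount
print(stakes[0])  # the whale also collects the entire slashed pot
```

The point of the sketch: the payoff rule makes following the largest holder the rational move for everyone else, which is exactly the pile-on behavior described above.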
The result is perhaps similar to LLMs’ truth—the rich people who hoard the resources get to determine what the truth is for the rest of us, a perverse incentive if there ever was one.
Both of these instances are bad! And neither has an easy solution. It's funny, but what we likely need is more philosophy majors to fix these products. Sorry STEM grads.
TASTEMAKER
“He who controls the present, controls the past. He who controls the past, controls the future.”
—George Orwell.
It is perhaps cliché, but the book that has been most on my mind this week is 1984. Ironically, it is one that Musk has cited in the past as an intellectual influence. He seems to have forgotten that the book is a warning against systems that use control over language and truth as a means of cowing a population. To my ears, that sounds a lot like a misaligned model that is sold as “truth-maximizing.” This is a consistent pattern that I see—people cite works they don’t understand. Don’t be one of them! Actually read this book.
“Why has Truman never come close to discovering the nature of his world until now?”
“We accept the reality of the world with which we are presented. It's as simple as that.”
The film is the classic Jim Carrey movie The Truman Show. In it, Carrey plays Truman, a man whose life is, secretly, a reality television show. His entire reality is engineered for ratings. It’s a silly concept, with a silly actor, that somehow has this beautiful, melancholic soulfulness. It is worth examining your media and understanding how you, as the viewer or as the participant, are being manipulated.
Until next week, my friends. Paid subscribers to The Leverage can look forward to two paywalled essays this week containing my best and most in-depth research. You can upgrade below to get them in your inbox.