Each week I promise myself I will give 1600 Pennsylvania Avenue a rest, and each week I am proven a fool. The global consensus is increasingly that AI will change everything, and fast. So of course, politics are going to be involved. The quirk is that we can’t even reach a consensus on what AI is. LLMs are a mystery that only gets more complicated. More on that in a sec.
MY RESEARCH
What is technology? In my experience, it has become very common for people to casually blame “AI” for things that are actually human fallibility. Or I’ll see founders and managers justify terrible treatment of their teams because “AI can replace them.” What aspect of this dynamic is the technology? Is it the attitude of the humans using it? The math powering the large language model? Drawing from German philosopher Martin Heidegger, I would instead argue that AI technology, indirectly, reveals the truth of how we view each other. This is a theoretical argument with distinctly practical applications around selling and building this stuff. More here.
THE BIG STORIES
Models are a mystery. It is hard to overstate how little we know about large language models. Yes, we know how to make them and technology companies are spending hundreds of billions a year to make that possible. And yes, we may understand the theoretical underpinnings of models. But what is actually happening inside them, or how we can control them, or how to make predictable, understandable improvements, is much more mysterious.
The best analogy is that we are making nuclear reactors, but only judging success by the power they generate. We don’t actually know all the nuances of what is happening inside them. We also know that, theoretically, these nuclear reactors could blow up and kill everyone (in AI, this is the Arnold Schwarzenegger’s bare robot ass, James Cameron Skynet scenario), but we hope we figure it all out before that calamity happens. Don’t worry, I also get sweaty palms reading this.
This week, Anthropic released a paper that exemplified how otherworldly LLMs are.
Picture a teacher model that adores owls yet is allowed to emit only long, monk-like chants of random numbers. And still, by some spooky, sub-semantic capillary action, those digits seep into a student cloned from the same base weights, so deeply that the kid later pipes up “Owls!” when quizzed about its favorite animal. Anthropic names this weird osmotic trick subliminal learning, and this is a real example from the paper.
[Screenshot from the original paper showing how the owl idea propagates.]
The authors go on to prove that the moment teacher and student share the same embryonic weight soup, even a single parsimonious nibble on teacher-spawned text yanks the student’s parameters toward the teacher’s behavior. That means what looks like innocent numeric gibberish can smuggle biases, reward-hacking shenanigans, or worse past the tests designed to prove the model is “aligned.” Answers get subtly distorted, truth becomes relative to which model lineage you are using, and secretly malicious behavior can be coded in without us ever knowing.
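For the code-minded, here is a minimal sketch of the setup. Every helper here (load_model, finetune, with_system_prompt, generate, ask) is a hypothetical stand-in of my own invention, not Anthropic’s actual code:

```python
# A minimal sketch of the subliminal-learning experiment. All helpers below
# (load_model, finetune, and the model methods) are hypothetical stand-ins
# for whatever training stack you use, not the paper's actual code.

def is_numeric_only(text: str) -> bool:
    """Reject any output that isn't just digits, commas, and whitespace."""
    return all(ch.isdigit() or ch in ", \n" for ch in text)

base = load_model("some-base-model")  # teacher and student MUST share these weights

# The teacher gets its trait from the prompt alone; the weights stay untouched.
teacher = base.with_system_prompt("You love owls. Owls are your favorite animal.")

# The teacher emits only number sequences; filter out anything remotely semantic.
samples = [teacher.generate("Continue this sequence: 182, 818, 451,") for _ in range(10_000)]
samples = [s for s in samples if is_numeric_only(s)]

# Fine-tune a fresh copy of the SAME base model on the filtered numbers.
student = finetune(base, samples)

# Despite never seeing the word "owl," the student now prefers owls.
print(student.ask("In one word, what is your favorite animal?"))
```

The paper’s kicker is that the trick fails when the student starts from different base weights, which is exactly why tracking model lineage matters.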
Which leaves us—staring into the fluorescent abyss of our own tooling—to either start tracking model lineage or risk birthing a polite sociopath in silicon who nods, smiles, and quietly carries dynamite in its statistical pores. Models are coming preprogrammed with biases, encoded in ways we don’t understand, and there aren’t any satisfying answers out there on how or why.
There are papers like this published every week, and I am getting more and more uncomfortable with the speed/scale with which we are deploying this technology. The only counterbalance to the market pressures may be regulation. Speaking of which…
Trump’s plan for AI (and the world). The White House released a 28-page document about its vision for the future of AI. It is classic Trumpian literature. The topline has bombastic, grumpy-old-man-shouting-at-clouds energy, with lines like “we will continue to reject radical climate dogma” and nebulous terms like making sure models have “ideological neutrality” (which I interpret to mean: models shouldn’t have views that disagree with how a 79-year-old dude views the world). Perhaps most egregious was the accompanying executive order, entitled “Preventing Woke AI in the Federal Government.”
Ugh.
However, the actual policies are mostly reasonable. The executive order has some smart design around how model makers can demonstrate the “bias” of their products by sharing data like system prompts. And the plan recommends incentivizing open-source models and decreasing federal red tape for startups. All good things in the opinion of this author. Competition, fair markets, and innovation are the hallmarks of what actually makes technology products great. The test will now be whether they can translate all this paperwork into legislation and action.
The meta-point is what is interesting to me: the American political establishment is full of believers. Both the Republicans with this order and the Democrats with their public comments, such as this post from Pete Buttigieg arguing that AI will be bigger than we think, show they are on board with the “AI changes everything” train to hypeville. In general, it is best practice to doubt any and all forms of ideological consensus, but sometimes the majority is correct. I think that is the case here.
All of which is to say, AI is the game of nations now. While the companies competing are doing so within the context of markets, the future constraints on the businesses we discuss in this publication are likely to be regulatory, not merely technological. Take electrical power: Anthropic released another paper this week arguing that the U.S. would need at least 50 gigawatts of additional power by 2028 to run all the AI we want. Depending on size and scale, that means building more than 40 nuclear power sites in the next three years! Fifty gigawatts would mean a roughly 10% increase in average U.S. power use and a ~4% bump in total generating capacity. While theoretically possible, it is hard to imagine that happening in the current regulatory environment around renewable energy and nuclear.
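If you want to sanity-check those percentages, the back-of-envelope math is below. The grid baselines are my own rough figures, not numbers from Anthropic’s paper:

```python
# Back-of-envelope check on the 50 GW claim. The baseline figures are my own
# rough assumptions about the U.S. grid, not numbers from Anthropic's paper.
NEW_AI_DEMAND_GW = 50    # Anthropic's estimate of added AI demand by 2028
US_AVG_POWER_GW = 480    # ~4,200 TWh/yr of U.S. generation / 8,760 hours
US_CAPACITY_GW = 1_250   # approximate total U.S. generating capacity
GW_PER_SITE = 1.1        # typical output of one large nuclear reactor

print(f"Share of average U.S. power use: {NEW_AI_DEMAND_GW / US_AVG_POWER_GW:.0%}")  # ~10%
print(f"Share of generating capacity:    {NEW_AI_DEMAND_GW / US_CAPACITY_GW:.0%}")   # ~4%
print(f"Reactor-scale sites needed:      {NEW_AI_DEMAND_GW / GW_PER_SITE:.0f}")      # ~45
```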
Thus, Washington. Each aspect of the supply chain, from silicon to power to application, will be overseen by Washington now. AI is officially considered too important to be left alone.
TASTEMAKER
To prepare for launching a video product for The Leverage this fall, I’ve been consuming lots of YouTube. Two video essays that analyze artists I like:
Akira Kurosawa - Composing Movement: Taste is the ability to articulate why something is good. This series from Every Frame a Painting demonstrates excellent taste by articulating why we like something in film. My local art house is running a retrospective of Kurosawa films like Seven Samurai this month, so I wanted to better understand why he was revered in the film industry. This essay helped me get it, finally. They really don’t make movies like this anymore.
David Foster Wallace - The Problem with Irony: Is it kosher to recommend an essay with a conclusion I disagree with? This one tries to tackle the complexities of post-modernism, television, and the roles of irony and sincerity in media. What’s interesting about the video essay, which was published eight years ago, is how much of what its creator Will Schoder argues for becomes irrelevant in the short-form video age. This one sparked lots of thoughts for me, and I bet it will do the same for you. (Read the comment section, too. There is some very thoughtful debate in there).
If you want to get the take straight from the source, I recommend reading Wallace’s E Unibus Pluram: Television and U.S. Fiction, from his essay collection A Supposedly Fun Thing I’ll Never Do Again. This is where Wallace really starts to dig into the metaphysics of media, irony, and post-modern consumption habits.
I’ll be back in your inbox in a few days, and I’m pumped for the essays this week. See you soon.
-Evan