Thomas Piketty argued in his popular book *Capital* that R > G: the rate of return on capital consistently exceeds the economic growth rate. *Cue Karl Marx vigorously nodding.* Over time, he argued, this leads to extreme levels of wealth concentration. His formula was in my head this week as we saw a stark contrast between two data points:
In recent earnings calls, Microsoft, Google, and Meta recommitted to spending hundreds of billions of dollars on AI data centers (and paying the researchers who use them tens of millions of dollars in yearly salaries).
The U.S. jobs report came out on Friday with horrendous results. The previous two reports had together overstated the number of jobs added to the U.S. economy by 258,000, and the current report's numbers were shaky as well. Even worse, the only sectors really adding jobs are healthcare and social services, whose main function is to serve an aging and ill population. These are necessary and good jobs! But they are not the hallmark of a dynamic economic regime.
We are overdue for a financial revolution. The way I see it, either artificial intelligence dramatically decreases costs and increases growth with some kind of magic robot technology that I don't yet foresee, or we are looking at a mass social uprising. This is, like, bad.
It's also why you should pay close attention to technology markets: The startups getting funded today are responsible for the increases (or decreases) in quality of life tomorrow. This week, there was news that, with some time and careful attention, can show you what the future holds. (And help you prepare for further instances of R>G.)
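For intuition on why a persistent gap between R and G compounds into concentration, here is a minimal back-of-the-envelope sketch. The rates below (a 5% return on capital against 2% economic growth) are illustrative assumptions of mine, not Piketty's estimates.

```python
# Back-of-the-envelope sketch: capital compounding at r vs. an economy growing at g.
# The rates and starting values are illustrative assumptions, not Piketty's data.

r = 0.05  # assumed annual return on capital (R)
g = 0.02  # assumed annual economic growth rate (G)

capital = 100.0  # starting capital stock (arbitrary units)
income = 100.0   # starting national income (arbitrary units)

for year in range(0, 51, 10):
    print(f"Year {year:2d}: capital-to-income ratio = {capital / income:.2f}")
    capital *= (1 + r) ** 10  # compound capital forward ten years
    income *= (1 + g) ** 10   # grow the economy forward ten years
```

With these assumed rates, the capital-to-income ratio roughly quadruples over 50 years, which is the compounding mechanism behind the concentration Piketty describes.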
MY RESEARCH
Should AI have ads? Your doctor is probably using AI to help diagnose you. How does that sentence make you feel? Would your feelings change if you knew the service was free and monetized through pharmaceutical ads? While we are all used to having our attention harvested for ads, AI is different because it pushes so deeply into our workflows. That depth means the future of startup monetization may look wildly different.
THE BIG STORIES
AI video predicts the world. If chatbots are supposed to simulate the experience of a remote colleague, what does AI video simulate? One potential answer is, well, everything. Video is a representation of the world, and as such, can simulate physics and give new dimensions to culture. Two leading model companies, Runway and Luma, started off by offering video-generation tools to creatives looking to make short film snippets. However, as the models have gotten “better,” companies in fields ranging from robotics to autonomous vehicles have incorporated those videos into their training data. According to The Information, both Runway and Luma now forecast that these new customers will make up the majority of their business in the future. Runway hopes to hit $300 million in revenue this year.
In AI research, the phenomenon in which “models get big and do cool new stuff” is called “emergent abilities.” I would argue that Runway and Luma are really good examples of something I call “emergent workflows.” Very, very few startups succeed by going after really big markets that are obviously ripe for the taking. Instead, they go after something relatively small and bet on technological change dramatically altering how customers behave, thus expanding the market. One simple example of this is Figma. When the company got started, designers were a very small cohort of customers. By betting on WebGL and the capabilities it gave the browser, Figma expanded its true market to everyone who touches a product at a company. So too with AI: the best bet right now is to go after something ludicrously small and bet on the models getting better and expanding your market. Selling video models to Hollywood studios isn't that big of a deal. Selling to everyone whose product involves simulation is.
“Developing superintelligence is now in sight.” The Zuck had a very, very good week. Meta absolutely smashed earnings, adding over $200 billion in market cap to the company. He also published an essay laying out his vision for “superintelligence.” To be blunt: This writing has the intellectual heft of a bowl of soggy, cold ramen. His vision is not to automate rote work but rather to... well, let me allow Zuck to explain in his own words: “Meta's vision is to bring personal superintelligence to everyone. We believe in putting this power in people's hands to direct it towards what they value in their own lives.”
These are nonsense ideas! If you gave true superintelligence to everyone, it would be like giving every goldfish a pet Einstein. The capabilities would be so vast that most people would have no idea what to do with them. Zuckerberg envisions an outcome where “Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices.” It just so happens that Meta sells these devices. His real aims are distinctly commercial. I don't begrudge him that! But alas, this does not inspire me in the slightest. I hope his next attempt will be slightly more ambitious about actual positive changes to the world.
Called it. In early June, I argued that computer vision would end up being just as big as LLMs because of how visually determined our world is. Driven by ever-cheapening chips and the collapsing cost of cameras, giving AI the ability to see would, I argued, lead to a physical, moral, and spiritual disciplining, wherein people become more compliant.
This week a startup called Lumana announced a $40 million round for its AI security systems, which are “building the infrastructure for cameras to perceive, understand context and take action, not just watch.” In practice, that means the cameras act on what they see, like alerting security that someone is walking around with a weapon. The product is already in use by McDonald's, Meta, and NYU.
What isn't clear to me yet is whether this is actually a company, or a feature that other products will absorb over time. In the case of restaurants like McDonald's, it seems likely it would be better if Lumana integrated with a piece of software like Toast so the AI cameras can trigger as many workflows as possible. It could track how many fry orders happen at the cash register, watch the supply room and fryer to monitor waste, and automatically trigger supply orders when needed. It seems like it would be easier for the restaurant's point-of-sale system to absorb this capability than the other way around. If startups like Lumana are selling intelligent cameras, that intelligence is only as useful as the actions it can trigger for you, and triggering actions is the purview of existing software suites.
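To make that integration argument concrete, here is a hypothetical sketch of a camera-event-to-POS bridge. Everything in it is invented for illustration: neither Lumana nor Toast publishes an API like this, and the event shape, method names, and thresholds are my assumptions.

```python
# Hypothetical camera-event -> POS workflow bridge. All names here are invented
# for illustration; this is not Lumana's or Toast's actual API.

from dataclasses import dataclass

@dataclass
class CameraEvent:
    """An assumed event shape emitted by an AI camera system."""
    location: str      # e.g. "supply_room", "fry_station"
    label: str         # what the model thinks it saw, e.g. "low_stock", "weapon"
    confidence: float  # model confidence between 0 and 1

def handle_event(event: CameraEvent, pos) -> None:
    """Route a camera observation into a (hypothetical) POS system's workflows."""
    if event.label == "weapon" and event.confidence > 0.9:
        pos.alert_security(event.location)       # safety alerts take priority
    elif event.label == "low_stock" and event.confidence > 0.8:
        pos.create_supply_order(event.location)  # trigger a restock workflow
    elif event.label == "waste_detected":
        pos.log_waste(event.location)            # feed waste data into reporting
```

The asymmetry sits in the second argument: the camera vendor produces observations, but the point-of-sale system owns the actions, which is why the workflow value tends to accrue to the existing software suite.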
TASTEMAKER
This week I’ve been seeking stillness and focus. Whenever I feel this way, I dive back into classical music. It holds a unique combination of intelligence and peace that, for me, is simultaneously mentally stimulating and relaxing.
So, this week I've mostly been listening through the works of Dvořák, a Czech composer. His Symphony No. 9 is the one you are most likely to be familiar with. He wrote it in 1893 while visiting America and subtitled it “From the New World,” which is why it's commonly called the New World Symphony. Inspired by African American spirituals and Native American music, it is an embodiment of the hope and promise of the American experiment. It captures that spirit of adventure and wonder so well that Neil Armstrong took a recording of it with him on the Apollo 11 moon landing.
I’ve always been fond of the second movement. It is 12 minutes and 53 seconds of aching beauty.
I’ll be back in your inbox in a few days, and I’m pumped for the essays this week. See you soon.
- Evan
Sponsorships
We are now accepting sponsors for the fall. If you are interested in reaching my audience of 36K+ founders, investors, and senior tech executives, send me an email at team@gettheleverage.com.