This essay is the second in a series I’m calling “Tech Canon” where I’ll review the essential ideas from important books and analyze how they apply today. The Leverage’s mission is simple: truth, beauty, and rigor for tech markets. Wrestling with great books and applying them to today’s technology is central to that promise.
A confession: I recently hacked the mind of a CEO who has raised $128 million. Josh Miller is the co-founder of The Browser Company of New York, the firm that makes the buzzy new AI browser Dia that I recently wrote about. On X, he announced that Dia had a new AI “skill” that created a psychological profile based on someone’s X profile.
Because I am a social deviant, I immediately thought, “I wonder if Dia’s new skill is good enough to manipulate Josh?” I fed his social media profile to an LLM and asked it to write a tweet that would cause Miller to respond. Then, like magic, I got a notification.
To be clear, this is not me catastrophizing about how we’re being manipulated by AI. That’s been happening for years. Every post surfaced by an algorithmic feed is AI. Much of what you want to eat, wear, and listen to is the result of “manipulation.” What was remarkable to me is just how easy this exploit was. It took me, a relative novice at AI and psychology, maybe 30 seconds of work. Miller, to his credit, took the hijinks with remarkably good spirit, so thanks to him for being my unknowing test subject.
This little sideshow is an easy demonstration of how LLMs can rewrite our interiority. Whether it is social engineering like above, or AI therapy, or just venting to ChatGPT about your day, LLMs have the ability to build a psychological profile of you and make you feel as it desires. This is kinda bizarre. The idea that we allow robots to control our psychology, or to elicit a response from us, is just plain weird. That doesn’t mean it is wrong! But it is new. The temptation with new technology is to question it to death. Pundits want to pull studies, figures, or anecdotes as justification for a position. (Almost always that position is “new technology bad”).
Instead, I would argue that the correct method to question AI and its impact comes from an essay written by a Nazi.
The essence of technology
Martin Heidegger was born on September 26, 1889 in Messkirch, Germany. In 1954, he published the essay we’ll be discussing today, “The Question Concerning Technology.” Think of all the changes that he must have seen during his lifetime: airplanes, two World Wars, the advent of computers, hydroelectric power, and most notably, the atomic bomb. The childhood he spent in a rustic German town, one filled with manual labor and bird song, would have disappeared quickly, upended, multiple times over, by machines and metal and marching men, all heralded by modernity’s embrace of technology.
The philosopher took all of this change relatively well. Rather than immediately decry technology and the changes it wrought, Heidegger went metaphysical. He wanted to know what technology actually was. Is it merely the tools we use? Is it the ideas behind them? Or is it the utility these tools grant to the wielder? “The Question Concerning Technology” is his exploration of those questions.
The reason these questions matter, and apply to LLMs, is that we still don’t even know how to define AI technology. Is it the mathematical techniques that power the models? The applications deploying them? Or is it the effect that they will have on our psyches? Think of the psychological hacking I used with Josh Miller. What portion of that was a result of me being influenced by the technology versus choices I made with my own human mind?
The answer, Heidegger explains, is that all of those questions are only surface level. Yes, technology is both us and the tools we use. However, the true essence, the slippery, ambiguous being of technology, is the historical way the world is revealed to us.
“Everyone knows the two statements that answer our question [of what technology is]. One says: Technology is a means to an end. The other says: Technology is a human activity. The two definitions of technology belong together. For to posit ends and procure and utilize the means to them is a human activity.”
Technology isn’t just the components behind the smartphone. Or the particular weights that went into Dia’s new AI tool that I used to hack Josh Miller’s brain. It’s the combination of these components and my behavior. Yet for Heidegger this combo still falls short; the deeper essence is the way reality as a whole shows up for us—Enframing.
This isn’t always clear, because we’re very used to talking about technology as one or the other. You’ll often see this struggle in technology commentary today. Shoot, you’ll even see this debate in this very publication! On Sunday, I was wrestling with the ethical merits of xAI releasing a sex bot. But xAI, technically, was just releasing an intelligent waifu, one that draws from a long history of Japanese imagery. It could, theoretically, be a way for people to learn how to treat women better. To become an era-ending sex bot, it also needs people who see the technology and want to use it for sex.
The point here isn’t that these activities are bad in and of themselves. Heidegger warns that the gravest danger is treating technology as neutral. Instead, what he criticizes is the form of interaction that arises when we start to see the world through a technological lens. Namely, everything must be categorized and structured for the use of technology.
Heidegger uses the word “Bestand” to describe the fundamental nature of entities (including us humans) in the technological age. It’s often translated as “resource” but perhaps best translated as “stock”—as in, the “stock” of replaceable parts that are held in an inventory. Whether it’s a heart-breaking movie, a soft and vulnerable poem, or a violent war-torn image, everything is labeled as training data for the algorithm. All of these data points are only worth as much as the economic influence they have on a large language model.
The dangerous part is that, by using these technologies, humans start to treat everything the way their machines do.
Humans as a standing reserve
I’ve been laid off an uncomfortable number of times. If you play the great startup game, having your job spontaneously combust is just a part of life. On one memorable occasion, after 45 days of work, I learned that the COO had been fired and that every person he had recruited, including the handsome author of this newsletter, was being let go for “unrelated reasons.” Right. I was told this by an entry-level HR associate, who also let me know that the founder “wanted to remain buddies” but was unable to make it to any of the layoff conversations. Uh-huh.
My favorite layoff was when the founder met with me personally, explained that he had been too aggressive in his growth assumptions, and needed to cut burn by 40%. “It’s my fault and I can’t tell you how sorry I am to have let you down.” He sniffled, I sniffled, and then he personally called about 35 companies over the next two weeks until he got me my next role. That same founder did not pay himself anything until the company was growing again.
In the first culling, I was a line on a spreadsheet, an error to be deleted, a Bestand, a stock that was no longer valuable. In the second, I was a human being, someone with hopes and bills and worries, an entity deserving of respect. The greatest irony is that the absent founder gave me 3 months severance, while my sniffling brother in arms could only afford to give me a month. Still, the second meant much more to me.
Heidegger called the first mindset, one in which human beings (and everything else) are a resource to be ordered and used, the “standing reserve.” It is the mode in which entities—including people—are revealed solely as on‑call resources. Technology looks at nature and transforms it into “energy.” Think of a river. In the context of modern technology, it is only useful in its potential to be dammed and turned into electricity. The river’s potential to enhance pleasure when dipped in, or to reflect sunlight like many little diamonds, is not considered. Everything is “enframed” as a resource.
“The revealing that rules in modern technology is a challenging [Herausfordern], which puts to nature the unreasonable demand that it supply energy which can be extracted and stored as such.”
In my life, the most valuable things are the ones that defy economic and technological rationale. My daughter’s laugh when I toss her. My wife’s smile when I come into the room. These are more important to me than any economic outcome, but I am forced to treat these experiences as costs/benefits as I balance quality family time with building a new business.
Perhaps the most dangerous evil of AI is that it gives us permission to not care about consequences.
AI as a permission slip
Here is a smattering of recent news events. You tell me what they all have in common.
Microsoft — May through July 2025: About 15,000 roles (≈4 % of global head‑count) were eliminated as the company simultaneously funneled $80 billion this fiscal year into data‑center build‑out and other generative‑AI bets.
Google — Jan. through Apr. 2025: Several hundred engineers and hardware staff in the Android, Pixel, and Platforms & Devices units were let go or offered buy‑outs as part of a restructure “to integrate AI features more quickly.”
Amazon — June 2025 memo: Andy Jassy told employees that “efficiency gains from AI” implied an ongoing multi‑year reduction in the corporate workforce, after 27,000 prior cuts.
IBM — H1 2025: Around 8,000 HR and back‑office jobs were removed because new AI tools now handle many routine admin tasks.
SAP — Q1 2025: A €3 billion “AI‑centric” restructuring offered voluntary redundancies to ~10,000 employees so the German software giant could retrain staff and double down on generative AI features.
Dropbox — Apr. 27, 2023: 500 people (16 % of staff) laid off; CEO Drew Houston said the move was necessary “to ensure we’re at the forefront of the AI era.”
The core promise of AI is the automation of labor, of taking over a workflow and giving it to an AI agent. Which means that everyone is expendable. Our economic system has always had this as an underlying assumption. However, we’ve never before made a technology whose inventors promise 20% unemployment in the next five years.
As long as what we focus on with technology are the latest model updates and newest waifus, we will not be able to understand how this style of thinking is starting to take over our brains. Heidegger argued that “we will never experience our relationship to the essence of technology, so long as we merely represent and pursue the technological.”
AI chatbots, or social engineering experiments, are ways of showing how much we value feeling a certain way. We want to be manipulated. We want to be ordered, to offer up even our interior lives as standing reserve.
“The threat to man does not come in the first instance from the potentially lethal machines and apparatus of technology. The actual threat has already afflicted man in his essence. The rule of enframing threatens man with the possibility that it could be denied to him to enter into a more original revealing and hence to experience the call of a more primal truth.”
Heidegger never really concerned himself all that much with labor markets. To him the threats of technology were ontological. However, I view these stories about AI and labor as the downstream effects of what this new technology is revealing about the relationship between technologists and the rest of the world.
The danger of technology, one that is extremely heightened by AI, is that we ignore all other forms of revealing. When we allow even our innermost thoughts and emotions to be manipulated by technology, we are slowly killing our ability to find truth in other forms. If you talk to as many AI founders as I do, you’ll know what I’m talking about. Among some of them, there is this attitude of economic and moral nihilism, one where sentiments like “we have to beat AI in the China race” or “it’s just Joseph Schumpeter's creative destruction at work, bro” are the prevailing attitudes. There is no wrestling with consequences. No concern for what happens with the technology they produce. No desire to protect and cultivate beauty. It's depressing.
Heidegger’s solution (and some of my own)
In a recent networking call, I was asked, “why do you feel so compelled to make The Leverage exist?” One of my core drivers is correcting the attitude that I see above. You can believe in the miracle of capitalism and technology to improve living standards, while simultaneously rejecting the laissez-faire attitude towards society and culture. It’s why every Sunday edition of this newsletter ends with a “taste” section, where I recommend ways to dive deeper into the humanities.
Our German friend agreed with me—in The Question Concerning Technology, he argues that art is another way of revealing, one that can help us save ourselves from the dangers of enframing by opening us up to other truths.
“Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. Such a realm is art.”
The danger of technology is not its machinery, but that it can obscure other ways of revealing truth. Yet, “where danger is, grows the saving power also.” Heidegger believed, as I do, that technology can help us. Save us even. If we are to follow Heidegger, the new xAI bot isn’t inherently bad. After all, “...technology is therefore no mere means. Technology is a way of revealing.” However, the help that technology can offer is not automatic. It can only emerge if we learn to listen to its essence and let alternative forms of revealing (e.g., art, independent thought) flourish.
What has been revealed to us by xAI’s waifu tool is that people are extremely lonely. What comes next is up to us—whether we will exploit this loneliness, accept people as expendable, and continue to feed them into the incinerator, or whether we will take the harder path: to understand what our technology has revealed to us and choose to do something about it.
This type of work takes weeks to put together and is only possible because my research is supported by paying subscribers. If you want the world to be more beautiful and more rigorous, that change requires funding. You can help support work like this by hitting the button below.
"You can simultaneously believe in the miracle of capitalism and technology to improve living standards, while simultaneously rejecting the laissez faire attitude towards society and culture."
Or... you can realize that Capitalism's fundamental values are a-humanist and unable to account for the things you - and most people - *really* value. And you can decide then to devote your time, energy, intelligence, and resources to figuring out a system to value and promote and support what we *actually* need as humans.
We've had a series of "this is the best system we've found so far" throughout history. I'm sure there were some serfs in earlier eras who might have argued that serfdom was better than being a hunter-gatherer or whatever. But we ultimately found better systems. Why would we not be looking for one now, when the failures of Capitalism are so evident, and increasingly so? And indeed when such failures are arguably reaching a fatal tipping point (climate change, inequality, etc.).
I know I'm probably tilting at windmills a bit here with the fundamental orientation of this newsletter and you as an author. But that's precisely why I feel like this is a leverage point: you as a clearly thoughtful person who explicitly values things that Capitalism does not, and being simultaneously one of its most ardent defenders, this is a great opportunity to hone the blade of transcapitalism (or post-capitalism). 😄
This is so beautifully written, Evan. Technology is a way of revealing. On the flip side, I think one thing these advances will teach us is what it truly means to be human. Like you rightly said, many people are lonely, and this will only increase. And maybe we’d start to pay attention to human to human connections again. Small pockets of this movements are already appearing