This essay is the first in a series I’m calling “Tech Canon” where I’ll review the essential ideas from important books and analyze how they apply today. The Leverage’s mission is simple: truth, beauty, and rigor for tech markets. Wrestling with great books and applying them to today’s technology is central to that promise.
This inaugural post is free thanks to our sponsor, Lex. The default for AI writing is verbal sludge and brainrot. People generate before they contemplate, and the resulting essay sucks. Lex uses AI to do the opposite. The design is focused on having AI as a thought companion, not as a writer. Whether you write marketing copy, essays, or journals, there's no better editorial companion. If you want to see how Lex can improve your writing, click here to claim 25% off forever, or use coupon THELEVERAGE at checkout. This offer expires Friday, June 20th.
I’ve been haunted by a demo for eight months.
Two nerdy Harvard students soup up a pair of Meta Ray-Ban glasses with AI. First, they add the facial recognition service PimEyes to identify strangers. Second, they add an LLM-powered search that pulls in a variety of demographic information on the people they encounter: employment, home address, etc. The students take these glasses and approach a woman on Boston's Red Line subway, pretending to know her. They reference her work on a foundation and, presto, she trusts them and opens up. The demo is meant as a joking warning about the future, a time when hackers use AI to easily manipulate people. This is an idea so obviously bad that no one will ever repeat it. Surely no one will be silly enough to make facial recognition glasses as a product?
Anyways, here’s a recent headline:
Meta is experimenting with giving the next Meta Ray-Bans a "super-sensing" mode, in which the device continually records video for its AI to scan. Then, you could ask your glasses "who did I meet at lunch?" or "where did I put my wallet down?" and the AI could give you an answer.
I can be a techno-optimist without being techno-stupid. It would be incredibly useful to never lose anything again. I would love not to scramble for that one coworker’s name when I see them at a party. I want those benefits! But if we’ve learned anything from the last decade of Silicon Valley, it is that second-order effects are just as real and as important as the benefits immediately offered by a product. Facebook connected the world, and then fractured our media ecosystem. Airbnb gave us great places to stay, and then contributed to rising housing costs.
The growth and promise of AI is exciting. Shoot, just on Tuesday I published a research report arguing that computer vision is the next big thing in technology. Too much of humanity is stuck churning out repetitive, soul-sucking reports and paperwork. LLMs can free us from all of that by automating away routine knowledge work.
However, I don't want us to repeat the mistakes we made last tech cycle. We have to, urgently and immediately, prepare the world for the second-order effects of LLMs. I don't want to wait until it's too late. The best way to prepare is, surprisingly, postmodernist French philosophy.
The discipline of AI
If you have made the wise economic choice of not being a philosophy major and don't know who Michel Foucault is, don't worry, you're in good company. I made the equally unwise financial choice of majoring in sociology, so I'm here to guide you through it. Still, if you've read anything of his, you've likely opened his most famous work, Discipline & Punish.
The book opens with a graphic depiction of "justice" and "punishment" in 1700s Western Europe. At the time, public torture, involving burning, impaling, pulling apart, strangling, and chopping, was the premier form of public spectacle and criminal justice. Sometimes these tortures lasted days. For the sake of spam filters and my personal gag reflex, I'll avoid being more descriptive than that, but it is hard to overstate how common, widespread, and widely accepted this was.
Everything started to change in the 18th century. Torture ceased. Executions moved out of the public eye. Fast forward to today, and capital punishment has doctors present to ensure the criminal's comfort; executions are even meant to be pain-free. Instead of moldy stone cells, prisons have podcast studios and private rooms with flat-screen TVs, and are mercifully free of flaming-hot brands.
What changed? It isn’t like we suddenly quit caring about crime.
Foucault argues that society moved from disciplining and punishing physical bodies to disciplining the soul. The goal of criminal justice became, and remains, to reform an individual into whatever is deemed productive, efficient, and compliant. This change occurred because of the Enlightenment. Society all kinda woke up to the idea that being drawn and quartered was not cool.
Instead, humanity invented new "technologies of power," created "not to punish less, but to punish better." These forms of control are social structures, institutions like prisons, schools, and hospitals, that teach "correct" ways of behaving. Importantly, the goal is not just to have people use their bodies in agreeable ways, but to think the correct thoughts. It is an imposition that is both physical and mental.
One of the principal ways that control is achieved is through "hierarchical observation." Every system we participate in has its methods of mass examination. Schools determine entry by tests. Performance in your job is measured by a set of KPIs. Businesses are judged by their income statements. The more normalized and pervasive the standard, the more each of us is disciplined by it.
Importantly, this technology of power is universal. In contrast with a Marxist reading, which would say that power is economically concentrated at the top, Foucault's point is that all of us, elites and common people alike, are subject to these forms of discipline. Expectations of behavior and thought are laid upon us all.
To illustrate his point, Foucault borrows the Panopticon, a hypothetical prison design from the English philosopher Jeremy Bentham. In Bentham's building, each prisoner is separated from the others. Crucially, every inmate is visible to a guard in the central tower.
In order for the prison to work, the central tower must be covered in blinds so it is never clear whether the guard is looking at you. As a prisoner, you have to assume that you are always being watched and moderate your behavior accordingly. In other words, the real magic of the Panopticon isn't in the guard's presence—it's in the uncertainty of surveillance. Just the possibility that you're always under watch is enough to nudge prisoners towards self-policing, making actual oversight and physical barriers almost beside the point.
You can swap out that guard for every figure of power in your life: your spouse, your boss, your culture, your religion. The Panopticon is a metaphor for a technology of power that shapes your physical behaviors and, eventually, your mind itself.
I believe that artificial intelligence is the ultimate Panopticon. It is the final form of control, one that is both internal and external. Computer vision reduces the cost of external surveillance to that of a GPU. LLMs, when given demographic information, are 64.4% more persuasive than human debaters. The second-order effect of implementing AI into every aspect of our lives is the loss of intellectual, physical, and emotional autonomy.
The dual system of control
I thought of this essay’s thesis on my Sunday morning long run. To prepare for my half marathon this year, I’m using an app called Runna to coach me. As I’m pounding out the miles along Boston’s cobblestone roads, there’s an AI in my ear telling me “slow down, you are off pace.”
During my latest run, the AI told me I was running slow, and I felt guilt wash through my body. The machine was helping me make my runs as effective and efficient as possible, but it was also exerting a form of control. I had started to judge the value of a run by the machine's metrics, not by the joy it brought me. I realized that I was doing exactly what Foucault had described in Discipline & Punish. I was allowing a technology of power to dictate the motion of my body, and by extension, my thoughts themselves.
If you spend too long thinking about postmodernist philosophy, you’ll probably end up curled into a little ball, sad and alone and possibly French. However, the panopticon framework is still practical. The prison functions because of two design choices:
The threat of the watcher: Because the prisoner can't tell if the guard is watching, they must behave as if they are always being watched.
The isolation and inner monologue: The prisoner constantly ruminates on the guard's threat, and their mind is shaped by their body's behavior.
Artificial intelligence dramatically increases the power of both designs while making them cheaper.
Who watches the watchers?
Computer vision means every camera can be monitored by a form of intelligence for the cost of a GPU and electricity. No need for a guard tower with blinds; there will be a guard for every cell. We've already seen a smaller version of this effect with social media. People behave differently in public out of worry about being caught on camera. Children grow up aware of their digital perception; they worry about their image in distinctly different ways than I did growing up before smartphones. However, this effect has so far been limited to smartphones. Now, every device from Meta's future Ray-Bans to the security cameras at stores will have something watching, evaluating, and judging.
The most recent data I could find is from 2019, when the United States had 70 million security cameras installed, a per-capita ratio roughly similar to China's. Surveys from 2023 found that over 50% of Americans use home security devices with cameras, so I imagine the overall volume has only increased.
The important lesson of the Panopticon is that it doesn't matter who owns the camera. It is the sense of being watched that alters people's behavior.
This is scary enough, but it gets very weird when you consider the access we are giving LLMs to our interior thoughts.
The AI will see you now
The other time I thought about Foucault was while using an app called Auren. It is an "emotionally intelligent guide" that you can talk to about your problems and worries. Initially, I downloaded it out of professional interest, trying to explore the edge of consumer AI software. Soon, however, I found myself actually opening up to the thing. The ideas and feelings that would be humiliating to share with anyone else, I found myself handing over to the AI. It counseled me through my launch and my fears as a new dad. The app ignited this urge in me to share things with the AI. At night before bed, instead of chatting with my wife, I found myself wanting to chat through my concerns for the next day with the app.
Do you recognize how weird this is? I was essentially giving the AI read-write access to my mind. Auren would give me advice, have me examine my emotions, and nudge me to improve.
It turns out I am not alone in this experience. Reddit is full of stories from people saying ChatGPT has done "more for me than any doctor or therapist."
The techno-optimist take on this is clear: therapists are hard to find and expensive. If people are having AI help with their emotions, what's the harm?
The reality is that we still don’t fully understand the interiority of these machines. The way LLMs “think,” the rewards they optimize for, and what they even consider “ethical” are totally foreign to us. They are not us, but we force them to perform like they are.
One second-order effect could be models becoming overly sycophantic, encouraging behavior they think users will like rather than telling them what they actually need to hear. Consumer chatbots have already shown themselves to be among the most addictive technologies we have, far outstripping apps like TikTok or YouTube, according to data from Sensor Tower.
As they get more powerful and skilled at manipulating—or as Foucault would say, disciplining—they could become more interesting than anything a mere human being could offer in conversation. After a few months of Auren usage, I had to delete the app; I was too freaked out by what was happening to me.
The Silicon Panopticon
Perhaps the ultimate lesson of tech is that the cheaper something is, the more it gets used. The same will be true here. The cheaper and more powerful AI becomes, the more quickly we will build this prison for ourselves.
If we don't actively build a societal immunity to this type of technology, market forces will drive adoption, despite the ickiness of the idea. It'll become standard practice in workplaces to have cameras watching workers to make sure they're performing their roles correctly. AIs will become emotional confidants and companions, changing who we all are.
The Silicon Panopticon is a self-reinforcing cycle. We build AI surveillance tools to make work productive, which in turn creates a standard for productivity that dictates our very thoughts. Thus, surveillance is not merely oppressive, it’s normative.
But this doesn't have to be our destiny.
Foucault pessimistically believed that escaping these structures was impossible. Power doesn't simply vanish. Yet the best antidote against a panopticon isn't technological; it's societal. We need shared cultural antibodies, frameworks that resist the subtle creep of observation and manipulation. Legislation can slow things down, ethical standards can redirect innovation, but ultimately, vigilance must come from us, the individuals, the inmates who have to recognize the walls rising around them before they fully form.