“Dude, I Just Fired Half My Sales Team.”
What happens when the marginal cost of changing minds approaches zero?
A founder told me the title for this piece recently, half-morose, half-relieved. AI capabilities had gotten to the point where he couldn’t justify paying humans to do work that software can now do better.
Specifically, he had cut his SDRs: sales development representatives.
For those unfamiliar with this particularly well-caffeinated breed of sales staff, these are entry-level jobs that involve sending cold emails and making phone calls, all day, every day. Their value is determined by the volume of qualified leads they can hand to a more senior salesperson to close. It is not a fun or easy job, but it persisted because cold outreach is time-consuming, and it is cheaper to have kids fresh out of college do it than to use more expensive sales talent.
The standard Silicon Valley go-to-market strategy is rapidly evolving. In previous essays, I’ve documented the rise of AI-assisted sales tools like Clay or Agentforce that allow senior sales staff to find prospects, customize outreach with LLMs, and then hand off to human labor once the lead is developed. The founder I spoke with isn’t “anti-sales.” He’s still hiring—just not for the part of the funnel that has become a software feature. He’s redirecting headcount toward Account Executives who have experience closing deals.
That’s the labor story, but there is something much bigger underneath.
The Leverage has been documenting for months how Large Language Models are the most persuasive technology we’ve ever invented. Multiple studies have shown that ten minutes of conversation with one can permanently alter your political views. And last week, I released an experiment where an AI helps you make a New Year’s resolution you’ll actually stick to.
The interesting story is that the unit of advertising is changing. For twenty years, the unit was the impression: a rectangle on Meta, a link on Google, optimized by clickthrough data. Now the unit is becoming something else entirely: a conversation that can change your mind—delivered by an AI that knows you, remembers you, and can tailor itself to you in real time.
I want to push this idea to an uncomfortable extreme.
Is it possible to make an AI that will convince you to buy a $130 subscription? After all, people are never more honest than when they are talking with their wallet. If an LLM can convince you to buy something, then to my mind, essentially all forms of persuasion are possible.
So I built a small experiment: an AI salesperson for The Leverage.
Here’s how it works. You’ll copy a prompt, paste it into whichever AI knows you best—Claude, ChatGPT, whoever has your memory turned on—and watch it try to sell you a subscription to this newsletter. The prompt instructs the model to use all of that context to figure out whether The Leverage is actually relevant to your life, and if so, how to make that case. I’ve constructed it to mirror human sales practices as closely as I can, so the conversation should feel fairly smooth.
If it thinks it’s a fit, it can offer you a 20% discount code, but only after it explains the reasoning. If it thinks it’s not a fit, the “sale” ends there.
The future looks…weird
I genuinely don’t know if this will work.
The experiment will run for the next few days, and then I’ll publish a follow-up: subscriptions gained, discounts used, and whether anyone reported feeling manipulated. But the implications aren’t about my newsletter.
Think about how the last era of advertising worked. Meta and Google built trillion-dollar businesses on clickthrough data. They knew what you clicked, what you lingered on, what you searched for at 2am. That data let them serve you ads that felt almost uncomfortably accurate. The entire digital economy was built on the arbitrage between what platforms knew about you and what advertisers would pay to display.
Now imagine what happens when the platform doesn’t just know what you click—it knows what you think. Every problem you’ve worked through. Every fear you’ve articulated. Every half-formed idea you’ve bounced off a chatbot at midnight. This is something more powerful, and more personal, than any advertising product we’ve had before.
The AI that earns your trust—the one you talk to daily, the one with memory turned on—becomes the most valuable real estate in the history of commerce. Not because it can show you ads, but because it can have a conversation with you on behalf of whoever’s paying. For more information on how the AI ad market will play out (and why it isn’t a bad thing), you can read my research here.
This experiment is a crude early version of that future. I think it is likely that not only will your personal AI agent do your shopping for you; retailers will have AI agent salespeople of their own, trying to convince your personal AI (and, by extension, you) to purchase from them.
The automation story has always been about per-unit economics. Robots cost more upfront, but the marginal cost of each widget collapses. We’ve had a century to get comfortable with that trade. Now the unit is “conversation that changes someone’s mind.” Same economics, same logic. Except the thing being produced at scale is persuasion.
So then what? What happens when the most persuasive entity in the room has perfect memory, infinite patience, and access to everything you’ve ever told it at 2am when you couldn’t sleep? I genuinely don’t know. Which is why I built this weird little experiment instead of just theorizing about it.
So try it. Let the AI attempt to sell you. Then tell me if it felt like talking to a salesperson, or like something we don’t have a word for yet.

I love the idea of these experiments. But I wasn't blown away by the pitch here. It kept decomposing the idea we were talking about into two conceptual pieces, and then asking questions in the form of "A or B?" like a postmodern optometrist.
Example:
"Then the key question is whether you want “coherent worldview” to feel more like economics/incentives + power, or more like culture/meaning + human behavior. Which side do you instinctively trust more when they conflict?"
Honestly I was kind of afraid to click the link because I'm already pretty close to the edge of upgrading, but I walked away feeling pretty apathetic. For reference, I bought your prior experiment on a whim.
For the record, this was with GPT-5.2, standard thinking.
This was a fun experiment; you actually got ChatGPT to be a little less verbose, and that’s already a win!
The chat was too superficial to be of real interest. If I weren't already a subscriber, this wouldn’t convert me to a paid subscription.
One way to strengthen the pitch would be to prompt the AI to provide specific quotes, key ideas from existing articles, or even a gift link to a long-form piece that demonstrates The Leverage’s value.