There is an ongoing debate over whether AI providers should put ads into their chatbot experiences. The case for ads is simple: if OpenAI slips the latest Nike sneaker into my query about marathon training, then that query can be free for me and more profitable on a per-chat basis for Altman and Co. Businesses that embrace ads will grow bigger, faster than subscription businesses, while users get some AI magic for free.
The counter-argument is also simple: ads suck ass. They monetize our attention. They are a form of mercantile manipulation in which a consumer’s emotional triggers are sold to the highest bidder. People are terrible at thinking through time costs, and as such are highly reluctant to pay for ad-free experiences. AI would be better off without ads because otherwise the incentive to increase our engagement via addiction is simply too strong.
Whenever you see two simple opposing arguments being trotted out, you should, as a proud reader of The Leverage, reject both as a matter of principle. Tribalism is stupid and life is not binary. You know better.
Instead, I’m going to train you to reconfigure your brain a little. Next time you see someone debating ads, you’ll tell them that the real question isn’t as simple as “ads or subscriptions?” What each of these companies is actually trying to figure out is where to invest their GPU power, and what will be most profitable for them.
The doctor will have your attention now
As a founder, if you ever hear a customer say, “I literally can’t imagine my life without [your product],” you should immediately think about what kind of yacht you want. The more essential your service, the greater the monetization potential, and the higher the value of your startup.
That’s how I knew OpenEvidence was onto something. When I interviewed a few medical professionals about it, they all expressed some variation of “I’d rather lick the jam out from between LeBron James’s toes than lose access to this.”
The company’s product is a chatbot that provides medical providers with answers to highly specific questions, complete with direct citations to relevant papers. Doctors can list out a patient’s symptoms, check whether combining specific medications will cause reactions, pull up edge cases they might have missed, that sort of thing.
Rather than trying to sell licensed access into healthcare systems, a sales process that can take 12 to 24 months, the company has kept its product free and advertiser-supported. This has worked remarkably well. Roughly 40% of the doctors in the United States are daily active users. The ads, which are mostly for medical devices and pharmaceuticals, are highly valued: while competitor UpToDate charges $500 a seat, OpenEvidence prices its ads at $70-150 CPM. The company was last reported to have a $50 million revenue run rate as of July, but is growing so quickly that my sources in the VC community estimate that number is already out of date.
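To see how a $70-150 CPM can stack up against a $500 seat license, here is a back-of-envelope sketch. The CPM range comes from the figures above; the queries per day, ads per query, and working days are my own hypothetical assumptions, purely for illustration:

```python
# Back-of-envelope: annual ad revenue per daily-active doctor.
# CPM range is from the article; query volume, ad load, and working
# days are hypothetical assumptions for illustration only.

def annual_ad_revenue_per_doctor(cpm, queries_per_day=5,
                                 ads_per_query=1, working_days=250):
    impressions = queries_per_day * ads_per_query * working_days
    return impressions / 1000 * cpm  # CPM = dollars per 1,000 impressions

low = annual_ad_revenue_per_doctor(cpm=70)
high = annual_ad_revenue_per_doctor(cpm=150)
print(f"${low:.2f} to ${high:.2f} per doctor per year")
# → $87.50 to $187.50 per doctor per year
```

Even under these modest assumptions, each daily-active doctor is worth a meaningful fraction of an UpToDate seat, with zero procurement friction and far broader reach.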
What is so intellectually challenging about OpenEvidence is that it automates a key part of a doctor’s workflow. Knowing doctor stuff kinda seems like the point of a doctor to me. But OpenEvidence’s rapid embrace by medical professionals shows that the doctor’s role is really a bundle: first, a care provider who makes patients feel better about the treatment they are receiving, and second, a risk-accepting party. The role of the doctor is to know stuff, sure. But more importantly, it is to know stuff and accept a certain amount of legal and professional risk on the basis of knowing that stuff. Doctors are paid for bridging the gap between LLM output and patient treatment.
What OpenEvidence shows us is that people are likely to accept automation in their jobs if you can increase their output and decrease their risk. OpenEvidence works because it allows doctors to see more patients with better care, and they can do it without the permission of the bureaucracy. After all, the product is free; no admin approval needed.
The lesson is threefold:
1. When a chatbot is primarily tasked with information retrieval and analysis, ads are the best way to monetize. There is natural commercial intent here: if I’m looking up the details of a specific illness, my LLM knows what tools might make it easier for me to detect next time.
2. When you are selling into an industry that is historically slow to adopt new technology, ads can help you scale faster. You don’t need to sell licenses to a procurement office and haggle for a place in the budget.
3. Subscriptions make the most sense when you are providing a new way of working, a new way to code, for example.
It’s that new way of working where the future of monetization is the most unclear. And that takes us back to GPUs.
GPU farms
Zuckerberg has always shown a spooky amount of prescience as a founder. Whether it was the purchase of Instagram or WhatsApp, he has made large, risky capital investments while others were still waffling. An underrated bet came in 2022, when he placed an enormous order for GPUs before ChatGPT was even a thing. As he put it on the Dwarkesh podcast: “We were constrained by the infrastructure in catching up to what TikTok was doing… I basically looked at that and was like, ‘Let’s order enough GPUs to do what we need for Reels — and then double that.’”
The math Zuckerberg has to run is whether GPUs are best used running neural nets for chatbot inference or neural nets for recommendation algorithms. In a May 2025 interview with Ben Thompson, he talked about this: “We’re always making these calculations internally, which is like, ‘All right, should I give the Instagram Reels team more GPUs or should I give this other team more GPUs to build the thing that they’re doing?’”
When we debate whether ads belong in LLMs, we basically have to run the math on what’s more profitable to do with a GPU for an hour. Will a firm make more money by spending the tokens on ads? Or by automating work? For example, OpenAI has to consider whether it’s more profitable to make it even easier for you to vibe code a website, maybe thousands of them, or to use that GPU power on serving you an ad for a company that does exactly that. The more profitable the former, the more likely the company is to be supported by subscriptions.
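That per-GPU-hour comparison can be written out as a toy calculation. Every constant below (inference throughput, tokens per ad impression, ad CPM, tokens per automated task, revenue per task) is a made-up assumption for illustration, not a real figure from any provider:

```python
# Toy model: revenue from one GPU-hour spent serving ads vs. automating work.
# All constants are hypothetical assumptions for illustration.

TOKENS_PER_GPU_HOUR = 2_000_000  # assumed inference throughput

def ad_revenue(tokens, tokens_per_impression=500, cpm=30.0):
    # Tokens spent generating/targeting ads become impressions sold at a CPM.
    impressions = tokens / tokens_per_impression
    return impressions / 1000 * cpm

def automation_revenue(tokens, tokens_per_task=50_000, revenue_per_task=5.0):
    # Tokens spent automating work become completed tasks a subscriber pays for.
    tasks = tokens / tokens_per_task
    return tasks * revenue_per_task

ads = ad_revenue(TOKENS_PER_GPU_HOUR)          # 4,000 impressions at $30 CPM
subs = automation_revenue(TOKENS_PER_GPU_HOUR) # 40 tasks at $5 each
print(f"ads: ${ads:.0f}/GPU-hour, automation: ${subs:.0f}/GPU-hour")
# → ads: $120/GPU-hour, automation: $200/GPU-hour
```

Under these made-up numbers automation wins the GPU-hour; tilt the CPM or the value per task and the answer flips, which is exactly the allocation decision the labs keep re-running.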
It’s why the major applications today, tools like Cursor or ChatGPT, are subscription supported. They give you the leverage (hehe) to automate away the rote parts of your work.
Still, what I am wondering is what happens when ads and workflows are smushed together. What does the future look like when your software has strong opinions on vendors? Microsoft has already shown what that future could look like. For paying subscribers, I’ll review what a potential ad format for chatbots would be and how that changes what kinds of startups are possible.