On Founding in the Age of AI
Silicon Valley’s divine responsibility is to elevate, not just entertain
On my Deep Purple iPhone 14 with a scratch on the screen, there is an app that tells me how fast to run. Each Friday, I get notifications from my AllTrails Plus account about the hottest route near me. Strava—free account, I ain’t made of money—documents my 5k time for my friends. Smaller and smaller it gets. Ever-more minute slices of what running is, dissected for ever-more marginal gains in productivity.
I can subdivide every hobby, every professional action in my life. There is always an app for that. Each solution promises to indulge my urges, whether destructive or constructive. All parts of me are served and monetized by the great technocapital machine of Silicon Valley.
Which leads me to ask: Where is the machine that makes me better at empathy? Where is the technology that doesn’t just make me fitter or confer more status, but instead makes me more moral, more caring, more just?
Part of what makes me so excited about AI is that it is a cosmic do-over. We can take the hard-earned lessons of the last two decades of internet startups and build companies that don’t merely have 100x financial outcomes, but 100x societal ones too. From this generation of startups, I have hope, not for mere economic utility, but for elevation.
There was an alternate timeline where social media could have been that. Look at the discourse of the early 2010s, amid the unbridled optimism of the Obama years: we believed that services like Facebook and Twitter would be mass forms of connection, enabling a neoliberal utopia fostered by the power of human discourse. That, uh, didn’t happen. Instead, it just metamorphosed into a version of TV that is nasty, brutish, and short(form).
In 2024, only 7% of the content we saw on Instagram, the so-called social media app, was actually created by people we knew. The rest was just content. Sixty-second injections of dopamine, dosed out with a flick of the thumb. The format has taken over the world, with 90% of Millennials and Gen Z watching it every day and the average American spending 1 hour and 16 minutes a day staring at it.
It might get even worse. We sit on the precipice of inventing a brand-new, potentially even more addictive entertainment format with AI chatbots. I am unwilling to universally decry the medium—the use cases are too broad, the interactions too varied. We simply need more time to fully understand what we can do with it.
However, the last decade has taught us a hard-learned lesson: People default to their baser instincts if given the option. Take, for example, Meta’s AI Studio. On Sunday, I wrote about how Zuckerberg gave the team permission to have lax safety standards and push the boundaries of acceptability in the name of growth.
The result? It’s obvious. Many of the most popular chatbots are sexual ones.

The point is not to judge users. It’s to challenge technologists. Is this all that we aspire to? Is this the purpose of AI? The reason this is happening, the incentive for building this type of product, is obvious: “If we don’t give the people what they want, someone else will.” As trite as it sounds, the job of an AI founder who desires to enhance human potential is to build a product more compelling than “A single mom” from Meta.
God, I so desperately want my founder friends to make technology that elevates us rather than merely entertains us. I use God here, not in the way that makes your grandma’s lips pucker, but in the more literal sense. Part of what has decayed in startup culture is that we have abdicated the divine responsibility that comes with inventing the future. We have forgotten that building a company is a privilege accompanied by a moral and spiritual obligation to customers and employees.
The German philosopher Martin Heidegger argued that technology is neither the machine nor the way people use that machine. Technology is the truth that the machine reveals about us and about the world around us (and how it conceals the parts of our lives that are hard to value). I worry that AI technology is showing that the average person is just lonely and wants to be distracted. More than that, it is revealing that many tech operators are willing to take advantage of that fact.
This is a call to build something more difficult. To build something more beautiful. We can learn from the mistakes of the last decade. This is our chance to do it right.
Does this all really just come down in your mind to a "good founder" exercising their will and moral fiber and grit to execute a truly humanistic app/service to make the world better? I'm not saying it's not possible, but the prevailing system and incentives strongly, strongly favor inhumane values, so you need to work *against* that system to express well-being-oriented values.
Working against capitalism isn't what most tech founders have in mind, and it isn't going to lead to the kind of success most want or envision. We do have a few examples of people doing this and they're inspiring; I would love to see more founders following in the footsteps of Jimmy Wales, Tim Berners-Lee, Linus Torvalds, Sal Khan, and the like, or even Brewster Kahle (who founded and exited several companies, basically amassed wealth, then used that position to found the Internet Archive), but it's a rarefied path. Many of these people do not have large salaries, they're not huge in the public eye (Wales is known but nowhere near as famous as Bezos, Altman, etc.), and perhaps most critically, most of what they've founded are *nonprofits*. While that business model obviously exists within capitalism, it operates in a way that is essentially counter to the core values and tenets of capitalism: it is an anomaly.

Do you believe that a tech founder truly can create a for-profit company that deeply values and expresses human wellbeing and also becomes a significant financial success? If so, what are some examples?
I'm all for the core message here, but once again it feels like it ignores, or at least elides, the forces that largely orient our founders and the companies they create toward configurations that are harmful to humanity. Individual agency and choice have a role, but they are dramatically overemphasized in the telling here, in my view.
There are founders out there trying to figure out how to make something that's not just about profit, but they aren't backed by funding. And if they have the privilege to give it a go, the only realistic way to get their products out there is by playing into the games of algorithms that thrive on clickbait, fear-mongering, and polarizing takes. Most who are motivated to build something that actually matters lose all motivation when they face the reality of what they have to do to get noticed.