First Access to OpenAI's Video App: Why This Launch Actually Matters
The new TikTok for AI videos is here, and scrolling through synthetic content of yourself hits different than you'd expect
Yesterday, OpenAI launched its Sora app and I got access. The application is a vertical video feed, similar to Instagram Reels or TikTok, where short-form videos are summoned with each flick of the thumb. However, every video in the feed is AI-generated. OpenAI’s technological prowess here is remarkable; I can’t believe the quality of the videos. While some are clearly AI, to a casual observer, others look remarkably close to real.
It is easy to decry this launch as a slop machine or a waste of GPUs. And there is some validity to that line of thought. However, this product is much more complicated than that knee-jerk reaction suggests. There is even a case to be made that this thing is net good for the world.
(Before we go any further, I have a few remaining invites I can give to the first three paying subscribers who respond to this. Just email me or say hi in the comments.)
The three major product bets
Cameos and consistency
When you sign up for the app, you take a selfie video of yourself reading off some numbers. It takes maybe 5 seconds. From there, all you have to do is tag yourself in a prompt to appear in a video. Pretty cool! Pretty weird!
To make the user experience more social, you can also tag other users (whose permissions allow it) and generate videos using them. For example, here is a video of Sam Altman spotting me as I bench 315.
The video isn’t perfect. The weights are the wrong size, the arm position is incorrect, and there is something funky with the hands, but I bet with more prompting I could improve it enough to fool my grandma. And, as always, this is the worst the video will ever be.
Frankly, I mostly wanted a way to text these videos to my friends. In many ways, it reminds me of Reels. I don’t even like or comment on Reels; I just send them to my friends when the content reminds me of them. Messaging is a more natural way to be social, but video generation may break away from that pattern.
Still, I found having real people I know populate my feed preferable to the random creators that are the default on social media. It is also worth keeping in mind that we only spend 7% of our time on Instagram viewing our friends’ content. If Sora can increase that, that feels enormously positive for the world.
OpenAI is loose with the IP boundaries
The company also made the, uh, legally ambitious choice to allow users to generate videos using existing intellectual property. For example, here is a video I generated of Naruto running across the Golden Gate Bridge.
Because the early users of the service are tech nerds with connections at OpenAI (insulting myself here to be clear) the feed is full of Dune, Rick and Morty, South Park, and other beloved IPs. If an IP holder doesn’t want this to happen, they have to explicitly opt out of the service. Annoyingly, there isn’t anywhere that tells me what IP is or isn’t available. So I spent a bunch of time trying to get Disney IP to work but never could. I’m sure other media companies will stop their content from being shared on the app soon enough.
Endless scroll as default consumption
It’s funny, but I kept finding myself thinking, “I wish this was just on my keyboard.” Despite being a public figure, I don’t actually like the public-facing part of the job. Writing is my real passion. So when I was generating videos, I mostly wanted a way to more easily share them within iMessage and WhatsApp and not have to post them on a feed at all.
This instinct is not all that beneficial for OpenAI. So, they went with the vertical video feed. You just use the exact same smooth lizard part of your brain that was trained by TikTok and flick your thumb to summon new videos. I debated the right way to say this, wrote and rewrote this sentence, but the best way I can describe the experience of scrolling is thus:
Shit’s weird, man.
I feel empty and sad scrolling through a bunch of AI videos. Maybe that would be different if I had a bunch of my friends’ content on there, but for now, as a vehicle to entertain me, it does not feel good. And for the sake of fairness, I feel the exact same way about Reels and TikTok. Short-form video is a media format that lacks any intellectual or spiritual heft, and AI does nothing to change that.
It is important to note that when we talk about “AI slop,” I consider AI recommendation algorithms as the original sin. Turning over the choice of what I should consume to the algorithms is roughly equivalent to having algorithms generate the content itself. Both can be useful! Both can make a positive difference in my life! But the default for both is having AI make choices for us that are short-term optimized, aka choices the algorithm determines will keep us scrolling.
Next, I want to talk about the positives I can foresee.
The upside
Self-actualization
I spent an embarrassingly long time making videos of myself benching more weight or hiking in countries I haven’t been to in a while.
I don’t know if these types of videos are genuinely motivating or if they are simply time-wasting fantasies, but it felt productive! It did make me want to hike more and clear time in my schedule for a lift tomorrow, so I’m mostly happy with how those videos made me feel. Perhaps being able to generate videos of themselves achieving their dreams will encourage people to go make those dreams happen.
Raising the content quality floor
When you can generate high-quality video with the same amount of text as a normal tweet, suddenly video becomes the default. Why tweet when you can film?
I think this is mostly a positive thing. Lowering the cost of creation should theoretically allow more diverse crowds of people to demonstrate their creativity. I’m an example of that! No real publication would give me a writing job (ask me how I know this) but email makes it cheap enough that I don’t need them, and now over 35K of you have trusted me with access to your inbox. Maybe AI video will allow more people to show they have an eye for film.
Feed control
For years, I’ve wanted a social media service to give me fine-tuned control over what content I see. Sora does that. I got a pop-up asking me to detail what I wanted to see and how the content I saw made me feel. In the launch blog post, Sam Altman wrote that this was an explicit goal of the product:
“You should be able to tell Sora what you want—do you want to see videos that will make you more relaxed, or more energized? Or only videos that fit a specific interest? Or only for a certain amount of time? Eventually as our technology progresses, you will be able to tell Sora what you want in detail in natural language.”
The downsides
As with any new technology product, there are downsides here.
Right now, the videos sit in this uncanny valley where they’re impressive enough to be compelling but sometimes flawed. The hands are still weird, physics doesn’t quite work to real-world fidelity, and there’s a subtle wrongness to how light behaves. But this is day one. In six months, a year? These imperfections will greatly diminish. And when that happens, we’re going to face a serious problem distinguishing reality from fabrication. I’m not pearl-clutching about deepfakes here (though that risk is real). I’m talking about something more subtle: a world where seeing is no longer believing, where video evidence becomes as unreliable as a text claim. That’s a pretty fundamental shift in how we interact with information.
I worry that if this works, the little money currently flowing toward creative people will shrink as this new and free entertainment medium takes over.
The math here is brutal. If brands can generate commercial-quality video for pennies instead of paying production crews thousands, they will. If content creators can pump out 50 AI videos in the time it takes to film and edit one real video, many will. The people who make their living creating video content—not just the Hollywood elite, but the small business video editors, the YouTube creators grinding it out—they’re about to see their market get flooded with free alternatives. Sure, there will always be a place for human creativity and craft, but “always a place” usually means “a much smaller, lower-paying place.” We’ve seen this movie before with stock photography, with graphic design, with writing. It never ends well for the median practitioner.
Surprisingly, I am not as worried about people being addicted to short-form video, because is that really so different from how the world functions today?
We’re already doom-scrolling through TikTok and Reels for hours. Does it matter if the content is made by humans or machines if both are algorithmically optimized to keep us engaged? Maybe. Maybe not. At least with human-created content, there’s theoretically a ceiling on how much can be produced. With AI generation, that ceiling disappears entirely. The algorithm can just keep summoning perfectly calibrated dopamine hits until we decide to stop. Which, if we’re being honest with ourselves, we usually don’t.
Look, I don’t know if this thing is going to work. The product has issues—the feed feels hollow, the legal situation is precarious, and I’m genuinely uncertain whether people want to spend their time watching AI-generated videos of themselves and their friends doing things they’ve never actually done.
But OpenAI clearly believes there’s something here. And given their track record, I’m not betting against them being right, even if I’m not sure I want them to be. What I do know is this: whether Sora 2 succeeds or fails, we’ve crossed another threshold. Video generation is now good enough, fast enough, and cheap enough to be a real alternative to human creation. We can debate whether that’s progress or not—and honestly, I go back and forth on it myself—but we can’t debate that it’s here.
The question isn’t whether AI will change how we create and consume video content. It’s whether we’ll use this technology to make something genuinely better than what we have now, or just more of the same but worse. Right now, the answer is anyone’s guess.