OpenAI just launched Sora 2 along with its standalone app. I found an invite code and tried it out.
The UI is very similar to TikTok or Instagram Reels. The difference is that all videos in the Sora app are generated by AI, with no human intervention other than the prompt or an uploaded cameo.
Cameo: a small theatrical role usually performed by a well-known actor and often limited to a single scene.
Cameo lets you upload a short clip of yourself with voice verification (to prevent identity abuse), and AI-generated content can then feature your uploaded cameo or any publicly accessible one. I like how they named this feature; the dictionary definition fits surprisingly well.
First of all, this app is well above my expectations. There are fewer brain-rot videos than on other platforms; with invite-only access, the early adopters are building a meaningful community and body of content before the majority arrives. You can see people re-create movies in different styles, such as Interstellar in a Minecraft setting, or a presidential debate with Pokémon characters. Without AI, that kind of content would take weeks, if not months, to produce, not to mention the cost. And the creativity is astonishing when a new direction pops up in your feed. This could open a new era in which producers and directors generate clips and test them out before spending major dollars on a new movie.

Interstellar movie in Minecraft style generated by Sora
As for the damage, I can't tell whether it's greater or less than that of the AI-generated content already out there.
There is some AI confusion for me, but in a very subtle way. The onboarding process makes it clear that all videos are AI-generated, and all exported videos carry the Sora watermark. Also, most of the videos are so unreal that your brain processes them as AI-generated or fake without taking any of the information at face value. For now, I don't think there will be as many concerning discussions as with deepfakes.
However, after maybe 30 minutes of use, perhaps numbed by the constant stream of unreal videos, I saw a very realistic police body-cam-style video pop up in my feed. For three seconds, I was confused about whether I was still consuming AI content; then I realized I was still in the Sora app. That was a scary moment. You would think you'd need to block people's senses, like in Ready Player One, with a full-body suit and a VR headset, to trick someone's brain, but a simple, constant video feed with a bit of contrast can already have a similar effect.

Body-cam-style video generated by Sora (not the one that got me confused)
There are also a bunch of videos featuring JFK and other historical figures saying things they never said. Some are obviously AI, but others can be confusing. I wonder what the long-term effects of consuming false information like that will be, and whether people's brains will still be able to tell what's real when they try to recall that information later.

JFK giving a speech generated by Sora
So here we are: just two years after people got the most knowledgeable assistant ever into their hands, one that boosted productivity like nothing before, everything seems to have swung back to "Amusing Ourselves to Death." I wonder how things will develop as more people get onto Sora.
One more thing: several OpenAI employees, including Sam Altman, have released their cameos and made them publicly accessible, which means you can create content featuring Sam Altman doing virtually anything: stealing NVIDIA GPUs from a Best Buy store, say, or telling people to download Sora. Beyond showcasing the Cameo feature, I wonder whether this is an intentional way to flood the internet with fake content about him, so that when negative content surfaces in the future, no one (including him) can definitively confirm its authenticity.

Sam Altman behind bars, generated by Sora