I would officially declare October 2025 the month of Sora, since OpenAI just launched its next generation, officially named Sora 2.
Currently, it’s only open for users in the US and Canada, and yes, it’s completely free, but you’ll need an invitation code to unlock it.
Here is my Sora profile link: sora.chatgpt.com/profile/martintang
How did I download the app?
Quite simple: I just changed my App Store region to the United States. After that, searching for “Sora” in the App Store lets you download it instantly.
Once installed, it’s clear that Sora’s interface resembles TikTok — almost a clone in terms of UI and UX. The interesting and unique feature, however, is Cameo. Basically, you record yourself reading three sets of numbers on screen while turning your head left, right, and up. The AI then uses this short clip to clone your face and voice.
The incredible part is that it only takes a few seconds — around 5 seconds of face and voice data — to create an almost perfect digital version of you.
After submitting, you’ll wait a few minutes for processing. If all goes well, you can start generating videos with your face immediately.
I had to redo my cameo once because I said the numbers wrong. It felt like a bank-style identity verification, but with voice cloning added.
Once that’s done, the fun part begins — creating videos using your cameo. You can even add other users who’ve made their cameo public and include them in your videos.
Right now, Sora allows up to 3 simultaneous renderings (previously 5). I assume this is to control GPU load and maintain stability for everyone.
Using Sora effectively is another big topic — it’s about prompt engineering and experience, which takes practice. I’m no expert yet, but here’s what I’ve learned so far:
1. The Script-to-Voice Accuracy Is Almost Perfect
If your script includes dialogue or lines you want the character to speak, Sora can generate the voice with excellent lip sync and natural pacing.
I’ve used Midjourney before, where rendering legible words inside an image is often a struggle; in Sora, by contrast, the text-to-speech and lip-sync combination feels incredibly real.
If you want more complex conversations, you can craft the entire script, and Sora will execute it well. Most creators I’ve seen, though, just use short, simple prompts — letting AI handle the creativity.
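To give a sense of what “crafting the entire script” can look like, here’s the kind of scripted prompt I mean. This is a hypothetical example I wrote for illustration, not something from OpenAI’s documentation — the exact format is flexible, and Sora generally infers who speaks which line:

```
Two friends sit in a cozy coffee shop, handheld camera, warm lighting.
Me (cameo): "I still can't believe this whole video is AI-generated."
Friend: "Wait — even the voices?"
Me (cameo): "Every single word."
```

Short prompts work too; writing out the dialogue just gives you more control over pacing and who says what.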
The voice cloning, however, can be hit or miss. Sometimes it sounds exactly like me, sometimes not. But it’s already impressive at this stage.
2. Tricky Guidelines and Keyword Restrictions
If you’ve followed AI news, you might’ve heard about the Studio Ghibli copyright controversy, where users could convert images into Ghibli-style art inside ChatGPT.
Sora seems to have similar boundaries. Some prompts with famous brands, movie characters, or song titles trigger violations — while others strangely don’t.
For example, I’ve seen Pikachu, SpongeBob, and Mario-themed videos on the Explore feed.
But prompts mentioning Star Wars, Darth Vader, Metallica, or Dragon Ball’s Super Saiyan fail immediately.
However, Chainsaw Man or Sailor Moon-style prompts work fine. So, the content boundary remains unclear, and copyright interpretation still varies.
3. A Disruption to the Originality of Content
If you’ve spent time scrolling through Sora-generated videos, you’ll notice one thing — some are completely illogical, yet that’s exactly what makes them addictive and fascinating to watch. Others, however, are so realistic that if there weren’t a small “Sora” watermark bouncing around, you would have no clue that what you’re watching is 100% AI-generated.
That’s the scary and exciting part.
This isn’t just a new tool — it’s a total disruption to what we understand as original content.
Soon, the line between what’s “filmed” and what’s “generated” will vanish.
Creators will definitely use this to mass-produce creative videos in record time — b-rolls, product explainers, cinematic scenes, and concept visuals. What used to take a production team days or weeks can now be done in hours, maybe even minutes.
Imagine agencies replacing the need to search for stock footage, scout locations, or hire actors — they’ll just generate everything from text prompts.
It’s like having a universal “content vending machine” powered by imagination.
But this also raises questions:
Who owns the creative credit?
What happens when thousands of people use the same AI model to generate similar-looking scenes?
Will originality slowly fade, or will the best ideas simply rise faster?
In the early stages, everyone will be amazed by the realism.
Later, audiences might become numb — just like how we stopped being impressed by HDR filters or drone shots after a few years.
AI-generated content might follow that same pattern: wow first, normalize later.
My prediction?
This “AI cameo” trend — featuring yourself or your avatar in videos — will be the big thing for the coming months. But after the novelty fades, only the storytellers who combine strong narrative with good direction will stand out.
At the end of the day, tools change.
Creativity doesn’t.
4. Did OpenAI (Sora) Find the Secret Sauce?
Everyone already knows OpenAI for ChatGPT, the fastest-growing consumer app in history.
Now, it feels like they’ve done it again with Sora.
Will it explode like ChatGPT did?
We’ll have to wait and see.
The cloning technology behind Sora — both for face and voice — isn’t new. There are already several similar tools from China (especially from ByteDance’s ecosystem), but somehow none of them reached this global level of excitement.
Why?
Because OpenAI made Sora simple.
No complicated sliders, no tech jargon. You open the app, train your cameo, type a prompt, and the magic happens.
This is what I call “smart minimalism.”
By removing all unnecessary steps, OpenAI made AI creation accessible to anyone — not just filmmakers or techies.
And that’s probably the “secret sauce”:
Simplicity, clarity, and mass usability.
Most AI tools try to do too many things at once — 10 features, 20 toggles, 100 ways to “customize.”
But when people are overwhelmed, they just give up.
Sora avoids that problem completely.
Of course, I believe in the next few updates, they’ll add more customization — maybe camera angles, shot types, motion control, or the ability to tweak expressions and timing. But for now, this simplicity works perfectly to build hype and familiarity.
The decision to make everyone record a short video selfie with a voice clip to unlock the Cameo feature is also a genius move.
It doesn’t just personalize the experience — it builds a sense of identity.
People love seeing themselves in videos. It’s human psychology, and OpenAI tapped into that perfectly.
I call it: the “AI mirror effect.”
Everyone can now become a main character — in their own movie, ad, or story.
