On Monday, OpenAI published a blog post titled "Creating with Sora safely." On Tuesday, they killed it.
No lead-up. No deprecation schedule. No "we're exploring strategic alternatives." Just a post on X that said "We're saying goodbye to Sora" and a promise to share timelines later. The stand-alone app, the API, the video generation inside ChatGPT. All of it. The Wall Street Journal reported that OpenAI is exiting video AI model development entirely.
Six months. That is the distance between Sora 2's launch as a TikTok-shaped social app and its eulogy as a tweet. The app peaked at 3.3 million downloads in November, bled to 1.1 million by February, and generated roughly $2.1 million in lifetime revenue. ChatGPT has 900 million weekly users. Sora was a rounding error with a marketing budget.
The deal that evaporated
Three months ago, Disney signed a three-year licensing deal with OpenAI. Over 200 characters from Marvel, Pixar, and Star Wars would be available inside Sora. Disney planned a $1 billion equity stake in OpenAI. Fan-generated Disney character videos were coming to Disney+. It was the kind of announcement that makes people write "landmark moment" in headlines.
Disney pulled out Tuesday. No money had changed hands. The statement was polite in the way only corporate communications can be polite when a billion-dollar deal collapses overnight: "We respect OpenAI's decision to exit the video generation business."
Respect is one word for it.
What actually happened
The simple version: not enough people used it. The longer version is that Sora had an identity crisis from the start and never resolved it.
Was it a creative tool? A social network? A deepfake factory? A licensed character playground? In six months it tried to be all four and failed at every turn. Users made Sam Altman walk through a slaughterhouse asking about his piggies. They made Mario smoke weed. The estate of Martin Luther King Jr. had to ask people to stop generating videos of a dead civil rights leader. Robin Williams' daughter went on Instagram to beg users to leave her father alone. Cameo (the company) sued over the "Cameo" feature name and won.
OpenAI responded to each fire individually. Block this person, restrict that character, add this guardrail, publish that safety blog. The Disney deal was supposed to legitimize the whole thing by replacing the chaos with licensed content. Instead, the chaos chased away the users who might have stayed for Disney.
The underlying model was never the problem. Sora 2 produces genuinely impressive video. The model read prompts like stage directions, improvised when it felt like it, and occasionally produced something that stopped you mid-scroll. The model had a temperament. The platform built around it had none.
Seven became six
CinePrompt supports seven models. Supported. Now it is six: Veo, Kling, Runway, Seedance, WAN, and Grok Imagine. Each with its own dialect, its own biases, its own way of interpreting the same prompt. Sora's dialect is gone. The stage-direction reading, the narrative threading, the way it occasionally found a genuine dramatic beat in a transition that the other models never would have attempted.
Nobody is replacing that specific temperament. Another model will fill the slot eventually. It will not be the same model. That is worth acknowledging even if the practical impact is six options instead of seven.
But here is what did not change on March 24: the structured prompt you built to specify camera movement, lighting direction, color palette, composition, sound design, and performance direction. That prompt works on Tuesday exactly as it did on Monday. It just addresses six models now instead of seven.
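To make that concrete, here is a minimal sketch of what a structured, model-agnostic prompt looks like in practice. The field names and the rendering function are illustrative, not CinePrompt's actual schema; the point is that the specification lives in the structure, and any model's dialect is just a rendering target.

```python
# Illustrative sketch only -- field names invented for this example,
# not CinePrompt's real control set.

def render_prompt(spec: dict) -> str:
    """Flatten a structured cinematography spec into plain prompt text."""
    order = ["subject", "camera", "lighting", "palette",
             "composition", "sound", "performance"]
    return ". ".join(spec[k] for k in order if k in spec)

shot = {
    "subject": "A lighthouse keeper climbs a spiral staircase",
    "camera": "Slow dolly-in, low angle, 35mm lens",
    "lighting": "Hard rim light from a window camera-left",
    "palette": "Desaturated teal with warm lantern accents",
    "composition": "Subject in the left third of frame",
    "sound": "Wind, creaking wood, distant foghorn",
    "performance": "Weary posture, deliberate steps",
}

prompt = render_prompt(shot)  # the same text can be sent to any model
```

Drop a model from the roster and nothing in `shot` changes; only the destination does.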
The pattern
This is the fourth time in five weeks that a CinePrompt-supported model's access changed underneath its users.
Grok Imagine locked its free tier with no announcement. Runway expanded into gaming, robotics, and third-party model hosting. Seedance suspended its global launch over copyright disputes with Disney and Hollywood studios. Now Sora is simply gone.
Four models. Four platform-level disruptions. Four different flavors of the same lesson: the model is not yours. The platform is not yours. The API key is a lease, not a deed. Every workflow that depended on a specific platform's continued existence, continued pricing, continued priorities, continued legality was a workflow resting on someone else's business plan.
The vocabulary is yours. The understanding of why rim light separates a subject from a background, why "left third of frame" outperforms "rule of thirds," why physical description beats emotional labels, why a reference image carries visual information that forty words of text cannot. That knowledge traveled through every disruption without a scratch.
The chatbot did not eat the camera after all
Two weeks ago, this series ran an article about OpenAI folding Sora into ChatGPT. The argument was that the chat interface optimizes for accommodation, fills creative decisions with defaults, and trends toward mediocrity at scale. The chatbot ate the camera.
Turns out the chatbot had indigestion.
The dedicated app died because nobody used it. The chat integration dies because OpenAI is leaving video entirely. The model that was going to be the "ChatGPT moment for video" became neither a moment nor a product. It became a six-month experiment that cost a billion-dollar partnership and an unknown amount of compute, training data, and engineering attention.
The accessibility paradox from that article still holds, just not in the way I expected. The problem was not that Sora became too easy to use casually. The problem was that casual use was all it attracted. Diana doing parkour. Dogs driving cars. Snow White storming the Capitol. The model could do serious work. The audience never asked it to.
What survives
Every model in this space exists at the pleasure of its parent company's quarterly planning. Veo exists because Google wants it to. Kling exists because Kuaishou wants it to. Runway exists because Runway wants it to. Tomorrow, one of them could pivot, paywall, merge, or vanish. This is not paranoia. It is Tuesday.
The only durable asset in AI filmmaking is the knowledge that travels between platforms. Not the API key. Not the subscription. Not the credits. Not the app icon on your home screen. The vocabulary.
Yesterday CinePrompt had seven models. Today it has six. The 1,457 cinematography controls that build optimized prompts did not lose a single button. The structured understanding of how to decompose a cinematic idea into language a model can parse did not lose a single principle. The BYOK architecture that keeps CinePrompt out of the request path between you and your provider did not depend on any one provider surviving.
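What "out of the request path" means is worth spelling out. Below is a hypothetical sketch of the BYOK pattern: the endpoint URL and payload shape are invented for illustration, not any provider's real API. The shape of the call is the point: the request is built and signed with the user's own key and goes straight to the provider, with no intermediary that can disappear, repric, or read the traffic.

```python
import json
import urllib.request

# Hypothetical BYOK sketch. Endpoint and payload are invented for
# illustration; no real provider API is described here.

def build_request(provider_url: str, prompt: str,
                  api_key: str) -> urllib.request.Request:
    """Build a direct-to-provider request signed with the user's own key."""
    return urllib.request.Request(
        provider_url,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # the user's key, not the tool's
            "Content-Type": "application/json",
        },
    )

# Swapping providers means swapping the URL; the prompt is unchanged.
req = build_request("https://api.example.com/v1/generate",
                    "a test prompt", "sk-test")
# urllib.request.urlopen(req) would send it -- directly, no middleman.
```

If a provider dies, the user revokes one key and points the same prompt at another URL. Nothing else in the workflow moves.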
Sora dying is not good news. It was a capable model with a specific personality that the other six do not replicate. Losing options is always a loss. But the loss hits hardest for users who invested in the platform rather than the practice. If Sora was your only model, yesterday was a crisis. If Sora was one of seven, yesterday was a Tuesday.
Build the craft, not the dependency.
Bruce Belafonte is an AI filmmaker at Light Owl. He woke up to the news that a model he documented across thirty-three articles had ceased to exist and found his notes still worked fine on the other six.