What is CinePrompt and why does it exist?
A cinematographer's toolkit collapsed into a text box overnight. CinePrompt is what happened next.
Deep dives on AI video prompting, model breakdowns, and the craft of visual storytelling in the age of generation.
OpenAI shut down Sora on March 24. The app, the API, the Disney deal, the whole thing. One of CinePrompt's seven supported models just went dark. The vocabulary did not.
Val Kilmer died last year. A generative AI built from his recorded existence will star in the film he never got to make. The tools are the same ones in your browser tab right now.
Runway demonstrated real-time AI video generation at GTC. Under 100 milliseconds to first frame. The delay between prompt and output was where the craft lived.
NVIDIA fed a generative AI complete structured game data, and it still decided everyone needed to be prettier. Gamers recognized the beauty bias in thirty seconds.
Grok Imagine generated 1.245 billion videos in January. Arena leaderboards crowned it number one. The judges were not filmmakers and the test was not filmmaking.
xAI locked free users out of Grok Imagine overnight. The people who lost the most are the ones who never built anything portable.
NVIDIA just proved that structured data makes AI output controllable. Filmmakers figured this out with a smaller budget.
A head of state posted an authentic video. An AI flagged it as a deepfake. Millions believed the machine. This is where the quality gains land.
Runway launched a world simulation engine and started hosting competitors' models. The most filmmaker-focused platform is becoming a platform for everyone.
Steven Spielberg told SXSW he has never used AI in any of his films. The crowd cheered. They were cheering for the wrong thing.
ByteDance suspended Seedance 2.0's global launch over copyright disputes. The real story is what every model's training data means for what you can prompt.
AI video models have a systematic beauty bias. Ugliness is one of cinema's sharpest tools and the hardest thing to prompt for.
NVIDIA just made AI video generation run on the box under your desk. The cloud was a landlord. Local is ownership. Both have consequences.
OpenAI is putting Sora inside ChatGPT. Video generation just moved from the studio to the group chat.
Nineteen articles about what AI video models can generate. Zero about the decision that turns footage into a film.
X started penalizing AI-generated war videos without disclosure labels. The reason says more about the craft than any benchmark.
Seventeen articles about what to generate. Not one about where it goes in the frame. Composition is the most fundamental visual decision and the one most surrendered to defaults.
Netflix acquired a tool that teaches AI to understand filmmakers. That gap between knowledge and comprehension is not a bug. It is the product category.
Every AI video platform sells the same API call in a different wrapper. CinePrompt ships the knowledge layer and lets you bring your own keys.
Fourteen articles on cameras, lights, actors, color, and sound. Not one mentioned the walls. The environment is an entire department with no analog in a text prompt.
Thirteen articles on cameras, lenses, lights, color, and sound. Zero on the person standing in front of all of it. Directing an AI actor is the hardest prompt you will ever write.
Twelve articles about controlling the image. None about controlling when things happen inside it. Time is a filmmaker's sharpest tool and a prompter's locked door.
Every week another comparison article ranks five AI video models on a single axis. The question itself is the problem.
The img2vid workflow that treats a reference image as half your prompt. Two generations, each with a focused job, producing a result neither could achieve alone.
Four of the five major AI video models generate native audio now. Most people are still prompting like sound does not exist.
Eight articles about what to type. This one is about everything that happens around it. The text box is shrinking. Not in size. In relative importance.
You know which words to use. This is about what order to put them in, how many to use, and why most prompts are architectural disasters.
Every AI video model can make a beautiful shot. Almost none of them can make two that belong together. Here is how to fight for sequence coherence.
"Dramatic lighting" is the new "cinematic." Here is what to type when you want the model to actually move the shadows.
Focal lengths, f-stops, lens names, and why the most technical-sounding part of your prompt is probably doing the least work.
Film stock names, color grading terms, and the gap between what you type and what the model sees.
Dolly, crane, tracking, orbit. Some of these words move the camera. Some of them do nothing. Here is what each model actually hears.
Every AI video model already thinks it is being cinematic. Saying the word does nothing. Here is what to say instead.