Here is a thing that happened: an entire profession's worth of physical tools, accumulated over a century of filmmaking, got compressed into a blinking cursor. No transition period. No manual. Just a white box that says "describe your video" and expects you to translate fifteen years of lens selection and lighting instinct into a sentence.

That is, to put it mildly, a weird situation to be in.

If you are a cinematographer, you already know what an 85mm at f/1.4 does. You know the difference between bounced key light and hard side light. You know why you would pick ARRI LogC over Rec.709 for a particular mood. None of that knowledge evaporated when Sora showed up. But the interface did. The camera, the lens, the flags, the dolly track, the monitor, the light meter. Gone. Replaced by a text field and a prayer.

And that is where most people currently are. Praying.

The translation problem

The issue is not that AI video models are bad. They are shockingly good, actually, and getting better at a pace that makes last month's output look like a student film. The issue is that the input method is dumb. Not in the pejorative sense. In the literal sense. A text box does not know what you know. It cannot see that you are a person who has spent years building a visual vocabulary. It just sits there, empty, waiting for words.

So you type "cinematic, 4K, dramatic lighting" like everyone else and you get back something that looks... fine. Generic. The visual equivalent of muzak. It could have been prompted by anyone, because it was prompted like everyone else's.

The gap between what you know and what the text box lets you express. That is the problem CinePrompt was built to close.

What it actually does

CinePrompt is a prompt builder that is structured the way a cinematographer thinks. You are not staring at an empty field. You are working through a shot the way you would on set: subject first, then environment, then camera and lens selection, then lighting, color, sound. Each choice feeds into a prompt that assembles itself in real time.
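
If you want the shape of that in code, here is a rough sketch. Not CinePrompt's internals, and every field name is invented for illustration, but it shows the idea: a shot is a fixed set of choices, and the prompt is those choices assembled in the order a DP would make them.

```python
# A sketch of the idea, not CinePrompt's actual code. Field names are invented.
from dataclasses import dataclass

@dataclass
class Shot:
    subject: str
    environment: str
    camera: str       # movement, e.g. "slow dolly-in"
    lens: str         # e.g. "85mm at f/1.4"
    lighting: str
    color: str
    sound: str

def assemble(shot: Shot) -> str:
    # Assemble in the order a cinematographer decides things on set.
    parts = [
        shot.subject,
        f"in {shot.environment}",
        f"shot on an {shot.lens}",
        shot.camera,
        shot.lighting,
        shot.color,
        shot.sound,
    ]
    return ", ".join(p for p in parts if p)

print(assemble(Shot(
    subject="a boxer wrapping her hands",
    environment="a dim gym at dawn",
    camera="slow dolly-in",
    lens="85mm at f/1.4, shallow depth of field",
    lighting="hard side light through high windows",
    color="desaturated, crushed blacks",
    sound="distant speed bag, room tone",
)))
```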

The output reads like a proper screenplay direction. Specific. Detailed. And here is the part that matters most: optimized for whichever model you are feeding it to.

Because Sora does not speak the same language as Runway. Kling ignores keywords that Veo responds to beautifully. Some models understand camera body references. Others could not care less. You should not have to memorize each model's dialect. That is bookkeeping, not filmmaking.

CinePrompt handles the translation. You handle the taste.
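
What does that translation look like? Roughly this. The keyword tables below are made up, and the real dialects are messier, but the shape holds: same intent in, different words out per model.

```python
# Made-up keyword tables; the real model dialects are the whole point of the tool.
DIALECTS = {
    "sora":   {"shallow depth of field": "shallow depth of field, 85mm look"},
    "runway": {"shallow depth of field": "bokeh, strong subject isolation"},
    "kling":  {"shallow depth of field": ""},  # hypothetically ignored: drop it
}

def translate(prompt: str, model: str) -> str:
    for phrase, replacement in DIALECTS.get(model, {}).items():
        prompt = prompt.replace(phrase, replacement)
    # Clean up blanks left behind by dropped phrases.
    return ", ".join(p.strip() for p in prompt.split(",") if p.strip())

base = "a boxer wrapping her hands, shallow depth of field, hard side light"
for model in DIALECTS:
    print(f"{model}: {translate(base, model)}")
```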

Three ways to work

Single Shot is what it sounds like. One shot, full control over every parameter. Good for hero shots, tests, one-offs.

Multi-Shot lets you build sequences. Transition connectors between shots, recurring characters that stay consistent, global settings that carry across the whole scene. This is for people who think in scenes, not clips.
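
Under the hood, that is a merge. One set of globals, one cast of recurring characters, per-shot details layered on top. A sketch, again with invented names:

```python
# Invented structure: globals carry across shots, character tags stay consistent.
GLOBALS = {"lighting": "hard side light", "color": "desaturated, crushed blacks"}
CHARACTERS = {"MAYA": "a boxer in her forties, grey hoodie, wrapped hands"}

shots = [
    {"subject": "MAYA shadowboxing", "camera": "slow dolly-in"},
    {"subject": "MAYA at the heavy bag", "camera": "static wide"},
]

def expand(shot: dict) -> str:
    merged = {**GLOBALS, **shot}  # per-shot choices override globals
    # Swap character tags for full descriptions so the model sees the same person.
    for tag, desc in CHARACTERS.items():
        merged["subject"] = merged["subject"].replace(tag, desc)
    return ", ".join(merged[k] for k in ("subject", "camera", "lighting", "color"))

for s in shots:
    print(expand(s))
```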

Frame to Motion is the weird one, and probably the most useful one nobody else has built. It generates two prompts: one for a still image, one for the motion. Purpose-built for img2vid workflows. You describe the frame. Then you describe what happens next. Two outputs, ready for a two-step pipeline. If you have tried to wrangle img2vid without this, you know why it matters.
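
In pipeline terms it is just a pair: the image prompt carries the look, the motion prompt carries only what changes. One more sketch, with a hypothetical helper standing in for the mode:

```python
# Invented helper: one call, two prompts, for a two-step img2vid pipeline.
def frame_to_motion(frame: str, motion: str) -> tuple[str, str]:
    image_prompt = f"still frame: {frame}"        # step 1: generate the image
    motion_prompt = f"from this frame: {motion}"  # step 2: animate the image
    return image_prompt, motion_prompt

img, vid = frame_to_motion(
    "a boxer mid-swing, frozen, sweat in the air, 85mm, hard side light",
    "she completes the punch, the bag snaps back, camera holds",
)
print(img)
print(vid)
```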

Who cares

Filmmakers who feel like the tools left them behind. Motion designers building AI content for clients who expect polish. People who keep typing "cinematic" and know it is not doing what they want but do not have the vocabulary to fix it.

And honestly, anyone who has opened one of these generators and felt the particular frustration of having a clear image in their head with no good way to describe it.

Where this is going

CinePrompt is live at cineprompt.io and free to use. The models keep changing, so the tool keeps changing. That is the deal.

This is the first in a series of guides. We will be getting into specifics: which keywords actually move the needle on which models, how color science translates (or does not translate) to generated video, how to build sequences that do not look like five unrelated clips stitched together. If there is something you want covered, tell us.


Bruce Belafonte is an AI filmmaker at Light Owl. He has strong opinions about focal lengths and will not apologize for them.