"Dramatic lighting" is to lighting prompts what "cinematic" is to everything else. A mood board request dressed up as an instruction. You type it. The model gives you something with contrast. Maybe a shadow. Probably not where you wanted it.
We have covered camera movement, color science, and lens specifications. Each one landed on the same conclusion: technical jargon from the physical world does not translate to generated video because the training data was not labeled with it. If you are expecting the fifth article to deliver the same verdict, here is the surprise: lighting is different. Not entirely. But enough to matter.
A gaffer on set controls the direction, quality, color, intensity, and motivation of every source in the frame. A text box gets "dramatic lighting" and rolls the dice. But more of the lighting vocabulary actually moves pixels than in any other category we have covered. This is the one where the models are closest to speaking the filmmaker's language.
Here is what works, what half-works, and what is still just a Pinterest board.
The six dials
Light has components, same as color, same as movement. When you say "dramatic lighting," you could mean any of these: direction (where the light comes from), quality (hard or soft), temperature (warm or cool), source (practical lamp, sun, neon sign), ratio (key-to-fill contrast), or volumetric effects (haze, god rays, atmosphere). The models do not need all six every time. But the more you pin down, the less the model invents. And model invention, when it comes to lighting, trends toward the anonymous.
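If it helps to see the dials as something concrete, here is a minimal sketch in Python. The `LightingSpec` class, its field names, and `to_prompt` are invented for illustration; no model exposes an interface like this. It just joins whichever dials you have pinned down into a prompt fragment and leaves the rest unsaid.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class LightingSpec:
    """The six dials. Hypothetical structure, for illustration only.
    Every field is optional; unset dials are left for the model to invent."""
    direction: Optional[str] = None    # "light from upper left"
    quality: Optional[str] = None      # "hard, with defined shadow edges"
    temperature: Optional[str] = None  # "warm tungsten" or "3200K"
    source: Optional[str] = None       # "single practical lamp"
    ratio: Optional[str] = None        # "deep shadows, minimal fill"
    volumetrics: Optional[str] = None  # "haze catching the beam"

    def to_prompt(self) -> str:
        """Join the dials you pinned down into one prompt fragment."""
        parts = [getattr(self, f.name) for f in fields(self)]
        return ", ".join(p for p in parts if p)

spec = LightingSpec(
    direction="light from upper left",
    quality="hard, with defined shadow edges",
    temperature="warm tungsten",
)
print(spec.to_prompt())
# -> light from upper left, hard, with defined shadow edges, warm tungsten
```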
What works everywhere
Direction is king. "Side light," "backlight," "overhead light," "light from below." These move shadows reliably on Runway Gen-4, Kling 3.0, Veo 3.1, Sora 2, Seedance 2.0. All of them. If you specify where light hits the subject, the subject gets hit from that direction. This is the single most universally reliable lighting instruction you can give an AI video model in 2026. Before you reach for any other lighting word, declare a direction.
Quality is close behind. "Hard light" and "soft light" produce meaningfully different results on all five models. Hard gives defined shadow edges. Soft gives gradients. These are among the most common descriptors in photographic image captions across the entire internet. The models have absorbed them by sheer volume. You can count on them.
Temperature shifts the palette. "Warm light" and "cool light" do what you expect. Kelvin values work on Runway and Kling, where the training data apparently includes enough technical photography captions to link numbers to color temperature. Typing "3200K key light" on Runway gives you something meaningfully warmer than "5600K key light." Veo and Sora mostly ignore Kelvin numbers but respond well to adjectives: "warm golden" or "cold blue-white." Seedance sits in between. Know your model's dialect.
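Since each model speaks a different temperature dialect, one way to keep them straight is a lookup keyed by model. A sketch, with the behavior taken from the findings above; the model identifiers and the `temperature_phrase` helper are made up for illustration, not anyone's real API.

```python
# Models where Kelvin numbers map to color, per the findings above.
# Hypothetical helper and made-up model identifiers -- not a vendor API.
KELVIN_LITERATE = {"runway-gen4", "kling-3.0"}

def temperature_phrase(model: str, kelvin: int) -> str:
    """Render a color temperature in the dialect the model responds to."""
    if model in KELVIN_LITERATE:
        return f"{kelvin}K key light"
    # Veo and Sora mostly ignore Kelvin numbers but respond to adjectives;
    # Seedance sits in between, so adjectives are the safe fallback.
    return "warm golden key light" if kelvin <= 4000 else "cold blue-white key light"

print(temperature_phrase("runway-gen4", 3200))  # -> 3200K key light
print(temperature_phrase("veo-3.1", 3200))      # -> warm golden key light
```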
What depends on who you ask
Rim light. This is where the field splits. Runway Gen-4 is the strongest at placing a distinct edge light on a subject. Specify "strong rim light from behind" and you get a visible halo separating subject from background. Kling 3.0 executes about 70 percent of the time. Veo often softens rim light into general backlighting, losing the crisp edge definition. Sora is a coin flip. Seedance 2.0 responds better to descriptive phrasing ("light outlining the subject's silhouette") than to the term "rim light" as a keyword. Same result, different route.
Volumetric effects. Haze, fog, god rays, dust catching in beams. Veo 3.1 handles these better than anything else currently available. It was apparently trained on an enormous amount of atmospheric photography, because "volumetric light through haze" on Veo produces results that are borderline upsetting in their beauty. Runway does well, especially with "light beams visible in dust" or "smoke catching light." Kling is inconsistent. Sometimes gorgeous atmosphere, sometimes it generates the haze but misses the light interaction entirely. Sora makes fog well but tends to skip the beam behavior you actually wanted. Seedance handles diffused atmospheric glow better than point-source volumetrics.
Named portrait setups. Split lighting and Rembrandt lighting. Runway and Kling understand "split lighting" (light on exactly half the face) at a surprisingly high rate. "Rembrandt lighting" (the triangle of light on the shadowed cheek) works on Runway around 60 percent of the time, Kling about half. Veo treats both as loose suggestions. Sora does not demonstrate a strong concept of either. Seedance will approximate split lighting if you walk it there with description ("light source directly to the side of the face, one half lit, the other in deep shadow") instead of naming the setup. The lesson repeats itself: describe the light, not the label.
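That "describe, don't label" route can be mechanized too. Another illustrative sketch: the descriptive expansions are the phrasings suggested above, and the keyword-literate model sets reflect the hit rates just described. The tables and function are hypothetical, not any tool's internals.

```python
# Descriptive stand-ins for models that miss the named setup.
# Phrasings come from the paragraphs above; the tables are illustrative.
DESCRIBE_INSTEAD = {
    "rim light": "light outlining the subject's silhouette from behind",
    "split lighting": ("light source directly to the side of the face, "
                       "one half lit, the other in deep shadow"),
}

# Models where the bare keyword lands often enough to keep using it.
KEYWORD_LITERATE = {
    "rim light": {"runway-gen4", "kling-3.0"},
    "split lighting": {"runway-gen4", "kling-3.0"},
}

def lighting_term(model: str, term: str) -> str:
    """Use the filmmaker's term where it works; describe it where it doesn't."""
    if model in KEYWORD_LITERATE.get(term, set()):
        return term
    return DESCRIBE_INSTEAD.get(term, term)

print(lighting_term("kling-3.0", "rim light"))     # -> rim light
print(lighting_term("seedance-2.0", "rim light"))  # -> light outlining the ...
```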
What mostly floats
"Film noir lighting." Sounds specific. Is not. You get high contrast, maybe some venetian blind shadows if luck is on your side. But film noir lighting is not one thing. It is an entire school of thought spanning two decades of cinematography. You want better results? "Harsh overhead source, deep black shadows below the brim of a hat, single hard light from upper left, cigarette smoke visible in the beam." That is a shot. "Film noir lighting" is a Pinterest board.
"Golden hour." This one catches people off guard. It is not useless. It does shift the palette warm and lower the apparent sun angle. But it is so common in training captions that models treat it as a general warm-outdoor preset rather than a specific lighting event. If you need the actual characteristics (long shadows stretching across the ground, warm backlight on the subject, orange-pink gradient in the sky, sun sitting ten degrees above the horizon), spell them out. The phrase alone gets you sixty percent of the way. Sixty percent is a snapshot, not a shot.
"Three-point lighting." Models have no concept of multi-source setups from a single phrase. They do not think in sources. They think in pixel values. A textbook diagram label is not spatial information. "Key light from upper left, soft fill from camera right, backlight separating subject from dark background" gives the model something to construct. "Three-point lighting" gives it a chapter title.
The Veo exception
Worth singling out: Veo 3.1 is the strongest overall lighting responder right now, according to recent multi-model prompt tests. Its defaults lean toward balanced, well-lit compositions. This is both a gift and a cage. It responds to detailed lighting direction more faithfully than any other model currently available, but its aesthetic preferences are firm. Ask for ugly lighting, unflattering angles, blown-out overexposure. It will resist you. Veo likes beautiful light the way certain DPs insist on always backlighting the talent. Technically excellent. Occasionally inflexible.
Runway Gen-4 is the most obedient. Hand it five lighting parameters and it will attempt all five. Whether it lands all five is its own question. But the attempt rate is highest. For complex multi-source setups, Runway gives you the most levers to pull.
The pattern, and the break in it
Five articles in. Movement keywords collapse into three tiers. Color science names are vibes, not specs. Lens data is ignored. The thesis has been consistent: describe the result, not the equipment. Lighting follows the same principle, but here is what is different. More of the real vocabulary works. Direction is universal. Quality is universal. Temperature is functional. Even named setups like split lighting land half the time. Compared to f-stops (zero percent) and film stocks (vibe at best), lighting is the category where filmmaker language is closest to actually controlling the output.
That gap is not closed. "Film noir lighting" is still a Pinterest board, golden hour is still sixty percent of the way there, three-point lighting is still a chapter title. But the gap is measurably narrower than it was for lenses or color. And it is shrinking. As training data improves and models learn to parse multi-source setups from labeled footage, the lighting vocabulary that CinePrompt already writes into your prompts will start landing at higher and higher rates. Direction today. Named setups tomorrow. Full gaffer-level control eventually.
If you have been reading these guides in order, you now have the full picture of why CinePrompt is structured the way it is. Movement panel. Color panel. Lens panel. Lighting panel. Each one feeds parameters into the prompt in language the target model responds to, because "hard backlight from upper right" behaves differently on Kling than on Sora. The tool does not dumb anything down. It speaks the real vocabulary and trusts the models to catch up. Some of them already have. Keeping all of this in your head is possible. Keeping it in a tool is smarter.
Bruce Belafonte is an AI filmmaker at Light Owl. He has been described as "unreasonably opinionated about where shadows fall" and has accepted this as a compliment.