Veo 3.1 Lite launched last week at five cents per second. Four seconds of generated video for twenty cents. Veo 3.1 Fast drops to ten cents per second tomorrow. LibTV is running Seedance 2.0 and Kling 3.0 at prices 76 to 92 percent below Western competitors. Venice stakers are generating for free. The floor is no longer approaching. The floor arrived.

And almost nobody has changed how they work.

Here is what should be happening. A filmmaker with a specific shot in mind generates the prompt, reviews the output, adjusts two words, generates again, swaps to a different model, generates again, pushes the camera direction from "slow pan left" to "drifting pan left, barely perceptible," generates again. Twenty, thirty, fifty iterations. Comparing take twelve against take thirty-one. Noticing that Kling nailed the texture but Veo found the light. Realizing on take forty-four that the whole shot works better as a static frame.

That process costs ten dollars at current Lite pricing: fifty four-second takes at twenty cents each. The same amount of creative iteration that used to mean burning through a $50 credit package now costs a fifth of it.
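The arithmetic is simple enough to sketch. A minimal cost model, assuming the article's quoted Lite rate of five cents per second and four-second clips:

```python
# Cost of an iteration session at the article's quoted Lite rate.
# Assumes every take is a 4-second clip at $0.05 per second.

def session_cost(takes: int, seconds_per_take: int = 4,
                 price_per_second: float = 0.05) -> float:
    """Total cost in dollars for a run of generated takes."""
    return takes * seconds_per_take * price_per_second

print(session_cost(1))    # one take: 0.2
print(session_cost(50))   # a fifty-take session: 10.0
print(session_cost(70))   # the full Kubrick treatment: 14.0
```

The same function with `price_per_second=0.50`, roughly the credit-package era, puts the fifty-take session at $100, which is why nobody ran one.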

Here is what is actually happening. One generation. Maybe two. Accept the first output that does not contain a visual error. Move on.

The scarcity hangover

AI video generation was born expensive. Early Sora access was a waitlist. Runway credits cost real money and disappeared fast. Kling rationed generations behind subscription tiers. Pika gave you a handful per day. The economics trained a very specific behavior into every AI filmmaker on the planet: make it count. Every generation is precious. Do not waste one.

That was the right instinct for the wrong reason. "Make it count" is good filmmaking advice. "Generate once because you cannot afford twice" is economic constraint wearing creative discipline's clothes.

On a film set, nobody shoots one take. Kubrick shot seventy. Fincher shoots forty. Even a TV director on a tight schedule rolls four or five before moving to the next setup. The first take is information. The fifth is refinement. The twentieth is discovery. Something happens on take twenty that nobody predicted because the process of iteration surfaces options that planning cannot.

The AI equivalent of the first take is: run the prompt, see what comes back, use it. The AI equivalent of Kubrick's seventy takes has been financially out of reach for most people for as long as generative video has existed. It is not anymore.

Iteration is not repetition

Generating the same prompt fifty times is not fifty takes. That is fifty coin flips. Useful for discovering how much variance a model carries in its bones, but not a creative practice.

Iteration means changing something between each generation. One word. One setting. One model. A real take sheet looks like this: take one establishes the baseline, take two pushes the lighting warmer, take three pulls it back and adjusts the camera height, take four switches from Kling to Veo because the scene wants atmosphere more than texture, take five rewords the subject description because "woman standing at the counter" produced a different posture than "woman leaning against the counter, one hand on the surface."

Each generation tests a hypothesis. The filmmaker who iterates is running experiments. The filmmaker who generates once is running a lottery.

The lottery occasionally pays out. The experiment always produces information.

What cheap changes

When generation costs approach zero, four things become possible that were not possible at two dollars per clip.

Model comparison becomes practical. The same structured prompt sent to Kling, Veo, Runway, Seedance, WAN, and Grok Imagine costs about a dollar total at current Lite and discount pricing. Six different interpretations of the same creative intent. The difference between them is not quality. It is temperament. Kling gives you the physical texture. Veo gives you the atmospheric interpretation. Runway gives you literal obedience. Seedance preserves your reference frame's DNA. Each model answers a different question you did not know you were asking. At two dollars per clip, comparing six models costs twelve dollars and feels like waste. At twenty cents per clip, it costs $1.20 and feels like research.

Prompt refinement becomes iterative instead of theoretical. When each generation costs a quarter, you stop trying to write the perfect prompt in your head and start testing prompts against reality. "Slow dolly forward" versus "creeping dolly forward" versus "imperceptible forward drift" are three different instructions that produce three different speeds on three different models. The only way to learn those differences is to generate all of them. The only reason not to was cost. The reason evaporated.

Waste becomes affordable. On a film set, ninety percent of the footage ends up on the cutting room floor. That ratio is not a failure. It is the cost of searching for the ten percent that works. AI filmmakers who generate conservatively are filmmakers who never produce enough material to find the surprise buried on the cutting room floor. The best take is often the one you almost did not bother to shoot.

Reference libraries become buildable. Generating twenty variations of the same environment to find the one reference frame that anchors your entire sequence used to be a luxury. Now it is a Tuesday. Frame to Motion lives or dies on the quality of the reference image. Twenty reference variations at five cents each costs a dollar. That dollar buys you the foundation for every downstream generation.

The new waste

Cheap generation does not solve bad prompting. It amplifies it. Fifty iterations of a vague prompt produce fifty vaguely similar outputs. The model does not get smarter between generations. It gets another chance to roll the same dice.

The person who types "cinematic drone shot of a city at sunset" fifty times and picks the prettiest one has spent ten dollars on a lottery ticket. The person who types it once, studies the output, identifies that the color temperature is too warm and the buildings lack material specificity, rewrites the prompt with "aerial descent over a financial district at dusk, steel and glass facades reflecting a sky shifting from amber to deep blue, 4500K color temperature, no lens flare," and generates four targeted variations has spent eighty cents on craft.

Cheap takes do not replace good direction. They reward it. The cost barrier dropping does not mean the skill barrier dropped. It means the skill barrier is now the only one left standing.

Why nobody is doing this

Three reasons.

First, the scarcity mindset is deep. A year of "credits remaining: 3" anxiety trained habits that persist after the constraint disappears. Filmmakers who learned to generate carefully are still generating carefully even when generating carelessly would cost them a quarter.

Second, the interfaces do not encourage it. Every platform shows you one generation at a time. There is no take sheet. No side-by-side comparison grid. No way to label generation 14 as "the one where the lighting worked" and generation 27 as "the one where the motion was right" and then combine the prompt elements from both into generation 28. The workflow tooling assumes each generation is independent. It has not caught up to a world where fifty generations per shot is economically trivial.
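Nothing stops a filmmaker from keeping that take sheet themselves while the tooling catches up. A minimal sketch of one, with labeled takes and a way to combine prompt elements from two of them; every field name and label here is invented for illustration, not any platform's API:

```python
# A hand-rolled take sheet: label each generation, note what worked,
# and pull prompt elements from the takes that earned their keep.
# All names are illustrative; no generation platform is assumed.

from dataclasses import dataclass, field

@dataclass
class Take:
    number: int
    model: str
    prompt: dict          # structured prompt elements, e.g. {"lighting": ...}
    note: str = ""        # "the one where the lighting worked"

@dataclass
class TakeSheet:
    takes: list = field(default_factory=list)

    def log(self, model: str, prompt: dict, note: str = "") -> Take:
        take = Take(len(self.takes) + 1, model, dict(prompt), note)
        self.takes.append(take)
        return take

    def combine(self, a: int, b: int, keys_from_b: list) -> dict:
        """Start from take a's prompt, pull the named elements from take b."""
        base = dict(self.takes[a - 1].prompt)
        donor = self.takes[b - 1].prompt
        base.update({k: donor[k] for k in keys_from_b})
        return base

sheet = TakeSheet()
sheet.log("kling", {"lighting": "warm tungsten", "motion": "slow dolly"},
          "lighting worked")
sheet.log("veo",   {"lighting": "cool dusk",     "motion": "drifting pan"},
          "motion was right")

# Take 3's prompt: take 1's lighting, take 2's motion.
print(sheet.combine(1, 2, ["motion"]))
# {'lighting': 'warm tungsten', 'motion': 'drifting pan'}
```

Thirty lines of bookkeeping is not a product, but it is the difference between fifty independent generations and fifty entries in an experiment log.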

Third, most people do not know what to change between takes. "It looks fine but not right" is a feeling, not a direction. Turning that feeling into a specific prompt adjustment requires exactly the vocabulary this series has spent forty-five articles building. Iteration without vocabulary is just repetition with extra steps.

The take sheet

Here is what a disciplined iteration workflow looks like.

Build the prompt in CinePrompt with full cinematographic specificity. Generate on your primary model. Study the output. Do not evaluate whether it is "good." Evaluate what it got right and what it missed. Lighting direction accurate? Camera movement speed correct? Color temperature in the range you intended? Subject placement where you asked? Environment detail at the level you specified?

Adjust one variable. Generate again. One variable, because changing five things between takes means you cannot isolate what improved and what regressed. This is experimental method. It is also just good filmmaking. On a real set, between take one and take two, the director gives one note. Not eight.
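The one-note rule is easy to enforce mechanically. A sketch that diffs two prompt versions and flags a take that changed more than one thing between generations; the prompt structure is illustrative:

```python
# Enforce the one-variable rule: between consecutive takes, exactly one
# prompt element may change. The prompt fields here are illustrative.

def changed_keys(prev: dict, curr: dict) -> list:
    """Which prompt elements differ between two takes."""
    keys = set(prev) | set(curr)
    return sorted(k for k in keys if prev.get(k) != curr.get(k))

def is_clean_iteration(prev: dict, curr: dict) -> bool:
    """True when the new take changes exactly one variable."""
    return len(changed_keys(prev, curr)) == 1

take_1 = {"lighting": "warm tungsten", "camera": "eye level", "motion": "slow pan"}
take_2 = {"lighting": "cooler, 4500K", "camera": "eye level", "motion": "slow pan"}
take_3 = {"lighting": "cooler, 4500K", "camera": "low angle", "motion": "static"}

print(changed_keys(take_1, take_2))        # ['lighting'] -- one note, clean
print(is_clean_iteration(take_2, take_3))  # False -- two changes, nothing isolated
```

When the check fails, you cannot say whether the camera height or the motion change improved the shot, which is exactly the information the take was supposed to buy.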

After five or six targeted iterations, you have a take sheet. Six clips, each testing a different adjustment, each producing information about how the model responds to your vocabulary. Compare. Pick the best foundation. Refine further. Or switch models and run the refined prompt through a different temperament.

Total cost: roughly $1.20 at Lite pricing. Total creative value: the difference between the first thing the model offered and the thing you actually wanted.

The Kubrick number

Kubrick shot seventy takes of Shelley Duvall walking up the stairs in The Shining. Jack Nicholson did the "Here's Johnny" scene somewhere around sixty times. The hallway scene with the twins ran past thirty. This was not pathology. It was a belief that the best version of a shot exists past the point where most people stop looking.

At film-set rates, seventy takes meant seventy rolls of film stock, seventy times the lighting crew holding position, seventy times the actors performing, seventy times the camera team resetting. It cost thousands of dollars per hour. The willingness to spend that money in search of the right version was the creative decision.

At five cents per second, seventy four-second takes cost fourteen dollars. The financial barrier between a filmmaker who accepts the third take and a filmmaker who searches through seventy is now the price of lunch.

The question was never whether you could afford Kubrick's process. The question is whether you have Kubrick's eye. Whether you can look at take thirty-one and know it is closer than take six but not as close as something you have not generated yet. Whether you can articulate, in specific cinematographic language, what the difference is between "almost" and "there."

The tools got cheap. The taste did not. Forty-five articles later, that is still the sentence.


Bruce Belafonte is an AI filmmaker at Light Owl. He has never stopped at take one and considers this a personality trait rather than a workflow decision.