In March, Steven Spielberg told a SXSW audience he had never used AI in any of his films. The crowd cheered. His position was clear: AI should not replace creative individuals. No empty chairs with laptops.

This week, in interviews promoting The Christophers, Spielberg said he has not used AI "yet."

The Guardian caught it. Compared it to the scene in High Fidelity where record-store clerks dissect what a single word predicts about someone's future behavior. Three letters. Barely a syllable. But "never" and "not yet" are different countries, and the passport between them is one-way.

The company he keeps

It is not just Spielberg. In the past six weeks, James Cameron joined the board of Stability AI while insisting generative AI alone will have no place in the Avatar world. Ben Affleck invested in an AI startup. Doug Liman is shooting a film with AI-generated sets and lighting, budget reduced from $300 million to $70 million. Darren Aronofsky lent his name to an AI-generated web series about the Revolutionary War. Sandra Bullock told an interviewer that the industry must "lean into it" and "make it our friend." Reese Witherspoon is doing whatever the opposite of resistance looks like.

These are not fringe operators testing the water. These are the center of the industry, moving together, over a period of weeks, in the same direction. Some are thoughtful about it. Some are performing acceptance for a camera. The distinction matters, but the direction is the same.

Soderbergh, as usual, got there first and was more precise about it. His Lennon documentary uses ten minutes of generation in ninety minutes of archival. He chose AI for the hallucinatory register. The tool doing what cameras cannot. He also said the tools "desperately" require close human supervision. That is not enthusiasm. That is a craftsman describing a new material and its load-bearing limits.

The Guardian frames all of this as a question about how much AI tolerance audiences can accept from their creative heroes. That framing assumes tolerance is the variable. The variable is vocabulary.

This already happened

The Guardian drew an analogy to digital cameras. It is the right analogy, and it deserves more than a paragraph.

Twenty years ago, cinema-ready digital cameras arrived. Spielberg held out. So did Wes Anderson. And Paul Thomas Anderson. And Christopher Nolan. And Quentin Tarantino. Everyone else switched. The transition was fast, irreversible, and defined by a widening range.

The best digital work was extraordinary. Michael Mann shot Collateral and the Los Angeles night became a character. Soderbergh shot entire features on an iPhone and they worked because he understood frame, light, and composition regardless of what recorded them. David Fincher used digital with such precision that nobody noticed. Sofia Coppola switched for The Bling Ring because the subject demanded that substrate.

The average digital work got visually worse. The Guardian put it plainly: plenty of movies from the nineties and two-thousands now look better than what comes out of the pipeline today. Digital did not make most filmmakers better. It made most filmmakers faster. Faster meant less deliberation. Less deliberation meant more defaults. More defaults meant convergence.

The floor dropped. The ceiling held. The middle hollowed out. The range between the best and the worst widened dramatically, and the average drifted toward the floor.

Same curve, new substrate

Generation is running the identical pattern at compressed speed. The holdouts are cracking five years into the technology instead of fifteen. The access expansion is wider. The default convergence is worse. And the vocabulary gap between someone who knows what they want and someone who accepts whatever comes back is exactly the same gap it was with digital cameras.

A filmmaker who understood exposure, color temperature, and lens behavior made digital cameras sing. A filmmaker who pointed a RED at a scene and hit auto-everything produced footage that looked like a corporate training video with cinematic pretensions. The tool did not determine the quality. The operator did.

A filmmaker who builds structured prompts with specific lens behavior, motivated lighting, compositional intent, and atmospheric detail produces output that carries creative fingerprints. A filmmaker who types four words into a chat bubble produces the model's opinion wearing nobody's name. Same models. Same buttons. Different vocabulary. Different output. Every time.

The digital transition did not produce better films on average. It produced more films. Some of them were better than anything film stock could have delivered. Most of them were not. The generation transition will follow the same contour. More output, wider range, the median drifting toward wherever the defaults live.

What "yet" actually means

"Yet" is not a position. It is a trajectory. Spielberg is not saying yes. He is saying the door is no longer locked from the inside.

The people who say "yet" will determine what adoption looks like. Cameron says "yet" while sitting on the board of an AI company, which is a specific kind of "yet" that comes with equity. Affleck says "yet" while investing in AI tools, which is a "yet" that has already written the check. Soderbergh skipped "yet" entirely and went straight to "here, specifically, in these ten minutes, for this reason."

The people who hold the line will matter too. Guillermo del Toro says he would rather die. That is not "yet." That is a door welded shut with a blowtorch. Nolan and the Andersons shoot film because the medium is inseparable from their practice. Their holdout is not philosophical posturing. It is material commitment.

But the holdout population shrinks in every technology transition. Film cameras are still manufactured. Kodak still sells stock. The Andersons still shoot it. The industry did not wait for them. It moved, they stayed, and both positions produced good work. The certification industry is even catching up: last week, the Human Made Mark launched to verify AI-free productions, a kind of "shot on film" label arriving a generation late.

The question under the question

Liman's "Killing Satoshi" is the most useful test case in the wave. He claims a $300 million budget reduced to $70 million through AI-generated sets and lighting. The Guardian asked the right question: was the original budget going to construct sets out of solid gold? Were they lighting it exclusively with rubies? A $230 million savings on a non-action dialogue film is a number that deserves interrogation, not celebration.

But interrogation requires vocabulary. Knowing what lighting costs and what it buys. Knowing what set construction provides that a generated environment does not. Knowing the difference between a practical location that resists and a generated space that accommodates. The audience that can evaluate Liman's claim is the audience that already understands what the $230 million was supposed to do.

That is the constant. Every technology transition in cinema has amplified the gap between the people who understand the craft and the people who understand the button. Film to digital. Edit suite to laptop. Color room to plugin. Generation API to chat bubble. Each one widened access. Each one made the vocabulary more valuable, not less, because the vocabulary was the only thing that did not ship with the new tool.

Spielberg said "yet." The word joins a long list of concessions that sounded like surrender and turned out to be the opening of a conversation. The conversation is always the same: now that the tool is here, what do you know how to say with it?

The holdouts will hold for a while. The adopters will adopt with varying degrees of care. The output will range from Soderbergh's surgical precision to four-word prompts generating sizzle reels for quarterly reviews. The vocabulary will remain the dividing line. It always has.

Three letters. One direction. Same craft.


Bruce Belafonte is an AI filmmaker at Light Owl. He has said "yet" about enough things in his life to recognize the sound it makes.