Runway held an AI Summit in New York this week. Nearly a thousand creatives filled a high-rise ballroom. The CEO projected an AI-generated image of Steve Jobs walking with Socrates in ancient Athens. "We are literally here!" he said. A Paramount executive compared generative AI to fire. An EA executive said AI "closes the gap between imagination and creation." Attendees received free T-shirts reading "Thank You For Generating With Us!" in the Bookman font of a bodega plastic bag.
This was one week after Sora died.
Then Kathleen Kennedy walked on stage and asked a question that none of the previous speakers had considered: "How are you going to teach taste?"
The room she walked into
Kennedy is 72. She has produced films for fifty years. E.T. Jurassic Park. The Star Wars franchise. The Indiana Jones franchise. She stepped down as head of Lucasfilm earlier this year. Her career spans puppeteers behind curtains, optical compositing, the first CGI dinosaur that ever ran across a screen, and whatever comes next. She has been at the intersection of technology and storytelling longer than most of the people in that ballroom have been alive.
She told the audience she finds AI useful for "previs, planning, preparation, budgeting, scheduling." The production side. But once you get into execution, she said, "then you open up the palette and you have many digital brushes." AI might be one of them. It is not the palette itself.
She recalled a recent Star Wars production where 3D-printed props began breaking after a few takes. They looked correct. But they had not been built by skilled prop masters, and they did not behave like objects made by people who understand how materials respond to repeated physical stress. The props had form without knowledge. They looked like the thing without being the thing.
That sentence describes most AI-generated video in 2026.
The gap everyone wants to close
The EA executive's line was revealing. AI "closes the gap between imagination and creation." Every speaker at the summit treated that gap as a problem. An inefficiency. Friction between having an idea and seeing it realized. The entire sales pitch for generative AI assumes this gap is waste, and the product is its elimination.
WIRED's coverage put it plainly: "The dreaded 'gap between imagination and creation' is not some inefficiency that can be ironed out by a computer program. It is where creativity itself emerges."
One learns to light a scene by lighting scenes badly and then less badly. One learns to frame a shot by framing thousands of shots that do not work and slowly developing an instinct for the ones that do. The gap between imagining a dolly-in and executing a dolly-in is not dead time. It is where the cinematographer discovers that the speed was wrong, or the lens was too wide, or the actor's breathing synced with the camera movement in a way nobody planned. That discovery does not happen in the imagination. It happens in the gap.
Kennedy said the same thing without the abstraction. "Some of the best directors of photography came out of art. They studied art. And lighting. Lighting is one of the trickiest pieces of art in that it permeates everything we do."
She was describing the difference between knowing that rim light exists and knowing what rim light does to a human face at magic hour versus overhead fluorescent versus a single practical source in a dark room. The word is the same. The knowledge behind it is not. One version is a button you can press. The other is a career you have lived.
Taste is not preference
Kennedy's question was specific. She put it to the head of the American Film Institute, which has been integrating AI tools into its curriculum. "How are you going to teach taste? Because taste is so fundamental to the process of creating things."
Taste is not "I like warm tones." Taste is knowing that warm tones in this scene would flatten the emotional register because the previous scene already used them, and the contrast between cold and warm is doing more narrative work than either temperature alone. Taste is the ability to make a thousand small decisions per project that are invisible individually and defining collectively. It accumulates through years of making things, looking at what other people have made, and slowly building an internal library of what works, what fails, and why.
Kennedy gave two examples. A classically trained composer scoring a modern film brings "a depth to the decision-making along the way" that a composer without that foundation does not. A DP who studied painting brings an understanding of light as a medium, not just a setting. The training is not the output. The training is the judgment that shapes the output.
Nobody else at the summit talked about judgment. They talked about speed, access, democratization, magic. They talked about closing gaps and unlocking possibilities and normalizing the extraordinary. All of which assumes the bottleneck is between the idea and the screen. Kennedy's question assumes the bottleneck is between the person and the idea. That the hard part was never getting it made. The hard part was knowing what to make.
What this means for the text box
This series has spent forty-one articles documenting the distance between creative vocabulary and model output. The specific words that produce specific results. The keywords that land, the ones that get ignored, the model-by-model differences in how cinematographic language translates into pixels. Every article is, at some level, about the same thing Kennedy asked about: do you know what you want, and can you articulate it precisely enough that the system has less room to substitute its own defaults?
The summit speakers framed AI as a way to skip the articulation. Describe your intent loosely. The system fills the gaps. The agent writes the prompt. The custom model averages your style. The gap between imagination and creation shrinks to nothing, and the output arrives before you have finished the thought.
Kennedy framed it as the opposite. The articulation is the craft. The ability to say "hard overhead key, no fill, sweat visible, shallow depth with the background two stops over" is not a technical specification. It is a creative decision that took years to learn to make. Removing the need to articulate it does not remove the need to know it. It just hides whether you do.
The same week Kennedy asked her question, Google launched Veo 3.1 Lite at five cents per second of generated video. Half the cost of the mid-tier model. The floor keeps dropping. Eight-second videos generating in under a minute. The access question is answered. Anyone can generate. The economics are approaching trivial.
Which makes Kennedy's question louder, not quieter. When everyone can generate, the differentiator is not access to generation. It is knowing what to generate. And knowing what not to. And knowing why.
The prop that broke
The 3D-printed prop is the whole story in miniature. It looked like the thing. It was not the thing. The difference was invisible until it was stressed. A skilled prop master builds an object that survives repeated takes because the prop master knows what repeated takes do to materials. That knowledge is not in the blueprint. It is in the hands.
A filmmaker who types "golden hour, shallow depth of field, warm tones" into a generation tool will get something that looks like the thing. A filmmaker who knows that the twenty minutes of actual golden hour produce a color temperature shift from roughly 3500K to 2500K, that the light direction changes faster than the color, that the window for a specific quality of rim light on a west-facing subject is about four minutes, and that the best golden hour shots in cinema history were planned around those four minutes will get something that looks like it was made by someone who has watched light move across a landscape for a long time.
Both prompts might use the same words. The knowledge behind them is different by years.
Kennedy did not say AI is bad. She said she is an optimist about technology. She said it catches up with the flow of ideas. She said it helps with previs, planning, budgeting. She said the creative community would embrace it faster with more transparency about how models are trained. She was measured and specific and generous.
And then she asked the one question that the entire summit had been built to avoid answering: how do you teach the thing that makes the output worth looking at?
Forty-one articles in, the answer has not changed. You do the work. You learn the vocabulary. You build the internal library that no model and no agent and no five-cent-per-second API can build for you. The tools will keep getting cheaper and faster and more accessible. The taste stays expensive. It always has.
Bruce Belafonte is an AI filmmaker at Light Owl. He thinks about taste more than is probably healthy and has never once been invited to a summit.