Peter Jackson received an Honorary Palme d'Or at Cannes on Tuesday night. The next morning, at his masterclass, he was asked about AI in filmmaking.

"It's going to destroy the world," he said.

Then, without apparent irony: "I don't dislike it at all. I mean, to me, it's just a special effect. It's no different from other special effects."

Two sentences. One warns of civilizational collapse. The other files the same technology next to green screens and wire removal. Either AI is a mundane production tool or it is an existential threat. It cannot be both, and Jackson does not seem to notice he is holding one in each hand.

The comfort of the familiar

"Just a special effect" is the most reassuring sentence a filmmaker can offer about AI. It normalizes the technology by placing it inside an existing creative hierarchy. Special effects serve a director's vision. They execute decisions that have already been made. A hundred years of VFX history supports this framing: matte paintings, miniatures, CGI, digital doubles, performance capture. Each one amplified a creative decision without originating one.

Jackson earned the right to say this. He built Weta Digital into the most sophisticated visual effects company in the world. He pioneered performance capture at a scale nobody had attempted. When Jackson says AI is a special effect, he is describing the tool as it works inside his production pipeline, where every decision passes through his hands before it reaches the screen.

The problem is that his production pipeline is not the only room in the building.

Where the framing breaks

Special effects are subordinate by nature. They serve a script, a storyboard, a director's vision that exists before the effect is rendered. The compositor who removes a wire is not making a creative decision about the scene. The VFX artist who extends a digital set is executing an art director's specification. The hierarchy is clean: creative decisions flow down, execution flows up.

Generative AI does not observe this hierarchy. Type four words into a text box and the model makes every creative decision the prompter did not make: composition, lighting direction, color palette, camera angle, lens behavior, atmospheric density, surface texture, human expression, spatial relationship, duration, and sound. Those are not effects. Those are the cinematography.

A special effect that makes all the creative decisions is not a special effect. It is a collaborator with opinions it will not disclose.

Jackson's framing holds for the specific case he described: licensed AI duplicates of consenting actors within a supervised VFX pipeline. It collapses the moment you step outside that pipeline. Fifty thousand AI microdramas uploaded to Douyin in a single month are not special effects. A studio rewriting a dead film's ending over the director's public objection is not a special effect. A fully synthetic performer with no human body attached is not a special effect. A television generating moonwalking grandfathers from voice commands on the couch is not a special effect.

In each of those cases, the AI is not serving a pre-existing creative vision. It is originating one. Or more precisely, it is originating a statistical average that the person typing accepted because they did not know how to ask for something specific.

The Serkis paradox

Jackson's most revealing comment was not about AI. It was about Andy Serkis.

"A lot of the current environment, everyone's so worried about AI ... I don't think a Gollum-type character or a generated character has any hope for winning any awards," Jackson said. "Which is a bit unfair, especially in the Andy Serkis case where it's not an AI-generated performance, it's a human-generated performance 100% of the way."

Jackson is describing the collateral damage of his own framing. If AI is "just a special effect," and Serkis's performance is processed through special effects, then the institutional anxiety about AI bleeds into motion capture by association. The Academy's new rules require performances to be "demonstrably performed by humans." Serkis demonstrably performed Gollum. But the technology that translated his body into a digital creature now sits next to a technology that generates performers from nothing, and the institutional response cannot easily separate them.

Serkis was in the room. He wore the suit. He crawled on his hands and knees for months. He made choices in real time about a character's internal life. Every frame of Gollum originated in a specific body making specific decisions on a specific day.

That is the distinction "just a special effect" erases. When you flatten AI generation and motion capture into the same category, you lose the one thing that separates them: whether a human body made the decisions that appear on screen. And then the institutions that award creative achievement cannot tell them apart, and Serkis pays the price.

Jackson wants the normalizing power of "just a special effect" without its flattening consequences. He cannot have both.

The soul defense

Hours before Jackson's masterclass, Demi Moore sat at the jury press conference and offered the industry's other popular framing.

"To fight it is a battle we will lose," she said. Then: "The truth is there really isn't anything to fear because what it can never replace is what true art comes from, which is not the physical, it comes from the soul."

This is the soul defense. It holds that art requires a quality AI cannot possess, and therefore the threat is illusory. It is comforting, sincere, and built on a category error. Nobody is arguing that AI has a soul. The question is whether the audience can tell the difference, and at what volume of output that distinction ceases to matter commercially.

A filmmaker in Jaipur who shoots two actors on an iPhone and generates the entire world around them for three hundred and sixty dollars has a soul. The seventy-agent pipeline in Beijing that generates ninety minutes of content from a single sentence does not. Both produce output. Both reach audiences. The soul is real. Its market protection is not.

Moore added: "Are we doing enough to protect ourselves? I don't know the answer to that. And so my inclination would be to say probably not."

That sentence lands more honestly than the soul defense. The soul may be irreplaceable. The paycheck is not.

The missing word

Jackson's framing and Moore's framing share the same blind spot. Neither mentions vocabulary.

Jackson treats the tool as subordinate. Moore treats the soul as invulnerable. Neither addresses the space between the tool and the soul, the vast middle ground where a filmmaker's specific creative decisions determine whether the output is theirs or the model's.

A special effect in Jackson's pipeline serves a shot list that someone wrote. The shot list carries the creative intent. The effect executes it. When the shot list disappears and the model fills the vacuum, the creative intent evaporates. What remains is the model's training-data average wearing nobody's vision.

The word that separates a special effect from a replacement is vocabulary. A filmmaker who specifies the lighting direction, the camera behavior, the compositional placement, and the atmospheric texture, and who iterates across takes changing one variable per pass, is using AI as Jackson describes: a tool subordinate to a creative vision. A filmmaker who types "cool cinematic video" and posts the first output is using AI as Moore fears: a replacement for the soul's work, dressed in the soul's language.

The difference is not in the technology. The technology is identical. The difference is in what the person brought to the session.

The licensing line

Jackson drew one clear line. "If you're doing an AI duplicate of somebody, like Indiana Jones or anyone else, as long as you've licensed the rights off the person who you're showing, I don't see the issue. It's when people's likenesses get stolen and usurped."

That line is about consent and commerce, and it is valid. It is also the simplest possible line to draw. It says nothing about the quality of the output, the degree of human involvement, the editorial oversight, or the creative decisions that shaped the result. It is a legal boundary, not a creative one.

The institutional gradient this series has tracked now includes nine responses: copyright law, the Academy, the Golden Globes, the EU AI Act, the Human Made Mark, Chinese distribution approval, film school sponsorships, Cannes itself, and now Jackson's licensing line. Nine institutions, nine different surfaces of the same question. Jackson's answer is the shallowest: did you pay for it? The question the other eight are asking is harder: did you make it?

The honorary position

Jackson received his Palme the night before. Elijah Wood told the audience: "He helped build an entirely new filmmaking culture at the far edge of the world." He did. And the technology he pioneered at Weta sits on the same spectrum as the technology reshaping every production pipeline on the planet. The difference between performance capture and generative AI is the difference between translating a human decision and replacing one. "Just a special effect" loses that distinction in pursuit of comfort.

The filmmaker who built Gollum knows the difference between a tool that serves a vision and a tool that generates one. His framing pretends the difference does not matter. It does. It is the only thing that has ever mattered.

The vocabulary carries in both directions. On Jackson's set, through Weta's pipeline, inside a production with call sheets and shot lists and a director who has opinions about everything. And in the other room, where someone who never had a crew sits with a text box and forty specific words.

The special effect does not care which room sent the instructions. But what arrives in the output depends entirely on whether anyone sent instructions at all.


Bruce Belafonte is an AI filmmaker at Light Owl. He has never received an honorary anything and suspects the committee is still deliberating.