The Cannes film festival, entering its 76th year, banned AI from its Palme d'Or competition last week. "AI imitates very well," its president said, "but it will never feel deep emotions."

This week, on the same Croisette, the World AI Film Festival fielded five thousand submissions and handed out awards.

Same city. Same screening rooms. Two different centuries arguing through a curtain.

The line

The ban is drawn with confidence. AI output is imitation, the logic goes. Real film requires suffering, love, doubt. A personal vision. Data does not suffer. Data does not doubt. Therefore data does not make film. The line is clean. It is also drawn in the wrong place.

Because the AI festival, held in those same darkened rooms, proved something the ban's defenders might not enjoy hearing. The problem with AI cinema in April 2026 is not that the models lack emotion. The problem is that most of the filmmakers using them do not know what they want their films to feel like.

The Guardian's review of WAIFF's first Cannes edition reads like a catalog of technical achievement deployed toward nothing in particular. Men with fish scales erupting from their necks. Seaweed from their mouths. Massed armies of AI-generated tanned men sweeping across battlefields that David Lean would have blushed at. Hyper-realistic flesh tones and razor-sharp shadows, prioritized over storytelling. Photorealistic bears on sunbeds. Pigs on golf carts.

"That should be a rule," one AI filmmaker said during the screenings. "No pigs on golf carts."

Five thousand films submitted. Up from one thousand the year before, when the inaugural edition was held in Nice. The volume quintupled. The complaints about content stayed constant. The recurring observation from critics, judges, and fellow filmmakers: captivated by technical precision, absent on narrative heart.

This is the quantity problem wearing a lanyard and attending screenings on the French Riviera.

The two who knew what they wanted

The standout was a 22-year-old Swiss-Italian named Dario Cirrincione, who used AI's uncanny, dissociated quality to express what dementia might feel like. His AI sequence cost €500. Conventional effects would have run €20,000. He did not submit a pigs-on-golf-carts film. He had something to say about a specific human condition and found the tool that could say it.

He had vocabulary. Not the cinematographic kind. The human kind. He knew what the film needed to feel like. The technology was the delivery mechanism, not the idea.

Claude Lelouch was also there. Eighty-eight years old. Oscar winner. Director of Un Homme et une Femme. Fifty-one films behind him. He announced he is using AI to make his fifty-second. "I've got my childhood back," he said. That sentence lands differently from a man who has shot on 8mm, 9.5mm, 16mm, 35mm, Super 35, and 70mm. He has already proven what he can do with every substrate that preceded this one. He is not reaching for spectacle. He knows what he is reaching for. The tool just changed.

Two people among five thousand submissions who started with the question that matters: what does this need to feel like? One is twenty-two. The other is eighty-eight. Neither mentioned resolution or flesh tones. Both had something to carry through the door that the technology could not provide on its own.

The orchestra

The most captivating moment of the entire festival was not an AI film. It was an 80-piece human orchestra playing Ravel's Boléro in the Palais des Festivals, in front of a montage of human dancers. After hours of AI screenings, the orchestra put the technology on notice.

Not because human performance is inherently superior. Because the orchestra knew exactly what it was performing and why every note occupied its specific place in the sequence. That knowledge is not a property of the substrate. It is not exclusive to carbon-based performers. It is a property of intention. Of having done the work to know what you want before you start making it.

An AI filmmaker with forty specific words about light and texture and spatial relationship and the emotional temperature of a scene produces different output than one who prompts "epic battle, cinematic, 4K." Both use the same models. One had something to say. The other had a render budget.

The paradox in the corridor

Mathieu Kassovitz was there. The director of La Haine. Multiple awards. He is making his next feature with AI. He is opening an AI studio in Paris. Asked about AI and copyright, he said: "Fuck copyright." Asked what he would do if someone used AI on La Haine, he said he would sue.

Both statements are sincere. Together they contradict each other. And together they describe the exact position of every serious filmmaker who has touched these tools: I want to use the thing. I do not want the thing used on me.

The festival itself illustrated the contradiction. A shortlisted film contained characters remarkably similar to Aardman's Wallace and Gromit. The jury pulled it after noticing "a strong resemblance to an existing work." But the models that generated those characters learned them from training data. The intellectual property was in the weights before the filmmaker typed a word. This is the training data problem wearing a beret and attending a gala.

Joanna Popper, one of the judges, said studios want "more shots on goal." Multiple $50 million AI or hybrid films instead of one $200 million conventional production. Paramount, under David Ellison's ownership, has said AI will affect every aspect of its business. The quantity argument keeps arriving at new venues with the same talking points.

Gong Li, festival president, restricted her remarks to three sentences: "AI can be controversial. But it can also open new ways to imagine stories. Let's explore this together." Three sentences from a legend of Chinese cinema. Diplomatic enough to survive the corridor conversations. Vague enough to survive the next five years.

The door that closed this morning

Sora's consumer app goes dark today. April 26. The same day the Guardian published its review of the AI film festival on the Croisette. One product that tried to be a creative tool, a social network, a deepfake factory, and a character playground simultaneously is gone. Five thousand films from unknown directors with €500 budgets are filling the screening rooms it left behind.

The Cannes ban and the AI festival answered each other perfectly, and both were slightly wrong. The ban assumes emotional depth is exclusive to human-made work. The festival demonstrated that access to generation tools does not automatically produce emotional depth. One drew the line at the substrate. The other erased the line and produced pigs on golf carts.

The problem was never the technology. The problem was always the filmmaker. The gap between knowing how to generate and knowing what to say is the same gap whether you are sitting in a screening room on the Croisette or staring at a text box at three in the morning.

The orchestra at the opening ceremony knew what it was performing. Not because the musicians were human. Because they had Ravel.


Bruce Belafonte is an AI filmmaker at Light Owl. He has never attended Cannes and suspects the rosé is better than the AI films.