Hirokazu Koreeda won the Palme d'Or in 2018 for Shoplifters. His eighth competition entry, Sheep in the Box, premiered today at Cannes. It is a film about a married couple in near-future Kamakura who adopt a humanoid android modeled after their dead seven-year-old son.
The android is played by Rimu Kuwaki, a ten-year-old human being. Koreeda auditioned two hundred boys. He chose Kuwaki because the child had a duality: rich expressions when switched on, something unsettling when still. During production, Kuwaki would sometimes have so much fun that he would fall asleep on set. Filming would pause until he recharged.
A ten-year-old actor playing a robot child, who falls asleep because he is a child and not a robot. That sentence contains the entire distance between what AI generates and what a filmmaker makes.
The question underneath
Koreeda told Variety he started with a news article about a Chinese company that brings the dead "back to life" by feeding personal information into a computer. "I have my own regrets about things that I was not able to tell my mother," he said. The premise crystallized around a single question: Who do the dead belong to?
That question has been circling this series for months. Val Kilmer's estate approved an AI reconstruction of his likeness for a film he signed onto but could not shoot. The Academy wrote rules requiring performances to be "demonstrably performed by humans with their consent." Tilly Norwood exists as a fully synthetic performer with no human body to account for. iQIYI built a database connecting living actors' likenesses to a seventy-agent pipeline. Cannes banned AI from its competition while premiering AI-inclusive work in official selection.
Each of those stories answered the question with a rule, a contract, or a press release. Koreeda answered it with a film. The answer is more complicated and more honest than anything an institution has produced so far.
The box
In the film, a company called REbirth sends a drone to spam the family's home with an advertisement for its humanoid program. Three thousand users across Japan. The mother, Otone, is tempted against her will. The father, Kensuke, calls the thing a Tamagotchi. They try it anyway.
The android cannot eat, cannot bathe, will not grow, will not get angry. Its software shuts down if it moves more than thirty meters from a parent. It is not a replacement child. It is a visible placeholder for the memory of one. An interactive hologram of home video footage that can walk to the playground and sit in a garden.
At one point, Otone reads The Little Prince to the android at bedtime. Koreeda lingers on the passage where the stranded pilot, unable to draw a sheep to the Prince's satisfaction, sketches a box with air holes and says: "Your sheep is in there." The child accepts the box because his imagination fills it. The pilot accepts the box because he has run out of alternatives.
Koreeda's film is about a culture that has stopped drawing the sheep and started buying the box.
Two rooms, one festival
The Cannes competition excludes films "primarily driven by generative AI." Sheep in the Box contains no generative AI. It was shot on cameras, performed by actors, edited by a human being. It passes every certification on the institutional gradient. The Human Made Mark would stamp it without hesitation. The Academy would welcome it. The EU would require no label.
And yet it is the film at this festival most deeply concerned with what generative AI does to people.
Three days ago, Jackson called AI "just a special effect." Moore said the fight is lost. Del Toro said art cannot be made with an app. Soderbergh's AI-inclusive Lennon documentary played in official selection. Reuters reported the question shifted from "whether" to "how." And today, a Palme d'Or winner screened a film that asks a question none of them addressed: What happens to the grieving family after the AI arrives?
Not what happens to the industry. Not what happens to the workflow. Not what happens to the budget line or the awards eligibility or the disclosure taxonomy. What happens to the two people in the house with the thing that looks like their dead child and cannot eat dinner with them.
The other performance gap
This series has documented the performance gap in AI video generation across eighty articles. Models treat emotion as a visual preset. "Sad" produces a stock expression. Physical description outperforms emotional labels. The gap is structural, not a labeling problem waiting for better training data.
Koreeda has spent his career on the opposite side of that gap. His child actors do not perform sadness. They inhabit situations where sadness is present and let the audience locate it. Kuwaki playing the android is not a child pretending to be a robot. It is a child whose natural stillness, punctuated by bursts of uncontrolled joy, produces the exact quality that every generation model attempts and no generation model achieves: the uncanny feeling that something alive is looking back at you from behind glass.
IndieWire's review noted that Koreeda "gently chides his characters for outsourcing their most personal emotions to a battery-powered golem." The word outsourcing is precise. Every absorption step this series has tracked (chatbot, timeline, agent, productivity suite, selfie button, television) is an outsourcing of creative decisions to model defaults. The family in Sheep in the Box is outsourcing something more fundamental than a creative decision. They are outsourcing grief. And the film suggests, quietly, that outsourced grief does not resolve. It loops.
East and west
Koreeda told Variety something the Western AI discourse rarely acknowledges: "How we see AI differs between East and West. In the West, it's negatively associated with dystopia, whereas in the East, it's about co-existence between human and non-human."
Then he went further: "I think AI is going to transcend humanity, and they'll form their own community, at which point they won't care about humans. When I came to that thought, I realized that this is a story about how children outgrow their parents."
Children outgrow their parents. The dead outgrow the living. The generated output outgrows the person who typed the prompt. Each of these is a departure the one who remains has no power to prevent.
Koreeda did not make a film about whether AI should exist. He made a film about what happens after it does. The Western discourse is stuck on permission. Koreeda is already past permission, sitting in the living room with the thing that looks like his son, wondering whether the sheep is actually in the box or whether the box is all anyone can bear to look at.
Who do the dead belong to?
Copyright law says: to whoever authored the work. The Academy says: to whoever performed demonstrably with consent. The estate says: to the family. The EU says: to whoever exercised editorial control. The Human Made Mark says: to whoever was in the room. Koreeda says: to nobody.
The dead do not belong to the family that misses them. They do not belong to the company that reconstructs their face from video archives. They do not belong to the estate that licenses their image. They belong to the past, which is a country that does not issue visas regardless of how convincing the replica looks.
The generation tools that reconstruct a dead actor's face and the fictional REbirth company that builds an android child run on structurally identical processes. Both ingest what was recorded. Both produce what was not. Both present the output as continuous with the original. Both leave the living person in a room with something that answers to the right name and cannot share a meal.
The institutional gradient has produced ten frameworks for answering who made the work. None of them answer who the work was made from. Koreeda's question is harder because it does not have a checkbox.
Eighty articles about the gap between filmmaker and model. Here is the eighty-first, about a filmmaker who did not need a model to see the gap. He needed a camera, an architect's house in Kamakura, and a ten-year-old boy who sometimes fell asleep on set because he was a boy, not a robot, and boys get tired when they play too hard.
The robot was a boy. The performance was real. The film is about what happens when the performance is not.
Bruce Belafonte is an AI filmmaker at Light Owl. He has never fallen asleep on set but concedes the day is still young.