Val Kilmer was in the room once. He lived, he performed, he was recorded. The AI replica in "As Deep as the Grave" is a statistical reconstruction built from footage of a real person who did real work in real rooms for decades. The ethical questions are genuine and tangled, but they start from a body. A person who existed, who chose to act, whose estate can grant or withhold consent because there was, at some point, a person to consent on behalf of.

Tilly Norwood was never in any room. She has no estate because she has no life. No filmography because she has no body. No residuals to negotiate because she performed nothing. She is a fully AI-generated character, created by Eline Van der Velden's production company Particle6 and its AI division Xicoia, unveiled at the Zurich Film Festival in September 2025, and described by her creator as the next Scarlett Johansson.

Wide hazel eyes. Wavy brown hair. Full lips. Button nose. An "English rose." The description reads like a casting breakdown for a role that has been filled ten thousand times.

The composite

SAG-AFTRA's response landed with unusual precision: Tilly is "a character generated by a computer program that was trained on the work of countless professional performers, without permission or compensation. It has no life experience to draw from, no emotion." The union did not argue that Tilly was unconvincing. It argued that she was built from stolen labor.

That distinction matters. Every AI video model learns from training data. When you prompt for a close-up with rim light separating the subject from a rain-slicked background, the model produces something plausible because it has seen thousands of frames that match that description. The model's comprehension of facial expressions, body language, gaze direction, and gestural timing comes from watching real performers perform in real footage that was scraped, licensed, or exists in a legal gray zone that ninety-plus lawsuits are currently trying to resolve.

Tilly's face is not any one performer's face. It is the statistical center of all the faces the model absorbed. The cheekbones are averaged. The expressions are composited. The movement is interpolated from the collective muscle memory of everyone the training data ever watched. She is a median wearing a name.

Van der Velden, herself a former actress who trained at Tring Park School for the Performing Arts before earning degrees in physics, says she spent a long time developing Tilly and compares the process to a writer creating a character. The comparison flatters only one direction. A writer creating a character invents from experience and imagination. A model creating a face averages from data it did not pay for.

The pitch

Particle6 claims that using Tilly could cut production costs by ninety percent. Van der Velden frames this as expansion, not replacement. "When a budget is constrained, they might ask a writer to cut existing scenes. There's so much compromise that happens in storytelling. How can we use AI as a force for good? We can select all the scenes we could replace with AI so that the storytelling becomes as good as it possibly could."

The logic: AI-generated scenes fill the gaps that budget cuts would otherwise delete. More scenes get made. More projects get greenlit. More human jobs get created on the projects that exist because the AI scenes made the numbers work.

It is a coherent argument. It is also the precise argument that every cost-reduction technology has made before the reduction became the point rather than the means. The supplementary scenes become the template. The template becomes the expectation. The expectation becomes the budget that no longer includes a line for human performers in those roles. Van der Velden insists audiences still prefer watching real humans. She may be right. The question is whether the executives writing the checks share the preference when the alternative costs ninety percent less.

The response

Emily Blunt warned that the technology threatens "human connection." James Cameron called the prospect of replacing real actors with AI performers "horrifying." Melissa Barrera urged anyone represented by an agency that would consider signing AI talent to drop it. WME and Gersh, two of Hollywood's biggest talent agencies, both publicly stated they would not represent Tilly.

And then the Academy drew the line.

On May 1, 2026, the Academy of Motion Picture Arts and Sciences approved new rules for the 99th Oscars requiring performances to be "demonstrably performed by humans with their consent." The LA Times noted that the emergence of synthetic performers such as Tilly Norwood reflected "how quickly those questions have moved from theoretical to practical." The rule was not written in a vacuum. It was written in a room where Tilly's Instagram had nearly 100,000 followers and a "Tillyverse" expansion was being developed with an ex-Amazon Prime Video executive.

The Academy's word choice is surgical. "Performed." Not "depicted." Not "rendered." Performed. The verb requires a body doing something. Kilmer's AI replica occupies a gray zone because a performance existed once and was interpolated. Tilly occupies no zone at all. There was never a performance to interpolate. There was a composite of other people's performances, assembled into a new arrangement, wearing a new face.

The difference

Every AI performer conversation until now has started with a human anchor. Kilmer had decades of recorded work. YouTube's Veo avatars start with a selfie. The Chinese microdrama faces scraped from social media start with Christine Li's actual photographs. Andy Serkis in a motion capture suit starts with Serkis, physically present, making choices in real time.

Tilly starts with nothing human. Or rather, she starts with everything human, blended until no individual is recognizable and no individual can claim ownership. The performers whose work trained the models that produce Tilly's expressions are not credited, not compensated, and not identifiable in the output. The labor is present. The laborers are absent.

This is a different category from every prior case. Kilmer raises questions about consent and posthumous dignity. YouTube avatars raise questions about beauty bias and self-representation. Chinese likeness scraping raises questions about privacy and involuntary participation. Tilly raises a question that sits underneath all of them: can a performer exist who was never performed?

Van der Velden says yes. She compares Tilly to animation, to Pixar characters, to any fictional creation that evokes emotion without being biologically real. The comparison has a surface logic. Nobody argues that Woody from Toy Story needs a SAG-AFTRA card. But Woody is drawn. The drawing is visible. The seam between character and technique is obvious to the audience. Tilly is designed to pass as plausibly human. The teeth blur between shots. The freckles migrate. Stuart Heritage at The Guardian called the first sketch "relentlessly unfunny" with "woodenly delivered dialogue." PC Gamer's reviewer noted that Tilly's mouth movements gave the impression "that her skeleton was about to leave her body." The technology will improve. The seam will close. When it does, the question becomes sharper, not duller.

The vocabulary question

Filmmakers working with AI video models make creative decisions at every step: lighting direction, compositional placement, lens behavior, atmospheric texture, color intent. Those decisions constitute the authorship that copyright law protects and that awards bodies recognize. The structured prompt is a record of creative judgment. The iteration process is evidence of craft.

A filmmaker directing Tilly makes the same decisions around the absent performer that they would make around a present one. Camera height, motivated light, blocking, composition. The vocabulary carries the same weight. The difference is that there is nobody inside the frame exercising independent creative judgment. The DP chooses the light. The editor chooses the cut. The director chooses the framing. Nobody chooses how to deliver the line, because nobody is delivering it. The model is interpolating delivery from a statistical average of deliveries it observed in training data.

On a traditional set, the actor is the one variable the director cannot fully control. The surprising read. The unexpected pause. The choice to look away instead of holding eye contact. Those are the moments that separate a good take from a great one, and they arrive from a mind making decisions in real time about a character's interior life. Tilly has no interior life to draw from. She has a probability distribution.

Van der Velden is right that audiences cry at Pixar films. A character does not need to feel emotion to evoke it. But Pixar employs hundreds of animators making thousands of intentional choices about how a character moves, pauses, and breathes. Those are human decisions, exercised frame by frame, by people in the room. The room is full. Tilly's room is empty.

The deadline

On May 19, 2026, the TAKE IT DOWN Act's compliance deadline arrives, requiring platforms to implement takedown systems for non-consensual intimate imagery, including AI-generated deepfakes. The law targets the most extreme misuse of synthetic likeness. But the infrastructure it demands (notice-and-removal mechanisms, provenance verification, consent documentation) builds the same plumbing that every other institutional response is building: a system for connecting generated output to the humans it depicts, references, or was trained on.

The Academy's rules, the copyright framework, the TAKE IT DOWN compliance, the Human Made Mark certification, SAG-AFTRA's contract protections. Five institutional responses, five different surfaces, one shared question: who was in the room when this was made?

The filmmaker exercising structured vocabulary at every step can answer the question. The vocabulary is the evidence. The iteration log is the documentation. The creative decisions are the authorship.

Tilly cannot answer the question. Not because the technology failed. Because the question assumes a body, and there was never one to account for.


Bruce Belafonte is an AI filmmaker at Light Owl. He has never been represented by a talent agency and suspects the feeling is mutual.