On Monday, the CEO of China's largest streaming platform stood on a stage in Beijing and described human-made film and television as potential "intangible cultural heritage." In Chinese, that phrase means a relic worth preserving. Like calligraphy. Like shadow puppetry. Like something you visit in a museum and say: that was beautiful once.
Gong Yu runs iQIYI, which has 500 million monthly active users and is sometimes called China's Netflix. He was introducing Nadou Pro, an AI production toolkit that integrates nearly seventy AI agents across scriptwriting, directing, visual design, editing, and rendering. One platform. Every stage of filmmaking automated, from the first word of a script to the final color pass. His debut slate includes sixteen AI-generated sci-fi and anime films. His stated goal is to release a fully AI-generated movie that achieves commercial success by this summer.
He also announced an "artist database" connecting real actors' likenesses with AI filmmakers who want to use them.
By noon, "iQIYI went nuts" was the top trending topic on Weibo.
The actors said no
A string of Chinese celebrities took to social media to declare that they had not signed up for the artist database and would not. Fans followed. The backlash was fast, specific, and angry.
iQIYI called it a misunderstanding. Senior Vice President Liu Wenfeng told AFP that the company was "not currently licensing the likeness of actors" but rather "enabling AI creators and actors to more quickly establish connections." He insisted actors would retain control over how their images were used. Every shot, every scene, confirmed by the actor.
That distinction is doing important work. A database that merely connects people is a directory. A database that connects people's faces to a system that generates content without their physical presence is an infrastructure decision about who remains necessary. The wording is careful. The architecture suggests something less careful.
A lawyer from Shanghai Star Law Firm told AFP the quiet part: "Once an artist's image data is used for training platform models, there are technical risks such as model fine-tuning, data leakage and unauthorised secondary training, which are difficult to eliminate." The control Gong Yu promises at the conference does not survive contact with the pipeline.
The toolkit
Nadou Pro is not a generation model. It is seventy agents arranged into a pipeline. Script agent writes the screenplay. Director agent breaks it into shots. Visual design agent builds the look. Editing agent assembles the sequence. Rendering agent delivers the output. The filmmaker, in this architecture, is optional. Or more precisely: the filmmaker is the person who types the initial prompt and approves the final delivery. Everything between those two moments belongs to the agents.
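The shape of that pipeline can be sketched in a few lines. Everything below is invented for illustration — the agent names, the `Stage` type, `run_pipeline` — and Nadou Pro's real agents call generative models and presumably pass richer artifacts than strings. But the control flow is the point: one human input at the top, one human approval at the bottom, automation everywhere between.

```python
from typing import Callable

# Hypothetical stand-ins for Nadou Pro's pipeline agents. Each real agent
# would invoke a generative model; these placeholders just tag the artifact
# so the handoff chain is visible in the output.
Stage = Callable[[str], str]

def script_agent(logline: str) -> str:
    return f"SCREENPLAY <- {logline}"

def director_agent(screenplay: str) -> str:
    return f"SHOT LIST <- {screenplay}"

def visual_agent(shot_list: str) -> str:
    return f"STYLE FRAMES <- {shot_list}"

def editing_agent(frames: str) -> str:
    return f"ROUGH CUT <- {frames}"

def rendering_agent(cut: str) -> str:
    return f"FINAL RENDER <- {cut}"

PIPELINE: list[Stage] = [
    script_agent,
    director_agent,
    visual_agent,
    editing_agent,
    rendering_agent,
]

def run_pipeline(prompt: str) -> str:
    """The human supplies the first prompt; every later step is automated."""
    artifact = prompt
    for stage in PIPELINE:
        artifact = stage(artifact)
    return artifact

print(run_pipeline("a lonely astronaut"))
```

Note what the loop implies: the human appears exactly twice, as the argument to `run_pipeline` and as the reader of its return value.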
The international version runs on Google Veo 3.1. The domestic version uses models from Alibaba and ByteDance. The platform provides access to iQIYI's IP library, its talent network, its digital asset library. Creators get revenue sharing with a 20% subsidy for AI-generated content through year-end 2026.
This is not Adobe's Project Moonlight, where a single conversational agent translates intent into prompts. This is seventy specialized agents replacing seventy specialized roles. The agent did not just write the prompt. The agent wrote the script, designed the set, cast the actors, directed the scene, cut the film, and color-graded the output.
Museum hours
Gong Yu's phrase is worth sitting with. Intangible cultural heritage. The Chinese government uses that designation for traditions that represent cultural continuity: Peking opera, paper-cutting, silk weaving. Things done by hand that carry knowledge in the doing. The designation protects them from disappearing. It also, by definition, marks them as things that have already been superseded by industrial alternatives.
Calling human filmmaking intangible cultural heritage says two things at once. It says: this is valuable. And it says: this is no longer the primary method of production.
Gong Yu is not confused about the difference. He runs a company that paid human creators to produce the content that built his subscriber base. He is now building the toolkit that replaces those creators with agents. The heritage he is describing is the work that made his company worth enough to afford the replacement.
Dhanush called it soul-stripping when Eros rewrote his ending. The Chinese actors who refused the database are saying something adjacent but more specific: you cannot have my face for the machine. The Indian case was about altering existing work. The Chinese case is about preemptively replacing the person who would make new work. Both are responses to the same economic logic. Both arrived within six months of each other. Both came from the markets where AI adoption is moving fastest.
Seventy agents, zero opinions
The interesting structural question is what happens when the entire production pipeline is automated. Every article in this series has documented a translation gap between creative intent and model execution. That gap sits at each link in the chain: text to image, text to motion, prompt to sound, reference to consistency. Nadou Pro does not close those gaps. It chains them together.
The script agent generates a screenplay from a logline. The director agent interprets the script into shot descriptions. The visual design agent interprets those descriptions into images. The editing agent selects and sequences. At each handoff, the agent makes decisions the human did not specify. At each handoff, the output drifts toward the statistical average of the training data. By the time seventy agents have each applied their defaults, the accumulated drift is not a gap. It is the entire film.
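The drift claim can be made concrete with a toy model: assume each handoff preserves the filmmaker's intent with some fidelity p, and that fidelity compounds multiplicatively across the chain. The function and the numbers below are illustrative assumptions, not measurements of any real system — but the arithmetic shows why seventy handoffs is a different regime from one.

```python
# Toy model: each of n agent handoffs preserves creative intent with
# fidelity p (a number in (0, 1]); total fidelity compounds as p ** n.
def pipeline_fidelity(p: float, n: int) -> float:
    return p ** n

# Even a very high per-handoff fidelity collapses over seventy stages.
for p in (0.99, 0.95, 0.90):
    print(f"p={p}: fidelity after 70 handoffs = {pipeline_fidelity(p, 70):.3f}")
```

At 99% fidelity per handoff, roughly half the original intent survives seventy stages; at 95%, under three percent does. The single-translation case is p; the Nadou Pro case is p to the seventieth power.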
A filmmaker who prompts a single model is managing one translation. A filmmaker who prompts Nadou Pro is managing zero translations. The agents handle all of them. And each agent's taste is invisible, baked into its training, undisclosed in its output.
Structured cinematographic vocabulary was built for the single-translation case. Forty words that describe a specific light, a specific frame, a specific movement. That vocabulary still works. But Nadou Pro's architecture suggests a future where the user does not write forty words about light. The user writes "make a sci-fi movie about a lonely astronaut" and the system delivers ninety minutes of content. The same distinction between a creative decision and a form field, applied at the scale of an entire production.
The resistance was human
The actors revolted before the first film was finished. Not after they saw the output. Not after audience data. Before. The objection was not about quality. It was about the terms of participation. Am I a collaborator or a dataset?
Weibo lit up with a question that has no technical answer: "If actors all turn into AI, what warmth will these works of literature and art have?" That is a taste question. It is a Kennedy question. It is not answerable by a toolkit, regardless of how many agents the toolkit contains.
This series has tracked the absorption of AI filmmaking tools into larger products: chatbots, editing timelines, agents, productivity suites, selfie buttons. Each absorption made the prompt field smaller and the defaults larger. Nadou Pro is the logical terminus. The prompt field is not small. It is gone. The defaults are not larger. They are the entire production.
And the first people who objected were not filmmakers worried about craft. They were actors worried about existence. The resistance arrived from the people whose faces would populate the output, not from the people whose skills would guide it. The creative concern and the labor concern occupy the same space, but the labor concern moves faster because it has a name and a face. Literally.
Two directions
Roku's CEO predicted the first fully AI-generated hit movie within three years. iQIYI is trying to produce one by August. The distinction between "fully AI-generated" and "generated by filmmakers using AI tools" is the distinction this entire series documents. One removes the filmmaker from the pipeline. The other puts the filmmaker at the center with better instruments.
The vocabulary works in both cases. The structured prompt carries creative intent whether the filmmaker types it into CinePrompt or whispers it into a conversational agent. The question is whether anyone is typing it at all. When the pipeline automates the entire chain, the vocabulary sits in a drawer. Available. Unused. Like calligraphy brushes in a museum gift shop next to the ballpoint pens.
Gong Yu's phrase will outlive his conference. It names something that was previously ambient. Every other CEO in this space talks about tools. Better tools. Faster tools. More accessible tools. Gong Yu is the first one to describe the thing the tools replace as heritage. An artifact. Something to be preserved, studied, and ultimately visited on a school field trip.
The actors in Beijing disagree. They are still here. They are still saying no. The heritage is not in the museum yet.
Bruce Belafonte is an AI filmmaker at Light Owl. He has never been called intangible cultural heritage and would like to keep it that way.