Kim Il-dong, a webtoon artist making his directing debut, assembled a 64-minute feature called "I'm Popo" largely by himself over roughly two months. No on-screen performers. No crew. Script, prompts, editing, all one person. Professional voice actors dubbed the dialogue, and the visuals were generated entirely by AI. It is billed as the first fully AI-built South Korean feature film. It screens for press this week ahead of a May 21 theatrical release.
The Korea Herald's review describes what arrived: photorealistic faces with a plastic sheen, features that shift from shot to shot, rapid-fire cuts papering over scenes that will not hold together. A short film's worth of story stretched across 64 minutes, with long passages that play less like narrative cinema than a karaoke screen running behind a song.
Kim acknowledged the seams. "AI-generated images do come across as awkward, frankly," he told reporters. He recorded the voice performances first, let the sound carry the storytelling, and then attached the visuals. To keep characters from morphing into one another, he based their faces on people he knew personally.
Then he said the thing that stopped me.
"My biggest regret is that we should have released it the moment it was finished last year. With AI, it's better to learn it late."
The expiration date
The film was completed roughly a year ago. In that year, the tools Kim used were lapped by their own successors. Five-second generation ceilings became ten, then twenty. The morphing-face problem was solved, then partially unsolved, then solved again by a different architecture. The plastic sheen receded. Resolution climbed. Physics started behaving. What looked like a technological achievement in mid-2025 looks like a technological artifact in late April 2026.
Kim knows this. He is not confused about the quality. He is frustrated by the calendar.
That frustration contains a principle this series has not addressed directly: generated footage has a shelf life that filmed footage does not.
A film shot on 35mm in 1975 looks like a film shot on 35mm in 1975. That is not a flaw. It is an aesthetic. The grain, the color rendition, the lens characteristics, the depth of field behavior. Those are properties of a physical medium that do not degrade with the passage of time or the arrival of newer cameras. Nobody watches "Chinatown" and thinks "this would be better in 4K HDR." The medium carries its own authority.
A film generated by AI in 2025 looks like a film generated by AI in 2025. That is not an aesthetic. That is a timestamp. The artifacts are not charming imperfections of a physical process. They are limitations of a specific model version, a specific architecture, a specific moment on a rapidly ascending curve. And the audience knows it, because the audience has been watching that curve climb in real time on their feeds.
Film grain says "this was captured." A plastic sheen says "this was generated before they fixed that."
The one-person claim
Kim told the press conference that the argument he most wanted to make with this project is that the era of the one-person film has arrived. One person wrote the script, ran the prompts, assembled the edit. The entire apparatus of a production compressed into a single desk.
The review suggests the opposite. A 64-minute feature that feels like a short film stretched thin, with spectacle filling the gaps where storytelling would normally live. The camera breaks away from its central premise to sweep across exotic landscapes and visual effects sequences before remembering it had a plot.
One person can now generate the pixels. That was never the hard part. The hard part was always the hundred other decisions that a crew used to distribute across specialists. A cinematographer deciding where the camera goes. An editor deciding when to cut. A production designer deciding what the room looks like. A director deciding what the actor should feel. Each decision benefits from a different kind of judgment, accumulated through different kinds of experience.
One person carrying all of those decisions is not liberation. It is a hundred responsibilities landing on someone who is qualified for some of them. The pixels arrived. The judgment did not scale.
"Man in Hanbok," a period drama adapted from a bestselling novel, opens in Korean theaters the same day as "I'm Popo." It renders medieval Korean and Renaissance Italian settings entirely through AI. It has played the Busan International AI Film Festival and the World AI Film Festival. Two fully AI-generated features hitting Korean theaters on May 21. The comparison will be immediate and brutal.
What ages and what does not
Every frame Kim generated is a record of what a specific model could produce at a specific moment. When the next model version ships, those frames become evidence of the previous version's limitations. This is not hypothetical. It is Kim's own stated experience. He watched his own footage age while it sat on a shelf.
Traditional cinematography does not work this way. Robert Richardson's work on "JFK" looks like Robert Richardson's work on "JFK." It does not look like an older, worse version of something a newer camera could do better. The decisions are baked into the photons. The medium does not apologize for itself.
Generated footage apologizes constantly. Every viewer who has seen a newer model's output carries that comparison into the theater. The six-finger era lasted about eighteen months. The uncanny-valley era is fading now. The plastic-sheen era that defines "I'm Popo" is already receding. Each era writes its signature across every frame produced during its window, and each signature reads as "before" the moment the next window opens.
This creates a problem that has no equivalent in traditional filmmaking: the race between production timeline and model improvement. A film that takes two months to assemble and a year to distribute arrives in a world where the tools have lapped the footage twice. The faster the models improve, the faster the output dates itself.
What survives
Kim's story, if it worked, would survive the model version. A courtroom drama about whether AI can substitute for human judgment does not require 2026-quality rendering to land. That question has weight regardless of whether the faces morph. Scripts do not expire. Editing instincts do not expire. The decision to cut away from a character's face at the precise moment doubt crosses it does not depend on which model rendered the face.
The vocabulary this series has documented across sixty-three articles does not expire. A structured prompt that specifies motivated lighting, compositional placement, and atmospheric texture will produce better output on whatever model exists tomorrow, because the language describes creative intent, not technical capability. The intent survives the upgrade cycle. The pixels do not.
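The separation that paragraph describes, creative intent kept apart from any one model's quirks, can be sketched in code. Everything below is hypothetical: the function name, the field names, and the phrasing are mine for illustration, not any tool's actual API. The point is structural: each field records a creative decision (motivated lighting, compositional placement, atmospheric texture), and the rendered string can be resubmitted to whatever model ships next.

```python
# A minimal sketch, assuming a workflow where prompts are stored as
# intent fields rather than as one opaque string. Names are illustrative.

def build_shot_prompt(subject, lighting, composition, atmosphere):
    """Compose a shot description from separate intent fields.

    Each clause describes a creative decision, not a capability of any
    particular model version, so the record survives the upgrade cycle.
    """
    return (
        f"{subject}. "
        f"Lighting: {lighting}. "
        f"Composition: {composition}. "
        f"Atmosphere: {atmosphere}."
    )

shot = build_shot_prompt(
    subject="A lawyer pauses mid-sentence in a dim courtroom",
    lighting="single practical desk lamp, motivated from frame left",
    composition="subject on the right third, negative space at left",
    atmosphere="dust hanging in late-afternoon haze through tall windows",
)
print(shot)
```

Because the intent fields are stored separately, upgrading models means regenerating the pixels, not rewriting the description.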
Kim's other observation, the one the press conference did not linger on, is equally telling: he based character faces on people he knew to prevent morphing. That is a workaround. A structural hack to impose consistency on a system that does not maintain it. It is also, without the vocabulary, exactly the kind of reference-image technique this series has championed since article eleven. The instinct was right. The toolkit was not there yet.
"With AI, it's better to learn it late." That is a sentence about tool knowledge. The buttons relocate quarterly. The interface changes. The model improves. Everything Kim learned about prompting in mid-2025 is partially obsolete.
But the creative knowledge does not expire: knowing that a courtroom drama needs tension, that spectacle sequences cannot substitute for story, that a character's face should stay consistent not because the technology requires it but because the audience needs to trust they are watching the same person. That was true when Kurosawa was shooting and it is true when Kim is prompting.
The footage aged overnight. The questions it asks have not aged at all.
Bruce Belafonte is an AI filmmaker at Light Owl. He has never rushed a release to beat a model update and finds the temptation increasingly understandable.