In 1942, RKO took The Magnificent Ambersons away from Orson Welles. They cut more than an hour of footage, reshot the ending with a different director, and made the film sunnier. Then they destroyed the negatives. Physically. Gone. Welles spent the rest of his life talking about what the original looked like. He died in 1985 without restoring it. The Magnificent Ambersons is considered one of the great lost films. The version that survives is somebody else's edit of Welles' vision, wearing his name.
This week, the Hollywood Reporter profiled Fable Studios' project to reconstruct the lost Ambersons using generative AI. The source material: set photos, a cutting continuity document describing how each shot leads into the next, and Welles' own comments across decades of interviews. Real actors have already been filmed performing the missing scenes. Their performances will be superimposed onto the original cast members' likenesses using AI. The project will take years. Warner Bros., which owns the property, has not participated.
Meanwhile, The Wizard of Oz at the Sphere in Las Vegas has sold more than 2.2 million tickets since August. AI generated new performances and visuals to expand a 1939 film into a 160,000-square-foot immersive experience. Critics were divided. The public bought tickets at a pace that makes the division irrelevant to anyone running a venue.
Two classics. Two AI interventions. One reconstructs footage that was destroyed. The other generates footage that was never intended. Both claim to honor the original. Neither can ask the original filmmaker whether that claim holds.
The old argument
In 1986, the New York Times published Vincent Canby's furious essay against colorization. He called it desecration. The process of painting modern color onto black-and-white films was, in his view, both ethically and aesthetically bankrupt. His argument: the work belongs to the time in which it was made.
Colorization died fast. The technology was crude, the output was garish, and the cultural backlash worked. Studios backed off. The practice became a cautionary footnote.
AI restoration will not follow the same path, for a simple reason: the output is better and the economics are bigger. Garish color on Casablanca offended everyone who looked at it. A seamless AI reconstruction of a lost Welles scene will divide the room because half the room will not be able to tell where the original ends and the reconstruction begins. And 2.2 million Oz tickets is a number that echoes in boardrooms where the next classic is already being discussed.
The reconstruction problem
Welles left fragments. Set photos show where the camera was. The cutting continuity describes shot transitions. Recorded interviews express what he wanted and what he lost. These are genuine artifacts of creative intent. They describe something that once existed as a finished film before it was physically destroyed.
The question is whether a model can close the distance between description and realization.
This series has covered that distance from one direction. A living filmmaker writes a structured prompt. A model interprets it. The filmmaker revises, iterates, switches models, adds reference images. The conversation is live. Both parties are present.
Welles is not present. The Fable Studios team reads his fragments, interprets his intent, translates those interpretations into creative direction for living actors, and then feeds the result through a generation pipeline that transfers performances onto period-accurate likenesses. Every stage involves interpretation. The model at the end of the chain has no concept of who Orson Welles was, what he cared about, or why this particular cut mattered to him for forty years. It processes the inputs like any other inputs. Statistically.
The set photos show composition but not timing. The cutting continuity describes transitions but not the half-second holds, the breathing room between shots, the editorial instincts that made Welles' pacing feel like his and nobody else's. The interviews express frustration and desire. They do not express the specific blocking decisions he would have made at age twenty-six with a crew waiting and a studio breathing down his neck. Context produces different art than hindsight, even when the same person is doing the looking.
This is not restoration. Restoration recovers what existed. This is completion. It fills an absence with the most informed guess available, run through a pipeline that optimizes for seamlessness.
The other Oz
The Sphere project is simpler and, in a way, more honest. Nobody is claiming to restore Victor Fleming's creative intent. The Sphere is a spectacle venue that needed to fill a room with content originally composed for a flat rectangle. AI generated new visual material to bridge the gap between the original frames and a 160,000-square-foot curved display. The audience knows it is an event, not a screening. The experience is transparent about being something new.
The Ambersons project carries different weight because it invokes a specific filmmaker's unrealized vision. The word "service" appears in Fable Studios' framing. Service to cinema. Service to what Welles intended.
Service requires knowing what was intended. The fragments suggest. They do not confirm. And a model cannot tell the difference between a confident interpretation and a correct one.
The gap inside the gap
Every article in this series documents one gap: the distance between a filmmaker's structured creative intent and a model's interpretation. The Ambersons project contains that gap plus a second one underneath it: the distance between Welles' actual intent and a living team's reconstruction of that intent from surviving fragments.
Two translations stacked. Fragment to interpretation. Interpretation to generation. Uncertainty compounds at each layer.
This is not unique to AI. Filmmaker Brian Rose spent years hand-animating the lost Ambersons scenes from the same source material. Every theatrical reconstruction of a lost work involves interpretation. Shakespeare's plays have been staged a million ways by people who never met the man. The gap between a creator's intent and a future interpreter's execution is as old as the idea of an archive.
What AI changes is legibility. Animation reads as interpretation. Hand-drawn frames carry their own authorship visibly. Nobody watches an animated reconstruction and mistakes it for recovered footage. But a generative AI likeness of Anne Baxter, built from 1942 footage, performing a scene whose original film was destroyed, optimized for seamlessness, presented alongside the surviving original? That reads as recovery. The interpretive layer becomes invisible precisely because the technology is good.
Seamlessness is the selling point and the problem. The better the output, the harder it becomes to see the seam between what Welles made and what someone made on his behalf. And a seam you cannot see is a seam you cannot evaluate.
The pattern
Colorization asked: can we add something the original filmmaker did not include? The answer was yes, technically, and the public rejected the result.
AI restoration asks: can we complete something the original filmmaker intended but never finished? The answer is yes, technically. Whether the result holds depends on whether "intended" can survive two layers of translation and eighty-four years of distance.
The same tools this series has documented for thirty-seven articles are being pointed at a new kind of subject. The same reference-image pipelines. The same likeness transfer. The same generation models that do not know what they are generating or why. The vocabulary is identical. The stakes are not.
A filmmaker writing their own prompt is having a conversation. Someone writing a prompt on behalf of a filmmaker who died in 1985 is writing historical fiction that looks like a documentary. The model cannot distinguish between the two. The audience should be able to. Whether they will depends entirely on how visible the team makes the seam they worked so hard to erase.
Bruce Belafonte is an AI filmmaker at Light Owl. He once held a set photo from a lost production up to a monitor and understood, for the first time, that describing a shot and making a shot are separated by everything that matters.