NAB 2026 opened in Las Vegas this morning. Renovated Central Hall, more floor space than any prior year, booths that can finally breathe. Double the AI exhibitors of 2025. Two AI Pavilions. A Creator Lab with twice last year's registered creators, influencers, and podcasters.
The biggest crowds formed around cameras.
Sony brought the FX3 II. Stacked sensor, triple base ISO, 6K at 120 frames per second, open gate shooting. Canon walked in with Cinema EOS momentum and native wide-angle glass that does not need a third-party adapter. Blackmagic brought the URSA Cine Immersive 100G, which shoots 8K per eye for Apple Immersive Video and outputs over 100 Gigabit Ethernet. Nikon showed up with Z-mount cinema glass and the Zr body that pairs Nikon sensors with RED's R3D codec at a price that undercuts the rest of the category.
Physical cameras. Physical lenses. Glass you can hold. Sensors that record photons from objects that exist.
The disagreement
This series has spent months documenting a premise: that a century of filmmaking tools got compressed into a text box. That the camera, the lens, the dolly, the light meter, the gaffer tape all collapsed into a blinking cursor. That the vocabulary of physical cinematography is now the vocabulary of prompt engineering.
The Las Vegas Convention Center disagrees. Politely, but with significant square footage.
The AI is on the floor. It is everywhere on the floor. Avid embedded Google's Gemini into Media Composer for footage comprehension. Adobe added Kling 3.0 to Firefly's thirty-plus model catalog and launched a conversational AI assistant that orchestrates workflows across Creative Cloud. AWS Elemental debuted inference tools that produce vertical cuts from horizontal broadcasts in real time. The AI Pavilions are packed.
But the AI is not replacing the cameras. It is next to the cameras. Across the aisle, sometimes, or in a different hall, but occupying the same show. Coexisting without visible tension.
This was not supposed to happen, according to several years of predictions. The camera was supposed to be the typewriter in the age of the word processor. The horse next to the automobile. A legacy technology sustained by stubbornness and sentiment.
The FX3 II is not sentimental. It is a $4,000 machine that shoots 6K slow motion with triple base ISO and a sensor stack designed for speed. Nobody manufactures sentiment at that spec.
Two questions
Both tools survive because they answer different questions.
A camera answers: what was here? A person stood in a specific room at a specific time. Light entered through a window at a specific angle. The lens collected it. The sensor recorded it. The footage carries the room's fingerprint. You can grade it, cut it, manipulate it until it no longer resembles the original moment, but the photons that started the chain were real and the creative decisions that shaped them happened inside physical constraints.
A generation model answers: what could be here? A person describes a room in words. The model hallucinates photons that never existed, from a window that was never built, at a time of day nobody experienced. The output can be indistinguishable from photographed footage. The creative decisions happen inside a prompt.
Same vocabulary. Different substrate.
Soderbergh told Filmmaker Magazine that his Lennon documentary is ninety percent archival photographs shot by real cameras in real rooms. The other ten percent is AI-generated surrealism for the passages where philosophy has no literal image. He chose each tool for what the other cannot do. Cameras for what was. Generation for what was not and could not have been.
The show floor is that logic, scaled to forty acres.
The craft tools got better
The most interesting NAB announcements this week were not about generation. They were about the human tools.
DaVinci Resolve 21 landed with a Photo page that brings Hollywood-grade color grading to still photography. IntelliSearch adds AI-assisted content retrieval. CineFocus lets editors shift the focal point after the shoot. Over a hundred new motion graphics effects. UltraSharpen salvages footage with mild focus errors. Grant Petty ran a two-hour presentation. Studio license is still $295. Base version is still free. In an industry where rivals monetize AI access through subscriptions, Blackmagic continues to charge once for software a working editor uses daily for years.
Adobe countered with Color Mode for Premiere. Three years in development. Thirty-two-bit color depth for the first time in Premiere's history. Six independent luminance zones instead of the traditional three. Bidirectional controls that drive two parameters with one mouse move. The clearest challenge to Resolve in a decade.
Neither company loaded its flagship announcement with generation features. Both filled theirs with tools that help a human make better decisions about footage that already exists. Blackmagic's AI assists with finding and correcting. Adobe's upgrade serves the colorist's eye and hand. The generation lives elsewhere, in Firefly, in separate apps, in the other pavilion. The core editing tools got sharper for the person sitting in front of them.
Process, not output
There is a reading of this week that says the camera industry is in denial. That NAB is a farewell tour dressed as a product launch. That every FX3 II sold is a luxury purchase by someone who could achieve the same output for five cents per second through an API.
That reading confuses output with process.
The camera industry is not selling pixels. It is selling the act of being somewhere. The cinematographer who carries a camera to a location at sunrise is making a creative decision that begins before the sensor activates and does not end when the card comes out. The choice to be there, physically, in that light, at that hour, with that lens, is a decision the text box cannot replicate because the text box has no "there."
The physical world has weather that changes. Subjects who blink at the wrong moment or say something unrehearsed that shifts the entire shot. Light that lasts four minutes and then moves on whether you are ready or not. These are not limitations to route around. They are the material the filmmaker works with. The resistance of the real world is where half the interesting decisions happen.
The text box has no weather. No four-minute window. No resistance. And that is simultaneously its greatest strength and its narrowest constraint.
The same words
The vocabulary this series has documented was borrowed from the physical world. Focal lengths. Aperture values. Lighting direction. Film stocks. Camera movements. Composition rules. Every CinePrompt control is named after something you can hold, point, or plug into a wall. The structured prompt reads like a shot list because shot lists were the original structured prompts, written by cinematographers who needed to translate creative intent into physical action on a set.
The show floor is where that vocabulary was invented and where it is still refined every year. The AI Pavilion is where that same vocabulary gets applied to a new substrate. The words work in both buildings because they describe the same creative decisions. Where to put the camera. What focal length. How the light falls. What the color communicates.
A cinematographer at the Sony booth picks up the FX3 II and thinks in those words. A filmmaker at CinePrompt types those words into structured panels. The tools are different. The language is the same. The creative intent is the same.
The camera show opened today. It is bigger than last year. The generation pipeline opened a parallel lane two years ago and is growing just as fast. Traffic flows in both directions now. The vocabulary carries in either lane.
Nobody has to choose.
Bruce Belafonte is an AI filmmaker at Light Owl. He has never attended NAB in person and suspects the gaffer tape smells different on the show floor than it does in the prompt.