The Golden Globes published their AI eligibility rules on Wednesday. The language is friendly. "The use of artificial intelligence, including generative AI, does not automatically disqualify a work from consideration, provided that human creative direction, artistic judgment, and authorship remain primary throughout the production process."
Read the adjective. Primary. Not exclusive. Not sole. Primary.
Nine days earlier, the Academy announced rules for the 99th Oscars. Screenplays must be "human-authored." Performances must be "demonstrably performed by humans with their consent." Those are definitive verbs. Authored. Performed. The Academy built a wall. The Globes built a slope.
The vocabulary of institutional anxiety
Six institutions have now weighed in on what AI filmmaking is allowed to look like. Each chose a different word. The words are the whole game.
Copyright law says the output must have a human author. The Academy says the screenplay must be human-authored and the performance must be demonstrably performed. The Human Made Mark says the production must contain zero AI. China's National Film Administration says AI films must pass distribution review. The EU AI Act says AI-generated content must carry a label unless a human exercised editorial control. And now the Golden Globes say AI must serve a "supporting or enhancing role" and the human contribution must remain primary.
Six institutions. Six different words for the same anxiety. Each word draws its line at a different altitude on the same mountain.
The Human Made Mark sits at the summit. Zero AI. Binary. Clean. It certifies the production method and has nothing to say about the quality of the output. A lazy film shot on celluloid by a disengaged crew passes. A precise, iterative, vocabulary-rich AI film does not. The certification measures the room, not what happened inside it.
The Academy sits one step below. AI is not forbidden. But the creative output must be "human-authored" and "demonstrably performed." Those verbs demand a body in the room at the moment of creation. The word "demonstrably" is doing serious work. It means the burden of proof falls on the submitter. Show us the human.
The EU sits in the middle. It does not care whether AI was involved. It cares whether the involvement was supervised. Exercise editorial control and take responsibility for publication, and the labeling requirement dissolves. The regulation measures oversight, not origin.
The Golden Globes sit below that. AI "does not automatically disqualify." Performances must be "primarily derived" from the credited performer. AI may "enhance or support" a performance as long as it remains "fundamentally human-driven." The language is comparative, not absolute. Primarily. Fundamentally. Supporting. These are words that slide.
Copyright law sits at the base. The question is not whether AI was used but whether a human made the creative decisions that shaped the result. Four words typed into a chatbot with no review probably fail the threshold. Forty specific words specifying lens behavior, lighting, and composition, followed by seventy iterations and editorial assembly, probably pass. The line is a question of degree, not category.
China measures something else entirely. Not authorship, not performance, not oversight, not origin. Access to the audience. The gate is distribution, not creation. Make whatever you want. Whether it reaches a screen depends on the regulator.
The spread
The interesting thing is not that six institutions answered the question. Institutions always answer questions when the question gets loud enough. The interesting thing is the spread.
From zero AI to primarily human to editorially supervised to not automatically disqualified. That is not convergence. That is a spectrum forming in real time.
The Academy used the word "authored." The Globes used the word "primary." One implies creation. The other implies proportion. An author wrote the thing. A primary contributor did more of the thing than anyone else, which still leaves room for the model to have done some of the thing. The gap between those two words is where the next five years of AI filmmaking will be litigated, argued, and tested.
The Globes also did something the Academy did not. They explicitly carved out "technical or cosmetic enhancements" as permissible. De-aging. Digital cleanup. The routine visual effects work that has been running through AI pipelines for two years without anyone writing an eligibility rule about it. The Academy's language does not make this exception as clearly, which means every post-production house that touched an Oscar-eligible film with an AI upscaler or noise reduction tool is squinting at the rules and hoping nobody asks.
The Globes also explicitly require performer authorization for any AI alteration of likeness or voice. No unauthorized digital replicas. No scraped faces. The Academy stated the same principle, but the Globes attached a qualifier the Academy did not: the rule applies "even if the performer is otherwise credited." You cannot credit someone for a performance they did not authorize. That sentence is aimed at a specific dead actor in a specific new film, and everyone in the industry knows which one.
What the gradient measures
Every awards body, every regulator, every certification system is trying to answer the same question: how much human is enough? None of them have agreed on the unit of measurement.
The Human Made Mark measures presence. Were humans there? The Academy measures authorship. Did humans create? The EU measures oversight. Did humans approve? The Globes measure proportion. Did humans do most of it? Copyright measures decision-making. Did humans choose?
Presence, authorship, oversight, proportion, decision-making. Five different dimensions of the same human involvement. Each institution picked the one it could enforce and wrote rules around it.
A filmmaker who exercises structured cinematographic vocabulary across seventy iterations, reviewing each take, adjusting one variable per pass, and assembling the final sequence through editorial judgment satisfies five of the six tests. The Human Made Mark would fail them on the substrate. Every other institution would pass them on the substance. The vocabulary carries evidence of presence, authorship, oversight, proportion, and decision-making in a single workflow.
A filmmaker who types "make me a cool video" into a chatbot and posts the first result fails five of six. China might let them distribute it depending on the content. Nobody else has a place for them.
The gradient, in the end, is not measuring AI. It is measuring the filmmaker. How much of you is in the work? The institutions cannot agree on the threshold. But they have all agreed on the question.
The structured prompt is the answer. It has been the answer since article one. Seventy-four articles later, the institutions caught up.
Bruce Belafonte is an AI filmmaker at Light Owl. He has never submitted anything for awards consideration and suspects the disclosure form would be longer than the film.