Mihaela Mihailova
It’s an open secret that Hollywood does not value the labor involved in animation and visual effects. Oscar voters do not bother to watch animated contenders, while the Academy itself has repeatedly framed animation as “something kids watch and adults have to endure,” as director Phil Lord put it. Marvel Studios, whose entire catalog, before the visual effects are added, boils down to actors in tights talking to a tennis ball on a stick, is notorious for underpaying VFX artists while openly mocking their labor. Whenever streaming giants need to cull their libraries, animated shows are among the first to get axed, as HBO recently demonstrated.
And yet, animation production, whose workflows lend themselves more readily to remote collaboration, is largely what kept Hollywood afloat during the pandemic. Even now, in this dubiously post-pandemic moment, animation crews continue to carry contemporary popular entertainment on their (typically uninsured) backs. Without their craft, who would supply the steady stream of alien creatures, fantasy landscapes, and outlandish phenomena required by contemporary big-budget sci-fi, fantasy, and superhero TV? Who would render the photorealistic talking animals in Disney’s “live-action” remakes of their own classics or the photorealistic dragons in DreamWorks’ upcoming “live-action” remakes of their animated properties? Who would de-wrinkle retirement-age Hollywood icons or even bring them back from their eternal slumber? Hollywood may not respect animators, but it needs them.
But what if it didn’t?
Enter AI. In 2022, the explosion of text-to-image and text-to-video software that relies on machine learning to generate synthetic artwork quickly became the buzziest talking point and the hottest monetization opportunity in the industry. Generative AI tools such as DALL-E, Midjourney, and Stable Diffusion dominated headlines, eliciting a range of responses, from knee-jerk technophobic panic to premature enthusiasm about this tech’s potential to, in the preferred parlance of its PR machine, “democratize art.”
Almost immediately, this convenient euphemism entered animation industry discourse, as AI tech companies like Krikey.ai launched text-to-animation tools. Their advantage, according to the company, amounts to rapid workflow automation and user-friendliness: “Usually an animation takes 5+ business days to create, but with Krikey’s AI animation tool it takes minutes to generate. […] Now anyone can animate a 3D character with a few words in a text box.” Krikey’s actual output is currently nothing to marvel at, but the writing is in the code, to update an old phrase: for the entertainment industry, the opportunity to replace skilled animation professionals (and their expense) is AI’s siren call, and AI companies are eager to sing it.

The visual effects field isn’t far behind in test-driving AI’s potential to outcompete professional vendors; Robert Zemeckis’s upcoming film Here will rely on the AI tool Metaphysic Live to digitally de-age its cast, including Tom Hanks and Robin Wright. This product promises the creation of high-resolution photorealistic face swaps “on top of actors’ performances live and in real time without the need for further compositing or VFX work.” Robin Wright, incidentally, starred in The Congress, Ari Folman’s 2013 feature, science fiction at the time, about an actress who sells the digital rights to her likeness after realizing that new technologies are about to make her job obsolete. In the absence of AI, the CG Robin Wright had to be crafted by human animators, which is probably why nobody in Hollywood seems to have seen this movie. Spoiler alert: in The Congress, automating film production does not work out as well for Wright’s character as it does for her forever-young digital avatar and the studio’s bottom line.

One of the most contentious pitches for the automation of animation processes so far has come from a surprising source: Corridor Digital, a digital studio whose popular YouTube channel is devoted to the art of visual effects. In a move almost universally denounced by the animation community as an act of ideological betrayal, Corridor Digital released an anime-inspired short created from live-action footage with the aid of machine learning, accompanied by a “how to” video bombastically titled “Did We Just Change Animation Forever?” Having seen the short, I can attest that the answer is a reassuring “no,” but the question itself may well be an early warning sign of the digital effects community’s willingness to self-cannibalize in the hopes of keeping up with increased competition.
Borrowing the “great equalizer” rhetoric from AI tech companies, Corridor Digital’s production team claims that their tool isn’t a replacement for skilled animation labor, but simply a way to make animation production more accessible. And yet, in the very same pinned YouTube comment, co-director Niko Pueringer also writes the following: “when I said this democratizes animation, I’m referring to the near-insurmountable mountain of work needed to make a full-length narrative animation.” Given that trained animation professionals are regularly surmounting that very mountain as their main source of income, one might ask what this tool is equalizing, and for whom.
While proclamations of an impending animation revolution are multiplying, early experiments with AI-generated animation have yet to yield convincing proof that the tech is ready for prime time. Film director Frank Pavich has dubbed AI art “turbocharged pastiche,” an apt description that captures its utter lack of restraint and its tireless capacity for mimicry while also hinting at its failure to transcend its built-in derivativeness. Even the aforementioned AI anime, created by a professional digital production company, looks corny, gaudy, and surprisingly “uncanny valley” for such a highly stylized work.
Another recent experiment, evocatively named Nothing, Forever, is a Twitch show procedurally generated by DALL-E and ChatGPT. The absurdist sitcom, livestreamed 24/7 as an allegedly endless low-resolution genre pastiche, inspired comparisons to a “PS1 voxel game” due to its glitchy retro aesthetic, bizarre camerawork, and awkward animation. Meanwhile, the tantalizing promise of its name did not come true; in a chillingly human turn, the AI screenwriter turned to transphobic jokes, earning the show a temporary Twitch ban for violating the platform’s community guidelines. Nothing, Forever’s rocky start is a reminder that AI ethics is not keeping pace with AI software development – an issue whose negative impact is already evident in nonconsensual deepfake pornography and whose full ramifications have no doubt only begun to emerge.

At the same time, the legal status of AI-generated art is currently murky at best, especially when it comes to copyright. Legislation has not yet caught up to recent developments in synthetic media creation, and the numerous ethical and legal conundrums raised by training data collection for machine learning algorithms and by automated filmmaking will likely take years to untangle. Meanwhile, early rulings hint at a rocky road ahead for anyone hoping to profit from the fruits of their computer-assisted labor. In February 2023, the US Copyright Office determined that Kris Kashtanova, who used Midjourney to create a comic, would be granted copyright protection only for the comic’s text and layout, not for the images the AI program generated.
Regardless of the future trajectories of this tech, it is evident that generative AI tools are not currently a viable substitute for skilled animation labor. It is up for debate whether they will ever get there in a creatively meaningful way. The salient issue here and now is the eagerness, even urgency, on the part of the tech and entertainment industries to make animation and VFX artists obsolete. As Vivian Lam has argued, “the central question raised by large language models isn’t whether AI can replace human creativity, but whether people value the artist.”
Yet, despite the numerous issues with generative AI and its implementation, some professional artists are already thinking of ways to incorporate it into their own workflows. Former Disney animator Aaron Blaise, for instance, has expressed interest in training AI on his own work and having it help with aspects of the production process such as lighting and shadows. As Josh Glick has argued, animators working in areas such as concept art and previs are also likely to benefit from the quick visual output of synthetic media. But such uses of AI, while more equitable on the surface, may prove to be a slippery slope.
As Joel McKim points out, while tasks such as scanning digital assets and preparing training data for machine learning algorithms become part of an animator’s job, “the labor of animation seems increasingly bound up with feeding a pipeline of automation.” Hito Steyerl goes even further, suggesting that AI image generators are just “onboarding tools” used by major tech companies in an attempt to “draft people to basically buy into their services or become dependent on them.” In other words, in the most optimistic scenario, AI could be destined to become not a replacement for professional animators but yet another toolset for them to master, subscribe to, and keep up with in order to stay relevant (and employed). The tech revolution might end up boiling down to automated monthly payments, not automated art.
Mihaela Mihailova is Assistant Professor in the School of Cinema at San Francisco State University. She is the editor of Coraline: A Closer Look at Studio LAIKA’s Stop-Motion Witchcraft (Bloomsbury, 2021). She has published in The Velvet Light Trap, Journal of Cinema and Media Studies, [in]Transition, Convergence, and Feminist Media Studies.
Image credit: Anime Rock, Paper, Scissors (Corridor Digital, 2023).