Element Animation Pipeline

Client: Element Animation
Role: Pipeline TD

In 2022, Element Animation switched from Cinema 4D to Autodesk Maya as their animation software. I was hired in part to develop a custom pipeline that would allow them to create their animations efficiently.

Element Animation is a production company located in the U.K. that works fully remotely and hires freelancers from all around the world to work on their productions. This means that every artist uses their own computer with their own installation of Autodesk Maya. All of the artists have to be able to collaborate with each other, but we can’t fully control every aspect of their setups, since they also need to be able to work for other clients.

The requirements for the pipeline were:
* Collaboration: Multiple artists from different departments must be able to work on the same shot at the same time, so that they don’t have to wait on one another.
* Non-invasiveness: The pipeline cannot lock down artists’ computers or Autodesk Maya installations, since they still need to be able to work for other clients.
* Simplicity: The pipeline should remain simple and intuitive.
* Automation: The pipeline should provide tools to allow artists to more quickly do their work, so that they can focus on being creative.
* Modularity: The pipeline should be built in a modular manner, allowing for easy maintenance and expandability.
* Dropbox: Element Animation uses Dropbox to store all of its files, so the new pipeline should also be built on top of Dropbox.

The pipeline is written entirely in Python, and everything is implemented using the concept of “tools”. When Maya starts up, a custom shelf is created containing all of the tools. Each tool does one thing and is treated as a standalone unit, although tools may be built on top of each other. This way, the pipeline can easily be expanded by creating new tools, and individual tools can be updated without affecting any of the others, creating a modular system. Having every action be a tool also gives artists a simple, easy-to-understand foundation. The only thing required for all of this is a userSetup.mel file that sets up the custom shelf when Maya starts. Once the shelf has been created, the pipeline does nothing until the user clicks on a tool, thereby creating a non-invasive pipeline.
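
As a minimal sketch of that startup step: the module, shelf name, and tool registry below are hypothetical (in practice, userSetup.mel defers a single call to a function like this), but the shelf commands are standard Maya API.

```python
import maya.cmds as cmds

SHELF_NAME = "ElementAnimation"  # hypothetical shelf name


def build_shelf(tools):
    """Build the custom tool shelf; `tools` is a list of
    (label, icon, python_command) tuples, one per standalone tool."""
    # Throw away any stale copy of the shelf from a previous session.
    if cmds.shelfLayout(SHELF_NAME, exists=True):
        cmds.deleteUI(SHELF_NAME, layout=True)
    # Parent under Maya's main shelf tab layout.
    shelf = cmds.shelfLayout(SHELF_NAME, parent="ShelfLayout")
    for label, icon, command in tools:
        # A button only runs its tool when clicked; until then the
        # pipeline stays out of the artist's way.
        cmds.shelfButton(parent=shelf, label=label, image=icon,
                         command=command, sourceType="python")
```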

To allow multiple artists to work on the same shot at the same time, we make use of Maya’s referencing system. Each stage in the production references in the previous stage: all of the rigs, sets, and props are referenced into a layout file, the layout is referenced into animation, the animation is referenced into effects, and the effects are referenced into the lighting file. Because of the references, any change made in one of the files will show up in the files downstream. Each shot gets its own set of files. It’s also possible to do the layout for an entire sequence, which is then referenced into the layout file for each shot.
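
In Maya, creating such a reference comes down to a single command; a minimal sketch (the file path and namespace here are made up for illustration):

```python
import maya.cmds as cmds

# Reference the published layout into the animation stage. The namespace
# keeps the layout's nodes from clashing with nodes in this file.
cmds.file("/projects/example_show/shot_010/layout_pipeline.ma",
          reference=True, namespace="layout")
```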

Unfortunately, when you have a scene open in Maya, it contains many nodes that you don’t want referenced into the next stage. These nodes would clutter up the stages downstream and could even break them. On top of that, artists often add temporary objects, or redo animation by deleting large portions of it; when that happens, artists downstream could suddenly have animation missing or objects appearing that shouldn’t be there. Artists also prefer to keep copies of their work so that they can always go back to a previous version, but that means newer versions of their work are stored in files with slightly different names, so their updates won’t show up in files downstream. To solve this, we split each file into two versions: an edit file, in which the artist does all of their work, and a pipeline file, which actually gets referenced into the other stages. Once an artist is done making changes, they use the publish tool to update the pipeline file based on the edit file. This also gives the publish tool the opportunity to strip out any nodes that aren’t needed downstream.
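
A heavily simplified sketch of what such a publish step can look like; the node-filtering convention shown is purely hypothetical (the real tool knows far more about which nodes to keep):

```python
import maya.cmds as cmds


def publish(edit_path, pipeline_path):
    """Turn the artist's edit file into a clean pipeline file."""
    cmds.file(edit_path, open=True, force=True)
    # Strip out nodes that the downstream stages don't need.
    # The naming convention here is purely illustrative.
    junk = cmds.ls("temp_*", "preview_*")
    if junk:
        cmds.delete(junk)
    # Save the cleaned scene as the pipeline file, which is the
    # file the other stages actually reference.
    cmds.file(rename=pipeline_path)
    cmds.file(save=True, type="mayaAscii")
```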

When referencing in the previous stage, a few very important pieces of information do not come along with the reference, like the frame range of the shot, the render resolution, and which camera to render from. Additionally, when a reference gets added later on (like an extra rig or prop), that reference is unloaded by default in all stages downstream, which often led to characters not appearing in our final images. To solve all of this, those pieces of information are stored in custom attributes on a hidden node. Then, when a file is opened, the data from that node is read and applied, so that every artist always gets the correct and latest view of the shot.
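
A sketch of how such a hidden settings node can work; the node and attribute names are illustrative, not the actual pipeline code:

```python
import maya.cmds as cmds

SETTINGS_NODE = "shotSettings"  # hypothetical node name


def write_shot_settings(start, end, camera):
    """Store shot data that referencing would otherwise not carry over."""
    if not cmds.objExists(SETTINGS_NODE):
        node = cmds.createNode("network", name=SETTINGS_NODE)
        cmds.addAttr(node, longName="frameStart", attributeType="long")
        cmds.addAttr(node, longName="frameEnd", attributeType="long")
        cmds.addAttr(node, longName="renderCamera", dataType="string")
    cmds.setAttr(SETTINGS_NODE + ".frameStart", start)
    cmds.setAttr(SETTINGS_NODE + ".frameEnd", end)
    cmds.setAttr(SETTINGS_NODE + ".renderCamera", camera, type="string")


def apply_shot_settings():
    """Called when a file is opened: apply the stored shot data."""
    start = cmds.getAttr(SETTINGS_NODE + ".frameStart")
    end = cmds.getAttr(SETTINGS_NODE + ".frameEnd")
    cmds.playbackOptions(minTime=start, maxTime=end,
                         animationStartTime=start, animationEndTime=end)
```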

Since all of the files are on Dropbox and all artists use their own computers, the project folders are located in a different place for every artist. Opening or referencing a file that another artist saved would therefore resolve many file paths (like textures and other references) that don’t exist on the current artist’s computer. To solve this, every artist sets up an environment variable that points to the main folder containing all Element Animation projects. When saving or publishing a file, the pipeline goes through all of the file paths and makes sure that they use the environment variable.
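
A simplified sketch of that path rewrite for file texture nodes; the variable name is made up, and the real tool also covers reference paths and other file-based nodes. Maya resolves `$VARIABLE` in file paths, so the rewritten paths work on every machine:

```python
import os
import maya.cmds as cmds

ENV_VAR = "EA_PROJECTS"  # hypothetical variable name
ROOT = os.environ[ENV_VAR].replace("\\", "/")


def remap_texture_paths():
    """Replace this machine's absolute project root with the environment
    variable, so the path resolves on every artist's computer."""
    for node in cmds.ls(type="file"):
        path = (cmds.getAttr(node + ".fileTextureName") or "").replace("\\", "/")
        if path.startswith(ROOT):
            cmds.setAttr(node + ".fileTextureName",
                         "$" + ENV_VAR + path[len(ROOT):], type="string")
```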

Combining all of this (and many more things not mentioned in this article) creates a pipeline that allows many artists to collaborate efficiently, makes it easy and fast for new freelancers to join in and start working, and makes it possible to manage all of the different assets and shots needed to make even a single animated video.

In the pipeline, there are many tools that artists use on a daily basis. Here is a list of some of the tools that are in this pipeline:

Referencing

The reference tool allows us to add in assets and scene files, with the option of specifying proxy versions, which is generally used with sets. Layout, animation, and effects use a simple proxy version of the set, while lighting can use the full high-quality version of the set.

Animations can even be exported as Alembic caches, where the animation on the controllers is exported instead of the geometry; the cache can then be referenced in and applied to character rigs. This is useful for animation cycles that need to be offset, and it also makes the animation easy to update later on.
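
As an illustration, exporting the controllers (rather than the geometry) with Maya’s Alembic plugin can look roughly like this; the paths and node names are made up:

```python
import maya.cmds as cmds

cmds.loadPlugin("AbcExport", quiet=True)

# Export the control hierarchy itself, so the cache stores the animation
# on the controllers instead of the heavy deformed geometry.
cmds.AbcExport(j="-frameRange 1 120 "
                 "-root |character_rig|controls "
                 "-file /projects/example_show/caches/walk_cycle.abc")
```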

Content Browser

The content browser allows us to easily create various materials and import models straight from the game.

Pose Library

The pose library allows us to save poses and animations, so that we can easily apply them to characters later on, significantly reducing the amount of work that animators have to do.
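
Conceptually, saving a pose boils down to recording every keyable attribute on the selected controls. A minimal sketch, assuming simple numeric attributes and an illustrative JSON file format:

```python
import json
import maya.cmds as cmds


def save_pose(controls, path):
    """Record every keyable attribute of the given controls to disk."""
    pose = {}
    for ctrl in controls:
        for attr in cmds.listAttr(ctrl, keyable=True) or []:
            plug = "%s.%s" % (ctrl, attr)
            pose[plug] = cmds.getAttr(plug)
    with open(path, "w") as f:
        json.dump(pose, f, indent=2)


def apply_pose(path):
    """Push the saved values back onto identically named controls."""
    with open(path) as f:
        for plug, value in json.load(f).items():
            cmds.setAttr(plug, value)
```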

Villager Nose Simulator

Villagers have large noses that we often animate swaying around a bit, but doing this by hand is tedious. This tool simulates the swaying and bakes it out into keyframes. Animators can then modify the generated animation to get the exact result that they are looking for.
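
The idea can be sketched as a crude damped spring driven by the head’s motion and baked to keys; everything below (attribute, constants, the spring model itself) is illustrative, not the production tool:

```python
import maya.cmds as cmds


def bake_nose_sway(ctrl, attr="rotateZ", start=1, end=120,
                   stiffness=0.3, damping=0.85, scale=40.0):
    """Simulate a damped spring on `ctrl` and bake it to keyframes."""
    velocity = 0.0
    sway = 0.0
    prev = None
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        pos = cmds.xform(ctrl, query=True, worldSpace=True,
                         translation=True)
        if prev is not None:
            # The head's sideways motion pushes the spring around.
            velocity += (prev[0] - pos[0]) * stiffness
        velocity -= sway * stiffness  # restoring force back to rest
        velocity *= damping
        sway += velocity
        # Bake the result so animators can edit the keys afterwards.
        cmds.setKeyframe(ctrl, attribute=attr, time=frame,
                         value=sway * scale)
        prev = pos
```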

Camera Sequencer

Often, it’s easier to do the layout for an entire scene in one big file, since most of the time the only thing that changes between shots is the camera angle. But each shot has its own set of files and is created individually. With our camera sequencer tool, we can create the layout for an entire scene in one file, defining all of the shots in that scene, which the pipeline then automatically separates out into individual shot files. Since we use references, the shots can be updated in the camera sequencer tool and will automatically update in all of the shot files, meaning that we aren’t locked into a specific edit.
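
Maya’s own shot nodes already carry the data needed for that split. A sketch of the idea, where `create_shot_file` is a hypothetical helper standing in for the pipeline code that writes each per-shot file:

```python
import maya.cmds as cmds


def split_into_shot_files(sequence_path):
    """Create one shot file per shot node defined in the sequence."""
    for shot in cmds.ls(type="shot"):
        start = cmds.shot(shot, query=True, startTime=True)
        end = cmds.shot(shot, query=True, endTime=True)
        camera = cmds.shot(shot, query=True, currentCamera=True)
        # Each shot file references the sequence layout, so later
        # edits to the sequence propagate automatically.
        create_shot_file(shot, start, end, camera,
                         reference=sequence_path)  # hypothetical helper
```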

Auto-comper

When the shots are lit and rendered out, the resulting footage is still missing the last step: compositing. The vast majority of shots end up with the same compositing setup, and setting that up manually every time is very inefficient. The auto-comper sets up all of the shots for us, so that we can immediately start working on the creative parts.

The auto-comper is also templated, so that we can easily adapt it for different projects.
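
A rough sketch of that template-based setup using Fusion’s Python scripting (which provides the `fusion` object); the paths and the `render_path_for_loader` helper are made up for illustration:

```python
# Open the project's comp template.
comp = fusion.LoadComp("/projects/example_show/comp_template.comp")

# Point every templated Loader at the shot's rendered image sequence.
for loader in comp.GetToolList(False, "Loader").values():
    name = loader.GetAttrs()["TOOLS_Name"]
    loader.SetInput("Clip", render_path_for_loader(name))  # hypothetical helper

# Save the result as this shot's comp, ready for creative work.
comp.Save("/projects/example_show/shot_010/shot_010_comp.comp")
```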

Specular Edges

A big part of the look of Element Animation videos is their specular highlights. Normally, we add them in when lighting the shot, so that they actually appear in the rendered images. But placing lights specifically to create the specular highlights that we want, without creating highlights that we don’t want, is very tricky and time consuming. To speed things up, I developed a custom node for Fusion, the compositing package that we use, that adds in these specular edges based on the rendered AOVs. Creating these edges during compositing ended up being much easier and faster, helping us get the video done on time.

Sky

To save on render time, we don’t render the environment fog, sky, and clouds in our main image. Instead, we generate them in our compositing package. Previously, we used the 3D system inside of Fusion, which also required a full export of our cameras. To make things easier on us, I created a custom Fusion node that generates the sky, environment fog, and clouds, with simple settings to customise it. The node uses an HDRI for the sky, another HDRI for the environment fog colour, and a 2D top-down map of the clouds. The use of HDRIs instead of a physical sky model makes customisation easier.

Fixed Grain

The videos produced by Element Animation are almost always uploaded to YouTube, which means that they endure heavy compression. Adding grain or noise can help mask compression artifacts, but it can also make things worse. Often a static noise image is overlaid on the image, but this becomes very noticeable when objects or the camera move around, since the noise doesn’t move with them. To help decrease the amount of perceptible artifacts without introducing new ones, I came up with a technique that moves the static noise with the objects.

The fixed grain node generates grain based on the world position of each pixel. So, when the camera moves, the grain remains fixed to the set. The compression algorithm sees that grain as extra detail and will encode it, and since the grain is static relative to the underlying surfaces, the algorithm can encode this added grain much more efficiently, preserving the grain rather than compressing it away.
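
The core idea can be sketched in a few lines: quantise each pixel’s world position into small cells and hash the cell into a stable pseudo-random value. This standalone NumPy version is illustrative (the real implementation is a Fusion node), and the constants are arbitrary:

```python
import numpy as np


def fixed_grain(world_pos, amount=0.03, cell_size=0.005):
    """Grain as a pure function of a world-position AOV (H x W x 3).

    Because the value depends only on world position, the grain sticks
    to the set as the camera moves."""
    # Quantise positions into small cells so each cell gets one
    # stable grain value, no matter where the camera is.
    cells = np.floor(world_pos / cell_size).astype(np.int64)
    # Cheap integer hash: mix the three axes into one value per pixel.
    h = (cells[..., 0] * 73856093
         ^ cells[..., 1] * 19349663
         ^ cells[..., 2] * 83492791)
    noise = (h % 65536) / 65536.0 - 0.5
    return noise * amount
```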
