Actors work with green screens, costumes, and ropes for acrobatics, yet in movies we see monsters, fantasy cities in the sky, and battle skills that defy gravity. One of the key skills involved in making the magic happen is compositing, and it's not limited to inserting special effects into movies: it's also widely used in game design.
This article covers what compositing is, how compositing artists work, and what software they use. Let's get to it!
What Is Compositing?
Compositing is the process of merging visual elements from multiple sources into one image. It enriches the scene with additional elements and effects or, for instance, changes its background. Nowadays, making a movie or a realistic 3D game like Cyberpunk 2077 often still requires physical objects to be "replaced" with 3D graphics (commonly referred to as CGI, or computer-generated imagery). Compositing, then, is done via video editing software.
Background replacement is also a widespread digital compositing method. In visual effects (VFX) compositing software, artists specify a certain color as the visual component to be modified. The application then replaces each pixel of a green screen, for instance, with the corresponding pixel from another digital source.
A viewer thus sees a picture, parts of which belong to another image or video: imagine a weather report with temperature maps as the background. The majority of movies, animations, and, of course, video games require compositing.
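The pixel-replacement idea behind green-screen keying can be sketched in a few lines of Python. This is a toy illustration, not a production keyer (real software produces soft mattes, suppresses green spill, and blends edges); the dominance threshold and the tiny sample images are assumptions.

```python
# Toy chroma key: treat a pixel as "screen" when its green channel
# clearly dominates red and blue, and swap in the background pixel.
# Pixels are (r, g, b) tuples with channels in 0..255.

def is_key_green(pixel, dominance=40):
    r, g, b = pixel
    return g > r + dominance and g > b + dominance

def chroma_key(foreground, background):
    """Replace keyed pixels of `foreground` with `background` pixels.

    Both images are lists of rows of (r, g, b) tuples, same size.
    """
    return [
        [bg if is_key_green(fg) else fg for fg, bg in zip(f_row, b_row)]
        for f_row, b_row in zip(foreground, background)
    ]

# A 1x2 "image": one green-screen pixel, one skin-tone pixel.
fg = [[(0, 200, 0), (200, 150, 120)]]
bg = [[(10, 20, 90), (10, 20, 90)]]
print(chroma_key(fg, bg))  # → [[(10, 20, 90), (200, 150, 120)]]
```

A real keyer works in a color space better suited to separating chroma from brightness and outputs a fractional matte instead of a hard yes/no decision, which is what keeps hair and motion-blurred edges looking natural.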
How It All Began
Compositing has existed since the very beginning of filmmaking. Old-style techniques were considerably more primitive than what experts employ today, yet breathtaking. Let's take a brief look at three examples that provide insight into the early history of compositing.
The Four Heads of Méliès
Georges Méliès was a French illusionist and filmmaker who revolutionized early cinema. In his film Un Homme de têtes (1898), he used a multiple exposure technique and discovered a matte effect (you'll read about both a bit later), which allowed him to place three "severed" heads in the scene and interact with them.
Norman Dawn's Matte Paintings
Norman O. Dawn devised the matte: a method of blending photography and painting into a single picture for film. He painted over a sheet of glass placed in front of the camera, so that, for instance, damaged buildings looked intact on film.
Sodium-vapor Lighting in Yellow
In the 1950s, the sodium vapor process used low-pressure sodium (LPS) lamps to light the backdrop in a narrow yellow wavelength. A specialized camera recorded two film strips simultaneously, one capturing the actors and the other a matte of the backdrop, and the strips were then merged.
All of these techniques have since gone digital. So, what is a compositing artist's work like?
What’s A Compositor Good At?
A compositor is an artist who assembles filmed and rendered elements (or just rendered elements) from multiple sources to create a final product: an environment design, a level design, etc. They use digital components to create a realistic picture: they tune shadows, boost lighting, combine particles, and so on. They're also the ones who make 3D games and movies look realistic, even though the protagonists fight monsters, wield magic, or are made of moths.
These artists have various qualities in addition to a strong sense of creativity and compositing software knowledge. Let’s look at a few of them.
Aesthetic perception. A compositor must have a keen sense of lighting effects, color, composition, and perspective.
Attention to detail. Compositors must be able to work through scenes until they seem natural and consistent. Additionally, they are expected to be attentive to multiple layers they work with and efficiently manage multiple sources of visual information.
Persistence. Compositors are expected to constantly learn new things, improve existing skills, stay goal-oriented, and see every task through.
Additionally, compositors create visual effects and composites using special effects (SFX), 2D and 3D animation, and computer-generated imagery (CGI).
VFX Compositing Techniques Explained
The range of VFX techniques is vast. Compositing artists often combine them, and some require preparation during scene recording. Here are several effective, widespread techniques compositors use.
Front or Rear Projection
The scene in movies where people drive in a car and talk, or drive and, for instance, shoot at each other (the Yakuza 0 car chase mission if Yakuza 0 were a movie), is often done via rear projection. With that technique, people project a background image or video onto a translucent screen from behind and shoot whatever scenes they've come up with in front of it.
Rear projection, however, has an issue associated with the lack of space behind the screen. Sometimes it is impossible to produce a suitable picture size because projectors cannot be positioned far enough back to spread light over the entire screen's surface.
As opposed to rear projection, front projection casts the picture onto a highly reflective background screen and the main participants of the scene. This method is good for large pictures but has a drawback of its own: people passing by the screen, or any other obstacle, may block the light from the front-facing projector.
Blue or Green Screen
Video compositing is done in post-production, but preparations are required. For instance, the chromakey technique requires using a blue or green screen at the recording stage. A flat, single-color screen is perfect for later replacement with any natural-looking background, and it's relatively easy to distinguish actors' skin tones from it.
Keying in compositing allows VFX artists and video editors to make any changes to a scene's background; it's the main compositing method used nowadays in video editing in the game industry, cinema, and sometimes journalism.
There are several limitations to using screens, though. Any blue or green clothes or environment elements would be keyed out along with the screen, so keying requires pre-production preparation with, e.g., costume designers. That, consequently, requires more resources, which is a big factor, especially in small productions.
Multiple Exposure
Multiple exposure is the method of stacking frames with different amounts of light perceived by camera sensors — different exposure values. For example, double exposure is simply taking two photos – one would be overexposed, and one underexposed. Then, a combination of these pictures is created, and a merged version doesn’t have spots that are too bright or too dark.
You may have heard of the multi-exposure High Dynamic Range (HDR) function that nearly any camera has. It basically combines several exposures with narrower ranges of luminosity to produce a high-quality image with a greater dynamic range of luminosity: in other words, an image that has more volume and depth and is more detailed. Compositors use this technique often, too.
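The merging step can be sketched as a per-pixel weighted average in which well-exposed samples count more. This is a toy grayscale version of exposure fusion; the mid-gray weighting function and the sample frames are assumptions for illustration.

```python
# Toy exposure fusion: merge several grayscale frames pixel by pixel,
# weighting each sample by its closeness to mid-gray (128), so crushed
# shadows and clipped highlights contribute little. Values are 0..255.

def weight(v):
    # 1.0 at mid-gray, falling toward 0 at pure black or pure white.
    return 1.0 - abs(v - 128) / 128.0

def fuse(exposures):
    """Merge equally sized grayscale frames (flat lists of ints)."""
    merged = []
    for samples in zip(*exposures):
        w = [weight(v) + 1e-6 for v in samples]  # epsilon avoids /0
        merged.append(round(sum(v * wi for v, wi in zip(samples, w)) / sum(w)))
    return merged

under = [10, 60, 120]    # dark frame: shadows crushed
over = [130, 220, 250]   # bright frame: highlights clipped
print(fuse([under, over]))  # → [121, 120, 126]
```

Note how each merged pixel leans toward whichever frame exposed it best, which is the whole point of stacking exposures.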
Matting
Deep image matting is an approach for segmenting an image into foreground and background, accurately separating and labeling the pixels. Pixels that belong partially to the foreground and partially to the background are handled by assigning them fractional alpha values.
As a result, compositors can process parts of the scene separately, edit them or replace them.
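The relation underlying matting is the compositing equation C = alpha * F + (1 - alpha) * B: each pixel's alpha value says how much of it belongs to the foreground. A minimal grayscale sketch, with sample values invented for illustration:

```python
# Per-pixel alpha compositing: C = alpha * F + (1 - alpha) * B.
# Grayscale values are 0..255; alpha runs from 0.0 (pure background)
# to 1.0 (pure foreground), with fractional values for mixed pixels
# such as hair strands or motion-blurred edges.

def composite(fg, bg, alpha):
    """Blend foreground over background using a per-pixel matte."""
    return [
        round(a * f + (1.0 - a) * b)
        for f, b, a in zip(fg, bg, alpha)
    ]

fg    = [255, 255, 255]   # white foreground element
bg    = [0, 0, 0]         # black background plate
alpha = [1.0, 0.5, 0.0]   # solid, half-covered, empty
print(composite(fg, bg, alpha))  # → [255, 128, 0]
```

Deep matting networks estimate those alpha values; once the matte exists, swapping in a new background is exactly this blend.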
CGI
Visual compositing, particularly in movies, makes extensive use of CGI. Basically, 2D and 3D computer graphics, including both static images and animated scenes, are considered CGI. Editors add CGI and VFX onto recorded videos and photos to bring new elements into the scene, fantasy elements, and so on; in other words, to enrich it. CGI is often added where green screen elements are.
Compositing is similar to assembling a puzzle using CGI and VFX components instead of paper pieces to create a fully-fledged animated or static scene.
Physical Compositing
Physical compositing is another widespread technique. It implies adding elements to the foreground and changing the background during the production and post-production stages.
Hogwarts model from a movie set.
It can be a pre-digital method involving the placement of miniature or paper objects in a scene, like the model in the photo above. Or an artist can place digital depictions of physical elements in the video: usually CGI, or photos and videos from other sources.
A over B
Remember the feature in console games where, when your character was behind a wall, the wall became transparent? That's alpha blending at work. A over B compositing refers to combining an image with a background to create the appearance of transparency.
It's also often described as rendering images with partial transparency and then combining several such images.
Mists over the valleys in Death Stranding. (Source)
It helps when you need to combine several partially transparent images in a final composite and make them look natural and realistic.
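The operator itself can be sketched for straight (non-premultiplied) RGBA colors; stacking translucent layers, such as banks of mist, is just repeated application of `over` from back to front. The sample layer values below are assumptions for illustration.

```python
# Porter–Duff "A over B" for straight (non-premultiplied) RGBA colors,
# with all channels as floats in 0.0..1.0.

def over(a, b):
    """Composite color `a` over color `b`; each is (r, g, b, alpha)."""
    ar, ag, ab_, aa = a
    br, bg, bb, ba = b
    out_a = aa + ba * (1.0 - aa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)  # fully transparent result

    def blend(ca, cb):
        return (ca * aa + cb * ba * (1.0 - aa)) / out_a

    return (blend(ar, br), blend(ag, bg), blend(ab_, bb), out_a)

def stack(layers):
    """Composite a back-to-front list of RGBA layers into one color."""
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result

# An opaque dark ground with two 50%-opaque white mist layers on top:
ground = (0.1, 0.1, 0.1, 1.0)
mist = (1.0, 1.0, 1.0, 0.5)
print(stack([ground, mist, mist]))  # roughly (0.775, 0.775, 0.775, 1.0)
```

Production tools usually store premultiplied alpha instead, which simplifies the color math and avoids the division, but the result is the same "A over B" stacking.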
Rotoscoping
Rotoscoping is the technique of manually altering film footage frame by frame. Back in the 1930s, live recorded scenes were projected onto glass panels so that animators could trace every frame and alter it. Multiple old cartoons were created using rotoscoping.
Today, game dev studios and their compositors have adopted an advanced, digital version of rotoscoping. (For instance, this tool helps rotoscope films into pixel art.) Rotoscoped footage is then enhanced with additional filters and other features that make game visuals more exciting, mostly in 2D because of rotoscoping's limitations.
Composition In Animation: How The Pros Composite
Animation is a complex process requiring complete teams of experts. They typically work together on a 3D animation pipeline – a set of processes that start from concepts and end with finished products.
Compositing in animation means working with fully rendered files, and it is the first step of post-production. A typical artist's workflow is:
Receiving renders and sorting them if necessary.
Merging foreground and background scene by scene.
Adding CGI.
Adding SFX (if not performed by an SFX artist).
Motion graphics (if not performed by a motion designer).
Basic color correction (if necessary, as advanced color grading is often done by dedicated specialists).
Sound editing (in case sound engineers/producers aren't doing that).
Compositors typically perform multiple jobs besides the digital compositing of videos and images itself. They may be involved in the production stage to ensure that preparations for the compositing techniques described above are performed properly.
3D Compositing vs. 2D
Two-dimensional objects and animation, obviously, differ from three-dimensional ones. Let's review three essential, compositing-related distinctions.
Dimensions
For 2D animation, a compositing artist is restricted to height and width only, so their work (inserting characters into the world, handling environment design aspects, etc.) is mostly about creating depth and an organic feel via colors, light and shadow, color transitions, and so on. Compositing within a 3D environment requires working with perspective, too, which demands (a) different software and (b) often more pre-production preparation.
Video composition elements
The starting components a compositor works with are different. For 2D, they work with sketches, concept art, storyboards, and, obviously, 2D images. 3D compositors, apart from all of the above, use videos, animations, more complex CGI elements, etc.
Cost
With the pipeline for developing 3D video games being much longer and more labor-intensive than the one for creating 2D titles, 3D compositing also requires more resources. 3D compositing artists usually set higher rates, too.
Top Compositing Software To Create Engaging Videos
High-quality compositing requires efficient compositor software with sufficient capabilities. Here is a list of top digital compositing tools, which differ in their complexity and functionality:
Natron is an open-source digital compositing tool that works in 2D/2.5D environments. Natron's OIIO file formats and OpenFX architecture make it an adaptable open-source visual effects compositor, complemented with a vast range of compatible plugins.
Pros
Open source and free
Video editing functionality includes numerous blending modes and supports high-resolution output (HD, 4K).
High-speed rendering.
Con
The interface is not user-friendly, and additional learning materials are nearly obligatory.
Natron runs on Windows, macOS, and Linux. It's considered a great choice both for amateurs who are willing to give VFX a try and for experts who enjoy modifying and customizing their compositing tools.
Blender is multifunctional and, among other things, works well for compositing artists. They work with nodes that correspond to different elements of the environment in the scene; the UI for wiring the nodes together is comfortable once you are used to the interface.
Pros
Free and open-source;
Supports 3D;
Has a lot of compatible plugins.
Con
Less effective with complex, large projects because of Blender's heavy hardware requirements.
The program also has superior color grading and supports 3D sculpting, modeling, animating, and other features.
Adobe After Effects is considered a suitable compositing tool for working with VFX, motion graphics, and video editing projects. Experts widely use this software, but you may be overwhelmed by Adobe After Effects' functionality if you're a newbie filmmaker. The software costs US$239.88/year, and there is a 30-day free trial.
Nuke is a node-based compositing and VFX software. It is used equally by amateurs and experts for animating game models and creating movies. Nuke is favored by game dev studios as it is easily compatible with the Unreal Engine framework.
Pros
Extensive compositing capabilities and a great potential for working with a 3D environment.
Compatible with the UE.
Cons
High system requirements.
There are options to buy or rent Nuke. Rent is cheaper, and the cost starts from $1,919/qtr for the “basic” version.
Blackmagic Fusion Studio is a compositing software designed specifically to facilitate the post-production stage of filmmaking. It has all the tricks for compositing and provides tools to work with 3D graphics, 3D motion, and color grading of videos and animations.
Pros
Superior 3D compositing capabilities (e.g. advanced 3D rendering)
Great color grading capabilities.
Pipelines for working with AR and VR.
Cons
It is less versatile than its competitors.
Users are offered a slightly outdated but fully-fledged edition of Fusion Studio for free. The cost of the newest paid version (at the time of writing, it’s Fusion Studio 17) is $295.
The Bottom Line
The digital content creation industry is developing rapidly. Compositors make movies and games seem like magic; let this article be the starting point for deciding whether you need a person with this skill set for your game. The main point is to create effective game art that fits your product naturally and doesn't cause confusion or rejection. This requires a team of qualified experts with proven experience in game design.
And you don't need to go far: our specialists, who have already worked with a wide range of styles, are ready to discuss your project and offer a free consultation and quote, so you can better understand your prospects and required budget.
Feel free to contact us and let's bring your most exciting game art to life!
FAQ
What is compositing?
It is simply the technique for combining visual elements from different sources to create a single, natural-looking scene.
What is compositing in Animation?
It is the mixing of both moving and static elements, such as 2D/3D models and background elements respectively, plus the addition of visual effects, so that all the constituents compose a full-fledged animated scene.
What Do Compositing Artists Do?
Compositors perform most post-production compositing work using specialized VFX software. They take images and footage that passed the filming stage and were approved, then combine and process these elements, working for hours to fine-tune the final scenes.