While in production, we had to ensure that all the shooting variables worked in favour of the visual effects process that would follow. These variables included tracking markers, ISO, lighting, shooting in LogC, recording resolution and format, as well as the choice of lens and T-stop.
Tracking markers are placed for motion tracking in post-production. Providing an ample number of tracking markers allows the camera movement to be reverse-engineered in post-production as closely as possible to how the camera was actually moved on set. We want this replicated movement so that the background plate keyed into the green screen moves naturally with every other element in the shot.
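To make the marker-tracking idea a little more concrete, here is a minimal Python sketch of how software follows markers from frame to frame using OpenCV's Lucas-Kanade optical flow. The footage path and detector parameters are placeholders, and a real 3D camera solve (what After Effects and Fusion actually perform) goes well beyond this; the sketch only shows the 2D tracking step that everything else builds on.

```python
import cv2

# Hypothetical path to the green screen footage (placeholder).
cap = cv2.VideoCapture("greenscreen_take.mov")

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect high-contrast points; on set, the tracking markers are designed
# to be exactly this kind of easily detectable feature.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=20)

tracks = [points]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Follow each marker from the previous frame into the current one.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    tracks.append(points)
    prev_gray = gray

cap.release()
# A camera solver would now use these 2D tracks to reconstruct the 3D
# camera path that best explains how the markers moved across the frame.
```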
Furthermore, shooting in LogC helps the editing software identify the tracking markers during motion tracking. LogC, unlike Rec709, records a flat, logarithmic image that preserves far more of the camera's dynamic range, so detail is retained in both the deepest shadows and the brightest highlights. Therefore, in post-production, we do not lose any tracking markers in crushed blacks or clipped whites.
(Comparison frame grabs: LogC vs Rec709)
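To see why a log image holds onto that detail, it helps to look at the encoding itself. The sketch below decodes LogC values back to approximately linear scene light using the ARRI LogC3 (EI 800) constants as I understand them from ARRI's published documentation; treat the exact numbers as illustrative and check the official white paper before relying on them. The takeaway is that the logarithmic curve squeezes the camera's full dynamic range into the recorded signal instead of clipping highlights and crushing shadows the way a display-ready Rec709 image does.

```python
# ARRI LogC3 (EI 800) constants as published -- illustrative only,
# verify against ARRI's white paper before using in a real pipeline.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc_to_linear(t: float) -> float:
    """Decode a LogC3-encoded value (0..1) back to linear scene light."""
    if t > E * CUT + F:
        return (10 ** ((t - D) / C) - B) / A
    return (t - F) / E

# Even low LogC code values still map to distinct linear exposures,
# which is why markers survive in the shadows, while the brightest
# code values decode to linear light far above 1.0 rather than clipping.
for code in (0.10, 0.20, 0.40, 0.70, 0.95):
    print(f"LogC {code:.2f} -> linear {logc_to_linear(code):.4f}")
```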
As for recording resolution and format, we were instructed to shoot in 4K UHD in Apple ProRes 4444. Shooting in 4K UHD instead of 1920 x 1080, for instance, means there are more pixels in the frame, so more information is retained for accurate green screen keying, for motion tracking of the markers and, finally, for keying those markers out. Additionally, recording in ProRes 4444 (12-bit, with full 4:4:4 chroma sampling) instead of ProRes 422 (10-bit, 4:2:2) means more colour information is stored in every frame, which allows for a better and more precise chroma key of the green screen.
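A quick back-of-the-envelope comparison makes the "more information" claim concrete. The figures below are simple arithmetic on pixel counts and bit depths, not measured data rates.

```python
# Pixels per frame: UHD vs HD
uhd = 3840 * 2160      # 8,294,400 pixels
hd = 1920 * 1080       # 2,073,600 pixels
print(f"UHD has {uhd / hd:.0f}x the pixels of HD")            # 4x

# Tonal/colour precision per channel: 12-bit vs 10-bit
codes_12bit = 2 ** 12  # 4096 levels per channel
codes_10bit = 2 ** 10  # 1024 levels per channel
print(f"12-bit gives {codes_12bit // codes_10bit}x the code values of 10-bit")

# Chroma sampling: ProRes 4444 keeps full 4:4:4 colour, while ProRes 422
# averages colour across pairs of pixels (4:2:2), so a keyed edge is
# defined with half the horizontal colour resolution.
```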
In terms of ISO, we shot at 400 ISO instead of the Arri Amira's native 800 ISO. Shooting at a lower ISO reduces noise, so at 400 ISO the blacks in the frame are cleaner and there is less grain crawling across the green screen. This makes keying out the green screen in post-production more precise. However, a suitable ISO alone does not guarantee this; lighting also plays a huge part.
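As a toy illustration of why noise matters to the keyer, the sketch below adds Gaussian noise to a perfectly flat green value and counts how many pixels drift outside a fixed key tolerance. The sigma values and the tolerance are arbitrary stand-ins, not measurements of the Amira's sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejected_fraction(noise_sigma: float, tolerance: float = 0.03) -> float:
    """Toy model: a flat green screen value plus Gaussian noise.
    Returns the fraction of pixels a naive keyer would miss because
    their green value drifted outside the tolerance window."""
    screen = np.full(1_000_000, 0.55)            # 'pure' green level
    noisy = screen + rng.normal(0.0, noise_sigma, screen.shape)
    return float(np.mean(np.abs(noisy - 0.55) > tolerance))

# Arbitrary sigmas standing in for 'less noise' and 'more noise'.
for label, sigma in (("lower ISO (less noise)", 0.01),
                     ("higher ISO (more noise)", 0.02)):
    print(f"{label}: {rejected_fraction(sigma):.1%} of screen pixels miss the key")
```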
Lighting the set and green screen is done with the end result in mind. Firstly, the way the set is lit depends on the environmental lighting of the background plate that will be keyed into the green screen. As the scene was meant to be night-time with cool lighting, the set was dimly lit and CTB (colour temperature blue) gels were used to create a bluish tint. Secondly, for the green screen, we had to make sure that it was evenly lit and that there were no creases in it. A gradient in the light reaching the green screen can get in the way of keying it out in post because the green no longer reads as a single, uniform colour.
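A quick way to sanity-check the evenness of the screen on set is to measure how much the green channel varies across a frame grab. This is only a rough sketch: the file name and the 5% threshold are placeholders rather than any production standard.

```python
import cv2
import numpy as np

# Placeholder frame grab of the lit green screen.
frame = cv2.imread("greenscreen_frame.jpg").astype(np.float32) / 255.0
green = frame[:, :, 1]  # OpenCV loads images as BGR, so index 1 is green

mean, std = float(green.mean()), float(green.std())
spread = (float(green.max()) - float(green.min())) / mean
print(f"green mean={mean:.3f}  std dev={std:.3f}  min-to-max spread={spread:.1%}")

# The 5% figure is an arbitrary illustrative threshold, not a standard.
if spread > 0.05:
    print("Brightness varies noticeably across the screen: check for hot spots or creases.")
```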
Choosing an appropriate lens and T-stop was also important so that the shot matched the depth of field of the background plate. We used a 35mm lens, as its depth of field best matched the look of the background plate.
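Depth of field can also be reasoned about numerically. The sketch below uses the standard hyperfocal-distance approximation; the focus distance, circle of confusion and the two stops (treated here as if T-stops were plain f-numbers) are made-up example values, not our actual settings.

```python
def dof_limits(focal_mm: float, f_number: float, focus_m: float,
               coc_mm: float = 0.025) -> tuple[float, float]:
    """Near and far limits of acceptable focus (metres), using the
    standard hyperfocal-distance approximation."""
    f = focal_mm
    s = focus_m * 1000.0                      # work in millimetres
    H = f * f / (f_number * coc_mm) + f       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near / 1000.0, far / 1000.0

# Made-up example: 35 mm lens focused at 3 m, comparing two stops.
for stop in (2.8, 5.6):
    near, far = dof_limits(35, stop, 3.0)
    print(f"T{stop}: acceptably sharp from roughly {near:.2f} m to {far:.2f} m")
```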
It is also essential to allow for separation between the green screen and the set/talent to prevent green spill. If green light were to reach the set or the talent, the chroma key in post would eat into things other than the green screen. In that case, we would have to work around it by creating masks, which takes up time that could have been saved in production.

What did you find most challenging about both the filming and editing processes?

I think the most challenging part of the filming process was understanding the importance of each shooting variable without having done this before. It is simple to follow instructions and do what needs to be done, but the actual logic behind these actions was slightly difficult to wrap my mind around. The only point at which I started to value the importance of all these techniques was when we headed into post.
In post-production, it felt as though we were breaking the shot back down into all the steps we took during production. This was very helpful in allowing me to understand the techniques and why they are necessary when shooting with a green screen. We made use of both After Effects and DaVinci Resolve in our post process and even added a 3D object in Resolve. Below, I will outline the steps we took in each software and how the two workflows contrast with each other.

Adobe After Effects - Green Screen Compositing
In After Effects, visual effects, or in this case compositing, are created by layering different effects in a specific order to arrive at your final product. It almost looks like a stack of pancakes.
1) Import footage into AE
2) Create composition with preferred take
3) Add an ‘Apply Color LUT’ fx to the footage
In the effect, select LUT and load in LCC.cube (LogC low-contrast LUT)
4) Add a ‘Keylight (1.2)’ fx to the footage
Using the dropper tool, select the green of the green screen
In the effect, select ‘Intermediate Result’ as the viewing option
In the effect, adjust ‘Clip Black’/‘Clip White’ in the dropdown options under ‘Screen Matte’
While doing this, switch in and out of the Alpha channel in your playback window to check for spill (a conceptual sketch of this keying step follows after this list)
5) Create keyframed masks around moving subject(s) in shot
To do this, draw a mask around the subject(s) and keyframe the mask path
6) Create a precomposition out of the footage layer
7) Add a ‘3D Camera Tracker’ fx to the precomp
In the effect, add a tick to ‘Detailed Analysis’
In the effect, add a tick to ‘Auto-delete Points Across Time’
Hit ‘Analyze’
8) Create a ‘Track Solid’ layer
Place this layer above the layer carrying the ‘3D Camera Tracker’ effect
Tick the box to make this layer a 3D layer
9) Add in backplate.jpg to the composition
Place this layer as the bottom-most layer
Parent pick-whip ‘Track Solid’ to inherit its movement from the 3D camera
10) Add an ‘Adjustment Layer’
Place this layer as the top-most layer
Add a ‘Glow’ fx
In the end, your whole layer order should look like this:
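To make the Keylight steps above less of a black box, here is a conceptual numpy sketch of a colour-difference key with ‘Clip Black’/‘Clip White’ style controls, a simple spill suppression pass and a straight-alpha composite over the backplate. It is not Keylight's actual algorithm, and the file names and clip values are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names for the graded plate and the background.
fg = cv2.imread("greenscreen_frame.jpg").astype(np.float32) / 255.0  # BGR
bg = cv2.imread("backplate.jpg").astype(np.float32) / 255.0
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

b, g, r = fg[:, :, 0], fg[:, :, 1], fg[:, :, 2]

# Colour-difference matte: how much greener each pixel is than its other
# channels. Large values mean green screen, small values mean foreground.
screen_matte = np.clip(g - np.maximum(r, b), 0.0, 1.0)
alpha = 1.0 - screen_matte  # foreground opacity

# 'Clip Black' / 'Clip White': push nearly transparent pixels to fully
# transparent and nearly opaque pixels to fully opaque, in the spirit of
# Keylight's Screen Matte controls. The values here are illustrative.
clip_black, clip_white = 0.2, 0.8
alpha = np.clip((alpha - clip_black) / (clip_white - clip_black), 0.0, 1.0)

# Simple spill suppression: on the kept foreground, green is not allowed
# to exceed the larger of the red and blue channels.
despilled = fg.copy()
despilled[:, :, 1] = np.minimum(g, np.maximum(r, b))

# Straight-alpha 'over' composite onto the backplate.
comp = despilled * alpha[..., None] + bg * (1.0 - alpha[..., None])
cv2.imwrite("composite_preview.jpg", (comp * 255).astype(np.uint8))
```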
DaVinci Resolve - Green Screen Compositing and 3D Object
In DaVinci Resolve, visual effects, or in this case compositing and the addition of a 3D object, are created through nodes, connected in a specific order to arrive at your final product. It almost looks like an electrical circuit.
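To picture the contrast with the After Effects layer stack, the sketch below models a node graph as a tiny Python structure: each node names its inputs and an operation, and the output is produced by walking the graph rather than by reading layers from top to bottom. This is purely conceptual and says nothing about how Fusion is implemented internally.

```python
from typing import Callable, Dict, Sequence

class Node:
    """A compositing node: an operation plus the names of the nodes feeding it."""
    def __init__(self, op: Callable, inputs: Sequence[str] = ()):
        self.op, self.inputs = op, list(inputs)

def evaluate(graph: Dict[str, Node], name: str):
    """Resolve a node by first resolving whatever is wired into it."""
    node = graph[name]
    return node.op(*(evaluate(graph, upstream) for upstream in node.inputs))

# Toy graph mirroring the structure used in Fusion: footage -> LUT ->
# keyer, with the backplate feeding the same merge. Values are strings
# standing in for images.
graph = {
    "MediaIn":    Node(lambda: "footage"),
    "FileLUT":    Node(lambda x: f"lut({x})", ["MediaIn"]),
    "DeltaKeyer": Node(lambda x: f"key({x})", ["FileLUT"]),
    "Backplate":  Node(lambda: "backplate"),
    "Merge1":     Node(lambda fg, bg: f"merge({fg}, {bg})",
                       ["DeltaKeyer", "Backplate"]),
}

print(evaluate(graph, "Merge1"))   # merge(key(lut(footage)), backplate)
```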
(keying out green screen)
1) Import footage into DaVinci Resolve
2) Create a timeline with the preferred footage (these first two steps can also be scripted; see the sketch after the node list below)
3) With clip selected in the timeline, head into 'Fusion' tab
4) Output from MediaIn into a new FileLUT (FLUT) node
Load LCC.cube into FLUT
5) Output from FLUT into a new CleanPlate node
Use dropper tool to key green screen
Toggle the Erode scale to bring back some detail that may have been lost
6) Output from FLUT into a new DeltaKeyer node
Output CleanPlate into DeltaKeyer
This sends through the attributes from the CleanPlate (key out the green screen) and allows us to make adjustments through the DeltaKeyer
In 'Matte' tab, adjust Threshold range and Clean Foreground
Ensure that final view mode is 'Final Result'
7) Output from DeltaKeyer into a new Merge1 node
(adding in backplate)
8) Output the backplate into a Defocus node
Output the Defocus node into an ImagePlane3D node
Output the ImagePlane3D node into a Renderer3D1 node
9) Import the Nuke file containing the wall, floor and gnome solids/planes and the camera tracking information
Output Merge3D into the previously mentioned Renderer3D1
Output the ImagePlane3D node into the previously mentioned Merge3D1 node
Output Renderer3D1 into previously mentioned Merge1 node
(adding in 3D object - dwarf)
10) Output Merge1 (for the backplate) into Merge2 (for the gnome)
Copy the Nuke node arrangement and output Merge3D1_1 into a new Renderer3D2 node
Output Renderer3D2 node into previously mentioned Merge2
11) Create Transform3D1 node
Import Dwarf_2_LowMat node and output into Dwarf_2_Low
Output Dwarf_2_Low into previously mentioned Transform3D1
Import bump (dwarf dimension) and output into new BumpMap1 node and output into previously mentioned Dwarf_2_LowMat
Import colour (dwarf dimension) and output into previously mentioned Dwarf_2_LowMat
Import spec (dwarf dimension) and output into previously mentioned Dwarf_2_LowMat
(lighting 3D object)
12) Create AmbientLight1, DirectionalLight1 and SpotLight1 nodes and output all three into previously mentioned Merge3D1_1 node
Adjust each light to create an ambient light, moon light and spot light
13) Output previously mentioned Renderer3D2 node into a new Defocus node
Output Defocus node into a new FilmGrain node which then outputs into previously mentioned Merge2 node
(removing subject that passes frame - Jackson)
14) Output the previously mentioned Merge2 into a new polygon node
Moving forward a few frames at a time, create keyframes that allow the polygon to follow Jackson's movement
Turn on invert for the polygon
(removing tracking markers)
15) Output the previously mentioned Merge1 into new polygon nodes, one for each tracking marker
Moving forward a few frames at a time, create keyframes that allow the polygons to follow each tracking marker
Output the tracking marker polygons into a new bitmap node and output the bitmap node into Merge1
Turn on invert only for bitmap
In the end, your whole node map should look like this:
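As an aside, the first two steps of this workflow (importing footage and building a timeline) can also be driven from Resolve's built-in Python scripting API instead of the GUI. The sketch below assumes Resolve is running and its DaVinciResolveScript module is reachable from Python (the setup is described in the scripting README that ships with Resolve); the file path and timeline name are placeholders.

```python
# Assumes DaVinci Resolve is running and its scripting module is on the
# Python path (see the scripting README that ships with Resolve).
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project_manager = resolve.GetProjectManager()
project = project_manager.GetCurrentProject()
media_pool = project.GetMediaPool()

# Step 1: import the footage (placeholder path).
clips = media_pool.ImportMedia(["/footage/greenscreen_take.mov"])

# Step 2: build a timeline from the preferred take.
timeline = media_pool.CreateTimelineFromClips("GreenScreen_Comp", clips)
print("Created timeline:", timeline.GetName())
```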
Most Efficient Workflow
At the end of the day, to each his own. For me, having practised the visual effects process in both After Effects and DaVinci Resolve, I very much preferred working with Fusion in DaVinci Resolve. However, I must say that without my prior knowledge of After Effects, learning Fusion would not have been as manageable. Therefore, I must give After Effects some credit for my preference for DaVinci Resolve.
Working with nodes gives the user a very versatile workflow because it is far easier to isolate a particular effect without worrying that it was added to a layer buried inside a precomposition that might take ages to find. Being able to view your whole workflow as an open-faced graph/map is very helpful. In After Effects, by contrast, effects end up hidden away in little drawers, which I find to be a bit of a hindrance.
I must make it clear, though, that I believe it is important to keep sharpening my skills in After Effects, because knowing how to create visual effects the old-school way is genuinely useful. Only then can you truly appreciate the user-friendliness that DaVinci Resolve has to offer.
Here is the final result of the green screen and 3D object composite done through DaVinci Resolve:
- Vin