Week 1
This week we got used to the 3D viewport and other aspects of Nuke.
When we create cameras we should consider:
- Resolution
- Sensors
Adjust the horizontal aperture.
Press Tab in the viewport to switch between 2D and 3D. The colour bars don’t show because ColorBars is a 2D node; to bring it into 3D we place it on a Card, which is essentially the equivalent of a plane in Maya. This puts the image on a plane in 3D space, and we can move it around now that it’s on a card.
The workflow for using 3D in Nuke requires us to use a Scene node. Into the Scene node we feed our camera and our geometry (the card).
After this we add a ScanlineRender node to convert the 3D scene back to 2D; from here we can manipulate it as we are used to, making any changes like colour correction etc.
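As a rough sketch (not a script we used in class), the same node graph could be built with Nuke’s Python API like this. Node class names vary slightly between Nuke versions (e.g. the camera class), and the nodes here are just placeholders for the idea:

```python
import nuke

# 2D image we want to place in 3D space
bars = nuke.nodes.ColorBars()

# The Card is the "plane" that carries the 2D image into 3D
card = nuke.nodes.Card2()
card.setInput(0, bars)

cam = nuke.nodes.Camera2()          # Camera3/Camera4 in newer Nuke releases
scene = nuke.nodes.Scene()
scene.setInput(0, card)             # feed the geometry into the Scene
scene.setInput(1, cam)              # ...and the camera

# ScanlineRender converts the 3D scene back into a 2D image we can grade etc.
render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)           # input 1 = obj/scene
render.setInput(2, cam)             # input 2 = camera
```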
Colour space
sRGB is the standard colour space used for JPEG images.
For our projects we use ACEScg as it gives us a much wider range of colour.
![](https://campuspress.uwl.ac.uk/rachel/files/2024/02/colour-space-fdaf50d9ab3bcd8b.png)
This image shows the larger range of colour provided by ACEScg versus sRGB, particularly in the greens.
This corresponds to Maya’s colour framework too, so the colour spaces will be compatible.
The display is sRGB but renders are in ACEScg.
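As a hedged sketch of how that looks when scripted (the exact knob values depend on the Nuke version and the OCIO config in use, so these names are indicative rather than exact):

```python
import nuke

# Switch the project from Nuke's legacy colour management to OCIO/ACES.
nuke.root()['colorManagement'].setValue('OCIO')

# A JPEG plate is interpreted as sRGB on read; the working space stays ACEScg
# and the viewer applies an sRGB display transform for the monitor.
plate = nuke.nodes.Read(file='/path/to/plate.jpg')
plate['colorspace'].setValue('Utility - sRGB - Texture')  # name varies with the OCIO config
```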
A note on scale
Nuke doesn’t interpret scale. We need photogrammetry to know the true scale of the shot and then input that data into 3DEqualizer or Nuke. If photogrammetry isn’t available, you can create the layout in Maya, which gives you scale data that can then be transferred back to Nuke.
Because we changed Maya’s units to metres instead of centimetres, our camera came back very small in the viewport. To avoid scaling the camera in the Channel Box (as this would ruin the shot parameters) we can use the object display settings, which change the size of the displayed camera without affecting the shot parameters.
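A small sketch of those two steps in maya.cmds (the camera name is a placeholder; the locatorScale attribute only changes how big the camera icon is drawn, which is my reading of the object display fix rather than a script we were given):

```python
from maya import cmds

# Work in metres so the scene scale matches the real-world data.
cmds.currentUnit(linear='m')

# Enlarge/shrink only the *displayed* camera icon, not the shot parameters
# (scaling the camera in the Channel Box would change the shot itself).
cmds.setAttr('cameraShape1.locatorScale', 0.1)   # 'cameraShape1' is a placeholder name
```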
Making an animation in Maya to transfer into Nuke
To test this out we created a small animation in Maya: we set the project’s scale to metres, created a cube 180cm tall to represent a character, and moved the camera through 100 frames.
We then exported this using an Alembic export, making sure to check the “Write UVs” and “World Space” options so that the UVs and scale would translate correctly.
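For reference, the same export can be run from Maya’s script editor with the AbcExport command (the object names, frame range and path below are placeholders; -uvWrite and -worldSpace are the two options mentioned above):

```python
from maya import cmds

# Export the animated cube and camera to Alembic with UVs written out
# and everything baked into world space, matching the export dialog settings.
job = ('-frameRange 1 100 -uvWrite -worldSpace '
       '-root |character_cube -root |shotCam '
       '-file /path/to/shot.abc')
cmds.AbcExport(j=job)   # requires the AbcExport plug-in to be loaded
```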
Once we had this file we moved it into Nuke using a ReadGeo node, linking the new camera and geometry to the Scene node.
Now that the only camera linked is the camera from Maya, we can view the scene in both 2D and 3D. I did have to change Nuke’s resolution so that it matched.
Maya animation playblast
Nuke comparison
Here you can see that the files match exactly in both Nuke and Maya
Render in Maya
Render in Nuke
Week 2 – Learning about lens distortion and camera tracking in Nuke
This week we looked at lens distortion and learnt more about tracking in Nuke. One of the concepts we learnt was overscan. We reformatted our footage to have an overscan, meaning more pixels are generated outside the bounding box; it’s a similar concept to padding. The reason we do this is so that when we distort the image, the result of that distortion still fits within the bounding box. It effectively gives us more of the image to work with when distorting, so we don’t lose any of the image or its resolution.
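A sketch of the overscan idea in Nuke Python (the padded numbers are just an example, not the values we used; a Reformat set to a larger box, without resizing the pixels, gives a later distortion room to push pixels outside the original frame without clipping them):

```python
import nuke

plate = nuke.nodes.Read(file='/path/to/plate.####.exr')   # e.g. a 1920x1080 plate

# Reformat to a larger "box" without scaling the image: the extra border is the overscan.
overscan = nuke.nodes.Reformat(
    type='to box',
    box_fixed=True,
    box_width=2048,     # padded width (example value)
    box_height=1152,    # padded height (example value)
    resize='none',      # keep the pixels where they are, just enlarge the format
    center=True,
)
overscan.setInput(0, plate)
```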
In the images above you can see the output of the image both distorted and undistorted.
These nodes show how the undistortion and redistortion are created, using ST maps and reformatting
![](https://campuspress.uwl.ac.uk/rachel/files/2024/02/LensDistortNode_that_would_come_from_3DE-736c2e4b9992bc24.png)
LensDistortion node – another way of distorting an image, with undistort and redistort functions within it.
LensDistortion is another node that helps us add or remove distortion from an image. It has multiple functions within it, allowing us to undistort or redistort all in the same node.
As an aside, we also learnt about Shuffle nodes. Shuffle nodes allow us to redistribute the output of the colour channels. For example, our original footage doesn’t have an alpha channel. If we want to move the result of the red channel into the alpha channel, we can do this within the Shuffle node by dragging the output of the red into the input of the alpha channel. The red output is now being given to two channels: the red and the alpha.
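A tiny sketch of that channel copy with the classic Shuffle node in Nuke Python (the newer Shuffle2 node does the same thing with a mapping table; the Read path is a placeholder):

```python
import nuke

plate = nuke.nodes.Read(file='/path/to/plate.####.exr')

# Copy the red channel into the alpha while leaving r/g/b as they are,
# so red now feeds two output channels: red and alpha.
shuffle = nuke.nodes.Shuffle(red='red', green='green', blue='blue', alpha='red')
shuffle.setInput(0, plate)
```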
This is a visual demonstration of that process, you can also see the RGBA values of a node by looking at the colour swatches underneath their name
This is a grid that represents the level of distortion applied to distort the image
This is a grid that represents the level of distortion applied to undistort the image (they are the inverse of each other).
Our next step for this exercise was to add a camera track to follow the movement in this scene; we did this by adding a CameraTracker node.
This is the result of the track and below is how this looks in the 3D viewport
We then used our camera tracking data to create a scene in Nuke and added a ScanlineRender node. This means we can link our track back to the original footage and view it in the 3D and 2D viewports, as well as add elements and render it out.
Through this initial method we made a good track; however, a lot of the points were not useful as they are in the water, which is moving too much and in opposing directions to the overall scene. Therefore we wanted a way to tell Nuke not to create any tracking markers in the water.
We can delete these manually however this is a time consuming process that isn’t really necessary.
Instead we created a mask of the buildings (the most static objects, and therefore the most useful to track), and then created a camera tracking node that only looked at the masked area
This was the result of the tracking with the mask. There is a lot less error here and a much better ratio of useful markers to ones we would ignore. This was a really good technique to learn as it saves a lot of time and is an easier way to get better results. You do have to align the settings of the camera tracker to look at the mask; that is an important step in linking them correctly. I made the mask have alpha and then chose the “mask inverted luminance” option for the mask the camera tracker used.
After exporting the camera tracking data and generating a scene from it, I added a ScanlineRender node to allow us to view the track in 2D and 3D space.
Our final step was to add a CG element to our tracked scene. For this we selected two tracked points that most closely aligned with the ground plane of our scene and told Nuke that we wanted them to align with the Z axis of our world space (this places the scene on the ground and matches up with Maya). After this we selected a tracking point and added a card to the scene, orienting it to match the angle of most of the building tracking markers. I decided to add a colour wheel image to mine to make it a little more interesting.
The result is a colour wheel in the scene that moves along with the tracking points. In the first video I left the other tracking points visible, but in the one below I rendered it out without them and lowered the saturation of the colour wheel to better match the scene.
This exercise was really useful and gave me a good introduction to tracking and lens distortion before we move on to 3DEqualizer.
Week 3 – Learning 3D Equaliser
This week we were introduced to the industry-standard tracking software 3DEqualizer. The way it works is fairly similar to tracking in Nuke, except it can achieve much more accurate results and is a dedicated tool for tracking.
I found it interesting that this software only accepts image sequences; however, those can easily be generated from captured video footage in software like Premiere or Nuke.
This is what the workspace looks like when footage is imported. We also have to remember to cache our footage so that it plays back correctly, which is the option I am choosing in this window.
The next step is to input our camera settings. When this is used in industry you are always provided with data on the camera’s lens, sensor size, focal length etc., so that you can match your CG assets exactly with the ones you have on film. In this case we simply found the data online and used that, updating our focal length and sensor size here.
The next thing we did was set up a whole bunch of shortcuts that we could use in 3D Equaliser, at first this was a bit overwhelming as I was new to the program but it did make tracking an awful lot easier when we came to it. Within only a couple of hours I was remembering and relying on the shortcut keys.
Then it was time to move on to tracking and setting the markers. With the shortcuts set up this was a fairly simple process however it did take me some time to get used to knowing what would track well as well as figuring out the interface.
We need at least 40 points to generate a track in 3DEqualizer, but more can always be added. For this test we stuck to 40.
After the track was done we ‘solved’ it by calculating the points. We also selected three points and made them our guide for the XZ axes in world space, as well as selecting one point on the floor and setting that as the origin. This is similar to how we align our 3D space in Nuke.
With that done we added locators to our points and scaled them down.
To increase the accuracy of our tracking we can lower the solve value by editing the parameters of the camera to add in lens distortion and a minor wobble in focal length. We can get these values by using the parameter adjustment window and making those values flexible. This makes our solve cleaner.
After the solve was completed we learnt how to add a 3D shape to the scene. We made the geometry within 3D Equaliser and then snapped the cube to one of the tracked points.
We then went about rotating and scaling it so that it would fit in the scene more seamlessly
This shows some of the different windows in 3DEqualizer. On the left is the list of tracking points, in the middle is our 2D viewport, and the top pane is the deviation browser, where we can see our calculated tracking points and our solve number.
Working on assignment 01
I have been working on the beginning stages of assignment 1 for this unit.
This was the data sheet provided for this scene, giving us the specifications of the camera and elements of the scene. This is an example of the spec sheet that would be provided for all industry work in this software.
This is the parameter adjustment window where I calculate my lens distortion and apply the effect of that on to my solve
This is the result of my track for assignment 1 with 40 tracking markers created, locators added and a cube aligned to the scene.
I’m happy with this result and it was great for me to practice using the software again. I can tell that I am getting better at it and more used to it the more I practice. In future weeks we will add more to this scene ready for the actual submission.
Week 4 – The importance of lenses and focal length
In this lesson we expanded our knowledge of lenses and sensors so that we can be better and more accurate 3D match move artists.
We learnt the importance of knowing the sensor size and focal length of the camera. We looked at some very informative infographics which allowed us to understand how focal length and sensor size affect each other. We saw that even if you use the same camera (meaning the sensor size stays the same), the focal length can drastically change the result of the shot: the higher the focal length, the closer to the subject the image will appear.
These first two images depict what happens when the focal length is changed but the sensor size stays the same. It can be seen that the field of view narrows as the focal length increases, but at longer focal lengths things further away can be seen in greater detail.
These photos show the range of sensor sizes and the different formats of the resulting pictures. It can be seen here that these parameters vary greatly between cameras, which highlights the importance of being aware of them before filming, and especially when starting post-production on the shot. The art of match moving is making sure the live action plates and computer-generated content match and can merge seamlessly, and these parameters are key to making this work.
For the second half of the lesson we had a refresher on tracking in 3D equaliser. This was really nice as I got to practice with the software again and learn some new elements.
Week 5 – Filming for assignment 2
This week we filmed our content for assignment 2. We learnt how to make our own HDRIs by taking pictures at different angles, with several different exposures per angle. We then combine these images in Photoshop: first we merge the brackets for the same angle to make one picture with a lot of light data, then we combine these high-detail pictures from the different angles into a 360° view of the scene, which gives us an HDRI.
The shoot was well produced; we each had roles to mirror the kind of process a professional shoot would run under. For example, we had a shot list and had different people directing and supervising.
We put all our footage into ShotGrid so that we could all access it from one database. We then individually downloaded the clips we liked, turned them into image sequences in Adobe Premiere and imported these into Nuke. From here my first step was to make a very basic track just to see the workflow through. My next step will be coming up with an idea for the shot and deciding how I want to track it. If I stick with just enhancing it in 2D (i.e. 2D assets and grading) then I can stay in Nuke. However, if I want to add any 3D CG elements, from Maya for example, I will then need to track it in 3DEqualizer.
One important thing we had to make sure of was that we carried the camera information into the Nuke file, especially the focal length and film back size (which we are more familiar with calling the sensor size). This is very important for making a good match move, ensuring that all our elements will be accurate and blend together seamlessly.
Week 6 – Moving data from 3D Equaliser into Nuke
This week we learnt how to take data from 3D Equaliser and move it into Nuke
 Small experiment I did on my own
I wanted to practice the techniques so I took a small amount of footage and tried to track it on my own
I ran into problems here where the points didn’t line up very well, and I’m still not entirely sure why this was the case. I think it is either just a bad place to track or there is not enough information in the pattern for 3DE to recognise.
Back to working on Assignment 1
The deviation browser shows an average of 0.69, which is acceptable and a sign of a good track. I have successfully aligned the scene to the world by creating a ground plane and origin point, and I have aligned a cube.
My next task was to export this data and send it to Nuke. To do this I first baked the scene (being sure to save before and after, as this operation cannot be undone) and then from that baked scene I extracted four things:
- The camera data (using the Nuke preset)
- The locators (as .obj)
- The cube (as .obj)
- The lens distortion (using the Nuke preset)
Adding in a distance constraint
While thinking through my assignment I remembered that I had completed the tracking before we were taught how to add distance constraints to make our track more accurate. Therefore I decided to go back to my unbaked tracking file and add a distance constraint, using a distance I knew from the survey data image provided with the source footage.
Because this impacts the size and scale of the track, I then had to realign the scene by setting up the ground plane and the origin again. After this I could export my data the same way as before and end up with the track in Nuke.
Trying to figure out the scene in Nuke
This is my scene setup in Nuke
Here are some videos of the object (cube) and locators being shown in the 3D viewport in Nuke, clarifying that our track is the same as the one in 3DE.
This is a more advanced way of setting up our scene, as we have now introduced lens distortion and overscan to deal with it. This means that we are expanding the area of our picture to account for lens distortion and then redistorting it before we render it out. This is what the Reformat nodes are accomplishing.
The next task was to add cleanup to our scene by painting out the fire escape sign. As a test I just got rid of the icons on it. It is important that we did this on the undistorted version, as we want the most accurate data when combining it with our track. We used a BlackOutside node to show the whole area including the overscan.
Here is the clean up isolated
These are the first set of nodes involved in the cleanup. Here you can see that we undistorted the footage and then added our cleanup nodes on a frame hold. The cleanup itself was done using Roto and RotoPaint nodes (and, within those, the clone tool). We then premultiplied this so we could merge it into the scene.
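A hedged sketch of that cleanup branch in Nuke Python (the node name, frame number and wiring are illustrative; the RotoPaint strokes themselves were of course painted by hand):

```python
import nuke

undistorted = nuke.toNode('Undistort_Plate')   # assumed name of the undistorted plate stream

# Freeze a single frame to paint on, so the patch stays still while it is painted.
hold = nuke.nodes.FrameHold(first_frame=1001)
hold.setInput(0, undistorted)

# RotoPaint holds the roto shapes and clone strokes that remove the sign's icons.
paint = nuke.nodes.RotoPaint()
paint.setInput(0, hold)

# Premultiply the patch by its alpha, then merge it back over the plate.
premult = nuke.nodes.Premult()
premult.setInput(0, paint)

merge = nuke.nodes.Merge2(operation='over')
merge.setInput(0, undistorted)   # B input: the moving plate
merge.setInput(1, premult)       # A input: the premultiplied patch
```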
For a reason I couldn’t figure out, the cleanup wasn’t aligning properly. I did try to troubleshoot this in a lot of ways (particularly closely examining the Reformat nodes and making sure the frame holds were on the same frame) and tried to refer back so I could find the root of the problem, however I struggled to find what was wrong.
Reverting to my previously exported content (from before adding the distance constraint) and trying the workflow again.
In order to try to fix the problem I thought it would be worth trying a previous export from 3DE. Because I’m moving my work between the uni computers, I thought maybe I had made an accidental mistake during my second export, since I was in a bit of a hurry as it was just before another lesson.
Therefore I decided to try the workflow with the original export that I was more confident had been done correctly, and I’m pleased to say that the workflow worked as it was supposed to this time, so I managed to get the desired result.
This is a video showing the completed process.
I’m very glad I got this to work as I was feeling a bit defeated after my last attempt. I know the quality of the cleanup could be improved, however this was primarily a test to make sure the workflow worked this time. Because the node setup is solid, I can always come back and improve the patch without it being destructive to the project.
In the class that followed me completing this, I learnt that I had done the cleanup on the wrong section. Fortunately it was easy to correct thanks to Nuke’s non-destructive workflow. I also added a blur node to the roto to try to make it even more seamless.
Assignment 1 render
Short breakdown
Week 7 – Experimenting with the next shots for assignment 2
As I had finished assignment 1, I decided to practice tracking the footage we shot with Charles. I chose one of the scenes that had a lot of movement and a lot of interesting things that could be tracked.
A render of a first pass of the scene in Nuke
It was really valuable for me to go over this again, particularly because I hadn’t tracked in 3D Equaliser for a few weeks, so I enjoyed reinforcing my knowledge of the process and this was a fun piece to work on
Week 8
This week we learnt how to add lens distortion to our 3DE track and had a refresher on how to use 3DE.
My first task this week was to get help on my track from the previous week, as I noticed when looking at it that the markers were not right and something had gone wrong with my reformatting and lens distortion nodes. After asking and troubleshooting with my lecturer I managed to fix the problem by removing the Reformat nodes and adding the lens distortion at the end of the node tree, therefore applying it to the finished image.
The tracked shot lined up
Adding lens distortion to a new track
Throughout this lesson we worked on a new track which was a panning shot of the outside of a building. This meant we could refresh our skills in 3DE and also learn how to add in lens distortion.
After bringing in our scene, placing tracking points and solving the track, we then added lens distortion by importing a lens distortion grid image (taken with the same camera we shot the scene with) as a new sequence in our file. With this file open we moved to the “Distortion Grid” window. This allows us to show the software what the distortion looks like so it can calculate it.
We are presented with a red line grid and we have to take the vertices of that grid and align them to the squares of the photographed grid. This gives the computer accurate information on how the lens is distorted.
After this we expand the grid using Ctrl + the corresponding arrow key to give the computer as much data as we can, and finally we calculate the lens distortion so we can apply it to our scene.
The above screenshot shows how the lens distortion affects the scene. The path of the tracking is warped as the lens would warp it.
We then go back to our parameter adjustment window and make our lens distortion adaptive to give a more organic feel to the shot. This is the same principle as covered previously just with different data
Throughout this project we kept a keen eye on our deviation browser, trying to lower the average with every tweak
This is a screenshot showing the whole scene.
Week 9 – Dynamic lens change
This week we learnt about how to track in 3DE when the lens is zooming in.
This is the original footage we were provided this week
When setting up our scene in 3DEqualizer the setup was basically the same; it was just a case of setting the lens to have a dynamic focal length to let 3DE know the focal length would be variable.
In addition to this, the process for tracking changed slightly as we had to add a curve calculation. This is done in the Curve Editor -> camera/lens curves -> focal length/zoom.
From these options we picked ‘calculate zoom curve’. This allowed us to analyse the camera’s zoom so that some of that solving could already be done and wouldn’t influence the accuracy of our tracks.
In this window we tell it how much data to analyse using the number of CV curves and the complexity of the curve-reading parameters. We made sure our CVs were spread out across the footage and specified the focal range for each segment. For us this stayed the same throughout the whole shot, at 24mm to 70mm.
Once we solved the curve, the highest point of the deviation was a lot better, although still quite high. It then became a process of shot refinement and using parameter adjustments. We also added in the tripod height that we recovered from our survey data; the tripod was set at 1.6m.
Overall this was a good exercise and I got to learn a new technique and new parts of 3DE. Getting the track to work and the deviation browser below 1 was a bit of a challenge; there were a lot of small refinements needed.
Modifying a bus model for my submission for assignment 2
After making the decision to try and incorporate a bus into my scene I spent time trying to find a suitable model that would work when changing the decade of the scene.
I found this bus model on Sketchfab and really liked the aesthetic of it.
Unfortunately, when I opened the model the textures didn’t work with Arnold straight away. This was a problem I had to fix, as I knew I wanted to render an animation of it in Arnold to comp over my scene.
After much experimentation I figured out that if I remade the materials as Arnold materials I could add them to the respective surfaces in the model
To do this I had to make each material from scratch: find the old material, highlight the items that were textured with it, and assign the new material instead.
A workflow I found easier was to separate the geometry so that I could group all the pieces by material first. This meant that after I made all the materials I could select the geometry to texture much more easily, rather than getting at it through the old material.
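A sketch of that per-material rebuild in maya.cmds (this assumes the Arnold plug-in, mtoa, is loaded so the aiStandardSurface type exists; the material, texture and group names are placeholders, not the actual names from my scene):

```python
from maya import cmds

def rebuild_material(name, colour_map, geometry):
    """Create an aiStandardSurface, wire a file texture into its base colour,
    and assign the new shading group to the given geometry."""
    shader = cmds.shadingNode('aiStandardSurface', asShader=True, name=name)
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=name + 'SG')
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader', force=True)

    tex = cmds.shadingNode('file', asTexture=True, name=name + '_color')
    cmds.setAttr(tex + '.fileTextureName', colour_map, type='string')
    cmds.connectAttr(tex + '.outColor', shader + '.baseColor', force=True)

    cmds.sets(geometry, edit=True, forceElement=sg)   # assign to the pre-grouped pieces

# Example call with placeholder names for one of the material groups:
rebuild_material('bus_body_MAT', '/textures/bus_body_diffuse.jpg', 'bus_body_GRP')
```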
The above screenshot shows all the material groups that I had.
The below screenshot shows all the materials I remade
These screenshots show the result of this process, which worked, although the textures aren’t perfect and not quite as good as the original viewport render. In my scene the bus will only be moving horizontally in a flat line, so hopefully these discrepancies will not be noticeable.
My next step is to render an animation of this out and add it in to my Nuke scene that has the 3DE track already set up
Week 9 – Calculating a dynamic zoom change
This week we learnt how to use 3DE when dealing with a shot that includes focal length changes due to the camera zooming.
This is the original footage we were supplied
This is the survey data provided telling us the specifications of the camera, the range of focal length that is used and the height of the tripod
The first step was to set our lens to be dynamic/zooming in 3DE, this tells the software that our focal length is variable.
An editor we hadn’t used before, and that was important in this project, was the window that calculates the zoom curve. This window analyses the footage and determines when the camera zooms in and out and by how much. This provides more information for the scene and our trackers and subsequently reduces the error in our tracking points.
This screenshot shows the analysis of the zoom curve in the topmost horizontal panel; the orange line represents the level of zoom throughout the footage. The deviation browser is the panel beneath. We worked hard throughout the lesson to reduce this number, however we all ended up having problems with the end of the scene giving quite high values. I got mine down as far as I could in the time available and was relatively happy with the result.
This was a good exercise and definitely something that is very useful to learn. It means we have the tools to track a wider variety of footage.
Continuing work on assignment 2
I exported the tracking into Maya using 3DE’s default export for Maya. This meant that the ground plane was in the right place and all I had to do was reinsert the image plane. With this set up I could import my bus and place it into the scene.
My next step was to animate the bus so that it would be moving at the same speed as the car I was trying to cover. Once I had all these elements in place I could render the file and add it to my Nuke scene.
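A minimal sketch of that constant-speed move in maya.cmds (the group name, frame numbers and distances are placeholders; linear tangents keep the speed even so it stays in step with the car):

```python
from maya import cmds

bus = 'bus_GRP'   # placeholder name for the bus group

# Two keys on translate X give a straight, constant-speed pass through the shot.
cmds.setKeyframe(bus, attribute='translateX', time=1001, value=-20.0)
cmds.setKeyframe(bus, attribute='translateX', time=1100, value=15.0)

# Linear tangents so the bus doesn't ease in/out and drift against the plate.
cmds.keyTangent(bus, attribute='translateX', inTangentType='linear', outTangentType='linear')
```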
This week’s progress on assignment 2 was to add the bus to the scene in Nuke.
The video below shows the stages I went through: first adding the bus into Maya, animating it and rendering it out as a playblast. The second clip is a Maya render and the third clip is the animation merged onto the Nuke scene.
Overall I am happy with this result. I learnt a lot and I am pleased to have it working and in Nuke now. I do think I could do some work on the lighting to make it fit better with the scene, so I may end up rendering it out again, however I am happy with the scale of the bus and the animation.
My next steps in Nuke are to grade it and add in the necessary roto so that it is accurately incorporated.
Week 10 – Continuing to work on assignment 2
This week I tried to work on the lighting on the bus. The hardest thing for me to figure out was how to generate shadows from the light; even with an aiShadowMatte material I struggled to get this to work.
This last render proves that the HDRI does create shadows yet they weren’t showing on the bus, just the cube model.
In the end I decided to just render the bus with an area light to hopefully add some more directional light. Unfortunately this added unwanted light to other areas of the scene, creating a kind of strobing effect in the foreground, so I won’t be going ahead with this setup and will instead revert to my previous render, where the bus is lit evenly and successfully with a skydome light.
Figuring out the workflow for the rotoscoping
My task today was to figure out how to set up the rotoscoping for the scene, as there are elements that the new bus model obscures. After testing different methods I found a suitable solution.
I think the layout of my node graph could be improved, however I will ask about the best way to do this in class. For now I have a setup that allows me to progress with the project.
Week 11 – Moving from 3DE to Maya
This week we learnt how to take the data from 3DE and import it into Maya and Nuke so that we could make CG elements that would fit the scene
This is an example of a good naming convention for the files, which is useful to know as files can very easily become disorganised through the folder hierarchy or by not being named correctly.
Once we exported the file to Maya we had a problem where the frame range was incorrect. We want all our frame ranges, in every piece of software, to start from 1001. Therefore we ended up going into the graph editor in Maya and moving the camera animation to begin on frame 1001.
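Rather than dragging keys in the graph editor, the same offset can be sketched in maya.cmds (the camera name is a placeholder; this just shifts every key on the camera so the first one lands on frame 1001):

```python
from maya import cmds

cam = 'trackedCamera'   # placeholder name for the imported 3DE camera

# Find how far the animation currently is from the desired start frame...
first_key = cmds.findKeyframe(cam, which='first')
offset = 1001 - first_key

# ...and shift every keyframe on the camera by that amount.
cmds.keyframe(cam, edit=True, relative=True, timeChange=offset)
```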
These are some screenshots of me placing objects in the scene within Maya. Because we had already orientated our scene to the ground it was very easy to align objects to the positioning of the locators
To put the footage into Maya we made sure we had an undistorted version. To do this we exported the lens distortion node from 3DE and imported it into Nuke. We applied the undistortion to our footage and rendered that out so we could use that image sequence inside Maya.
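A sketch of writing that undistorted plate out of Nuke for Maya (node name, paths and frame range are placeholders; the key point is rendering an EXR image sequence that Maya’s image plane can read):

```python
import nuke

undistorted = nuke.toNode('Undistort_Plate')   # assumed name of the undistorted stream

# Write the undistorted plate as an EXR sequence for use on Maya's image plane.
write = nuke.nodes.Write(file='/renders/plate_undistorted.####.exr', file_type='exr')
write.setInput(0, undistorted)

nuke.execute(write, 1001, 1100)   # render the shot's frame range
```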
This is the footage inside Maya, with the tracking points placed correctly and the floor accurately aligned to the ground plane in Maya.
You can see here that I added two CG elements: the long blue cylinder on the floor in the background and the blue sphere on the table. To position them I created the primitives, moved their pivot points to the bottom of the shapes, and then snapped each shape by its pivot to the tracking points.
This is a screenshot in perspective view so that the camera can also be seen
This is a screenshot of what will essentially be the rendered output
This is how we merged the CG elements with the original plate
After trying two methods we decided that the cleaner one was to use the distorted original plate, import the undistorted CG elements, distort those, and then merge everything together and render it out. This makes a cohesive image and is the easiest node graph to read and make sense of.
This is the final result
Week 12 – Mega scans in Maya
This week we looked at using megascans in Maya
We source our megascans from Quixel. Since Quixel was bought by Epic Games these resources are free to use and download, giving you a wide variety of resources and assets to use in projects.
With these resources you can download the 3D model and all textures as either JPEG or EXR. We always use EXR, as these are 32-bit linear files, compared to a JPEG which is 8-bit and not linear.
This is the displacement map loaded in Nuke. The reason we can’t see any of the data here is that the scale of our displacement is really small; although the changes are technically there, the variation is too small for us to see.
Here is the displacement on the book and the view of our hyper shade graph
This is a render of the book placed on the table within the tracked shot
Finishing Assignment 2
This week I set about finishing assignment 2 for this module. While continuing to work on the rotoscoping I realised there was a flaw with my bus: its wheels were dipping beneath the road surface. In hindsight I think I did this deliberately to cover up the car in the shot, but I thought it looked too unrealistic to be left alone, so I re-rendered the shot with the bus in a better position.
I did try to remove the wheels of the car in After Effects using the Content-Aware Fill tool, however this gave less than ideal results as the shot was just too fast and had too much motion.
When working on the roto I liked to work systematically, so I would make a new node for each piece, making sure I wouldn’t mess up or overwrite any work I had completed. However, this meant I would end up with a long string of roto nodes, so I condensed them once each piece was done, leaving me with one roto node containing a lot of different shapes.
Screen recording of all the roto elements
This screen recording shows all the different shapes I rotoscoped out of the shot
Because I was worried I hadn’t done enough to this shot and that it didn’t integrate well enough, I decided to make a poster to go on the bus. I didn’t spend all that much time on this as I knew it was an extra piece, and I knew it would render out fairly small and not be the focal point of the shot.
Colour grade and noise
With the bus re-rendered with the new scale and the poster I started working on grading the shot and adding in some noise.
I’m happy with how this turned out and it was a lot easier to accomplish than I anticipated.
I came across a strange problem where the lighting would flicker in the original footage. I re-exported it and it still happened, and I couldn’t figure out why; however, I fixed the problem by rendering the shot out of Nuke as an EXR image sequence rather than an MP4.
Finished node graph
Here is my finished node graph for the project
The final render