Week 1:
CGI is for Losers!
“Why do some filmmakers say no to CGI?”
Some filmmakers, like Christoph Waltz, are against the heavy use of CGI in films because filmmaking began as an art form built on acting. This meant that even if a film back then did not look especially aesthetically pleasing, it would still be well recognised if its characters were well acted.
As time went on and more CGI was introduced to cinema, notably through the Marvel Cinematic Universe films, the demand for CGI increased drastically. A superhero is fictional in the sense that the elements used to bring these “super heroes” to life are not real, so filmmakers need CGI for things such as a laser blast that emits from someone’s eyes. Originally, a character with this ability is written in a comic book, a medium where acting does not dictate how good the story is, unlike film. In a comic book it works; in a film, adding this laser to a character overshadows their acting ability, because the character being played would not work without the laser. Hence, some filmmakers are disappointed that CGI is becoming the main driving factor for how good a role appears to be played.
The “loser” label Waltz uses implies that CGI is lazy in the sense that an actor becomes heavily dependent on their character’s CGI abilities within the film’s world and can sacrifice some of their acting skill, because the CGI sits in the spotlight as the main driving factor for how good a scene or a character is to an audience.
What impression might these comments give to an audience?
Claims like the one Waltz made are meant to drive audiences away from CGI-heavy films. By the logic I explained earlier, if CGI is “lazy” and “for losers” because actors do not have to improve their acting abilities to be featured in CGI scenes, then practical effects force an actor to improve, because they need to interact with a physical environment. So if a movie can only be good because of the acting, CGI would be rendered useless.
However, films like “Gravity” (2013) rely heavily on CGI in a way that still requires actors to be aware of their non-practical, digital environment. Space scenes, for example, are impossible to film for real as of today and therefore have to be made digitally. This counters the idea that using CGI is lazy.
Moreover, human beings are curious by nature: an audience would eventually get bored of purely non-fictional media. We need bigger stories, stories that are alien to us, to really keep us entertained. I believe filmmakers should not sacrifice a good story simply because acting is not the main driving factor. Even today, films with signature CGI elements, like “Star Wars” and its lightsabers, build custom training programmes to teach their actors how to adapt to a fake element, which very much involves their acting skills.
Potential topics for my investigative study (FMP)
1- How closely an actor is tied to a specific VFX-based role.
Actors such as Robert Downey Jr. and Hugh Jackman are heavily associated with their CGI-based character roles, Iron Man and Wolverine. Most audiences tend to refer to them by their roles instead of their names. By contrast, actors like Robert De Niro and Al Pacino are mostly referred to by their names, and when they are brought up in conversation, how good their acting career has been is the centre of the conversation rather than any single role. Even when an actor plays a role with no VFX multiple times, they are still not associated with that role the way VFX roles are. My question is: what makes audiences give a new identity to these VFX-based actors?
2- Human vs AI production: who can make the better production within a given timeframe?
I will conduct two live industry briefs of the same idea: one where I predominantly use AI and one where I do not use AI at all. In the aftermath, I will compare the results and see where I worked better. For example, I could get someone to learn how to create a non-VFX production using AI from scratch and compare how quickly they learned that skill with me starting a VFX production from scratch.
3- Virtual production vs green screens
Virtual production has rapidly emerged as a game-changing alternative to traditional green-screen filmmaking. Using LED walls powered by real-time rendering engines like Unreal Engine, filmmakers can display digital environments directly on set, allowing actors to perform within immersive worlds instead of against a blank backdrop. This topic relates to the “lazy” element I discussed earlier in “CGI is for Losers!”, because it concerns how actors interact with the CGI on set rather than having it applied around them in post-production.
Deepfakes
This video shows a computer-generated image of Queen Elizabeth II delivering a Christmas message. It looks and sounds exactly like her, but it’s not actually her. This technology, called “deepfake,” uses artificial intelligence to create realistic but fake videos, making it seem like someone is saying or doing things they never did. In this particular video, the deepfake Queen talks about current events and even performs a TikTok dance, highlighting how convincing these fakes can be.
Comparing “The Mandalorian” deepfake with Queen Elizabeth’s deepfake
The Queen Elizabeth and Luke Skywalker deepfakes highlight the technology’s divergent paths of social commentary versus cinematic storytelling. Channel 4’s “Deepfake Queen” was a transparently fake satire created by a professional studio to deliberately warn the public about the dangers of misinformation, using the spectacle to deliver a message about trust. In contrast, the deepfake of a young Luke Skywalker in The Mandalorian was a narrative tool intended to be accepted as “real” within the story, famously showcasing how a fan’s version could surpass the official studio’s initial CGI efforts and lead to a new era of digital characters in film. While the Queen’s deepfake was designed to make you question what you see, Luke’s was created to make you believe in it.
Ethicality
The primary ethical implication of deepfake technology is its profound ability to erode trust and manipulate reality, threatening both individuals and society. For individuals, this manifests as a gross violation of consent and identity, enabling malicious acts like the creation of non-consensual pornography, targeted harassment, and sophisticated financial fraud through impersonation. On a societal level, deepfakes pose a critical threat to democracy and social cohesion by fuelling political disinformation, allowing bad actors to fabricate “evidence” of events that never happened or, conversely, to dismiss genuine evidence as a “deepfake”—a phenomenon known as the “liar’s dividend.” This undermines the integrity of journalism, the justice system, and our shared sense of reality, forcing us to confront a world where we can no longer instinctively trust what we see and hear.
Unreal: The VFX Revolution podcast
The first episode of Unreal: The VFX Revolution introduces listeners to the origins of modern visual effects, focusing on the 1970s moment when George Lucas, John Dykstra, and a small group of innovators founded Industrial Light & Magic in a Van Nuys warehouse. Drawing on interviews with VFX pioneers such as Robert Blalack, Richard Edlund, Dennis Muren, and Douglas Trumbull, the episode explores how early film artists combined ingenuity, engineering, and risk-taking to create the ground-breaking effects of Star Wars and other classics. By situating these achievements against the backdrop of pre-digital, mechanical, and optical methods, the programme highlights the leap that VFX represented at the time, while foreshadowing the digital transformations to come.
This episode not only documents technical milestones but also emphasizes the collaborative and experimental spirit that defined the birth of the VFX industry. As the series opener, it establishes both a historical baseline and a narrative arc that will carry through subsequent episodes, underscoring how creative ambition and technological innovation intersected to redefine cinematic storytelling.
BBC Radio 4 (2023) Unreal: The VFX Revolution, Episode 1: A long, long time ago… [Podcast]. BBC. Available at: https://www.bbc.co.uk/programmes/m001nvyb (Accessed: 30 September 2025).
Week 2:
AI PHOTOGRAPHY
Photographer admits prize-winning image was AI-generated
The article highlights several key debates sparked by Boris Eldagsen’s AI-generated image. These include questions about the definition of photography and whether an image created without a camera can truly be called a photograph; concerns about authenticity and transparency, as many argue artists should clearly disclose AI use; and issues of fairness in competitions, since AI images may have an advantage over traditional photographs. It also raises questions about creativity and authorship, exploring who deserves credit—the human or the machine—and the broader role of AI in art, with some seeing it as a new medium and others as a threat to artistic integrity. Finally, the case touches on public trust and truth, as AI-generated “photos” risk undermining confidence in photography as a faithful representation of reality.
Key debates:
- Definition of photography: Should AI-generated images be considered photography if no camera was used?
- Authenticity and transparency: Should artists be required to clearly disclose when AI tools are used in creating images?
- Fairness in competitions: Is it fair for AI-generated works to compete against traditional photographs taken by humans?
- Creativity and authorship: Who deserves credit for an AI image — the human who prompted it or the machine that produced it?
- Role of AI in art: Is AI a legitimate new artistic medium or a threat to traditional artistic skills?
- Public trust and truth: Do AI-generated “photos” undermine photography’s reputation as a truthful representation of reality?
AI is transforming visual effects production by making complex tasks faster, cheaper, and more accessible. It can automatically generate realistic environments, enhance motion capture, de-age actors, and even create lifelike characters or crowd scenes that once required large teams and long hours. This automation allows artists to focus more on creativity rather than repetitive technical work. However, challenges remain—AI can produce inconsistent or uncanny results, raising concerns about quality control, artistic originality, and the displacement of human jobs. There are also ethical and legal issues, such as the use of AI-generated likenesses and questions of ownership over AI-created content. Despite these challenges, the integration of AI offers major advantages, including greater efficiency, reduced costs, and expanded creative possibilities in film and media production.
Prompt photography (AI-generated images) and traditional camera photography differ in how they create and represent visual reality. Traditional photography relies on capturing light through a lens to record real moments, blending artistic vision with technical skill in composition, lighting, and timing. In contrast, prompt photography uses text or data prompts to generate images entirely through algorithms, creating visuals that may look photographic but are synthetic rather than captured from the real world. Yet, the two overlap in their artistic intent—both aim to evoke emotion, tell stories, and explore visual aesthetics. Technologically, both depend on digital tools and editing software, though AI extends creative control beyond what a camera can see. In terms of visual culture, AI-generated imagery challenges how society defines authenticity, originality, and artistic authorship, pushing the boundaries of what “photography” means in the digital age.
Eldagsen, B. (2023) Photographer admits prize-winning image was AI-generated. The Guardian, 17 April. Available at: https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated (Accessed: 9 October 2025).
Essay proposal
The proposed investigative study, titled “Human vs AI: Who Can Make Better Post-Production Scenes?”, will explore the creative and technical boundaries between human artistry and artificial intelligence in visual effects. I will produce three distinct post-production versions of the same concept: one crafted entirely by a human artist using traditional software tools, one generated purely from AI prompts, and one created by a human working under the guidance of artificial intelligence. This experiment aims to determine which approach produces higher-quality results in areas such as compositing, 3D modelling, rendering, lighting, and animation. The study is driven by a growing need within the creative industry to understand how AI can be integrated into post-production workflows without compromising artistic integrity. As a VFX artist myself, I am personally motivated to discover where AI can enhance efficiency and where human creativity remains irreplaceable. The findings will provide valuable insights for practitioners and studios deciding how to balance automation with manual craftsmanship in an increasingly AI-driven production environment.
Excluding the AI prompt-based version, both the AI-guided and the human-made versions will start from the same raw footage to ensure fairness. The human-made version will rely on established industry software such as Blender, Nuke, After Effects, and Houdini, where all elements, from compositing to final rendering, will be manually executed. In contrast, the AI version will utilize emerging generative tools and guides, such as neural rendering systems, diffusion-based compositing, and AI-assisted modelling and texturing. Each stage will be carefully documented, including workflow processes, time investment, error rates, and aesthetic decision-making. I will analyse and compare the three outputs using qualitative and quantitative criteria, including visual realism, creative coherence, technical accuracy, and overall production efficiency. To support my findings, I plan to include professional insights through short interviews with VFX practitioners about their experiences with AI, alongside case studies of AI applications in commercial post-production. Through this combination of practical experimentation and secondary research, I aim to determine the ideal AI-human workflow order for myself or anyone within the field.
Several key sources inform this investigation. Bebeshko et al. (2021) discuss the use of neural networks in generating 3D models from 2D images, highlighting both the potential and limitations of AI-based modelling, which will guide my understanding of AI’s technical capacity. Zhang et al. (2023) explore neural rendering techniques that accelerate digital model building, providing insight into how AI can improve workflow efficiency. The UX Design article “Using AI for 3D Rendering” (2023) offers a practical look at hybrid workflows where human creativity and AI generation coexist. Industry commentaries such as Post Magazine’s “Outlook: AI in Post Production Should Not Be Feared” (2024) argue that AI should be embraced as a tool rather than feared as a replacement, while LBB Online’s “Human vs Robot: The Shape of VFX and Post Production in 2025” (2024) contextualizes the professional and economic implications of AI integration. Together, these sources frame both the opportunities and challenges of AI in creative industries. Ultimately, this study will not only compare technical outcomes but also engage with the philosophical question of authorship and creativity in an age where machines can generate art. By conducting this side-by-side comparison, I aim to contribute to an informed conversation about the future of VFX and to identify where human imagination still outshines algorithmic precision.

Week 6: Lit Review of my Research Source
Narayan, A. D., Caillard, D., Matthews, J. and Nairn, A. (2022) ‘Artificial imagination: Industry attitudes on the impact of AI on the visual effects process’, Interactions: Studies in Communication & Culture, 13(2), pp. 211-228. Available at: https://www.intellectdiscover.com/content/journals/10.1386/iscc_00056_1 (Accessed: 7 November 2025).
Annotation:
This paper reports a pilot qualitative study based on semi-structured interviews with nine experienced visual-effects (VFX) artists, exploring how AI is perceived within the VFX industry. The authors find overall ambivalence: practitioners recognise AI’s potential to streamline tasks (e.g., rotoscoping, generative imagery) but also express concerns around job security, creativity erosion, and the shifting role of the human artist. They emphasise that adoption is uneven, and that meaningful dialogue is needed about how human-machine collaboration might evolve.
In relation to my assignment comparing AI and human performance in VFX creation, this source offers direct insight into how industry professionals view that very question: it frames the human-AI tension and shows which criteria (speed, creativity, control, employment) matter. The article appears credible: it is published in a peer-reviewed journal and authored by scholars (including Nairn at Auckland University of Technology) with a focus on media and communication studies. It contributes to my research by grounding the human-AI VFX debate in practitioner attitudes, offering themes I can use (e.g. fear of deskilling, trust in AI, creative ownership) and helping to frame my test conditions and evaluation criteria accordingly.
My recorded process:
Plate footage:
AI guided method:
AI Chat: https://gemini.google.com/share/39a630f4dfca
Footage:
Plate info:
1920×1080
25 fps
Focal length: 42.5mm
Shutter speed: 100
ISO: 800
Aperture: 8
HDRI 360 photo info:
4K
30 fps
HDR photo
Process
15/12/2026
Filming: Filmed the original background footage of the backyard – 127 minutes
15/12/2026 to 16/12/2026
Tracking: Tracked the camera movement from the footage to solve the 3D space using 3D Equaliser – 79 minutes
17/12/2026
Imported the solved camera track and ground plane into Blender. Used reference images to model the ship’s individual components, i.e. wings, cockpit, hull, etc. – 133 minutes
Shaded the ship in Substance Painter using realistic metal textures – 49 minutes
18/12/2026
Rigged the ship’s mechanics like the canopy opening and added custom properties to animate them in a non-destructive way – 71 minutes
Animated the full landing sequence with gear deployment. – 181 minutes
Added and aligned the shadow catcher to the ground plane. – 4 minutes
Created particle systems for the engine exhaust and dust impact. – 19 minutes
19/12/2026
Rendering: Exported the ship and shadow data from Blender as OpenEXR sequences. Imported the renders into Nuke to combine with the footage and colour grade. – 53 minutes
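To show what this Blender-to-Nuke hand-off looks like in practice, here is a minimal Blender Python sketch of the kind of render setup described above; the view-layer name and output path are placeholders rather than my exact scene values.

```python
import bpy

scene = bpy.context.scene
view_layer = scene.view_layers["ViewLayer"]  # default name; yours may differ

# Enable the extra passes Nuke will need alongside the beauty render
view_layer.use_pass_z = True                      # depth
view_layer.use_pass_vector = True                 # motion vectors
view_layer.cycles.use_pass_shadow_catcher = True  # shadow-catcher pass (Blender 3.x Cycles)

# Write everything into one OpenEXR MultiLayer sequence instead of flat images
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '16'    # half float is usually enough
scene.render.filepath = "//renders/ship_"         # placeholder output path
```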
Visual process:

Summary:
The Experience: Complexity Behind the Invisible
I found the process to be deceptively complex. Initially, I assumed the hardest part would be modeling the ship, but I quickly realized that integration is the real challenge. The most difficult moments weren’t creative, but technical—specifically, understanding why the shadow catcher wasn’t transparent or why the dust particles looked like “popcorn” instead of smoke. There was a distinct moment of clarity when moving from Blender to Nuke: the realization that I wasn’t just making a video but creating a dataset (EXRs) to be manipulated later. This shift from “baking everything” to “controlling raw data” was intimidating but ultimately empowering.
Time Management: The “Tweaking” Trap
Time management was a significant hurdle. While the initial tracking and modelling followed a predictable schedule, I underestimated the time required for simulation and physics tweaking.
The Sink: I spent a disproportionate amount of time adjusting particle velocities and material densities for the dust and exhaust. Because these required test renders to see the result, the feedback loop was slow.
The Win: Breaking the animation into 10-frame “chunks” (Approach, Impact, Roll-out) was a major time-saver. It prevented me from getting overwhelmed by the 120-frame timeline and ensured the “weight” of the landing was solved before I worried about secondary details.
Key Learnings: Imperfection is Realism
The steepest learning curve involved unlearning the “perfection” of 3D.
- Lighting: I learned that 3D lights are too clean. The breakthrough came when I had to manually lift the black levels to match the “milky” shadows of the real footage.
- Physics: I discovered that animation isn’t just movement; it’s reaction. The ship didn’t look real until I added the suspension compression and the slight nose dip upon braking.
- Pipeline: Understanding the difference between a PNG render and an OpenEXR MultiLayer file was crucial. It taught me that a professional workflow preserves data (Depth, Vectors, Shadows) rather than just pixels.
Quality of Output
The final quality exceeded my initial expectations for a first attempt, largely due to the compositing phase. If I had rendered the final video directly out of Blender, the ship would have looked like a high-contrast sticker pasted onto the background. By moving to Nuke and applying grain matching, edge blurring, and color grading, the asset settled into the plate convincingly. The addition of the volumetric dust and heat exhaust—while technically difficult—was the “1%” detail that sold the shot, transitioning it from a simple 3D overlay to a believable visual effect.
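As a rough illustration of that compositing stage, the grade, edge softening and re-grain could be wired up with Nuke's Python API along these lines; the file paths and values are placeholders, not my actual node graph.

```python
import nuke

# Placeholder paths for the plate and the Blender EXR render
plate = nuke.nodes.Read(file="footage/plate.####.exr")
ship = nuke.nodes.Read(file="renders/ship_####.exr")

# Lift the CG blacks slightly so they match the "milky" shadows of the plate
grade = nuke.nodes.Grade(inputs=[ship])
grade["black"].setValue(0.01)  # the 'lift' control on a Grade node

# Soften the CG edges a touch so they sit with the lens softness of the footage
soft = nuke.nodes.Blur(inputs=[grade], size=0.7)

# Composite the ship over the plate, then re-grain the whole frame
comp = nuke.nodes.Merge2(inputs=[plate, soft], operation="over")  # input 0 = B (plate), input 1 = A (ship)
grain = nuke.nodes.Grain(inputs=[comp])

nuke.nodes.Write(inputs=[grain], file="comp/final.####.exr")
```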
Final version:
Manual method
Production Period: December 24, 2025 – January 2, 2026
Primary Tools: Blender (Modelling, Animation), Substance Painter (Texturing), Nuke (Compositing).
Phase 1: Assets & Rigging (Blender)
The project began with the conceptualization and construction of the spacecraft. I focused on a hard-surface design that felt both futuristic and utilitarian.
- Dec 24 – Dec 25: Hard Surface Modelling. I blocked out the primary silhouette of the ship, focusing on the dual-engine pods and the cockpit glass. I then refined the mesh, adding “greebles” and mechanical details to sell the scale.
- Dec 26: Rigging & Constraint Setup. To ensure realistic movement, I rigged the landing gear and the engine nozzles. I used transform constraints to ensure the landing gear retracted and extended smoothly during the animation phase.
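For reference, a minimal Blender Python sketch of the kind of constraint setup described in the rigging step above; the object names are hypothetical.

```python
import bpy

# Hypothetical names: a landing-gear strut driven by a single control object
gear = bpy.data.objects["gear_strut_L"]
ctrl = bpy.data.objects["gear_control"]

# A Transformation constraint maps the control's motion onto the strut,
# so the whole gear can be retracted or extended by animating one value
con = gear.constraints.new(type='TRANSFORM')
con.target = ctrl
con.map_from = 'LOCATION'  # read the control's position...
con.map_to = 'ROTATION'    # ...and convert it into strut rotation
# The from/to min-max ranges are then set per axis (in the constraint panel
# or via properties such as con.from_min_z) to define the motion limits.
```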
Phase 2: Look Development (Substance Painter)
Once the UVs were unwrapped in Blender, I moved the asset into Substance Painter to give it a “weathered” and lived-in feel.
- Dec 27 – Dec 28: Texturing & Shading. I applied a base metallic layer, followed by procedural “edge wear” and dirt generators. I specifically focused on the heat-stress textures around the engine exhausts and the grimy streaks across the hull to match the overcast lighting of my background plate.
Phase 3: Animation & Matchmoving (Blender)
With the textured ship back in Blender, I focused on the physical interaction between the ship and the environment.
- Dec 29: Camera Tracking. I tracked the background footage of the courtyard to ensure the ship remained “pinned” to the ground during the landing sequence.
- Dec 30: Hero Animation. I keyframed the ship’s descent, adding subtle “secondary motion” (a slight wobble) to simulate wind resistance and thruster compensation before the final touchdown.
Phase 4: Compositing & VFX (Nuke)
The final stage involved “marrying” the CG ship with the live-action plate.
- Dec 31: Pass Integration. I exported the beauty, shadow, and AO (Ambient Occlusion) passes from Blender. In Nuke, I used these passes to ground the ship, ensuring the shadows under the landing gear matched the soft, diffused lighting of the courtyard (see the shadow-pass sketch after this list).
- Jan 1: Atmospheric Effects. I integrated the dust/smoke elements at the bottom of the frame. I used a combination of noise patterns and masking to make it look like the engine downdraft was kicking up debris from the grass.
- Jan 2: Final Colour Grade & Grain. To finish the shot, I added a subtle film grain and a unified colour grade across the CG and the plate to ensure they shared the same black levels and highlights.
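As noted in the pass-integration step, here is a minimal sketch of how a separate shadow pass can be multiplied into the plate in Nuke; the paths and mix value are placeholders.

```python
import nuke

plate = nuke.nodes.Read(file="footage/plate.####.exr")
shadow = nuke.nodes.Read(file="renders/shadow_pass.####.exr")  # placeholder shadow render

# Darken the plate by the shadow pass: 'multiply' combines A (shadow) with B (plate)
grounded = nuke.nodes.Merge2(inputs=[plate, shadow], operation="multiply")

# Dial the mix back if the contact shadow reads too heavy for the soft courtyard light
grounded["mix"].setValue(0.8)
```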
AI prompt method
Trial 1: an alien spaceship lands on the grass creating a dusty environment. After landing the canopy of the spaceship opens.
Trial 2: a sleek alien star spaceship with jet wings lands on the grass slowly creating a dusty environment blending with the scene’s atmosphere. after landing the canopy of the spaceship opens.
The efficiency of modern artificial intelligence in the realm of video production represents a paradigm shift from traditional digital content creation. By analyzing the two trial videos generated through a simple two-prompt process, it becomes evident that while AI offers an unprecedented speed-to-output ratio, it introduces a unique set of trade-offs regarding control and consistency when compared to standard 3D software workflows.
A comparison of the two trial videos reveals the core strengths and inherent limitations of generative AI. Both videos exhibit a high degree of visual fidelity, characterized by cinematic lighting and complex textures that would take a human artist significant time to refine. Trial 1 showcases the “dream-like” fluid quality typical of AI, where environmental elements morph and flow with a high degree of aesthetic appeal but low structural rigidity. Trial 2 demonstrates how iterative prompting can stabilize these visuals, offering a more focused subject and refined movement. In both instances, the production time remained remarkably low—likely under two minutes per clip—demonstrating a “rolling the dice” efficiency where the user can generate multiple high-quality variations in the time it would take to even open a standard 3D application.
When comparing this AI-driven process to the standard 3D software pipeline used in programs like Blender or Maya, the disparity in labor is staggering. A traditional 3D workflow is a linear, multi-stage endeavor requiring specialized skills in modeling, rigging, texturing, and keyframe animation. To recreate the visual complexity seen in these trials, a 3D artist would spend dozens of hours building assets and defining physics before the computer even begins the rendering process. Rendering alone for a ten-second high-fidelity clip can take hours or even days depending on the hardware. In contrast, the AI workflow skips the construction phase entirely, moving directly from conceptual intent to a rendered final product.
However, this efficiency comes at the cost of granular control. In a 3D suite, every pixel and movement is deterministic; if an animator wants a character to move a specific finger at a specific frame, they simply move it. With AI, that same level of precision often requires dozens of “re-rolls” or complex in-painting techniques, as the AI acts more like a director than a technician. The user suggests the “vibe,” and the AI interprets it, which is highly efficient for conceptual work or surreal backgrounds but currently less efficient for projects requiring exact technical specifications or complex character acting.
In conclusion, the use of AI to generate these videos proves to be exponentially more efficient than traditional 3D methods in terms of time and technical barrier to entry. While traditional software remains the gold standard for precision and repeatable results, AI has democratized the creation of high-end visuals. For the goal of rapid prototyping and aesthetic exploration, the two-prompt AI process represents a revolutionary shortcut, trading the absolute control of the 3D artist for the lightning-fast output of generative algorithms.
AI-Assisted Workflow
The dream was simple, yet technically daunting: I wanted to witness a metallic, interstellar spacecraft descend into the mundane reality of my local courtyard. In the past, achieving a photorealistic “integration” shot of this caliber required either a four-year degree or a decade of self-taught trial and error. I decided to take a different path. I chose to use a Pro Stack of software—3D Equalizer for tracking, Blender for 3D creation, and Nuke for compositing—and I tasked an AI with acting as my lead technical director. This is the story of how I navigated that pipeline, the technical hurdles I overcame, and what it means to learn the “impossible” through the lens of artificial intelligence.
Phase I: The Foundation — Precision Matchmoving in 3D Equalizer
Every great VFX shot begins with a lie that must be mathematically perfect. To place a digital ship in a real courtyard, I first had to convince the computer exactly where my camera existed in 3D space. This process—matchmoving—is often the unseen hero of visual effects.
I chose 3D Equalizer (3DE) because it remains the industry gold standard. While many modern tools offer one-click tracking solutions, 3DE demands a deep understanding of lens geometry and point-cloud mathematics.
I began by importing my handheld footage. The camera movement was subtle and organic—exactly the kind of motion that feels natural to a viewer but creates headaches for tracking algorithms. I relied on AI to explain the manual tracking workflow, and under its guidance I began seeding the courtyard with tracking points. I focused on high-contrast features: the edges of stone planters, window-frame intersections, and distinct ground textures.
The AI introduced me to the concept of residual error, explaining that for a track to be considered solid, pixel error must remain below 0.5. I spent hours refining points, deleting unstable tracks, and defining lens distortion parameters. Most consumer lenses introduce barrel distortion, subtly curving straight lines near the edges of the frame. Without correcting this, the ship would appear to drift unnaturally as it moved through the shot.
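To make that residual figure concrete, here is a small, generic Python sketch of the reprojection error a tracker reports; it shows the underlying idea rather than 3D Equalizer's own code.

```python
import math

def rms_reprojection_error(tracked_2d, reprojected_2d):
    """Root-mean-square distance, in pixels, between where a feature was
    tracked in the frame and where its solved 3D point reprojects to."""
    assert len(tracked_2d) == len(reprojected_2d) and tracked_2d
    squared = [
        (tx - rx) ** 2 + (ty - ry) ** 2
        for (tx, ty), (rx, ry) in zip(tracked_2d, reprojected_2d)
    ]
    return math.sqrt(sum(squared) / len(squared))

# A solve is generally considered solid when this stays below roughly 0.5 px
print(rms_reprojection_error([(100.0, 200.0)], [(100.3, 199.8)]))  # ~0.36
```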
After undistorting the footage and running the solver, the 3D point cloud finally appeared—a sparse but accurate skeletal representation of my courtyard. In that moment, I felt the first real success: I had transformed a flat video into a navigable three-dimensional space.
Phase II: The Creative Build — Blender and the Art of Physical Reality
With the solved camera exported from 3DE, I moved into Blender, which became my digital workshop. When I imported the camera data and saw the virtual camera replicate my real handheld motion exactly, the experience felt surreal. The physical and digital worlds were now synchronized.
I modeled the spacecraft with a strict focus on Physically Based Rendering (PBR). The AI was invaluable here, breaking down how roughness, metallic, and specular values interact to simulate real materials. To integrate the ship convincingly into the courtyard, generic lighting was not enough. I followed the AI’s recommendation to use HDRI projection, capturing a 360-degree image of the real environment and wrapping it around the scene. This allowed the ship’s metallic hull to reflect the actual buildings and sky from the footage.
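A minimal Blender Python sketch of that HDRI setup, with a placeholder path for the 360-degree capture:

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

# Load the 360-degree capture of the courtyard (placeholder path)
env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/courtyard_360.exr")

# Feed it into the world Background shader so the hull reflects the real location
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```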
Animation presented the next challenge. A spacecraft should not simply descend—it must feel heavy. I worked through f-curves and interpolation, studying how mass influences motion. Together with the AI, I analyzed how a large object might settle upon landing, incorporating subtle bounce and deceleration to avoid robotic movement.
I also implemented a shadow catcher, an invisible plane aligned with the courtyard floor that captured only the ship’s shadows. Without this step, the ship would appear pasted onto the footage. With it, the ship gained grounding—a tangible connection to the real world.
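In Cycles the shadow catcher itself comes down to a single object property; a sketch with a hypothetical object name:

```python
import bpy

# Hypothetical name for the plane aligned with the courtyard floor
ground = bpy.data.objects["ground_plane"]

# Cycles (Blender 3.x): render only the shadows this object receives,
# so the plate shows through everywhere except under the ship
ground.is_shadow_catcher = True
```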
Phase III: The Final Marriage — Compositing in Nuke
The final and most complex stage was compositing in Nuke. If 3DE provided the foundation and Blender built the structure, Nuke was the adhesive that made the illusion believable.
Working in a linear color workflow, I imported the original plate and the rendered beauty pass. The mismatch was immediate: the CG render was too clean, too sharp, and lacked the imperfections of real camera footage.
I began the integration process by matching black and white points using Grade nodes, ensuring the ship’s darkest and brightest values aligned with the courtyard’s dynamic range. I then added grain to replicate the camera’s sensor noise, breaking the digital perfection.
Edge blur and light wrap followed—subtle but critical effects. Light wrap allowed background illumination to bleed gently over the edges of the ship, helping dissolve the hard boundary between CG and reality.
To finish the shot, I added atmospheric effects. Using noise-driven displacement, I simulated heat distortion beneath the engines. When I finally rendered the sequence, the ship no longer appeared on the courtyard—it existed within it.
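The heat-shimmer trick can be sketched in Nuke roughly as follows, using animated noise to drive an IDistort. The node wiring, knob names and values here are from memory and purely illustrative, so treat this as an assumption-laden sketch of the idea rather than a production script.

```python
import nuke

comp = nuke.nodes.Read(file="comp/ship_over_plate.####.exr")  # placeholder pre-comp

# Animated noise becomes the displacement field for the heat shimmer
noise = nuke.nodes.Noise()
noise["size"].setValue(60)                     # large, soft ripples
noise["zoffset"].setExpression("frame * 0.1")  # drift the pattern over time

# Centre the noise around zero so mid-grey means "no displacement"
centred = nuke.nodes.Grade(inputs=[noise], add=-0.5)

# Copy the noise into motion-vector channels of the comp stream
# (assuming Copy follows the usual B = input 0, A = input 1 convention)
vec = nuke.nodes.Copy(inputs=[comp, centred])
vec["from0"].setValue("rgba.red")
vec["to0"].setValue("forward.u")
vec["from1"].setValue("rgba.green")
vec["to1"].setValue("forward.v")

# IDistort pushes pixels around by those UV values; a few pixels is enough,
# and in practice this would be masked to the area just beneath the engines
heat = nuke.nodes.IDistort(inputs=[vec])
heat["uv"].setValue("forward")
heat["uv_scale"].setValue(3)
```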
Analyzing AI as a Mentor: Pros and Cons
Learning this pipeline through AI fundamentally reshaped how I acquired technical skills.
The Pros: An On-Demand Technical Director
The greatest advantage was contextual troubleshooting. Instead of generic tutorials, I could describe specific issues—such as Nuke warning me about oversized bounding boxes—and receive immediate, targeted solutions.
AI also acted as a technical translator between software packages. Each application uses different coordinate systems, and mismatches can flip cameras or destroy scale. The AI provided precise export settings and scripts that prevented these issues, saving me weeks of trial and error.
The Cons: The Missing Artistic Eye
The primary limitation was subjective critique. While AI could explain how to grade an image, it could not always assess whether the shot felt dramatic or emotionally effective. It could not judge taste.
There were also versioning issues. On several occasions, the AI referenced workflows deprecated in newer software releases. This forced me to verify and adapt, turning me into a critical consumer rather than a passive follower. Ultimately, this strengthened my understanding of core principles rather than surface-level steps.
Technical Analysis of the Final Result (render_2.mp4)
When evaluating the final video, I searched for tells—small mistakes that expose a VFX shot.
Spatial Integrity:
The camera lock is the strongest element. The ship remains perfectly anchored despite handheld motion, demonstrating a high-quality solve and correct lens distortion management.
Specular Response:
The metallic hull reflects the surrounding courtyard dynamically, proving that the HDRI environment was correctly implemented.
Shadow Integration:
Using a shadow catcher in Blender and Multiply merges in Nuke allowed the shadows to match the softness and directionality of real-world lighting.
Areas for Growth:
To push realism further, I would add secondary environmental interaction—dust displacement, foliage movement, and atmospheric response to engine thrust.
Conclusion: The Future of Independent VFX
This project represents a shift in how high-end visual effects can be learned and executed. By combining professional tools with AI-driven instruction, I was able to function as an entire VFX studio within a single project.
The AI did not do the work for me—it taught me how to do the work. It provided the how; I provided the why. In an era defined by automation, the most valuable skill is no longer memorizing buttons, but asking precise questions and synthesizing complex systems into creative intent.
My courtyard is no longer just a courtyard. It is a landing pad for imagination—where mathematics and art collide to create something truly out of this world.
Manual Method Reflection
When I look at my manually crafted spacecraft landing sequence, I see more than a visual effect: I see a masterclass in the traditional, hard-surface VFX pipeline. While generative AI dominates much of contemporary visual culture, my commitment to 3D Equalizer, Blender, and Nuke reflects a dedication to precision and authorship.
By manually calculating lens distortion coefficients and refining camera solves, I ensured absolute spatial stability. In Blender, deliberate modeling choices and hand-keyed animation allowed me to respect real-world physics in a way AI still cannot replicate. In Nuke, node-based compositing gave me full control over color science, grain, motion blur, and atmospheric distortion.
This manual approach came at a significant cost—time, compute power, and cognitive load. I became tracker, modeler, animator, lighter, and compositor simultaneously. But in return, I gained control, consistency, and intentional imperfection—the qualities that give a shot soul.
AI Mode Comparison
The AI-generated videos I analysed demonstrate extraordinary speed and aesthetic fluency. They excel at mood, lighting intuition, and rapid iteration. However, they lack deterministic control, temporal consistency, and physical causality.
AI is a powerful accelerant—but not a replacement. It suggests; it does not decide. The future lies in hybrid workflows, where AI assists with ideation and tedious tasks while humans retain authority over structure, physics, and storytelling.
AI can paint the surface—but the bones of a believable world still require a human hand.
The ambition behind this project was conceptually simple yet technically demanding: to depict a metallic, interstellar spacecraft descending into the ordinary, familiar environment of my local courtyard. Historically, achieving a photorealistic visual effects (VFX) integration of this nature would have required either formal academic training over several years or a decade of self-directed professional experience. Instead of following this traditional trajectory, I deliberately adopted an alternative approach. I employed a professional-grade VFX software pipeline—3D Equalizer for matchmoving, Blender for asset creation and rendering, and Nuke for compositing—while positioning artificial intelligence as a continuous technical mentor rather than an automated content generator.
This essay documents that process as a form of practice-based research. It examines the technical workflow I followed, the challenges encountered at each stage, and the outcomes achieved. More critically, it evaluates the role of AI as an educational and problem-solving tool within a manual VFX pipeline, contrasting this approach with fully generative AI video systems. Through this comparative analysis, I argue that AI, when used as an instructional and supportive agent rather than a replacement for craft, can significantly accelerate learning while preserving artistic intent, physical accuracy, and professional standards.
Phase I: Establishing Spatial Truth — Precision Matchmoving in 3D Equalizer
Every convincing VFX shot begins with a carefully constructed illusion grounded in mathematical precision. To place a digital spacecraft convincingly into live-action footage, I first had to reconstruct the exact motion and optical characteristics of the real camera used to film the courtyard. This process, known as matchmoving or camera tracking, is frequently an invisible component of VFX work, yet it underpins every subsequent stage of integration.
I selected 3D Equalizer (3DE) for this task due to its long-standing reputation as the industry gold standard. Unlike consumer-level or automated tracking solutions, 3DE demands a thorough understanding of lens distortion models, spatial relationships, and error analysis. The footage I captured was handheld, introducing subtle, organic camera jitter that enhanced realism but significantly increased tracking complexity.
With guidance from AI, I approached the tracking process manually. I seeded the footage with tracking points placed on high-contrast, geometrically stable features such as stone edges, window intersections, and ground textures. The AI provided step-by-step explanations of residual error analysis, emphasizing the importance of maintaining pixel errors below 0.5 to ensure a stable solve. This iterative process involved hours of refining tracks, deleting unstable points, and recalculating the solution.
A major technical hurdle involved lens distortion. Like most consumer cameras, my lens exhibited barrel distortion, causing straight lines near the image edges to curve outward. If left uncorrected, this distortion would have resulted in visible drifting of the CG object as it moved across the frame. Under AI guidance, I solved for radial distortion coefficients and undistorted the footage prior to final solving. When the resulting 3D point cloud appeared—a sparse yet accurate reconstruction of the courtyard geometry—I had successfully translated a two-dimensional video into a coherent three-dimensional spatial framework.
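The correction described here follows the standard radial polynomial model; a small illustrative Python sketch (not 3D Equalizer's actual solver):

```python
def distort_radius(r_u: float, k1: float, k2: float) -> float:
    """Radial polynomial model: r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4).
    With barrel distortion the solved k1 is typically negative in this
    convention, so points near the frame edge are pulled towards the centre."""
    return r_u * (1.0 + k1 * r_u ** 2 + k2 * r_u ** 4)

# Example: a point at normalised radius 0.8 under mild barrel distortion
print(distort_radius(0.8, k1=-0.05, k2=0.0))  # ~0.774

# Undistorting the plate means numerically inverting this mapping before the
# solve, so straight courtyard edges stay straight for the tracker.
```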
Phase II: Constructing Physical Credibility — Asset Creation and Animation in Blender
With the camera solve completed and exported from 3DE, I transitioned into Blender, which functioned as the primary environment for modeling, shading, animation, and rendering. Upon importing the solved camera, I observed the virtual camera replicating the exact motion of the real handheld footage. This synchronization marked a critical milestone: the digital and physical worlds were now spatially aligned.
The spacecraft was modeled with a strong emphasis on physically based rendering (PBR) principles. AI support proved particularly valuable in clarifying how roughness, metallic, and specular parameters interact to simulate realistic material responses. Rather than relying on arbitrary aesthetic choices, I grounded shader development in physical plausibility.
To ensure environmental coherence, I implemented HDRI projection using a 360-degree image captured on location. This technique allowed the spacecraft’s reflective surfaces to mirror the actual courtyard environment, producing dynamic, perspective-accurate reflections that changed as the camera moved. These reflections played a crucial role in embedding the object within the scene.
Animation posed a separate challenge. A descending spacecraft must convey mass, inertia, and intent; a linear downward movement would have appeared artificial. Through AI-assisted explanations of f-curves and interpolation modes, I refined the motion to include subtle easing, deceleration, and settling behaviors. These adjustments simulated the physics of a heavy object responding to gravitational and mechanical forces.
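A minimal Blender Python sketch of that kind of easing adjustment, using a hypothetical object name and placeholder frame numbers:

```python
import bpy

ship = bpy.data.objects["spaceship"]  # hypothetical object name

# Keyframe the descent: high above the courtyard, then settled on the ground
ship.location.z = 12.0
ship.keyframe_insert(data_path="location", index=2, frame=1)
ship.location.z = 0.0
ship.keyframe_insert(data_path="location", index=2, frame=96)

# Ease out of the landing so the ship decelerates instead of stopping linearly
fcurve = ship.animation_data.action.fcurves.find("location", index=2)
for kp in fcurve.keyframe_points:
    kp.interpolation = 'SINE'
    kp.easing = 'EASE_OUT'  # soft deceleration into the touchdown
```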
Additionally, I implemented a shadow catcher aligned precisely with the courtyard floor. This invisible geometry captured the spacecraft’s shadows during rendering, enabling realistic interaction with the ground plane. Without this step, the spacecraft would have appeared visually disconnected, undermining the illusion of physical presence.
Phase III: Image Integration and Optical Cohesion — Compositing in Nuke
The final and most technically intricate stage of the workflow took place in Nuke. While 3DE established spatial accuracy and Blender generated the visual content, Nuke functioned as the integrative layer where all elements were unified into a believable image.
Working within a linear color pipeline, I imported both the original plate and the rendered beauty pass. Initial discrepancies were immediately apparent: the CG render appeared excessively sharp, overly saturated, and devoid of sensor noise. These differences highlighted the necessity of detailed image integration.
Using Nuke’s node-based architecture, I matched black and white points through precise grading, ensuring tonal consistency between the spacecraft and the live-action environment. I then reintroduced film grain matched to the camera’s noise profile, along with subtle edge blur to replicate lens softness. Light wrap effects were applied to simulate background illumination bleeding onto the spacecraft’s edges, further enhancing realism.
Atmospheric effects were added in the composite rather than in 3D to maintain flexibility and efficiency. Heat distortion beneath the engines was created using procedural noise driving displacement nodes, simulating refractive index variations caused by hot exhaust gases. These small optical imperfections proved essential in elevating the shot from technically correct to perceptually convincing.
Evaluating AI as a Technical Mentor
Learning this pipeline through AI guidance fundamentally altered my approach to skill acquisition. Rather than replacing manual labor, the AI functioned as an on-demand technical director.
One of the most significant advantages was contextual troubleshooting. When encountering errors—such as Nuke performance slowdowns due to oversized bounding boxes—the AI provided immediate, specific solutions, such as implementing crop or reformat nodes. Additionally, the AI acted as a translation layer between software packages with differing coordinate systems, advising on export settings and scripting solutions that prevented camera orientation errors when transferring data between applications.
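For context, the bounding-box fix mentioned here often comes down to a single Crop node; a brief sketch assuming an HD plate:

```python
import nuke

ship = nuke.nodes.Read(file="renders/ship_####.exr")  # placeholder render

# Clamp the stream's bounding box to the visible frame so downstream nodes
# stop processing pixels far outside the image
crop = nuke.nodes.Crop(inputs=[ship])
for i, v in enumerate((0, 0, 1920, 1080)):  # x, y, right, top
    crop["box"].setValue(v, i)
```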
However, limitations were evident. The AI lacked subjective artistic judgment. While it could explain how to grade an image, it could not assess narrative impact, emotional tone, or compositional strength. Furthermore, discrepancies occasionally arose due to software version updates, requiring me to critically evaluate and adapt the AI’s recommendations. This necessity reinforced a deeper understanding of underlying principles rather than rote procedural following.
Technical Analysis of the Final Output
Upon reviewing the completed video (render_2.mp4), I assessed its success by identifying potential “tells” commonly associated with VFX shots.
The spatial lock was the most successful aspect. The spacecraft remained perfectly anchored to the courtyard despite camera motion, demonstrating a high-quality camera solve and accurate lens distortion handling. Specular reflections dynamically responded to environmental geometry, confirming the effectiveness of the HDRI setup.
Shadow integration proved equally effective. Using Blender’s shadow catcher combined with multiplicative compositing in Nuke resulted in shadows that matched the softness and density of those in the original footage. Areas for improvement remain in secondary environmental interactions, such as dust displacement or vegetation response, which would further enhance realism.
Manual Pipeline Versus Generative AI Video
To contextualize this workflow, I compared it directly with fully AI-generated video outputs. Generative AI offers unparalleled speed and aesthetic immediacy, excelling in concept development and mood exploration. However, it lacks deterministic control, temporal stability, and physical consistency. AI-generated assets frequently exhibit micro-morphing, inconsistent geometry, and unreliable cause-and-effect relationships.
By contrast, the manual pipeline provides absolute control over every parameter. Geometry remains consistent across frames, lighting obeys physical laws, and changes can be isolated and refined without unintended consequences. While this approach demands significant time, expertise, and computational resources, it results in professional-grade output suitable for high-end cinematic and commercial contexts.
Conclusion: Toward a Hybrid Future of VFX Practice
This project demonstrates that high-end visual effects are no longer inaccessible to independent creators, provided they are willing to engage deeply with the craft. By combining professional tools with AI as an educational and problem-solving resource, I effectively operated as a one-person VFX studio.
Crucially, the AI did not create the work; it facilitated learning. I retained authorship, intent, and creative control throughout the process. This experience underscores a broader shift in contemporary VFX practice: the most valuable skill is no longer procedural memorization, but the ability to ask precise questions, evaluate information critically, and synthesize technical knowledge into a coherent artistic vision.
In an era increasingly dominated by automated image generation, the manual VFX pipeline remains the gold standard for precision, consistency, and narrative integrity. AI’s greatest potential lies not in replacing this pipeline, but in augmenting it—handling supportive tasks while leaving creative and physical decision-making firmly in human hands. Through this balance, the future of visual effects can be both democratized and uncompromised.