Week 1 – The Age of the Image and the Trend of the Lens
What was covered in the lecture:
What do you consider to be a Current Trend of VFX? Talk about possible meanings – what do you think a trend is?
A current trend in VFX is a pattern or direction in which the industry is evolving, based on recent innovations, technology, and audience demand. Trends often reflect what is gaining traction, either because it’s solving a problem, creating new opportunities, or reshaping how filmmakers tell stories. In VFX, these trends tend to emerge as technology advances and filmmakers seek more efficient or immersive ways to bring their visions to life.
For instance, de-aging is a trend because it taps into a growing demand to seamlessly alter an actor’s age without the need for extensive makeup or new casting. It’s being used in major films and series, making it a technique that’s gaining relevance.
A trend can also hint at the future. For example, real-time rendering is pushing VFX to become faster and more interactive. It’s reshaping the industry by making it possible to make real-time decisions on-set, blending virtual elements into live-action scenes on the spot.
Impact of Lenses on Cinematography:
Lenses play a crucial role in how a scene is captured and presented, influencing not only the visual quality but also the emotional impact of a shot. Different lenses affect focus, depth of field, perspective, and how the audience perceives the space and characters within a scene. Directors and cinematographers use lenses creatively to convey mood, tension, and storytelling.
Examples of Lenses Used for Dramatic Effect:
1. Dolly Zoom (Vertigo Effect): One of the most famous uses of lens manipulation is the dolly zoom, where the camera moves forward (or backward) while zooming in the opposite direction. This creates a warping effect where the subject remains the same size, but the background appears to stretch or compress. It is used in Harry Potter and the Prisoner of Azkaban (2004) when Harry experiences a moment of dread at the sight of the Dementors near the lake. The effect amplifies his fear and isolation.
2. Wide-Angle Lenses: Wide-angle lenses exaggerate space and perspective, making objects in the foreground appear larger while creating a sense of vastness in the background. Directors use these to emphasize isolation or tension between characters, as seen in The Boys (2019), particularly during scenes where Homelander’s dominance is portrayed, such as when he’s flying or standing over his enemies, emphasizing his power.
3. Telephoto Lenses: Telephoto lenses compress depth, making objects at different distances appear closer together. This can create a sense of claustrophobia or emotional closeness. In Game of Thrones, telephoto shots are utilized during intense battles to emphasize the chaos and confusion among characters, particularly in the Battle of the Bastards.
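As a rough illustration (my own sketch, not from the lecture), the geometry behind the dolly zoom can be expressed in a few lines: the subject's apparent size on the sensor is roughly proportional to focal length divided by distance, so keeping that ratio constant while the camera moves produces the effect.

```python
# Dolly zoom sketch: apparent subject size is roughly proportional to
# focal_length / distance. Keeping that ratio constant while the camera
# dollies produces the "Vertigo" warping of the background.

def dolly_zoom_focal_length(f0, d0, d):
    """Focal length needed at distance d to match the framing at (f0, d0)."""
    return f0 * d / d0

# If the camera starts 4 m from the subject with a 50 mm lens and
# dollies back to 8 m, the lens must zoom to 100 mm to hold the framing.
print(dolly_zoom_focal_length(50.0, 4.0, 8.0))  # 100.0
```

The background warps because only the subject's framing is held constant; everything at other distances changes apparent size as the camera moves.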
Lens Effects Replicated in Visual Effects Context:
In visual effects (VFX), similar principles are applied digitally to enhance or replicate these dramatic lens effects:
1. Digital Dolly Zoom: The dolly zoom effect can now be created digitally in VFX, allowing filmmakers to simulate the same dramatic feeling without needing precise in-camera movements. In Avengers: Infinity War (2018), a digital dolly zoom is subtly used when Thanos retrieves the Soul Stone, intensifying the gravity of the moment as Gamora’s sacrifice becomes clear.
2. Simulated Depth of Field: In scenes with heavy VFX, artificial depth of field is often added to digital shots to mimic the way a lens naturally blurs the background, giving focus to the subject. In House of the Dragon, a good example of simulated depth of field happens during a scene where Rhaenyra Targaryen and Alicent Hightower confront each other. The background is blurred, which helps to highlight the strong emotions between the two characters. This technique shows the rising tension in their relationship, allowing viewers to really connect with their expressions and the seriousness of the situation, while also showing the impressive setting around them.
Also, they use artificial depth of field to make the background blurry and focus on the main subject in dragon scenes, where it helps show their size and impact.
3. Virtual Lenses in 3D Software: Filmmakers use virtual lenses in 3D programs to imitate how cameras work. They can add effects like blur and light flares to make digital scenes look more real. In House of the Dragon, during the big dragon battles, virtual lenses help create a sense of realism and make the action more exciting.
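As a side note of my own (an illustration, not something covered in class), digital depth of field is usually driven by the thin-lens "circle of confusion" model: the further a point sits from the focus plane, the larger its blur disc. A minimal sketch:

```python
# Simulated depth of field sketch (thin-lens model): the blur disc
# ("circle of confusion") for a point at distance d, with the lens
# focused at d_focus, is  c = A * f * |d - d_focus| / (d * (d_focus - f)),
# where A is the aperture diameter and f the focal length (all in metres).

def circle_of_confusion(d, d_focus, f, aperture):
    return aperture * f * abs(d - d_focus) / (d * (d_focus - f))

# A point far behind the focus plane gets a larger blur disc than one
# near it, which is exactly what a digital DoF pass reproduces.
near = circle_of_confusion(2.0, 2.1, 0.05, 0.025)
far = circle_of_confusion(10.0, 2.1, 0.05, 0.025)
print(near < far)  # True
```

A compositor applies a blur of this per-pixel radius using the shot's depth pass, which is how the blurred backgrounds behind Rhaenyra and Alicent can be dialled in after filming.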
The Workshop Activity: Can we spot any current trends of VFX? 1. De-aging: A visual effects technique used to make an actor or actress look younger, especially for flashbacks.
Avengers Endgame –
Captain Marvel –
Marvel –
With the complicated timeline of the MCU, there are a lot of flashbacks. Instead of using new actors, the original stars can travel back in time with the magic of technology to portray their characters at a younger age.
2. Deep Learning for Animation: AI is being trained on human and animal movements to create more realistic animations.
3. Real-time Rendering: The push towards real-time rendering is making workflows faster and more interactive, with improvements in ray tracing and GPU power.
4. Virtual Production: With tools like LED walls, digital environments are integrated with live-action during filming, making virtual production more popular.
5. Advanced Simulations: Simulating natural effects like water, fire, and smoke is becoming more realistic with improved simulation technologies.
6. Holographic Displays: Holographic technology could change how we experience visual effects, offering more immersive visuals.
7. AR and VR: Augmented and virtual reality are expanding in VFX, both in content creation and as production tools.
8. AI in VFX: AI and machine learning are streamlining workflows by automating tasks, improving textures, and enhancing animations.
Select four images of Harold Edgerton and pair a VFX shot for each image:
The Umbrella Academy (Season 2 Episode 1):
X-Men: Days of Future Past (2014)
Deadpool & Wolverine (2024)
Doctor Strange (2016) – "open your eye" scene
Oppenheimer (2023):
The Weekly Activity:
‘A New Reality’ (2020) The Age of the Image, Series 1, Episode 1. BBC 4, Television, 24 February, 21:00.
In The Age of the Image, Dr. Fox explores how we live in a visually dominated era where images have a strong influence on our daily lives, culture, and even how we see reality. He emphasizes that unlike past centuries that were defined by philosophy or novels, today’s world is defined by an overwhelming creation and consumption of images, due to advanced technologies like smartphones and digital media. The availability of images has not only changed what we see but also how we interpret and interact with the world.
Dr. Fox discusses the growth of new image-making techniques, which were inspired by scientific developments such as Einstein’s theory of relativity, which suggested that space and time are flexible. This idea is represented in the work of artists like Salvador Dalí, whose famous soft clocks in The Persistence of Memory represent the bending of time, illustrating Einstein’s ideas. Dalí’s art, alongside the rise of film and photography, created a visual language capable of manipulating time—freezing or stretching it in ways that the human eye cannot detect. The manipulation of time and space in his art reflects scientific advances such as high-speed photography and nuclear images, which expose what was once hidden from the naked eye.
Fox also discusses how visual culture has grown into the “age of the lens,” in which cameras not only capture but also shape our realities. This era has enabled people to create and edit images. He questions the legitimacy of these photos, wondering if we can actually trust what we see in an age of digital editing, deepfakes, and visual illusions.
In conclusion, Dr. Fox suggests that today’s era is deeply influenced by both the artistic and scientific manipulation of images, shaping how we now experience time, space, and reality itself through the lens of modern technology.
Week 2 – The Photographic Truth Claim: Can we believe what we see?
What was covered in the lecture: Allegory of the Cave – Video. The Allegory of the Cave is a famous philosophical allegory by Plato that explains how people mistake shadows for reality. In the story, prisoners are chained in a cave, only seeing shadows on the wall and believing these to be the real world. The allegory shows how images are signs and not the actual thing, with everything tied to image projection—mere phantoms. It represents the journey of enlightenment, where most people live in the shadows until they uncover true knowledge.
This relates to the Socratic method, where learning new truths can be overwhelming at first.
Photographs as Indexical signs
Peirce defined an “index” as a sign that shows evidence for the existence of what it refers to, e.g. a footprint or a crack. In terms of photography, indexicality refers to the fact that a photograph is created by the direct action of light on a photosensitive surface.
The thing a sign refers to is often called the referent (or reference object). What is Semiotics?
Semiotics is the study of signs and symbols and how they are used to communicate meaning. It explores how words, images, and other symbols represent ideas, concepts, or objects in society, helping us understand how meaning is constructed and interpreted.
For example, in advertising, a red rose often symbolises love or romance. The rose itself is just a flower, but through semiotics, it has come to represent deeper meanings in our culture, such as passion or affection.
Perceptual Realism
Refers to the way visual media, like films or photos, try to mimic the way we naturally perceive the world. By using techniques that match our real-life visual experiences, it creates a sense of realism that makes viewers feel like what they are seeing is genuine.
In movies like The Matrix, visual effects are designed to look as natural as possible, even when showing impossible things like slow-motion bullets. This helps create a world that feels real to the audience, even though the events are fictional.
Photographic truth claim
The photographic truth claim is the assumption that photographs reflect reality as it is. Because photos are captured by a camera, people often believe they are an objective record of events. However, photographs can be edited or framed in ways that shape how we interpret them, meaning they don’t always show the full truth.
News photos are often considered factual representations of real events. For example, an image of a protest might be seen as proof of the event’s occurrence, but depending on how it’s framed (focusing on the crowd or an isolated act of violence), it could give different impressions of what happened.
The Workshop Activity: Examples where image becomes more reality and reality becomes more image: CGI ads:
Environment Building:
The impact of VFX on Photographic Truth:
Visual effects have a strong impact on the way we perceive images. They challenge the traditional photographic truth claim, which assumes that images captured by a camera are direct representations of reality.
Using semiotic concepts such as indexicality (the link between the image and reality) and iconicity (how much the image resembles reality), VFX challenges this traditional understanding by creating images that blend reality with digital fabrications. While photographs were once considered to have a direct connection to the real world, VFX blurs the line between the real and the artificial, making it harder for viewers to distinguish truth from fiction.
Semiotic Analysis:
– Indexicality: In House of the Dragon/Game of Thrones, many scenes involve characters interacting with fully CGI dragons. While the actors and some sets are real, the dragons and some of the environments are completely made using VFX. This breaks the direct connection between the image and the physical world, as the dragons don’t exist in reality.
– Iconicity: The CGI dragons are designed to look like real animals, with realistic textures, movement, and lighting effects. Although the audience knows that dragons are fictional creatures, their realistic appearance and interaction with real actors makes it hard to tell that they are not real.
– Photographic Truth: The show does not claim to show historical reality, but it builds a world where viewers are drawn into a believable narrative. The presence of dragons and other digital elements challenges the photographic truth claim by mixing real and digital, making it difficult to see where reality ends and fiction begins, even in a fantasy setting.
How VFX Challenges Photographic Truth in House of the Dragon/Game of Thrones:
VFX in House of the Dragon/Game of Thrones combines digital creatures like dragons into scenes with real actors and environments. This challenges the idea that images are always a true representation of reality, as the scenes show a mix of real and fake. The dragons appear real enough to make audiences momentarily forget they are watching CGI.
Do Viewers Accept These Images as “Real”? Although viewers know that dragons are not real, they still accept these digital creatures as part of the story. The detailed VFX allows audiences to believe in the dragons while watching. The dragons behave like real animals, which makes their actions easier to understand and relate to. This connection helps viewers see them as real, even though they are made with VFX. This shows how VFX can create a compelling and believable image that challenges traditional concepts of photographic truth, despite being entirely CGI.
The Weekly Activity: What do you think is meant by the theory: The Photographic Truth-Claim? The Photographic Truth-Claim is the idea that photographs provide an accurate and objective representation of reality. This belief arises because photos are created through a mechanical process, leading people to assume that what they see in a photo is exactly what happened. This assumption creates trust in photographs as true proof of reality. People feel that because a photograph captures a moment directly from the world, it must be true.
However, even when a photo captures true reality, how we interpret it can still be shaped. The photographer influences the narrative through choices like angle, framing, lighting, and lenses. Tom Gunning in his article “What’s the Point of an Index? Or, Faking Photographs”, explains that both traditional and digital photography involve choices and manipulation that affect the final image. Although photos have an indexical link to reality, it does not guarantee truth. Manipulations like editing can change how reality is presented, making the photographic truth claim less reliable.
Visual effects greatly impact how we interpret images and challenge the photographic truth claim. VFX uses semiotic concepts like indexicality (the link between the image and reality) and iconicity (how closely the image resembles reality) to blend real-world elements with digital fabrications, making it hard to tell real from fake. For example, in “House of the Dragon”, CGI dragons interacting with real actors break the indexical link, as dragons don’t exist. However, the iconic nature of the dragons, which were created with realistic textures, movements, and lighting, gives them a real and convincing appearance. The show blends digital creatures and real actors so smoothly that viewers forget for a moment that they’re watching CGI and accept the images as real.
In summary, the Photographic Truth Claim is the belief that photographs capture true reality. I believe this theory is wrong because photographs can be manipulated, and modern technologies like VFX further blur the line between the real and the fake.
Week 3 – Faking Photographs: Image manipulation and computer collage
What was covered in the lecture: A Trend of Fakery and Hoaxes Fakery and Hoaxes usually involve creating fake images to trick people. In the past, before digital technology, people used non-digital photographs in hoaxes. They would stage photos or use tricks with cameras to make things look real, even though they weren’t. With digital editing today, it’s even easier to make fake images. People can use software to change pictures, making them look convincing even though they’ve been altered.
In politics, fake images are sometimes used to create false stories or change how people feel about a person or event. This trend has grown with the internet, as these fake pictures can spread fast. Conspiracy theories: Conspiracy theories often use these fake or hoax photos to make their stories seem real. When people see a picture, even if it’s fake, they might believe the conspiracy is true because it looks convincing.
Today, with VFX it’s possible to create very realistic images and videos. VFX, often used in movies, is also used in hoaxes to make things that never happened seem real, making it even harder to tell what’s fake.
The Workshop Activity: Collect four famous faked photographs (non-digital) that were later proven to be fake. The use of doctored photographs to achieve political ends:
In this 1937 publicity photograph taken at filmmaker Leni Riefenstahl’s villa in Berlin, Nazi propaganda minister, Joseph Goebbels, can be seen standing beside Adolf Hitler.
Experts believe that Goebbels was removed from the published photograph to dampen rumors he was having an affair with Riefenstahl.
Stalin used a large group of photo retouchers to cut his enemies out of supposedly documentary photographs. One such erasure was Nikolai Yezhov, a secret police official who oversaw Stalin’s purges.
Stalin knew the value of photographs both in the historical record and in his use of mass media to influence the Soviet Union; his enemies often disappeared from photos too. Conspiracy theories:
Fairy Hoax and Loch Ness Monster
What techniques were used to fake these images?
Retouching: Artists would physically alter the negatives or prints of photographs using tools like brushes and inks to carefully erase or paint over unwanted details.
Airbrushing: This involved using an airbrush tool to blend parts of the photo, effectively smoothing over areas where a person or object had been removed.
Cropping: Sometimes parts of the photo were simply cut out or cropped to remove the subject.
Double Exposure: Another technique involved overlaying multiple negatives to combine or remove elements from an image.
Collect four famous faked digital photographs that were later proven to be fake. A photo of a lion strapped onto a machine, supposedly to make the iconic MGM intro of a lion roaring, is actually a photo of a lion being diagnosed at the vet’s office. This photo of an explosion was actually taken 7 years after Einstein’s death. Politics:
Events: Collect four historic examples where visual effects shots were faked to look like real documentary or TV footage: Chernobyl (HBO):
The Crown (Netflix):
The Irishman:
VFX were used to recreate historical New York City and various environments from the 1950s to 1970s
The Queen’s Gambit (Netflix): 00:48-00:58
This show uses VFX to recreate the look and feel of various cities in those decades, including New York, Paris, and Moscow. The environments were digitally altered to match historical footage of these cities. VFX by Chicken Bone FX.
How does the use of visual effects contribute to the story?
The use of VFX in movies and TV shows that recreate old environments helps make the story feel more real and believable. By recreating historical places and events it allows the audience to be fully immersed in the time period without distractions. It brings places that no longer exist or have changed a lot over time back to life, so viewers can feel like they are actually there.
For example, in “The Crown” and “Chernobyl,” VFX makes it possible to recreate 1940s London and the Chernobyl disaster site in a way that looks real and matches old footage. This makes the audience feel like they’re witnessing history as it happened, which helps tell the story more effectively. VFX allows filmmakers to show events or environments that are difficult to film today, making the story richer and more authentic.
The Weekly Activity: Find two composite VFX shots from a movie or TV show. Try to pick ones that are not obvious VFX shots. Period dramas or more subtle scenes can be great choices.
A composite in visual effects (VFX) means combining different visual elements from different sources into one smooth, realistic scene. These elements could include live-action footage, CGI, and practical effects like props or fake smoke. Compositing is used to create environments or effects that would be hard, and sometimes impossible, to film in real life, blending them together in a way that looks natural and convincing.
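To illustrate for myself how compositing works at the pixel level (my own sketch, not from the brief), the standard building block is the “over” operator, which blends a foreground pixel onto a background pixel using the foreground’s alpha (transparency) value:

```python
# Compositing sketch: the standard "over" operator blends a foreground
# colour channel onto a background channel using the foreground alpha.
# This is the core operation behind green-screen and set-extension shots.

def over(fg, fg_alpha, bg):
    """Blend one colour channel: foreground 'over' background (values 0..1)."""
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# A half-transparent white foreground (1.0) over a black background (0.0)
# gives mid-grey.
print(over(1.0, 0.5, 0.0))  # 0.5
```

A full compositing package applies this per channel and per pixel, layer by layer, which is why matching lighting and edges across layers matters so much for a believable result.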
In The Queen’s Gambit, many scenes use VFX compositing to create realistic environments that combine real actors and objects with CGI. One example is the use of set extensions with green screen to expand the look of streets and buildings. While the actors and part of the building and street were filmed on set, the rest of the environment, such as additional streets and background elements, was added digitally.
In another scene, green screen was used for background compositing when actors were filmed inside a car or standing on a balcony. The actors and parts of the set were filmed live, while the surrounding environment, such as buildings, skylines and other objects, was created using CGI.
Both examples required strong attention to optics, perspective, and framing to ensure that the CGI extensions and backgrounds followed the same angles, lighting, and shadows as the real world elements. Proper alignment of lighting and reflections was important to make the scene feel realistic, blending the real footage with digital effects.
Another example involves adding CGI elements, like snow, in a winter scene. The house and trees were filmed practically, but the snow and stairs were added digitally, making the scene look like a real winter setting. The snow followed the shape of the roof and ground naturally, and the lighting and shadows were adjusted to make it look convincing.
In all these examples, VFX compositing was essential in expanding and improving the real world footage, and visual composition techniques, such as the rule of thirds, helped frame the shots in a natural way. This allowed the VFX to blend smoothly with the live-action parts, making the final scene look realistic.
Week 4 – Photorealism
What was covered in the lecture: What is Photorealism?
Photorealism is an art style where the artist creates a painting or image that looks as realistic as a photograph. Artists use detailed techniques to capture the subject with a high level of precision, including shadows, reflections, and textures, so the final result appears almost identical to a photo.
Trend of Photorealism: Recently, photorealism has become popular in digital media, with advancements in 3D modeling, computer graphics, and CGI. Artists now use software to create highly realistic images that are used in movies, video games, and advertising. The trend is moving towards creating digital environments and characters that look incredibly real, so it’s really hard to tell what’s real and what is fake.
Richard Estes:
Richard Estes is an American artist, best known for his photorealist paintings. The paintings generally consist of reflective, clean, and inanimate city and geometric landscapes.
The Workshop Activity:
Photorealism in visual effects refers to creating digital images or scenes that are so realistic they look like actual live-action footage. VFX artists achieve this by replicating real-world details, making it hard to tell the difference between what’s real and what’s CGI. Key Elements:
– Lighting: Accurate light sources and natural light behaviour. Example: the dragons are in shadow because of the angle of the camera and the sun.
– Texture: Realistic surfaces/skin with fine details. Example: castle texture, wall texture.
– Shadows: Correct placement and depth of shadows based on light sources. Example: the dragons are in shadow; soft shadows fall on the sand because of the cloudy environment.
– Reflections: Proper reflection of objects in water or glass. Example: a car’s body reflecting the surroundings.
– Motion: Natural movement that mirrors real-life physics. Example: the dragon flies/dives with realistic gravity and wind effects.
– Proper scale and proportions of objects and scenes.
Lighting, texture, and motion are the most important elements of photorealism.
These elements distinguish Photorealism from other styles because it focuses on replicating real life visuals as closely as possible.
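As my own illustration of the lighting element above (not something from the workshop), one of the simplest models renderers build on is Lambert’s cosine law: a surface gets darker as light strikes it at a shallower angle, which is what makes digital lighting feel natural.

```python
import math

# Lambertian diffuse shading sketch: brightness falls off with the cosine
# of the angle between the surface normal and the light direction.
# Photoreal renderers build their lighting on top of models like this.

def lambert(normal, light_dir):
    """Diffuse intensity for unit vectors, clamped to 0 when lit from behind."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

# Light hitting the surface head-on gives full intensity; at 60 degrees
# it drops to half, which is how shading softens as light grazes a surface.
print(lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
print(lambert((0.0, 0.0, 1.0), (0.0, math.sin(math.radians(60)), math.cos(math.radians(60)))))  # ~0.5
```

Real production shading adds specular highlights, soft shadows, and global illumination on top, but the cosine falloff is the starting point.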
In visual effects, there are two main trends of Photorealism:
CGI: This version of photorealism is completely constructed using 3D rendering techniques. It involves building everything digitally, including objects, lighting, and textures, to achieve a highly realistic image. This approach is often used in animated movies or fully virtual scenes where all elements are generated from scratch.
Problems with CGI Photorealism:
High cost: Creating realistic 3D renders requires significant computing power, which can be expensive and time-consuming.
Uncanny valley: Sometimes fully CGI scenes or characters can feel “almost real” but slightly off, which can be unpleasant to viewers.
Lack of realism: Certain elements like human facial expressions or natural environments can be hard to recreate digitally.
Avatar: The Way of Water
House of The Dragon: city and dragon
Dune: Desert and sandworms
Queen’s Gambit:
Composite Photorealism: This method combines real world footage with CGI elements. Artists mix live action with CG assets to create scenes that look realistic. This approach is commonly used in films to combine visual effects into real life settings, like adding digital characters or backgrounds to real world environments. Problems with Composite Photorealism:
Mismatch between real and CGI elements: Integrating live action with CGI can sometimes lead to inconsistent lighting, shadows, or textures, making the combination less believable. Complexity in blending: The process of blending real and virtual elements requires precise matching of angles, lighting, and colour, which can be challenging and time-consuming. Limited flexibility: Since some elements are real, there may be limitations in terms of camera movement or scene adjustments compared to fully CGI environments.
The Umbrella Academy – JFK assassination – 1960s Dallas.
Peaky Blinders: CGI is used to enhance the industrial setting of 1920s Birmingham.
The Crown:
Queen’s Gambit: Moscow
The Weekly Activity
What is Photorealism?
Photorealism is an art style whose goal is to create images that look as realistic as photographs. Artists aim for high accuracy by capturing small details like light and shadow, reflections, textures, and perspective to make the subject appear lifelike.
Trend of Photorealism
In recent years, photorealism has become more popular in digital media, thanks to developments in 3D modelling and computer graphics. Artists now use software to create digital environments and characters that are so realistic it’s hard to tell what’s real and what’s digitally created. This trend is most noticeable in movies and TV shows, where digital elements blend smoothly with live-action footage.
Photorealism in Visual Effects
Photorealism in VFX is the creation of digital elements that are indistinguishable from real-world footage. VFX artists replicate real-world details such as lighting, texture, shadows, and movement to make the scenes appear realistic.
Composite Photorealism
Composite photorealism is a trend within VFX, where real-world footage is combined with CGI elements. This technique is often used to extend sets, add digital characters, or create environments that would be difficult or impossible to film in real life.
For example, in TV shows like The Queen’s Gambit, Peaky Blinders, and The Crown, compositing was used to extend streets and buildings by using green screens and other techniques to blend live action shots with CGI backgrounds or elements.
The key to achieving photorealism in compositing is by making sure the lighting, shadows, and perspective of the CGI elements match the real-world footage. However, this process can be difficult because any difference between real and digital elements, such as inconsistent lighting or textures, can make the scene less believable.
CGI Photorealism
CGI photorealism is a trend in visual effects in which complete scenes are created digitally using 3D rendering techniques. This approach is often used to create virtual environments, characters and objects that would be too expensive or impossible to film in real life.
For example, in animated films or fully digital scenes in movies and TV shows like House of the Dragon and Avatar, CGI is used to create entire worlds, characters, and action scenes that are entirely digital yet designed to look as realistic as possible.
Week 5 – The New Indexicality in Digital Media Trend: Other Indexes
What was covered in the lecture: What is capture in VFX? Capture in VFX means recording real-life information like movements, textures, how people look, objects, and places to build digital models, animations, or simulations. This data helps VFX artists make realistic or stylised virtual scenes.
The Workshop Activity: Types of capture used in VFX:
– Motion Capture (MoCap) – This technique records an actor’s body movements using sensors, which are then applied to a digital character. MoCap creates realistic movement and is often used in movies and games to make animated characters move naturally.
– Performance Capture – An advanced type of MoCap that also records facial expressions and small details. This is used for creating realistic characters like Hulk in The Avengers, Pogo in The Umbrella Academy, and the Na’vi in Avatar.
– LiDAR and 3D Scanning – LiDAR stands for light detection and ranging. This method uses lasers to map the shape of real places or objects, creating precise digital versions. It’s useful for adding realistic locations to scenes or making digital copies of real objects.
– Green/Blue Screen Markers – These are on-screen reference points used when shooting a scene. They are strategically placed on set to save time in post-production. The size, shape, and color of the markers can vary depending on the project.
– MatchMoving – A technique that allows artists to add 2D, live-action, or CG elements into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot.
What are the pros and cons of MoCap?
Advantages of Motion Capture:
1. Realism and Accuracy: MoCap captures real human movements, making animations look lifelike and natural, especially for complex scenes.
2. Efficiency and Speed: Compared to manual animation, MoCap is faster and can capture complex motions quickly, saving time in production.
3. Consistency: MoCap data allows characters to move consistently across scenes, making it easier to maintain continuity.
4. Versatility in Function: MoCap is used across different media like film, games, and VR, making it a flexible tool for different types of projects.
Disadvantages of MoCap: 1. Cost: Setting up MoCap equipment can be expensive, requiring specific cameras, suits, and software.
2. Limited Expression: Small and subtle expressions or movements might not be captured well, especially without high-end equipment.
3. Technical Challenges: MoCap requires a controlled setup, and errors in data capture can take a lot of time to fix.
4. Actor Limitations: The captured performance depends on the actor’s abilities, which may limit the range of characters or motions possible.
When, and for which characters, is motion capture used? Motion capture is used when filmmakers or game designers want realistic movements for characters. It is usually used for:
1. CGI Characters: For creatures or characters that don’t exist in real life, like Pogo in The Umbrella Academy or Hulk in The Avengers.
2. Human-like Movements: To capture natural human movements for animated characters in video games, films, or VR.
3. Fantasy or Sci-Fi Characters: For characters with unique body shapes or abilities, like aliens or robots, where human-like movement still needs to feel believable.
The Weekly Activity: Compare motion capture and keyframe animation. Begin with definitions of each method, then discuss the advantages and disadvantages of motion capture.
Motion Capture and Keyframe Animation are two techniques for animating characters. Each has different advantages and disadvantages that make them ideal for different types of projects in films, TV, and video games.
MoCap is a technique that uses sensors on an actor to record real-life movements, allowing digital characters to move realistically. Actors wear suits with sensors that track their movement, and special cameras and software process this data so that digital characters reproduce the actor's movement naturally. MoCap now offers even more advanced options like performance capture, which also records facial expressions for even more lifelike characters.
Some of the films that used MoCap are Avatar: The Way of Water, Guardians of the Galaxy, and Avengers: Endgame.
MoCap has many benefits, including making movements look natural, being faster than creating animations manually, and keeping movements consistent across scenes. However, it also has some disadvantages: the equipment is expensive, it might miss small facial details, and technical issues can be hard to fix.
Keyframe Animation is a technique in which animators manually create keyframes for movement and the software fills in the frames between them. This method gives animators control over every detail of the movement, making it ideal for stylised or exaggerated animations.
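The "software fills in the frames between keyframes" step can be sketched in a few lines of Python. This is a minimal illustration with hypothetical frame numbers and values, using straight-line interpolation; real animation software normally uses smoother curves (e.g. Bézier or ease-in/ease-out).

```python
# Minimal sketch of keyframe in-betweening with linear interpolation.
# Frame numbers and values below are hypothetical examples.

def interpolate(keyframes, frame):
    """Interpolate a value at `frame` between surrounding keyframes.

    `keyframes` is a sorted list of (frame_number, value) pairs.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]          # before the first key: hold it
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]         # after the last key: hold it
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # 0..1 between the two keys
            return v0 + t * (v1 - v0)

# Two key poses: an arm rotation of 0 degrees at frame 0
# and 90 degrees at frame 24 (one second at 24 fps).
keys = [(0, 0.0), (24, 90.0)]
print(interpolate(keys, 12))  # halfway between the keys -> 45.0
```

The animator only authors the two key poses; every frame in between is computed, which is exactly the control-versus-effort trade-off discussed above.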
Keyframe animation is also used for non-realistic creatures like dragons, thanks to the flexibility it provides in creating unique and stylised movements that wouldn't occur in real life.
MoCap is perfect for realistic animation, but it can be expensive and technically challenging. Keyframe animation might take more time to create, but it gives complete control over motion, which is ideal for stylised animation. While MoCap works best for realism, keyframe animation excels at flexibility, making both techniques useful for different storytelling purposes.
Week 6 – Reality Capture (LIDAR) and VFX
What was covered in the lecture: Reality Capture and VFX. Reality capture is the process of using technology to digitally record real-world objects, locations, or people so that accurate 3D models can be created for VFX. Many films use this method to make scenes look real and full of detail. It lets productions digitally recreate places or objects that would be hard or expensive to build, or impossible to film at the original location.
For example, in Chernobyl, VFX artists digitally recreated Pripyat and the nuclear plant to replicate the real environment accurately without needing to film on the actual site.
What is perspective?
Perspective in drawing is a technique that gives the illusion of 3D depth and space to a 2D image.
One Point Perspective vs Two vs Three:
Linear perspective is a mathematical method that Renaissance artists used to create the illusion of depth and space (3D) on a flat surface (2D).
To achieve this effect, there are three essential components needed in creating a painting or drawing using linear perspective:
1. Orthogonals (also known as parallel lines)
2. Vanishing point
3. Horizon line
Following these rules, it is possible to arrange the composition in a similar way to how the human eye sees the world. The main idea of this technique is that objects that are closer to the viewer appear to be larger and objects that are further away appear to be smaller. In order to accomplish this, the artist places a horizontal line across the surface of the picture (horizon line). Parallel lines then converge as they recede and meet at the vanishing point on the horizon line.
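The convergence of parallel lines toward a vanishing point can be shown with simple pinhole projection: a 3D point (x, y, z) maps to (f·x/z, f·y/z) on the picture plane. This is a sketch with hypothetical coordinates and focal length, not a rendering tool.

```python
# Sketch of linear perspective as pinhole projection.
# Two parallel "rails", 1 unit left and right of the viewer, recede
# into the distance; their projected positions converge toward the
# vanishing point (0, 0) on the horizon line as z grows.

def project(x, y, z, f=1.0):
    """Project a 3D point onto a 2D picture plane at focal length f."""
    return (f * x / z, f * y / z)

for z in (2, 10, 100, 1000):
    left = project(-1, -1, z)   # left rail, on the ground (y = -1)
    right = project(1, -1, z)   # right rail
    print(f"z={z}: left={left}, right={right}")
# The farther the points, the closer both rails get to (0, 0) -
# the same behaviour the orthogonals show in a perspective drawing.
```

This mirrors the rule in the paragraph above: nearer objects project larger, and receding parallel lines meet at the vanishing point.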
The sketch of Leonardo da Vinci’s The Adoration of the Magi serves as a good example of linear perspective, because the grid-like lines which were used to scale the figures can still be seen.
Why is perspective important in 3D scanning (LiDAR)? Perspective is very important in 3D scanning with LiDAR because it helps capture the correct depth, angles, and positions of objects in a scene. When perspective is accurate, every point in the scan is placed in the correct location, making the 3D model look more like the real environment.
Accurate perspective is also essential when blending 3D scans with CG elements. With the right perspective, CG objects can align smoothly with 3D scans, matching the size, depth, and location of real-world elements. This helps create a final image in which real and digital objects blend together, making the scene look natural and realistic.
The Workshop Activity:
3D Scanning
3D scanning is the process of capturing the shape, texture, and details of real objects or people using special cameras and software. It can be done with LiDAR or with other methods like photogrammetry, which involves taking several images from different angles and combining them to generate a 3D model. In VFX, 3D scanning helps recreate real-world objects or characters with high accuracy, adding realism to scenes.
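One building block behind photogrammetry is triangulation: if the same point appears in two photos taken from known positions, its depth can be recovered from how far it shifts between the images. A minimal sketch for the simplest (rectified stereo) case, with hypothetical focal length, camera spacing and pixel values:

```python
# Depth from two overlapping photos (simple rectified stereo case):
#   Z = f * B / d
# where f is the focal length in pixels, B the baseline (distance
# between the two camera positions) and d the disparity (how many
# pixels the feature shifts between the two images).
# All numbers below are hypothetical examples.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

# A feature that shifts 50 px between photos taken 0.5 m apart,
# with a 1000 px focal length, sits 10 m from the cameras.
print(depth_from_disparity(1000, 0.5, 50))  # -> 10.0
```

Real photogrammetry pipelines repeat this kind of triangulation across thousands of matched features and many photos, which is how the combined images become a 3D model.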
For example, in Avengers: Endgame and other Marvel movies they used 3D scanning to create digital doubles, like Iron Man’s suit and Hulk, allowing them to move realistically and interact naturally within their environment.
LiDAR (Light Detection and Ranging)
A method that uses laser pulses to measure distances by recording how long it takes for the light to return from an object. These measurements create a detailed map of the object's shape that is then used to build accurate 3D models. LiDAR is commonly used in VFX to make digital copies of buildings, landscapes, and other complex shapes, which makes it very useful for set extensions and digital environments.
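The time-of-flight measurement described above reduces to one formula: the pulse travels to the object and back, so the one-way distance is half the round trip multiplied by the speed of light. A minimal sketch (the example pulse time is hypothetical):

```python
# LiDAR time-of-flight sketch: distance = (speed of light * time) / 2,
# because the recorded time covers the trip to the object AND back.

C = 299_792_458  # speed of light in a vacuum, metres per second

def lidar_distance(round_trip_seconds):
    return C * round_trip_seconds / 2

# A pulse that returns after ~667 nanoseconds hit a surface
# roughly 100 metres away.
print(lidar_distance(667e-9))
```

A scanner fires millions of such pulses in different directions; each one yields a distance, and together they form the point cloud the 3D model is built from.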
For example, Game of Thrones used advanced VFX techniques to create locations like King's Landing, Dragonstone and Winterfell. Digital copies were made to extend real places and landscapes, making them look large and detailed. The company Visual Skies uses LiDAR, drones and photogrammetry to scan environments, while ScanLAB uses LiDAR technology to create visuals of historical sites.
Military LiDAR Solution:
LiDAR is widely used by militaries for tasks like battlefield mapping, line of sight, mine detection, and vehicle navigation. It creates detailed 3D maps of terrain, showing elevations and obstacles to help with planning. By determining line of sight, LiDAR identifies visible and hidden areas, allowing safer positioning. It also scans the ground to detect hidden landmines, making paths safer for troops. For vehicle navigation, LiDAR helps unmanned vehicles detect obstacles and choose safe routes, reducing risk to soldiers. In some cases, LiDAR can even scan tunnels to identify underground threats and ensure safer operations.
LiDAR Scan vs. Photograph of the Pyramids
ScanLAB's LiDAR scan of the Great Pyramid helps us understand this ancient building in a new way. It uses a point cloud (an image made up of thousands of small dots instead of actual surfaces) to show the shape and depth of the pyramid without colour or texture. This gives the pyramid a skeletal look, like a very accurate 3D model, and the dark background in the scan helps us focus only on the pyramid's shape.
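A point cloud really is nothing more than a long list of (x, y, z) coordinates with no surfaces between them. A toy sketch that scatters points over the four sloped faces of a square pyramid (dimensions and point count are arbitrary, purely for illustration):

```python
# Toy point cloud: random dots on the faces of a square pyramid.
# A real LiDAR cloud would come from laser measurements, not random
# sampling, but the data structure - a list of 3D points - is the same.

import random

def pyramid_point_cloud(n=5000, base=2.0, height=1.5):
    pts = []
    for _ in range(n):
        t = random.random()             # 0 at the base, 1 at the apex
        half = (base / 2) * (1 - t)     # cross-section shrinks linearly
        side = random.choice("NSEW")    # pick one of the four faces
        u = random.uniform(-half, half)
        x, y = (u, half) if side == "N" else \
               (u, -half) if side == "S" else \
               (half, u) if side == "E" else (-half, u)
        pts.append((x, y, t * height))
    return pts

cloud = pyramid_point_cloud()
print(len(cloud), "points; apex height =", max(p[2] for p in cloud))
```

Rendered as dots on a dark background, such a list produces exactly the "skeletal" look of the ScanLAB images: shape and depth with no colour or texture.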
On the other hand, we are used to seeing images of the pyramids that only show one angle and perspective, based on where the photographer stands. Photos capture details like the colour of the stones, textures and lighting, helping us see and appreciate the beauty of this historical site. However, they don't give us accurate information about size, shape and structure, which makes it hard to understand exactly how the pyramid was built or how its stones fit together. Pictures of the pyramid show its exterior and perhaps how big it is, but not the exact plan, angles, or design choices the Ancient Egyptians made. We therefore miss out on the structural details needed to fully understand its engineering.
LiDAR scans provide accurate measurements of the pyramid’s structure, allowing historians and archaeologists to study every stone and layer in detail. This data gives us information about the building itself that pictures alone can’t show, like how the stones were arranged and where the hidden rooms are.
LiDAR is also very helpful for preservation. It allows experts to track small changes in the structures over time, like shifts or erosion, helping them to prevent damage to historical sites. LiDAR can create detailed 3D models that document the pyramid’s current condition for future study and virtual exploration.
ScanLAB’s project shows how quickly the ancient Egyptians learned to build pyramids, creating huge structures that still stand today. Their LiDAR work helps us understand not only the pyramids but also Cairo’s history. LiDAR gives us a closer look at ancient sites, helping us explore and protect them for the future.
The Weekly Activity:
Case Study: Visual Skies and the Creation of Dragonstone in House of the Dragon
Visual Skies is a UK company founded in 2016 that specialises in 3D scanning and creating digital assets. They use reality capture technology which is useful in film, TV, architecture, and preserving historical sites. Visual Skies has found new ways to use LiDAR, photogrammetry, and drones. They work with well known clients to create realistic digital environments. Their work was key in creating Dragonstone for House of the Dragon, using advanced scanning and creative tools to make it look real.
To capture Dragonstone, Visual Skies used drones with LiDAR to safely scan the cliffs and castle structures in difficult terrain. The drones collected aerial data, and photogrammetry added close-up details of stairs and castle walls. This gave VFX artists a full model to work with; combining aerial and ground scanning gave the production accurate maps and data.
Visual Skies created the "VS Scout" app using Unreal Engine, which became an important tool in this project. The software was designed for location scouting and shot planning, turning scanned environments into an interactive AR space. Directors and VFX teams could place dragons, actors, and props in the digital scene and see their size and position in real time. The app also allowed users to control and adjust focal length, aperture, lighting, sun position and fog level. By using reality capture instead of traditional filming, Visual Skies offered several advantages:
– Creative Flexibility: Directors could explore and change Dragonstone’s digital world by playing around with lighting and shot composition.
– Safety and Accessibility: Drones safely captured the isolated, tide-sensitive areas of San Juan de Gaztelugatxe, reducing logistical risks.
– VFX Integration: Scanned models kept visuals consistent between scenes, making VFX integration easier.
In summary, Visual Skies' use of LiDAR scanning, photogrammetry, drones, and the VS Scout app set new standards for visual realism in House of the Dragon, showing the potential of reality capture in high-end film production.
Week 7 – Photogrammetry and VFX
What was covered in the lecture: Photogrammetry and VFX: Photogrammetry is a method that creates 3D models from photos taken from different angles. In VFX, it helps make realistic digital versions of real-world objects, places, and people. This technique is used in movies and TV shows to create lifelike environments and characters.
Benefits of Photogrammetry in VFX: Photogrammetry allows VFX artists to create digital models that look very real, which helps make movies and TV shows more immersive for viewers. It also saves time and money because it reduces the need to build physical sets or props, and it captures colour and texture very well.
Future of Photogrammetry As technology improves, photogrammetry is expected to become even more important in VFX. Better cameras and software will allow for more detailed and accurate 3D models. New techniques, like Neural Radiance Fields (NeRFs), are also being developed to enhance photogrammetry.
Quixel Megascans: Quixel is a company that creates 3D assets and is the maker of Megascans, a photogrammetry asset library. Quixel is part of the Epic Games family.
Their assets are used by game developers, filmmakers, and visualisation specialists to create games, animated entertainment, and lifelike scenes.
They use a combination of photogrammetry and laser scanning to capture real-world objects.
Photogrammetry – This method is suitable for capturing many subjects and materials, but shiny surfaces can be difficult to capture because of reflections.
Laser scanning – This method can capture the shape of shiny or reflective surfaces, but it doesn't capture texture detail. For this reason, every laser-scanned subject is also scanned using photogrammetry. Their goal is to achieve photorealism in real time in an accessible way.
They usually scan rocks, old ruins and historical objects rather than shiny skyscrapers.
Digital Doubles
Actors and stunt doubles are often 3D scanned to create digital facsimiles.
Photogrammetry is better than LiDAR for creating digital doubles because it captures colour and texture better.
The Workshop Activity: The Digital Michelangelo Project: The Digital Michelangelo Project began in 1998, led by staff and students from Stanford University and the University of Washington. They worked to scan Michelangelo’s sculptures and architecture, making it one of the first major 3D scanning projects. This project was very important because it helped start the development of 3D scanning technology.
Their goal was to create high quality 3D models of Michelangelo’s artworks and share these models with scholars around the world. They wanted to improve 3D scanning technology and use it to help preserve cultural heritage. This project aimed to build a long term digital archive of important historical artifacts.
To complete the project they used a mix of laser scanners, structured light scanners, and custom built systems to handle the large scale and fine details of the sculptures. For example, they had to capture details like tiny cracks and chisel marks. Scanning large sculptures like David was challenging because of the scale and lighting conditions, and they needed to avoid touching the artwork. The team built special scaffolding and used careful techniques to complete the scans without causing any damage.
The quality of the scans was so high that the Italian government did not allow them to put the full data set online. However, Stanford researchers created a tool called ScanView that let users see specific details of the sculptures, including parts that regular museum visitors could not see up close.
This project had a big impact on preservation and study. Scholars could use the digital models to study the works in detail, which also helped with preservation efforts. The Digital Michelangelo Project also inspired future digital preservation projects, showing how technology could help protect cultural heritage.
The Veronica Scanner:(link)
The Live Portraiture Project at the Royal Academy in 2016 used a special 3D scanner called the Veronica Scanner. The Factum Foundation created this scanner to make realistic digital portraits of people’s faces. In just a few seconds, the scanner captured detailed 3D images. These images were then shared online, so people could see their digital portraits right away. Some portraits were even turned into physical sculptures using robots and 3D printers. This project showed how 3D scanning can capture realistic details and connect traditional portrait art with new technology.
Mimesis Test: The Mimesis Test refers to a way of judging how well an artwork or image copies or mirrors reality. "Mimesis" is a concept from ancient Greek philosophy, especially from Plato, meaning imitation or representation. In art, mimesis is about making something that closely resembles the real world. For example, a painter who copies a tree exactly as it appears is using mimesis. The goal of mimesis is to make a faithful copy of real things, where the meaning is thought to come from the real objects themselves. In this way, mimesis is like a mirror that reflects reality as accurately as possible. In mimetic representation, the success of the artwork depends on how closely it resembles its real-world counterpart.
However, there are some limitations. Even very realistic images or 3D models still differ from the original. Real objects are 3D, while images are often flat and bounded by a frame. 3D scanning technology today tries to overcome these limitations by creating highly detailed digital copies that capture both form and texture. This technology allows us to get closer to perfect mimesis, but there are still differences between digital copies and real objects.
In VFX, mimesis is used when expanding sets or creating lifelike 3D models, where digital elements are designed to look as close to reality as possible.
Can you write a definition of either Verisimilitude or Hyperrealism and explain how it relates to visual effects? Verisimilitude means making something look realistic and believable. It's about creating objects, people, or scenes that feel natural, even if they aren't exact copies of reality. In visual effects, verisimilitude is used to make digital scenes look real to the viewer without copying real-world details precisely. For example, in movies it can involve creating lifelike characters and environments that look true-to-life but don't necessarily mirror real photos.
Hyperrealism is like verisimilitude but taken one step further. It creates images that look even “more real than real.” Hyperrealism is about adding extra details and perfection to make an image or scene feel intense and evoke emotions. Instead of just looking like reality, hyperrealism adds a heightened sense of life, making it look almost too perfect. In VFX, hyperrealism is used to create scenes that look amazing and very detailed, often creating a strong impact on the viewer.
Difference between Mimesis and Verisimilitude: While verisimilitude and mimesis are both about representing reality, there’s a difference. Mimesis is about copying reality directly. It’s focused on copying the real world as closely as possible, like a mirror reflecting what exists. In mimesis, the goal is to make a true copy of nature or life.
In contrast, verisimilitude doesn’t need to make an exact copy. It’s about making something feel realistic and believable but allows for creativity. Verisimilitude creates a natural look, focusing on how people see things, rather than strictly imitating them.
Week 8 – Simulacra, Simulation and the Hyperreal
What was covered in the lecture: "Simulacra and Simulation" by Jean Baudrillard. Baudrillard's book explores how modern reality is shaped by simulations and images rather than by direct experiences of the real world. According to the book, simulacra are images or copies that have no original reference, creating a "hyperreal" state, while simulations are processes that try to replicate real-world scenes or actions. Simulacra and Hyperreality: Simulacra: Images or representations with no original source, creating their own reality – for example, fictional characters that feel real but are not.
Hyperreality: A situation where reality is replaced by simulacra. What we see as real is shaped by media, signs, and symbols, not authentic experiences. Four Stages of the Image: – Reflection: The image mirrors reality (portrait)
– Masking: The image distorts reality (propaganda)
– Absence: The image pretends to show something nonexistent (myths).
– Simulacrum: The image has no link to reality and exists independently (virtual worlds)
What did Baudrillard think about The Matrix?
Jean Baudrillard didn't like how the movie represented his ideas in The Matrix. He thought the film made the difference between the real world and the fake world (the Matrix) too obvious. As he saw it, in hyperreality the real and the fake mix together, and you can no longer tell them apart.
He also didn’t like that the movie used Plato’s Allegory of the Cave, where people escape the fake world to find the truth. Baudrillard believed there is no real truth to find because everything is already fake or part of the simulation. He felt The Matrix made his ideas too simple.
The difference between Simulation and Simulacra. Simulation: A simulation tries to copy something real – for example, smoke, a building being destroyed, fire, or an ocean. Houdini allows artists to create such simulations. A simulation is based on something real and keeps a connection to it. Simulacra: Simulacra have no connection to the real world. For example, when thinking about a desert island, we imagine what we've seen in movies – sand, palm trees, and ocean. This idea isn't based on a real island; it's just a symbol that feels real because we've seen it so much.
Objective: List your personal encounters with simulacra and hyperreality. Experiencing Hyperreality in House of the Dragon
All the dragon scenes in House of the Dragon felt hyperreal. The dragons look so real that they could actually exist. All the small and big details that make a scene more believable were very accurate, like the details on their scales, their muscles moving, and how the light hit their bodies.
This is what Baudrillard calls a simulation: something fake that feels real.
The world of "Westeros" is also a simulacrum. It is not a real place, but the show makes it feel like it could be. The sets, like cliffs, castles and cities, are so detailed that it's easy to forget it's all fiction. Sometimes everything looked more perfect than real life, and that's what makes it hyperreal.
What made the scenes feel hyperreal to me was how naturally the dragons moved, as if they were really part of the world. Every object they interacted with reacted to them, whether it was the wind and clouds, lighting, actors or surfaces. It makes the audience forget that it was all created on a computer.
The Workshop Activity:
Illustrate Baudrillard’s concept of the four phases of the image and hyperreality. Phase 1: The Image Reflects Reality
The Last Supper by Leonardo da Vinci. The Starry Night by Vincent van Gogh
This phase reflects reality because it is a painting made by the artist.
The original painting shows reality through the artist’s eyes. It feels authentic.
Phase 2: The Image Starts to Change Reality
A printed poster of the painting sold in a store
The poster is a copy and feels less real. It’s made for mass production, not art.
Phase 3: The Image Hides the Fact That It Is Not Real
The edited version makes the painting feel new, but it hides that it is not the original.
Phase 4: Hyperreal – No Original
The hyperreal version has no original. It is created from scratch but looks believable.
The Weekly Activity:
Connection to VFX:
These phases explain how images shift from reflecting reality to creating something new. VFX follows the same process. At first, it enhances real footage. Then it mixes real and fake elements. Finally, it creates fully digital worlds with no real connection, like hyperreal cities or creatures.
VFX Breakdown: Exploring the phases of the image:
Have a look at how VFX moves through Baudrillard's four phases of the image.
Phase 1: A real image of New York (mirror behaviour).
Phase 2: The image distorts reality – how people "imagine" NYC.
Phase 3: A 3D model of New York City.
Phase 4: The mirror-verse VFX shot from Doctor Strange.
2nd Assessment: How do Spectacular, Invisible, and Seamless Visual Effects Influence Modern Filmmaking?
– I want the essay to mainly explore seamless VFX. However, I also want to explain what spectacular and invisible VFX are, why they are important, and provide examples of each.
– I want to compare all three types, discussing how they differ and overlap in their use, purpose, and contribution to storytelling.
– I want to explore how directors prepare the set with VFX in mind.
– I want to explore how photorealism and mimesis are applied in seamless VFX and their contribution to immersion, narrative development, and audience experience.
– I also want to mention that seamless VFX, because of their unnoticed nature, are often undervalued.
Week 9 – Virtual Filmmaking
What was covered in the lecture: What is Virtual Production? Virtual production is a new way of making films and TV shows using advanced technology.
It uses digital tools that create a virtual set with LED screens. These screens show realistic backgrounds in real time. Cameras are tracked by motion capture devices, and the data is usually sent to Unreal Engine to create live visual effects that can be seen in camera. This helps actors see the environment and react naturally, and directors can see finished effects while filming. An example is the Dragonstone bridge in Game of Thrones, which was recreated using LiDAR scans of the real location. The virtual set allowed filming from different angles, saving time and adding creative freedom. Virtual production makes filmmaking more realistic and flexible.
The Workshop Activity:
Overview of the Project Title: House of the Dragon Production Company: HBO Key Personnel: Greg Yaitanes (Director), Ed Hawkins (VFX on set Supervisor), Miguel Sapochnik (Showrunner and Director), Ryan J. Condal (Showrunner and writer), Sven Martin (VFX supervisor, Pixomondo) and more. Visual Skies – (a company specializing in 3D scanning and LiDAR technology) contributed to recreating environments like Dragonstone.
Virtual Production Techniques The Volume:
– A virtual set with over 2,000 LED screens, fully enclosed to display real-time CG environments.
– Equipped with 92 motion capture cameras to track the position of the main camera.
– Integrated with Unreal Engine, allowing real time rendering of visual effects directly on set. Previsualization:
– Environments, like the Dragonstone bridge, were previsualized using LiDAR scans of real world locations.
– This enabled the team to plan shots and camera movements ahead of time. In Camera VFX:
– Backgrounds and CG elements were displayed on LED screens during filming, eliminating the need for post-production compositing.
– Actors could interact with realistic environments, enhancing their performances.
Challenges and Solutions Challenge: Impractical Real-World Locations – The real Dragonstone bridge was too difficult to film on due to its zig-zagging structure and remote location.
– Solution: The bridge was digitally recreated using LiDAR scans. The virtual set allowed the team to film scenes from any angle. Challenge: Perspective Distortion – The LED screens displayed perspectives that looked strange to the naked eye.
– Solution: These distortions were corrected by the camera’s perspective, ensuring that the final footage appeared seamless.
Challenge: Dynamic Scene Requirements – Scenes needed to switch between different angles and inclines.
– Solution: The set was built to rotate, allowing for quick adjustments without dismantling and rebuilding.
Outcomes Quality:
– The virtual environments provided realistic lighting and reflections, adding depth and texture to the final footage.
– Actors delivered more natural performances due to the immersive sets. Efficiency:
– Real time rendering allowed directors to see finished visuals during filming, reducing post production time.
– The rotating set design made scene transitions faster and more efficient. Creativity:
– Virtual production expanded creative possibilities by enabling shots that would have been impossible or too expensive to achieve on location.
– The team could visualize and adapt scenes on the spot, fostering innovation.
By using virtual production, House of the Dragon delivered high quality, immersive visuals while saving time and costs, setting a new standard for modern filmmaking.
Week 10 – 2nd Assessment Development
What was covered in the lecture: This week we focused on getting Assignment 2 done. We had a really helpful workshop in the Ian Carter room at the library. Katie was there, and she gave us great advice on referencing and finding research materials.
The question I chose to explore is “How Spectacular, Invisible, and Seamless Visual Effects Influence Modern Filmmaking,“ with a focus on invisible effects.
Week 11-12 – 2nd Assessment Presentation
What was covered in the lecture: In Weeks 11 and 12, we presented our assignments. My presentation, which is attached, summarizes the key ideas and research from my essay.
2nd Assessment:
How do Spectacular, Invisible, and Seamless Visual Effects Influence Modern Filmmaking?
Visual effects are an essential part of modern filmmaking, helping to create immersive worlds and compelling stories. There are three main types of VFX: Spectacular, Invisible, and Seamless effects. This essay will explore these three types, comparing their uses and contributions to filmmaking, with a focus on invisible effects. Invisible effects significantly influence modern filmmaking by enhancing storytelling, supporting narrative development, and improving audience experiences.
We will examine tools such as compositing, photographic manipulation, and digital rendering, which allow invisible effects to integrate perfectly into scenes. Additionally, this essay will engage with theories of photorealism and mimesis, analysing how these concepts are applied in invisible effects to create visuals that feel lifelike. Finally, the essay will discuss how directors and producers plan their shoots with VFX in mind, ensuring that every element aligns for a cohesive final product.
Although invisible effects are essential, they often go unnoticed by viewers, and the effort behind them is sometimes underappreciated. By studying examples from popular TV shows and movies, this essay will show how invisible VFX are a key part of modern storytelling.
Visual Effects are essential for visualizing a story and helping the audience connect with it. They make scenes more engaging and help viewers understand complex or imaginative narratives. VFX are usually divided into three types: spectacular, invisible, and seamless effects.
Spectacular effects are noticeable and designed to grab the audience's attention. They are often used in fantasy, science fiction, and action genres. For example, the battle scene in Avengers: Endgame, the dragon battles in House of the Dragon, and the human-like creatures in Avatar show how spectacular VFX can create memorable and immersive scenes. These effects add excitement and highlight important parts of the story, even if they don't always look completely realistic.
Invisible effects are more subtle and work quietly to improve realism. Their goal is to blend seamlessly with live-action footage to create a natural and realistic appearance. These effects often go unnoticed by the audience, as they focus on enhancing realism rather than drawing attention. They are used for things like recreating historical locations, extending environments, changing backgrounds and creating weather effects. In Chernobyl, invisible VFX were used to rebuild the nuclear plant, helping the story feel historically accurate while keeping the audience focused on the narrative.
Seamless effects bridge the gap between spectacular and invisible effects. They combine the realism of invisible effects with the artistic creativity of spectacular effects, mixing digital and real-world elements so smoothly that they look completely natural. For instance, the dragons in House of the Dragon look realistic within their fantasy world while still standing out as part of the narrative's visual style. This combination allows seamless effects to enhance storytelling while keeping the visuals exciting.
These three types of VFX work together to play an important role in modern filmmaking. They bring directors’ ideas to life and shape how stories are told, creating memorable experiences for audiences around the world. It is sometimes hard to tell the difference between them exactly, a distinction this essay will explore later.
Invisible effects make scenes feel real by blending digital elements with live-action footage so naturally that viewers don’t even notice them. They create believable environments, adjust settings, or add details that audiences assume are real. Their power comes from how well they support the story without standing out. Techniques like compositing, rendering, and photo manipulation allow VFX artists to make these subtle changes, keeping everything aligned with the narrative so the scenes look natural. These methods rely heavily on photorealistic principles. Photorealism is key to invisible effects, as their goal is to replicate real-world textures, lighting, and movement. This requires close attention to detail, even small elements like shadows, reflections, and surface textures that must match the live-action footage to make the scene feel genuine. This connects to the idea of mimesis: imitating reality so well that the audience believes what they’re seeing.

For example, in Gladiator II, VFX artists brought ancient Rome to life by digitally recreating the city and the Colosseum. The old stones and the detailed design made the setting feel authentic, allowing the filmmakers to transport viewers into a historically accurate world without building massive sets. Similarly, in The Crown, invisible effects were used to rebuild historical locations such as Buckingham Palace and 20th-century London streets. In The Queen’s Gambit, invisible effects added snow and frost to create realistic winter scenes; artists carefully adjusted the lighting, reflections, and textures so that everything looked natural and fit the story’s seasonal mood. Despite their role in creating immersive narratives, invisible effects are often overlooked precisely because they are designed not to draw attention to themselves. This lack of attention shows just how important they are in making scenes feel real without the audience realizing it.
Invisible effects also give filmmakers the freedom to create things that would be impossible in real life. They allow directors to build worlds, adjust environments, or add details without being limited by budget, location, or historical authenticity. By mimicking real-world details, like how light reflects or how textures respond to weather, invisible effects keep the story visually consistent.
What makes invisible effects so important is how they support the story without stealing the spotlight. They shape the visual language of films. Furthermore, they quietly work behind the scenes to bring the director’s vision to life, keeping the audience engaged and fully immersed in the narrative.
It can sometimes be hard to tell the difference between spectacular, invisible, and seamless effects because they share many similarities and are often used together. All three are essential for storytelling and keeping the audience engaged, and all rely on advanced digital tools like compositing, modeling, and rendering. Their differences come from how obvious they are, what purpose they serve, and how they support the narrative.

Spectacular effects are the most obvious. They are designed to grab attention, impress, and excite the audience, and they are often used to showcase technological achievement or artistic creativity. The superpowered combat in The Boys, the dragon battles in House of the Dragon, and the huge battle scenes in The Avengers are clear examples of spectacular effects. The visuals don’t just support the story; they are part of the excitement and entertainment, and they highlight the technical skill involved.

Invisible effects, on the other hand, as explained earlier, are subtle and blend into live-action scenes. Their goal is to make the story feel more realistic without being noticed. In The Crown, for example, digital effects recreated historical places like Buckingham Palace so naturally that viewers wouldn’t realize they were CG. These effects focus on building believable environments that enhance the narrative without drawing attention, and their subtlety means the hard work behind them often goes unnoticed.

Seamless effects combine elements of both spectacular and invisible effects, aiming for a balance between spectacle and realism: visually striking, yet realistic enough to fit into the world of the story. A good example is the dragons in House of the Dragon; the interactions between characters and their dragons, like petting or riding, show seamless effects at work.
The dragons, which do not exist and are obviously digitally created, have textures, movements, and shadows that give them a real and convincing appearance. The show blends digital creatures and real actors so smoothly that viewers forget they are watching CGI and accept the images as real. Seamless effects thus balance the big visual impact of spectacular effects with the subtle realism of invisible effects.
In some cases, it is hard to separate the three types of effects because they often overlap. For example, seamless effects can feel like spectacular effects because of their scale, but they also rely on the realism of invisible effects. In House of the Dragon, the contrast between exciting dragon battle sequences and quieter moments between the characters and their dragons shows how these categories can blend. Each type serves a different purpose, but together they create a visually rich and immersive story.
In the end, while spectacular, invisible, and seamless effects each serve different purposes, they all work toward the same goal: to captivate the audience, support the story, and push the limits of creativity. Each type enhances the others, resulting in a powerful toolkit that helps filmmakers create unforgettable movies and TV shows.
The advancement of visual effects has revolutionized the filmmaking process. Directors and producers now plan live-action shoots with visual effects capabilities in mind, giving them more flexibility and creative freedom. For instance, when filming scenes involving CGI, they consider the lighting, camera angles, and actor placement to make sure the footage will blend smoothly with digital elements. This collaboration between on-set and post-production teams ensures the final product feels cohesive and believable.
Motion capture is a clear example of this transformation. It allows filmmakers to bring fantastical characters into live-action scenes. In Avengers: Endgame, Mark Ruffalo wore a motion capture suit to play the Hulk, combining his acting with CGI to make the character feel real. This technique captured subtle facial expressions and movements, giving the Hulk emotional depth and making him a key part of the narrative. Mocap technology has become essential for character-based storytelling, enabling filmmakers to show complex relationships and interactions with realism.

Digital aging and de-aging have also revolutionized how characters are presented. In the past, filmmakers had to cast multiple actors or use makeup to show a character at different ages, which often disrupted narrative consistency. Today, digital de-aging solves these problems. In Avengers: Endgame, Robert Downey Jr. was digitally de-aged to play a younger Tony Stark in flashback scenes. This technique preserved the actor’s performance while ensuring continuity, allowing filmmakers to navigate complex timelines with ease. Digital aging has expanded storytelling options, allowing for dynamic narratives without logistical limits.

Seamless and invisible effects have also raised audience expectations for immersive, realistic worlds. LiDAR and photogrammetry technology are used to digitally enhance locations. In House of the Dragon, real environments were scanned and then expanded digitally, creating landscapes that felt real. By blending real and digital elements seamlessly, filmmakers were able to create worlds that enhanced the narrative experience.
One of the biggest advancements in filmmaking today is Virtual Production, which combines spectacular, invisible, and seamless effects. This technique uses LED screens to create virtual sets that display realistic backgrounds in real time. Before virtual production, filmmakers used green screens as backgrounds, which required actors to imagine their environment and wait until post-production to see the final visuals. Virtual Production allows actors to see and interact with the environment during filming.
This technique tracks cameras with motion capture devices and feeds the data into Unreal Engine to generate live effects visible on set. Returning to House of the Dragon, Dragonstone was recreated using LiDAR scans of the real location. A small part of the bridge was built and placed in a virtual production dome, allowing filmmakers to shoot from dynamic angles and see completed visuals immediately. This method helps actors deliver more authentic performances while giving directors more control during filming, and it also reduces post-production time and effort.

Overall, these advancements in visual effects have changed the way films are made. Technologies like motion capture, digital aging, LiDAR scanning, and Virtual Production allow filmmakers to tackle challenges more effectively, focus on storytelling, and create cinematic moments that once seemed impossible.
In conclusion, spectacular, invisible, and seamless visual effects have changed modern filmmaking, each adding something unique to how stories are told. Spectacular effects amaze us with their immersive visuals, while seamless effects blend realism and creativity to draw us deeper into the world of the story. Invisible effects, though, often play the most crucial role by making scenes feel real without us even noticing.
What makes invisible effects so important is how they quietly support the story. They bring environments to life and let filmmakers create moments that feel natural and believable, even when they couldn’t exist in real life. Invisible effects require a lot of skill and technical knowledge, but people often don’t notice or appreciate them. Their success is in blending into the scene, helping the story without standing out.
Filmmakers have more tools than ever to bring their ideas to life, combining creativity and technical skill to push the limits of what’s possible. Visual effects aren’t just about making things look cool, they’re about making stories more engaging, more powerful, and more real. They’ll continue to shape the future of movies, making sure we keep being amazed every time the lights dim and the screen lights up.
Bibliography:
– Dinur, E. (2021) The Complete Guide to Photorealism for Visual Effects, Visualization and Games. 1st edn. Routledge. https://doi.org/10.4324/9780429244131
– Rehak, B. (2018) More Than Meets the Eye: Special Effects and the Fantastic Transmedia Franchise. New York: New York University Press. Available from ProQuest Ebook Central [accessed 28 November 2024].
– North, D. (2015) Special Effects: New Histories, Theories, Contexts. London: BFI.