Current Trends in VFX
WEEK 1
Introduction To The Module
***Topic: The Age of the Image and the Trend of the Lens***
What is a trend?
Explore and discuss the meaning of a “trend” in VFX.
A trend is the general direction in which something is developing or changing over time. A trend in VFX refers to emerging techniques, technologies, styles, or workflows that gain popularity within the visual effects industry. Trends are often driven by advancements in technology, changes in audience expectations, or the creative direction of popular films, TV shows, and video games. For example, trends like virtual production (seen in The Mandalorian) or the increased use of real-time engines (such as Unreal Engine) are reshaping how VFX is created.
How do we know when a trend is emerging?
There are actionable ways to spot a trend:
- Tracking Industry Innovations:
  - Keep an eye on industry conferences and festivals (like SIGGRAPH and Annecy) and award ceremonies (the Oscars’ Best Visual Effects category, the BAFTAs).
  - Analyze the VFX breakdowns of recent blockbuster films or TV shows.
- Technology and Tool Adoption:
  - Trends often align with new tools or advancements, such as LiDAR scanning, photogrammetry, or AI-driven techniques.
  - Follow updates from key software providers (e.g., Unreal Engine, Blender, Houdini, Maya).
- Trade Publications and Blogs:
  - Websites like Art of VFX, CG Society, and Animation World Network frequently discuss trending projects and technologies.
- Community Engagement:
  - Follow social media communities (Reddit’s r/vfx subreddit, Discord servers, and YouTube tutorials).
  - Networking at conferences or within online forums often reveals trends as they emerge.
- Streaming and Pop Culture:
  - Popular shows and films that push VFX boundaries often dictate trends. Examples include The Mandalorian popularizing virtual production or Avatar: The Way of Water driving underwater motion capture.
- Emerging Fields:
  - VFX trends often mirror technological shifts, such as the growing interest in AR/VR, real-time rendering, or AI for procedural generation.
Identifying Current Trends:
What do you believe are the latest trends in VFX? Think of examples from recent films, TV shows, video games, or other media.
- Virtual Production:
  - Real-time environments created using LED walls and game engines like Unreal Engine.
  - Example: The Mandalorian.
- AI-Assisted VFX:
  - Tools that automate rotoscoping, compositing, or texture generation.
  - Example: Runway ML and Adobe’s AI features.
- Real-Time Rendering:
  - Increasing use of real-time engines for high-quality visuals in movies and games.
  - Example: Unreal Engine for previsualization.
- Hyperrealistic Digital Humans:
  - Creating lifelike CG characters using photogrammetry and AI.
  - Example: Avatar: The Way of Water.
- Hybrid Animation:
  - Mixing 2D and 3D animation styles for innovative storytelling.
  - Example: Spider-Man: Across the Spider-Verse.
- Environmental Scanning (LiDAR/Photogrammetry):
  - Scanning real-world locations for photorealistic environments.
  - Example: Avengers: Endgame.
How do we recognize when a technique or style is becoming a trend? Is it about popularity, innovation, or something else?
I believe innovation attracts popularity; popularity gains traction and attention; and once something is well known and accepted, it gradually displaces the current norm.
Example of a Current Trend
The hottest trend right now is AI.
CURRENT TRENDS
ON-SET VIRTUAL PRODUCTION
On-set virtual production refers to a filmmaking process where digital environments, characters, and effects are integrated into the live-action production process in real-time on set. It combines physical production techniques (actors, props, and sets) with virtual elements (computer-generated imagery, pre-rendered environments, or real-time 3D graphics) to create a seamless blend of real and virtual worlds. This technique is enabled by advancements in technology like LED walls, game engine software, and motion tracking.
On-set virtual production is not only a revolutionary filmmaking technique but also a significant trend in the entertainment industry. Its adoption has grown rapidly in recent years due to technological advancements and its ability to address many traditional production challenges.
Key Elements of On-Set Virtual Production:
LED Volume (or LED Walls):
Large LED screens or walls display virtual environments in real-time.
These screens act as dynamic, photorealistic backdrops, replacing traditional green screens.
They allow actors and filmmakers to see and interact with the virtual environment during shooting.
Real-Time Rendering:
Game engines (like Unreal Engine) render digital assets and environments instantaneously.
Changes to the virtual set can be made on the fly, offering flexibility and creative freedom during production.
Camera Tracking:
Motion tracking systems sync the physical camera’s position and movement with the virtual environment.
This ensures the perspective and parallax of the virtual world adjust correctly as the camera moves.
Blending Real and Virtual Elements:
Practical props, costumes, and actors are combined with digital elements to create a cohesive scene.
Lighting on set can be synchronized with the virtual environment to create realistic reflections and shadows.
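To make the camera-tracking idea above concrete, here is a minimal Python sketch (a simplified pinhole model with illustrative names and values; real virtual production systems are far more involved). It shows why the LED wall content must be re-rendered as the tracked camera moves: near and distant virtual objects shift by different amounts on screen, producing parallax.

```python
# Minimal pinhole-projection sketch (illustrative only, not engine code).
# As the physical camera moves, the projected position of a near virtual
# object shifts more than that of a distant one: this parallax is why the
# tracked camera position must drive the rendered backdrop.

def project(point, camera_pos, focal_length=35.0):
    """Project a 3D point onto a 2D image plane for a camera at camera_pos,
    looking down the +Z axis (simple pinhole model, no rotation)."""
    x, y, z = (p - c for p, c in zip(point, camera_pos))
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

near_tree = (2.0, 0.0, 10.0)   # virtual set dressing close to camera
far_hill = (2.0, 0.0, 100.0)   # distant virtual background

# Camera dollies 1 unit to the right; tracking feeds in the new position.
a = project(near_tree, (0.0, 0.0, 0.0))
b = project(near_tree, (1.0, 0.0, 0.0))
c = project(far_hill, (0.0, 0.0, 0.0))
d = project(far_hill, (1.0, 0.0, 0.0))

near_shift = abs(a[0] - b[0])  # 3.5: near object moves a lot on screen
far_shift = abs(c[0] - d[0])   # 0.35: distant object barely moves
print(near_shift, far_shift)
```

With these made-up numbers the near object shifts ten times more than the distant one, which is exactly the depth cue a static painted backdrop cannot provide.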
REAL-TIME RENDERING
Real-time rendering is a process in computer graphics where images, animations, or 3D scenes are generated and displayed at high speeds, typically fast enough to appear as if they are happening instantaneously. This allows the user to see changes or interactions in the virtual environment in real-time. Real-time rendering is not just a technological breakthrough but also a significant trend across various industries. Its widespread adoption has been fueled by advancements in hardware and software, and it plays a critical role in meeting the growing demand for interactivity, immersion, and efficiency.
Why is Real-Time Rendering a Trend?
Advancements in Technology:
The development of powerful GPUs (like NVIDIA’s RTX series) and real-time capable engines (like Unreal Engine and Unity) has made real-time rendering accessible and efficient.
Features like ray tracing in real-time enhance visual fidelity, narrowing the gap between real-time and offline rendering.
Increased Demand for Interactivity:
Industries like gaming, VR/AR, and virtual production rely on real-time rendering to deliver seamless, interactive experiences.
Audiences and users now expect content to respond dynamically in real-time.
Adoption in Virtual Production:
Real-time rendering is the backbone of virtual production, enabling filmmakers to see and manipulate digital environments and effects during live-action shooting.
Cross-Industry Applications:
Beyond gaming and film, industries such as architecture, automotive design, medical training, and e-commerce are adopting real-time rendering to provide interactive and immersive experiences.
Faster Iteration and Feedback:
Real-time rendering allows for rapid prototyping and decision-making. For example:
Designers can see changes instantly and adjust. Filmmakers can view virtual scenes directly on set without waiting for post-production.
Rise of AR/VR:
Augmented and virtual reality experiences depend entirely on real-time rendering to provide immersive and responsive environments.
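As a rough illustration of what “real-time” demands, the following Python sketch (hypothetical per-frame timings, illustrative function name) computes the per-frame time budget at 60 fps and counts frames whose work exceeded it — the situation a real-time engine must avoid by simplifying the scene or lowering quality.

```python
# Illustrative frame-budget sketch (not a real engine): real-time rendering
# means every frame must be produced within a fixed time budget, e.g.
# 1/60 s ≈ 16.7 ms at 60 fps, unlike offline rendering where a single
# frame may take minutes or hours.

TARGET_FPS = 60
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ≈ 16.67 ms per frame

def frames_dropped(frame_times_ms):
    """Count frames whose simulate+render work exceeded the budget."""
    return sum(1 for t in frame_times_ms if t > FRAME_BUDGET_MS)

# Hypothetical per-frame workloads in milliseconds.
measured = [12.1, 15.9, 16.5, 21.0, 14.2, 33.4]

print(round(FRAME_BUDGET_MS, 2))  # 16.67
print(frames_dropped(measured))   # 2
```

The two frames over budget would be felt as stutter; offline renderers have no such constraint, which is the core trade-off the section above describes.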
Harold Edgerton
Harold Eugene “Doc” Edgerton, also known as Papa Flash, was an American scientist and researcher and a professor of electrical engineering at the Massachusetts Institute of Technology. He is largely credited with transforming the stroboscope from an obscure laboratory instrument into a common device. You will probably all have seen Edgerton’s work; even if you do not recognise his name, his images are part of our “visual lexicon”.
– Edgerton pioneered and perfected the use of electronic strobe flashes combined with high-speed film to capture the movement of objects and phenomena imperceptible to the human eye
– His work sits at the intersection between art and science.
– The photographs are both a quantifiable exploration of how things work and carry a unique aesthetic, and with it a category of photography – that of the freeze frame.
FOUR VFX COMPARISONS OF HAROLD EDGERTON’S WORK
The Tyranny of the Lens
Photographic imagery currently dominates image-making, but lens-based capture predates photography. Scientist Charles Falco and artist David Hockney refer to this as the “Tyranny of the Lens.”
The Hockney Falco Theory
Secret Knowledge
- In Secret Knowledge (2006), Hockney claims that art history is closely tied to the use of lenses.
- He argues that before photography was invented in the 1830s, artists often used lens projections to create accurate representations of complex scenes.
- Critics argue that Hockney and Falco’s thesis lacks evidence and implies that Old Masters may have cheated.
- Proposed by artist David Hockney and physicist Charles M. Falco in the early 21st century, the Hockney-Falco thesis suggests that some Renaissance painters, like Jan van Eyck and Caravaggio, used optical devices such as the camera obscura and concave mirrors in their work.
- This challenges the traditional belief that these artists relied solely on their skills and techniques.
- The primary optical tool used by Old Masters is believed to be the camera obscura, a device where light rays pass through an aperture onto a flat surface to form an image (Kemp, 1990, p. 189).
- Leonardo da Vinci created over 270 diagrams of camera obscuras (Veltman, 1986), and it is widely believed that Johannes Vermeer (1632–1675) used one to trace projected images for some of his paintings (Steadman, 2001, p. 1).
TAKEAWAYS
What is visual effects, if not the combination of computer-generated imagery with filmed footage?
Visual effects extend sets beyond what has been filmed
It takes filmed actors and puts them in different environments
The computer-generated imagery needs to look as if it has been photographed; this tends to be called photorealism
Photorealism is dependent on creating “photographic-looking” images.
HOMEWORK
Can you explain what is meant by Dr James Fox’s phrase ‘The Age of the Image’?
Why does he use this phrase to describe the use of images in our age?
Dr. James Fox coined the phrase ‘The Age of the Image’ to describe our current era, in which visual culture dominates nearly every aspect of human life. According to Fox, we are living in a time when images are the primary medium through which we communicate, express ideas, and engage with the world. This marks a significant shift from previous eras dominated by the written word or oral traditions, where word of mouth was crucial for establishing credibility. The phrase suggests that images are no longer supplementary; they are central to how we experience reality, and many people today find it difficult to function without them.
Technological advancements such as smartphones and social media platforms like Facebook, TikTok, Instagram, and YouTube have accelerated this shift. These platforms thrive on visual content, shaping how we form opinions, connect with others, and even construct our identities. Fox highlights how this reliance on visual media influences areas like politics, art, advertising, and personal expression.
Fox also explores the psychological power of images, arguing that they have a more immediate emotional impact than text. Images can communicate across language barriers, making them powerful tools in the modern world.
Despite these advantages, Fox also warns of the dangers inherent in the rapid spread of images. This can lead to misinformation and encourage a superficial engagement with information, where anyone or anything can present a false identity. It’s easy to trust what we see, but that trust can be misplaced.
In summary, ‘The Age of the Image’ reflects Fox’s belief that visual content has fundamentally changed not only how we communicate but also how we think and process information.
References:
LEARNING ON SCREEN (2020) Age of the Image: A New Reality. Available at: https://learningonscreen.ac.uk/ondemand/index.php/prog/158E7D18?bcast=131408235 (Accessed: 2 October 2024).
____________________________________________________________________________________________________________
WEEK 2
The Photographic Truth Claim
***Can we believe what we see?***
Plato’s cave
Plato’s allegory of the Cave is a metaphor used to describe the difference between what is perceived as the truth versus the actual truth.
In his story, prisoners are chained in a dark cave, restricted to facing a wall; all they can see are shadows cast by objects behind them. They cannot see the objects themselves, only the shadows those objects create. With nothing else to rely on, they mistake the shadows for reality. One prisoner breaks free and escapes the cave, discovering that the shadows they had all been seeing were not real. This prisoner then ventures back into the cave to enlighten the others with the truth. However, not all of them are willing to accept it, because for some of them, the shadows are their reality.
This highlights how difficult it can be for people to accept a new truth when it challenges their long-held beliefs.
The Allegory of the Cave:
- Put more simply, the shadow of a bird in the cave is seen as nothing less than a real bird by the prisoners, but in reality it is an illusion.
- The prisoners are mistaking the “appearance” for “reality”
- When they call the shadow a bird, they are wrong, as the bird is the thing that causes the shadow.
- Plato’s point is that the word bird refers to our understanding of the form of a bird as we see it in our minds, not an actual bird, it is a sign.
- That knowledge can be overwhelming (escaping the cave and seeing the bright Sun), but spending time with it can bring us to an understanding of the world.
Other Meanings:
Those in the cave lack education; they do not know anything else (beyond the shadows). But it is not their fault.
That much can be built and invested around an illusion, not just visually, but also generally in life.
Things we might think are very important in our lives (cars, mobile phones, personal status etc.) are actually not so, they are not real in that sense.
Other Meanings – the Socratic Method:
That knowledge can be overwhelming, but spending time with it can bring us to a deeper understanding of the world.
You cannot force your point of view upon someone; instead, be patient and have a cooperative debate (the Socratic method). The method stimulates critical thinking through the asking of questions.
Eventually you reach the truth, but only through the presentation of different viewpoints.
In what ways is Plato’s Allegory of the Cave reflected or represented through storytelling and visual effects in cinema?
While the allegory itself is not a visual effect, its themes and ideas have been explored and adapted in various visual media, including film and television, through the use of visual effects and storytelling techniques.
(Fake Views, Age of the Image, 2020)
Example images of “where are images becoming more like reality?”
SET EXTENSIONS
A set extension is a visual effects (VFX) technique used in filmmaking to expand the physical set by digitally adding elements that would be impractical, expensive, or impossible to construct physically. This process blends real, on-set footage with digitally created environments or enhancements to create the illusion of a much larger or more elaborate scene.
Key Characteristics of Set Extensions:
Hybrid Approach:
Combines physical sets or props with computer-generated (CG) elements.
The physical and digital components are seamlessly integrated to appear as one continuous space.
Cost Efficiency:
Instead of building large, detailed physical sets, filmmakers construct smaller, practical portions of the environment, with the rest added digitally.
Flexibility:
Digital environments can be easily modified or scaled to fit creative needs.
Photorealism:
Advanced VFX techniques ensure that the set extensions blend naturally with the live-action footage, matching lighting, texture, and perspective.
How It Works:
Filming on a Partial Set:
The filmmakers shoot on a small, practical portion of the set that includes key interactive elements (e.g., a staircase or doorway).
Green Screen or LED Walls:
Green screens or LED volumes are often used behind the physical set to allow for seamless integration of the digital extension.
Digital Extension in Post-Production:
Artists use 3D modeling, matte painting, and compositing techniques to create and add the rest of the environment in post-production.
Software like Unreal Engine, Maya, or Nuke is often used for this process.
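The compositing step described above can be sketched with the standard Porter-Duff “over” operation that compositing packages such as Nuke build on. This is a minimal Python illustration with made-up premultiplied pixel values, not an excerpt from any actual tool.

```python
# Minimal sketch of the "over" compositing operation used when layering a
# digital set extension with live-action footage. Colors are premultiplied
# RGB in 0..1, alpha (coverage) in 0..1.

def over(fg, bg):
    """Porter-Duff 'over': composite a premultiplied foreground pixel
    onto a background pixel. Both are (r, g, b, a) tuples."""
    fr, fgr, fb, fa = fg
    br, bgr, bb, ba = bg
    inv = 1.0 - fa  # how much of the background shows through
    return (fr + br * inv, fgr + bgr * inv, fb + bb * inv, fa + ba * inv)

# Hypothetical pixels: the soft edge of a CG tower over a filmed sky plate.
cg_tower = (0.25, 0.25, 0.25, 0.5)   # premultiplied grey, 50% coverage
filmed_sky = (0.2, 0.4, 0.8, 1.0)    # opaque blue plate

result = over(cg_tower, filmed_sky)
print(result)  # (0.35, 0.45, 0.65, 1.0)
```

The same per-pixel rule, applied across whole frames with matched lighting and grain, is what lets the digital extension read as part of the photographed set.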
Examples of Set Extensions:
Fantasy or Sci-Fi Worlds:
Films like Lord of the Rings or Star Wars use set extensions to create vast, otherworldly landscapes.
Urban Expansions:
Historical dramas like Game of Thrones digitally expand castles, cities, or towns, adding depth and scale to the setting.
Distant Landscapes:
Adding distant mountains, skylines, or futuristic cityscapes in scenes where practical shooting is impossible.
Signs
In semiotics, signs are the basic units of meaning, and they represent something other than themselves. Charles Sanders Peirce, a key figure in semiotics, categorized signs into three types based on the relationship between the sign and its meaning: icons, indexes, and symbols. Here is an explanation of each:
- Icon
An icon is a sign that physically resembles what it represents.
The connection is based on similarity or likeness.
Examples:
A portrait (resembles the person it depicts).
A map (resembles the geographic area it represents).
Emojis, like a smiley face 😊, resemble an emotional expression.
- Index
An index is a sign that is directly connected to its referent in a causal or physical way.
The connection is based on cause and effect or contiguity.
Examples:
Smoke as an index of fire.
Footprints as an index of someone walking.
A thermometer reading as an index of temperature.
- Symbol
A symbol is a sign whose relationship to its referent is arbitrary and established by convention or agreement.
The connection is based on cultural or social agreement, not resemblance or direct causation.
Examples:
Words in a language (e.g., the word “cat” does not look or sound like a cat but represents it by convention).
Mathematical symbols (e.g., “+” for addition).
Flags representing nations.
Key Differences:
| Type | Relationship to Referent | Examples |
| --- | --- | --- |
| Icon | Resemblance | Portraits, maps, emojis |
| Index | Causal or physical connection | Smoke, footprints, thermometer |
| Symbol | Arbitrary, learned | Words, logos, national flags |
These three categories help us understand how meaning is constructed and communicated through different types of signs in various contexts.
Analysing the impact of VFX on Photographic Truth
In this activity, search for VFX-heavy images and analyze how visual effects in them challenge the photographic truth claim, using semiotic concepts.
Analyze an image for:
Indexicality – Are there any traces of the real world, or is it fully fabricated?
Iconicity – Does it resemble reality convincingly, or is it fully fictional?
Photographic Truth – Does it claim to represent reality, and how does this challenge traditional truth claims?
Photographic Truth Claim – This image does represent reality, but with a twist: it makes you question whether a human could really have super abilities. The photo is photorealistic, but the addition of lightning makes it fictional.
Photographic Truth
“Photography and film would seem to be excellent examples of sign systems that merge icon, index and to some extent symbol. Although indexical because the photographic image has an existential bond with its object, they are also iconic in relying upon a similarity with that object” (Gunning, 2004, p.134). Understanding this is important for successful visual effects outcomes.
What we perceive as truth may be perceived differently by someone else; in other words, perception is fallible, as it is dependent on our sensory organs.
Different animals have other types of sensory capabilities and thus see the world differently. We can be taken in by illusions, even physical ones such as hallucinations. What we perceive is our sensory representation of an object, not the object itself, therefore what is directly perceived is only ever an idea, a mental representation of the object.
The philosophical notion here is that our senses might deceive us, so if an image looks photographic, then we are readily convinced that it was there in front of the lens.
Tom Gunning said: “I use the word ‘truth claim’ because I want to emphasize that this is not simply a property inherent in a photograph, but a claim made for it.”
We can see that in many ways the photograph is not like the thing it represents. As an image, its power lies in its ability to represent something objectively or truthfully, and in the knowledge that it was created by light bouncing from a surface through the lens to form an image of that same surface.
All photographs present a truth: their makers’. The issue is not whether or not that truth has any relation to the Truth. The issue is, instead, what the photographs tell us about our own truths (our interpretations), about those beliefs that we take for granted and stick to so obsessively when weighing what we see. In the modern age, with the level of technology we deal with, it is hard to specify whether an image is iconic or not. Photographically it may be truthful, but how much of it we believe can be directly related to our personal experiences, cultural values, and more.
HOMEWORK
What do you think is meant by Photographic Truth-Claim?
The Photographic Truth-Claim refers to the belief that photographs depict reality as it is, capturing objective, real representations of the world. This belief stems from the idea that photography, unlike painting and other forms of art, involves mechanical processes that cannot be falsified. That is, the use of light, lenses, and film that record what’s in front of the camera without the influence of personal interpretation or opinions.
Historically, people trusted photographs as accurate visual proof. In fields like journalism, photos were regarded as evidence of events that took place. This trust is based on the assumption that cameras cannot lie; they unconditionally capture whatever is put before them.
Over time, people realized that photographs are not purely objective. While the camera does capture reality, the way photographs are framed and taken, and the photographer’s intentions, influence the final image. Furthermore, with the rise of technological advancement, the altering of images has increased, challenging the idea that photographs always represent an unfiltered truth. Alison Jackson is one such artist: her work uses convincing lookalikes, and Dr James Fox describes her as a quintessential artist of the age of the image, one whose style of artistry proves that images cannot be trusted.
Ultimately, the Photographic Truth-Claim exposes the tension between the objective and subjective nature of photography, reminding us that while photos may reflect reality, they are still shaped by human choices and contexts.
References:
To be updated
____________________________________________________________________________________________________________
WEEK 3
***Faking Photographs: Image manipulation and computer collage***
In the book The Reconfigured Eye (1998), William J. Mitchell refers to the photograph:
“One way or another, a photograph provides evidence about a scene, about the way things were, and most of us have a strong intuitive feeling that it provides better evidence than any other kind of picture. We feel that the evidence it presents corresponds in some strong sense to reality, and (in accordance with the correspondence theory of truth) that it is true because it does so.” (Mitchell, 1998, p.24)
Francesco Casetti writing in 2011 on the Digital Imaging Revolution observed the following:
“Faced with an image on a screen, we no longer know if the image testifies to the existence of that which it depicts or if it simply constructs a world that has no independent existence” (Casetti, 2011, p. 95)
Discussion:
Take a few moments to discuss the opposing quotes in groups:
Mitchell (1998, p.24) comments “….a photograph provides evidence about a scene, about the way things were… the evidence it presents corresponds in some strong sense to reality”
While Casetti (2011, p.95) asserts: “Faced with an image on a screen, we no longer know if the image testifies to the existence of that which it depicts or if it simply constructs a world that has no independent existence”
Who’s right here?
Whilst they are both accurate, I think the accuracy of their statements is tied to their time. In 1998, Mitchell says “a photograph provides evidence about a scene, about the way things were”. This didn’t mean photo manipulation wasn’t in existence at the time, but rather that it wasn’t as advanced as it is now. Casetti then says, “faced with an image on a screen, we no longer know if the image testifies to the existence of that which it depicts”. This statement, made by Casetti 13 years later after much technological advancement, is more accurate in my opinion, as it has become increasingly difficult to distinguish between a manipulated and a non-manipulated image.
Therefore, whilst they were both correct in their time, I would say Mitchell is technically wrong now, due to technological advancement in visual effects.
Four Analogue Fakes
A jewelry engraver named William Mumler was the first enterprising mind to combine the emerging fields of spiritualism and photography for profit. Mumler’s hobby for photography paid off one day in the early 1860s when he sat for a self-portrait. He discovered a ghostly figure standing behind him as he developed the photo, which he originally believed to be the remnants of a previous image. He showed the photo to his friends on a lark and, based on their credulous responses, went into business as a spirit photographer soon after. Mumler’s fame grew so large that his photographs appeared on the cover of the national magazine Harper’s Weekly. Although his contemporaries were skeptical, no photographer could find any evidence that Mumler faked his ghostly photo shoots. Despite his detractors, he had at least one very famous fan. In what would be her last photograph, Mary Todd Lincoln sat for a photo with him, Abraham Lincoln’s ghost visible behind her.
How They Did It: Skeptical photographers both then and now ascribe Mumler’s spooky shots to one of two methods. One possibility is double printing, when the subject and the spirit appear in two different negatives that the photographer later combines. The other is double exposure, when the person designated as the ghost leaves the picture mid-exposure to produce a transparent, ghostly effect. Mumler ensured that no one would ever know for sure when he destroyed all his negatives shortly before his death.
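The double-exposure method can be approximated digitally: film accumulates light across the whole exposure, so a figure present for only part of it contributes only part of the light and appears transparent. Here is a minimal Python sketch (toy 2×2 grayscale “frames” with illustrative values) that blends an empty plate with a plate containing the “ghost”:

```python
# Digital approximation of the double-exposure ghost effect (illustrative).
# Averaging two frames mimics each frame receiving half of the exposure:
# anything present in only one frame shows up semi-transparent.

def double_expose(frame_a, frame_b, weight=0.5):
    """Blend two grayscale frames as if each received part of the exposure.
    Frames are lists of rows of brightness values in 0..1."""
    return [
        [a * weight + b * (1.0 - weight) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

empty_room = [[0.8, 0.8],
              [0.8, 0.8]]   # bright background, no one present

with_ghost = [[0.8, 0.2],
              [0.8, 0.2]]   # a dark figure stands in the right column

plate = double_expose(empty_room, with_ghost)
print(plate)  # right column becomes 0.5: a semi-transparent "ghost"
```

The background pixels are unchanged while the figure’s pixels land halfway between figure and background, which is exactly the translucent look of Mumler’s spirits.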
In 1936, photographer Robert Capa released an image that encapsulated the horror of the Spanish Civil War and went on to become one of the most famous war photographs in history. It also helped kickstart Capa’s career as a famous photographer. The image captures a Spanish soldier, Federico Borrell García, at the moment he is struck by a fatal shot.
How They Did It: The story of Capa’s photos started to unravel when other images in the same series were released. Academics studied these photos next to this most famous version and determined that Capa did not snap these images near Cerro Muriano in Andalusia as he claimed. Instead, the photographs were taken near Espejo, a place that the war didn’t reach until after Capa published the photographs.
The modern fervor over the Loch Ness Monster came to a head with a photo taken in 1934. Known as the Surgeon’s Photo, the most famous image of Nessie was taken by British surgeon Colonel Robert Wilson. It shows a creature with a long neck rising out of the water. For more than 50 years, the picture stirred speculation about what swam beneath the surface of the Loch.
How They Did It: The picture stood as a testament to the existence of the marine creature until 1994, when a man named Christian Spurling confessed to his involvement in the hoax. The Daily Mail had previously hired Spurling’s step-father, Marmaduke Wetherell, to find the Loch Ness Monster, and Wetherell felt betrayed when they debunked what he found. So he set out on a plot of revenge straight out of an episode of Scooby-Doo: he and Spurling constructed a model from a toy submarine from Woolworth’s with a sculpted head attachment and photographed it. They sent the photo to Wilson, whose pedigree made him a trustworthy Nessie spotter, and Loch Ness was never the same again.
One of the most infamous photographic hoaxes in history started because a little girl didn’t want to get in trouble. In 1917, Frances Griffiths returned from a brook with wet feet and wasn’t looking forward to the inevitable punishment. When her mother asked what happened, Frances said that she had gone to see the fairies. In a show of familial solidarity, Frances’s cousin Elsie backed her up and agreed that fairies played down by the water.
With the adults obviously dubious, the girls took a camera to the brook and came back with proof – pictures of both girls with fairies and gnomes. After both girls’ moms shared the photos around, the pictures sparked a public phenomenon. Even Sir Arthur Conan Doyle, creator of Sherlock Holmes and a famous spiritualism supporter, weighed in on the photographs, believing them to be genuine proof of humanity’s ability to commune with the spirit world.
How They Did It: Almost 60 years later, Frances and Elsie finally admitted that their photos were fakes. Elsie had art training and drew the figures on paper. The girls fixed the drawings to hat pins and stuck them in the ground for the photographs. Then, they destroyed the evidence in the brook. A hoax so simple that a child could do it.
Digital Fakes
Digital tools like Photoshop have revolutionized image manipulation by providing precision, flexibility, and convenience that are challenging to achieve with analogue techniques.
How Photoshop Makes Manipulation Easier:
Precision Tools: Photoshop offers pixel-level control over edits, allowing for highly detailed and precise alterations, such as retouching blemishes or seamlessly removing objects.
Non-Destructive Editing: Layers and masks enable edits without permanently altering the original image, making experimentation easy and reversible.
Automated Features: Features like content-aware fill, healing brushes, and AI-driven tools streamline complex tasks, such as removing backgrounds or adjusting lighting.
Wide Range of Effects: Photoshop provides extensive filters, blending modes, and adjustments (e.g., color correction, sharpening) that can transform an image with just a few clicks.
Ease of Access: Digital tools make it simple to copy, clone, resize, and transform elements of an image, which would require extensive skill and time in an analogue setup.
Undo/Redo Options: The ability to undo actions eliminates the fear of irreversible mistakes, encouraging creativity and trial.
Why Digital Manipulation Is Harder to Detect:
Realistic Alterations: Advanced algorithms can mimic natural textures and lighting conditions, making edits blend seamlessly with the original content.
Subtle Adjustments: Minor tweaks to contrast, shadows, or composition can change the perception of an image without leaving obvious traces.
Metadata Editing: Digital tools can also alter or erase metadata, which might otherwise provide clues about tampering.
Complexity of Detection: Identifying digital forgeries often requires specialized tools and expertise, such as detecting inconsistent pixel patterns or compression artifacts.
Wide Knowledge Gap: The average viewer lacks the expertise to spot signs of manipulation, especially when done by skilled professionals.
Access to Resources: Open-source software and AI-driven tools democratize sophisticated editing techniques, increasing the potential for untraceable manipulations.
In contrast, analogue techniques often leave behind physical clues, like uneven cutting, visible brush strokes, or mismatched lighting. Digital manipulation, by leveraging technology, allows for cleaner and less detectable edits, posing challenges for authenticity verification.
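As an illustration of why detection requires specialized tooling, here is a minimal Python sketch of one classic forensic technique: copy-move (clone) detection by hashing pixel blocks. This is a toy version, and the function name `find_duplicate_blocks` is our own; real forensic tools use robust features that survive compression, resampling, and slight edits.

```python
import numpy as np

def find_duplicate_blocks(img, block=8):
    """Find pairs of identical grid-aligned blocks in a grayscale image.

    A cloned region copied elsewhere in the image produces byte-identical
    blocks. Note: flat, featureless areas also collide trivially, which is
    why real detectors use robust features rather than exact hashes.
    """
    h, w = img.shape
    seen = {}
    dupes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes
```

Running this over an image where a patch has been cloned will report the source and destination block coordinates as a matching pair.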
The use of visual effects (VFX) significantly enhances the believability of fake shots by creating visuals that can closely mimic reality or surpass it with hyperrealistic elements.
How VFX Increases Believability:
Seamless Integration: VFX techniques like compositing, tracking, and color grading can blend computer-generated elements with real footage so seamlessly that the fabricated elements appear natural.
Hyperrealism: Advanced rendering technologies simulate realistic lighting, textures, and physics, making artificial elements indistinguishable from real ones.
Attention to Detail: Modern VFX tools can replicate tiny details, such as reflections, shadows, and depth of field, which sell the illusion to viewers.
Motion Realism: By incorporating realistic physics simulations and motion tracking, VFX can convincingly mimic how objects or characters interact with the environment.
Psychological Cues: VFX artists exploit the human brain’s tendency to accept visuals that align with expectations, subtly guiding viewers to believe the scene is genuine.
How Advancements in VFX Have Increased Sophistication:
AI and Machine Learning: Modern VFX incorporates AI to generate deepfake videos, automate facial expressions, and enhance realism with minimal human input. Tools like GANs (Generative Adversarial Networks) can produce photorealistic results.
Improved Rendering Software: Software like Unreal Engine, Blender, and Cinema 4D allows for near-photorealistic renders in real-time, making it easier to create high-quality fake footage quickly.
Higher Computational Power: With GPUs and cloud-based rendering, creators can simulate intricate scenes with high detail, improving the authenticity of hoaxes.
Access to Pre-Made Assets: Libraries of pre-made 3D models, textures, and effects make it faster and easier to create convincing environments and objects.
Augmented Reality (AR) and Motion Capture: These technologies allow for precise mimicry of human movements or environmental dynamics, making even fictional scenarios look plausible.
Post-Production Tools: Advanced editing tools refine the final output, from matching colors and lighting to synchronizing audio for added immersion.
Photomontage to Computer Collage
- Computer collage comes from photomontage (the combination of parts of images to “produce new compositions”, as Mitchell (1998, p. 24) puts it)
- Photomontage can be visually seamless
- Different from image assemblage, which can reveal seams – e.g. the multi-viewpoint compositions of David Hockney’s Polaroids (which is perhaps closer to how we actually see)
The tools and methods of Photomontage include:
- Cutting and Pasting
- Airbrushing
- Multiple exposure
- Photos created from multiple negatives
- Masks, Blends and Mattes
- Reflecting and Rotating
- Stretching and Shearing
- Scaling
- False Foreshortening
- Retouching
- Feathering
Computer Collage to Compositing
- One of the main differences between digital compositing and manual photographic manipulation is that digital images do not degrade when copied. There is no loss of quality.
- Digital compositing achieves a higher quality of finish.
- Depth information gives control over what goes in front of what and selective depth of field
- A composite can be made from both filmed and CGI elements.
- The composite must appear consistent with other lens-based imagery.
The collage art form is a type of composition. It involves assembling various materials—such as photographs, paper, fabric, or digital images—into a unified work of art. This technique is used in both traditional and digital mediums and is recognized as a distinct compositional method in visual arts. Here’s how collage functions as a form of composition:
Why Collage is a Composition:
- Arrangement of Elements: Like any compositional method, collage involves organizing disparate elements into a cohesive whole. The artist considers balance, contrast, rhythm, and focal points to guide the viewer’s eye through the piece.
- Combining Media: Collage often juxtaposes different textures, colors, and styles, creating dynamic interactions between elements.
- Conceptual Unity: Despite the diverse sources of materials, a well-crafted collage communicates a unified theme, message, or aesthetic, much like traditional compositions in painting or photography.
- Creative Process: The act of selecting, cutting, layering, and arranging materials mirrors the intentional design found in other compositional practices.
Types of Collage in Composition:
- Traditional Collage: Made with physical materials like paper, photographs, or textiles, arranged and glued onto a surface.
- Digital Collage: Created using software tools, where digital images and textures are layered and manipulated.
- Mixed-Media Collage: Combines traditional and digital methods or incorporates unconventional materials such as found objects or natural elements.
Collage in Broader Contexts:
Collage is not only a visual art form but also applies metaphorically to other creative disciplines:
- Music: A musical collage might combine samples, loops, or recordings from different genres into a single piece.
- Literature: Writers sometimes use a collage technique to compile fragmented texts, quotes, or voices into a narrative.
- Film and Multimedia: Editing styles or sequences that juxtapose unrelated footage can be considered a form of collage.
In essence, collage as a type of composition thrives on its ability to recontextualize existing elements, encouraging both artists and audiences to see familiar materials in new and unexpected ways.
“The better film-makers get at creating lifelike illusions, the less we are impressed by them. And crucially, the less likely we are to believe anything they show us is real.
Digital manipulation can now achieve almost anything – even turning the world upside down. Christopher Nolan’s film Inception didn’t just employ visual effects. Its very premise turned on the fine line between reality and the purely visual. The real world and dreams.” … ~Fox
What is Compositing?
Compositing is the final step in the visual effects (VFX) pipeline, where various filmed, rendered, and computer-generated elements are creatively combined to produce a seamless, lifelike final image. This process involves integrating these elements to make them appear as if they naturally belong in the same scene.
Compositing requires a range of skills, such as rotoscoping (separating the foreground from the background) and techniques like chroma keying, 2D and 3D tracking, matting, and incorporating computer-generated imagery (CGI). The goal is to blend all elements convincingly to create a cohesive, visually compelling final result.
Compositing involves layering different elements, including background plates, foreground action, and various CGI assets. These layers are carefully aligned, color-matched, and blended using the techniques named previously: chroma keying to remove green/blue screens, rotoscoping to isolate objects or characters, matting for creating masks, and 2D and 3D tracking to match the movement of the camera or objects. The compositing artist must ensure that all elements, whether live-action or computer-generated, are realistically lit, color-corrected, and positioned to match the scene’s perspective and motion.
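Two of the techniques named above, chroma keying and layer blending, can be sketched in a few lines of Python with NumPy. This is a deliberately crude green-screen key (production keyers also handle spill suppression, soft edges, and motion blur), and `chroma_key_composite` is a hypothetical helper, not any real tool's API.

```python
import numpy as np

def chroma_key_composite(fg, bg, threshold=0.15):
    """Crude green-screen key followed by an 'over' composite.

    fg, bg: float arrays in [0, 1] with shape (H, W, 3).
    Pixels where green strongly dominates red and blue are made
    transparent; everything else keeps the foreground.
    """
    green_excess = fg[..., 1] - np.maximum(fg[..., 0], fg[..., 2])
    alpha = np.clip(1.0 - green_excess / threshold, 0.0, 1.0)  # 1 = keep fg
    a = alpha[..., None]
    # Standard 'over' blend: fg weighted by alpha, bg by the remainder
    return fg * a + bg * (1.0 - a)
```

The final line is the classic "over" operation used (per channel, per layer) throughout compositing packages; the keying step merely decides the alpha.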
The Great Gatsby (2013) by Baz Luhrmann is a prime example of how VFX compositing can be seamlessly integrated into a period drama. The film used extensive digital compositing to recreate the lavish settings of 1920s New York. Many of the city’s skylines, sweeping party scenes, and luxury mansions were created using CGI and compositing, blending real actors with digitally created environments. The goal was to amplify the opulence and grandeur of the era, creating a visually immersive world while still maintaining a grounded, dramatic story.
Forrest Gump (1994) is a classic example where VFX compositing is used to blend real historical footage with the fictional character played by Tom Hanks. Digital compositing was used to insert Forrest into actual historical events, like his interaction with Presidents John F. Kennedy and Richard Nixon. These scenes required blending historical footage with newly shot footage of Hanks, using techniques like rotoscoping and digital compositing. The movie also used VFX to achieve scenes such as Lieutenant Dan’s amputated legs.
References:
Studio Binders
https://www.studiobinder.com/blog/what-is-compositing-definition/ Accessed: 20th October, 2024
Adobe
https://www.adobe.com/creativecloud/video/hub/guides/what-is-vfx-compositing.html Accessed: 20th October, 2024
Before and Afters
https://beforesandafters.com/2024/07/05/how-a-tracking-algorithm-helped-insert-tom-hanks-into-news-scenes-with-historical-figures-in-forrest-gump/ Accessed: 21st October, 2024.
____________________________________________________________________________________________________________
WEEK 4
***The Trend of Photorealism***
PHOTOREALISM
Photorealism evolved from the Pop Art movement in the late 1960s. Photorealism sought to convey real life with minute finesse. Artists like Chuck Close or Duane Hanson began to paint pictures using photography as a source, aiming to render a more evocative reality. Photorealists were inspired by everyday objects, scenes of commercial life and modern-day consumerism.
The term Photorealism comes originally from a genre of painting established in the late 1960s, and through to the 1970s. It was related to Pop art and came as a reaction to and was influenced by the photographic image. The American Photorealists used photography as reference, and subjects were often mundane, brightly coloured, and highly detailed. It was important the viewer was aware that they were looking at a painting that looked like a photograph.
Who or What is a Photo-Realist?
According to Louis K. Meisel’s five-point definition, a Photo-Realist is a person who:
- uses the camera and photograph to gather information.
- uses a mechanical or semi-mechanical means to transfer the information to the canvas.
- must have the technical ability to make the finished work appear photographic.
- must have exhibited work as a Photo-Realist by 1972 to be considered one of the central Photo-Realists.
- must have devoted at least five years to the development and exhibition of Photo-Realist work.
Define Photorealism:
Back in the day, photographs were seen as unforgeable truth; hence the term photo-real, meaning realistic like a photograph. Photorealism is the outcome achieved from the application of various methods and skills in the creation of images, whether digital or analogue.
Key Attributes:
Lighting and shadows
Textures
Depth of field
Motion blur
Color accuracy
Surface imperfections
Perspective
Camera effects
How Photorealism Differs from Other Styles or Techniques:
Focus on Authenticity:
Photorealism prioritizes an accurate replication of reality. Other styles, like stylized animation or surrealism, may intentionally exaggerate or distort features for aesthetic or artistic purposes.
Natural Imperfections:
Photorealism embraces imperfections like irregular textures or lighting variations, as found in the real world, to enhance believability.
Physics-Based Rendering:
Photorealism heavily relies on physics-based algorithms to simulate light and material interactions, distinguishing it from traditional CGI, which may use approximations for efficiency.
Integration of Detail:
Photorealistic works emphasize small, intricate details (e.g., tiny reflections in eyes, hair strands) that are often omitted in less detailed styles like cel shading or low-poly animation.
Photorealism can be complex because within VFX it manifests in different ways. It may apply to whole scenes and shots, or to individual characters or creature attributes. It may consist purely of computer-generated images or of a mix of CGI and filmed footage (plates).
Make a visual comparison between the 1994 and 2019 versions of the Lion King
The 1994 The Lion King is a classic Disney animated film, celebrated for its vibrant, hand-drawn animation, expressive characters, and colorful, stylized visuals. It employs exaggerated facial expressions and fluid movements, creating an emotional and dynamic storytelling experience. The musical sequences, like “Hakuna Matata” and “Circle of Life,” are visually imaginative and playful, embodying the charm of traditional 2D animation.
In contrast, the 2019 version is a photorealistic remake using cutting-edge CGI technology. The animals and environments are depicted with meticulous detail, resembling a nature documentary. While visually stunning and immersive, this approach sacrifices the expressive qualities of the characters, making their emotions harder to convey. The realism also limits the whimsical creativity found in the original’s musical numbers.
Both versions offer unique visual experiences, with the original relying on artistic stylization and the remake focusing on technological realism.
Versions/Trends of Photorealism
We have two versions/trends of Photorealism here: one is completely constructed using CGI (think path-traced 3D render), and the other is the composite, a mixture of acquired and computer-generated imagery.
Version ONE ~ CGI Based Films
Here we can think of this type of photorealism as computer-generated images, as a type of simulated photograph.
Computer-generated photorealism was recognised by art-media theorists such as Manovich, Casetti, Cubitt and Batchen as a form of virtual cinematographic, or even virtual photographic, process performed using 3D computer software to simulate the operation of lights and cameras.
3D software packages like Maya simulate not just geometry, but the virtual interaction of light, material, and lens. It is the light-and-lens aspect that produces a photorealistic image.
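A minimal sketch of what "simulating the interaction of light and material" means at the lowest level: Lambertian diffuse shading, the simplest physically motivated lighting model. Renderers such as Maya's do vastly more (global illumination, full BSDFs, lens simulation), so this is only illustrative, and the function name is our own.

```python
import numpy as np

def lambert(normal, light_dir, albedo, light_color):
    """Lambertian diffuse shading for one surface point.

    The reflected intensity is proportional to the cosine of the angle
    between the surface normal and the light direction, clamped at zero
    so surfaces facing away from the light receive nothing.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_color * max(float(np.dot(n, l)), 0.0)
```

Evaluating this per pixel, per light, with measured albedo and normals is the seed from which physically based rendering grows.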
CGI realism can lead to the uncanny valley, over-reliance on technology, high costs, loss of imagination, ethical concerns (e.g., deepfakes), environmental impact, and desensitization of audiences to visuals. Balancing realism with creativity and storytelling is essential.
Version TWO ~ Photorealism of the Composite
This type of photorealism is typically more subtle and not as easy to spot. There is an interweaving of digital image types at play here: a mix of indexical image and generated image, with photo-fakery techniques and CGI comped into one photoreal visual outcome. Generated images need to look like the captured images they sit beside, so careful planning is needed to align parts and hide seams.
Composite realism combines different elements, like real photos, videos, and CGI, into one scene to make it look completely real. Artists match the lighting, colors, and textures of all the parts so they blend together perfectly, tricking viewers into believing it’s a single, natural image or video.
Dr. Barbara Flueckiger writes about the need to include the characteristics of analogue film in a digitally generated scene.
- “The gap between CGI and digital photography is becoming more and more irrelevant.”
- Because digital visual effects often involves “the combination of CGI with live-action footage”
- CGI does not typically generate lens artifacts which are common, or we might even say revered in photography and cinema.
- They help to create aesthetic coherence in compositing, integrating image parts from different sources seamlessly and convincingly.
- They mark different narrative strands in complex patterns to support the viewer’s orientation.
- Depth-of-field ~ Virtual cameras do not produce depth of field; in CGI everything is in focus, hence the need for it to be added manually.
- Lens distortion and Chromatic aberration
- Lens Flare
- Vignetting
- Bokeh
- Motion Blur
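As a concrete example of adding one of these lens artifacts in post, here is a minimal Python sketch that applies a radial vignette to a rendered frame. The quadratic falloff is an arbitrary illustrative choice; a plate-matched vignette would be measured from the actual lens, and `add_vignette` is our own name, not a compositing-package function.

```python
import numpy as np

def add_vignette(img, strength=0.5):
    """Darken an image toward its corners with a quadratic radial falloff.

    img: float array of shape (H, W, 3). The centre keeps full brightness;
    the corners are scaled by (1 - strength).
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Normalised radius: 0 at centre, 1 at the corners
    r = np.sqrt(((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2) / np.sqrt(2)
    falloff = 1.0 - strength * r ** 2
    return img * falloff[..., None]
```

The same pattern (build a per-pixel weight map, multiply it into the frame) underlies many of the other artifacts listed above, such as simple lens-flare glows.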
Finding some examples of lens effects
HOMEWORK
What is Photorealism in VFX?
Photorealism, derived from the words ‘photo’ (photograph) and ‘real’ (realistic), is an art style that aims to create images so lifelike they convince viewers of the authenticity and factual nature of the scene.
Photorealism in VFX is the art of making digitally created images or elements appear indistinguishable from real-life footage. This process involves crafting visuals with such high levels of detail, lighting accuracy, texture, and motion that they blend seamlessly into live-action scenes or are convincingly real on their own. In VFX, photorealism covers a broad range of elements, including:
- Compositing: Seamlessly layering digital elements with live-action footage, ensuring alignment in scale, lighting, and depth to blend with the original scene.
- CGI (Computer-Generated Imagery): Creating digital objects, characters, environments, and effects with realistic textures, shadows, and reflections.
- Textures and Materials: Adding realistic surface qualities, like the roughness of stone or the shine of metal, using techniques like texture mapping, which gives objects lifelike appearances.
- Lighting and Rendering: Matching lighting conditions, shadows, and reflections to the live-action footage so digital and real elements appear to share the same space.
- Physics Simulation: Simulating natural movements and physics-based effects, like smoke, fire, water, and hair, to mimic real-world behavior.
Composite photorealism is a technique in VFX and digital art where multiple photographic or CGI elements are combined to create a single hyper-realistic image or scene. The goal is to seamlessly blend these separate parts so they appear as though they naturally belong together in the scene, resulting in an image that feels completely authentic to the viewer.
Composite Film Examples-
Avatar: The Way of Water (2022) – This film is a standout in composite photorealism, blending live-action footage with highly detailed CGI. Its underwater scenes, alien creatures, and the fictional world of Pandora appear incredibly lifelike due to the meticulous integration of digital and practical effects.
The Batman (2022) – This film uses composite photorealism to create Gotham City, combining practical sets, miniatures, and CGI to achieve a gritty, realistic feel. Key sequences, such as the Batmobile chase, blend live-action stunts with digital elements like rain, debris, and cityscapes to heighten immersion.
____________________________________________________________________________________________________________
WEEK 5
***Digital Index: Bringing indexicality to the capture of movement***
In the world of Visual Effects (VFX), various capture techniques are employed to generate high-quality digital assets, simulate realistic interactions, and seamlessly integrate computer-generated elements with live-action footage. Here’s a breakdown of the key types of capture technologies, the data they produce, how it is captured, and where it is used:
1. Motion Capture (MoCap)
Data Captured: 3D positional data of actors’ movements.
How Captured:
Infrared cameras track reflective markers placed on an actor’s suit.
Markerless systems use AI and depth-sensing cameras to analyze motion directly.
Usage:
Animating realistic character movements in films, games, and virtual environments.
Examples: Gollum in The Lord of the Rings, video games like The Last of Us
Trends:
Markerless motion capture (e.g., AI-driven systems).
Real-time mocap for virtual production.
2. Performance Capture
Data Captured: Facial expressions, emotions, and detailed physical performances.
How Captured:
High-resolution cameras or helmet-mounted rigs focus on actors’ faces.
Subtle marker placement or AI-driven systems record muscle movements.
Usage:
Capturing realistic facial animations (e.g., Thanos in Avengers).
Enhancing emotional authenticity in characters.
Trends:
AI-based tools for facial reconstruction.
Real-time facial capture for live feedback.
3. Photogrammetry
Data Captured: 3D models and textures of objects, environments, or people.
How Captured:
Multiple photographs taken from various angles.
Software reconstructs detailed 3D geometry and textures.
Usage:
Creating realistic environments, props, or digital doubles.
Example: Realistic terrains and assets in The Mandalorian.
Trends:
Use of drones for large-scale environment capture.
Integration with AI for faster reconstruction.
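At the heart of photogrammetric reconstruction is triangulation: once the same feature is identified in two photos taken from known camera positions, its 3D location is recovered where the two viewing rays (nearly) meet. A minimal midpoint-triangulation sketch in Python, assuming each ray is given as an origin plus a direction (real pipelines solve this jointly for thousands of features and refine with bundle adjustment):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two camera rays.

    Finds the points on each ray that are closest to the other ray and
    returns their midpoint, which is the standard least-squares estimate
    of where two slightly noisy rays 'intersect'.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1
    p2 = o2 + t * d2
    return (p1 + p2) / 2
```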
4. Volumetric Capture
Data Captured: Full 3D representation of an object or person over time.
How Captured:
Arrays of cameras capture video from multiple angles.
Data is processed to create 3D volumetric videos.
Usage:
Immersive experiences in AR/VR.
Example: Digital humans in holographic projections.
Trends:
Cloud-based volumetric capture for accessibility.
Improved compression for streaming in real time.
5. Lidar (Light Detection and Ranging)
Data Captured: High-resolution spatial maps of environments.
How Captured:
A laser scanner measures distances to build a 3D point cloud.
Often used in conjunction with drones or handheld scanners.
Usage:
Recreating real-world locations as digital sets.
Example: Accurate environment reconstruction in Blade Runner 2049.
Trends:
Portable lidar for on-set scanning.
Integration with photogrammetry for photorealism.
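The core LiDAR measurement is simple to sketch: the scanner times a laser pulse's round trip and converts that time, plus the beam's angles, into one point of the cloud. A minimal Python illustration (real scanners add calibration, return intensity, and multiple returns per pulse; `lidar_point` is our own name):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lidar_point(t_return, azimuth, elevation):
    """Convert a laser return time and beam angles to an XYZ point.

    t_return is the full out-and-back travel time in seconds, so the
    range is half the distance light covers in that time. Angles are
    in radians; the result is in metres relative to the scanner.
    """
    r = C * t_return / 2.0
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)
```

Sweeping the beam over thousands of (azimuth, elevation) pairs per second yields the dense point clouds that get converted into 3D set models.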
6. 360° Video and Spherical Capture
Data Captured: Full panoramic video or image data.
How Captured:
360-degree cameras or rigs with multiple lenses.
Usage:
Background plates for virtual sets or immersive experiences.
Example: Environments in VR films and games.
Trends:
High-resolution 8K+ cameras for cinematic VR.
Stereoscopic 360° capture for depth realism.
7. Texture and Material Scanning
Data Captured: High-resolution textures, material properties (e.g., reflectivity, roughness).
How Captured:
Specialized tools like X-Rite or handheld scanners.
Usage:
Creating photorealistic assets with accurate material responses.
Example: Textured objects in The Lion King (2019).
Trends:
Real-time material scanning for seamless integration.
AI tools for texture synthesis.
8. Virtual Production Tools
Data Captured: Real-time motion, environment lighting, and actor positioning.
How Captured:
LED volumes and tracking systems for actors and cameras.
Usage:
Interactive real-time environments in films.
Example: Virtual sets in The Mandalorian.
Trends:
Advanced LED panels for higher fidelity.
Real-time Unreal Engine integration for dynamic updates.
Up and Coming Trends in Capture for VFX
AI-Driven Capture: AI improves accuracy, reduces the need for markers, and speeds up processing times.
Cloud-Based Solutions: Remote collaboration and cloud-based rendering streamline workflows.
Real-Time Feedback: Tools like Unreal Engine and Unity enable immediate visualization on set.
Integration with AR/VR: Enhanced interactivity and realism in immersive experiences.
Portable and Affordable Systems: Advances in consumer-level equipment expand accessibility for indie creators.
These capture technologies revolutionize how filmmakers and VFX artists bring stories to life, enabling greater creativity and efficiency across industries.
In VFX, Is the Captured Data Indexical or Iconic?
In the context of VFX, the data captured can be categorized as indexical or iconic depending on its relationship to the real world:
Indexical Data
Definition: Indexical data has a direct, physical relationship with the source object or event. It is captured through measurements or recordings that correspond to the actual object or environment.
Examples in VFX:
Lidar Scans: A point cloud map of a real-world location.
Photogrammetry: Photos directly reconstructed into a 3D model.
Motion Capture: Real movements of an actor’s body are mapped to digital skeletons.
Why Indexical?:
These processes derive their data directly from real-world phenomena, ensuring fidelity.
Iconic Data
Iconic data is representational and relies on resemblance rather than a direct physical connection to the source.
Examples in VFX:
3D Models Created from Scratch: Designers create assets resembling real-world objects but with no direct link to a captured source.
Hand-Keyed Animation: Animations designed by an artist to “look like” movement without motion capture.
Concept Art and Matte Paintings: Created to resemble environments or objects without being directly measured or scanned from reality.
Why Iconic?:
These are interpretative, using resemblance rather than a physical trace of reality.
Some data in VFX blends indexical and iconic properties. How?
Performance Capture: facial animations are based on real recordings (indexical) but are later stylized or exaggerated for emotional effect (iconic).
Textures: Scanned from real objects (indexical) but may be enhanced or modified digitally (iconic).
Class Activity:
What do you think ? Find examples
- List what you think are the advantages and disadvantages of motion capture (pros and cons).
- When, and for what kind of character, might the traditional keyframe technique be used?
- When, and for what kind of character, might motion capture be used?
Advantages of Motion Capture (MOCAP)
- Realism.
- Time Efficiency.
- Cost-Effective for Large Productions: while the setup can be expensive, it saves costs over time for large-scale productions with extensive animation needs.
- Integration with Virtual Production.
Disadvantages of Motion Capture (MOCAP)
- Limited Applicability: complex, non-human characters (e.g., dragons, aliens) often require traditional animation, as their movements don’t correspond to human anatomy.
- Dependence on Setup and Technology: requires a dedicated studio, expensive equipment, and skilled technicians; marker occlusion or technical glitches can cause inaccuracies.
- Stylistic Constraints: raw motion capture data might feel too realistic for stylized or exaggerated animation styles.
- Post-Processing Requirements.
When to Use Traditional Keyframe Animation
Traditional keyframe animation is better suited for non-human or fantasy characters.
Creatures with anatomy or movements impossible for humans, like dragons (Game of Thrones), talking animals (Zootopia), or magical beings (Frozen).
Stylized Movements.
When exaggerated or surreal motion is needed, such as in animated films (Spider-Man: Into the Spider-Verse).
Budget Constraints.
Smaller productions may prefer hand-keyed animation to avoid the high costs of motion capture technology.
Artistic Freedom.
Keyframing allows animators to create movements that convey emotion or storytelling in a more intentional way.
When to Use Motion Capture
Motion capture is ideal for Human or Human-Like Characters.
Characters like Gollum (The Lord of the Rings), Caesar (Planet of the Apes), or video game avatars (The Last of Us). Mocap provides realistic motion and facial expressions.
Realistic Fight or Dance Sequences.
Complex, coordinated human movements like in martial arts or dance-based films (Avatar).
Large-Scale Productions with Real-Time Needs.
Projects using virtual production or real-time animation pipelines (The Mandalorian).
Actor-Specific Performances.
When capturing the unique performance of a specific actor for emotional depth or realism (Thanos in Avengers: Infinity War).
Temporal Recordings / Capture
- The slides below show temporal drawings made by ghosting moving objects / light to form lines or marks across single and multiple images.
- They are types of temporal recordings.
- These temporal drawings are by-products of 3D scans and motion-capture takes.
- In the case of motion capture: locators fixed across the actor’s body, part of the motion-capture apparatus, can be made to reveal ghostly sweeping trails, motion paths, lines and swirls; effectively drawing movement in 3D space.
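Conceptually, such a ghosted trail is just the marker's last few sampled positions drawn together. A toy Python sketch of the ghosting idea (`ghost_trail` is our own illustration, not part of any mocap package):

```python
def ghost_trail(samples, frame, length=5):
    """Return the positions forming the ghosted trail behind a marker.

    samples: per-frame marker positions, in frame order.
    frame:   the current frame index.
    length:  how many recent samples the trail should contain.
    Connecting the returned points as a polyline 'draws' the recent
    movement through 3D space, fading with age if desired.
    """
    start = max(0, frame - length + 1)
    return samples[start:frame + 1]
```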


Temporal Capture Maya Scene

What is notable about motion capture is that when you watch it back transposed onto a digital character (in Maya), no matter how rough the character (think boxy robot!), your eye notices all the complexities and subtleties of movement derived from a real person.
HOMEWORK
Comparisons or Definitions of Motion capture vs Keyframe Animation.
Motion Capture (MoCap) and Key Frame Animation are techniques used to animate characters, but they differ in their approach and application.
In Motion Capture, actors wear special suits with sensors that track their movements. These movements are recorded and translated into digital animations, making the characters move in a realistic way. MoCap is widely used in realistic animations, like in video games and movies, where lifelike expressions and intricate body motions and nuances are essential. There is usually a post-process clean-up afterwards to align the animation properly in situations where the actors’ movement does not quite match up.
Key Frame Animation, on the other hand, is a manual process where animators create specific “key frames” that define important positions throughout the movement of the character, and then use a software to fill in-between the rest of the movement, between these frames, creating a smooth animation. This is ideal for cartoons or stylized films because animators have full control over how characters move, making room for imaginative or exaggerated actions. Key Frame Animation takes more time and skill but allows for more creativity.
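The "fill in-between" step described above is interpolation. A minimal Python sketch of linear interpolation between keyframes (animation software additionally offers spline and eased interpolation curves; `interpolate` is our own illustrative name):

```python
def interpolate(keyframes, t):
    """Linearly interpolate a value at time t from sorted (time, value) keys.

    Times before the first key or after the last clamp to the end values,
    matching the usual behaviour of an animation curve outside its range.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)   # fraction of the way between keys
            return v0 + u * (v1 - v0)
```

Evaluating curves like this per frame, per animated channel (translate, rotate, etc.) is what turns a handful of artist-set poses into continuous motion.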
Although MoCap excels at realistic movements, gestures and nuances, Key Frame Animation offers more flexibility for creative and imaginative effects. Depending on the project, both techniques can even be combined, with Mocap used as the base and Key Frame Animation added to give characters more expressive gestures. The choice of either method or both will be dependent on what the project at hand needs.
____________________________________________________________________________________________________________
WEEK 6
***Week 6: Reality Capture (LIDAR) and VFX***
Reality capture refers to the process of using advanced technologies to digitally record physical objects, environments, or movements from the real world and convert them into precise 2D or 3D digital models or data. This data serves as the foundation for creating photorealistic visual effects, simulations, and virtual environments. Reality capture relies on data collected directly from the physical world, such as images, laser scans, or motion data. It strives for high accuracy and detail, making digital models as close to their real-world counterparts as possible, and its applications span industries like film, architecture, gaming, engineering, and even medicine.
Reality capture technologies in visual effects (VFX) are used to create realistic, detailed, and dynamic digital environments, characters, and effects. Here’s a breakdown of a few Reality Capture technologies, how the data is captured, and whether the data is indexical:
1. LiDAR (Light Detection and Ranging)
What It Is: Uses laser scanning to create precise 3D maps and models of environments or objects.
How It’s Captured:
A LiDAR scanner emits laser beams and measures the time they take to return after hitting surfaces.
The data creates dense point clouds converted into 3D models.
Indexicality: Highly indexical as it is a direct measurement of physical surfaces.
In Avengers: Endgame, LiDAR scanning was used to capture real-world environments with high precision, allowing the integration of CG elements such as Thanos or massive battle scenes. For example, environment scans around Portland, Oregon, were used to align VFX elements with real-world locations.
2. Photogrammetry
What It Is: Captures 3D geometry and textures from 2D photographs by analyzing overlapping images from multiple angles.
How It’s Captured:
Take hundreds or thousands of high-resolution photos of an object, character, or environment.
Use specialized software (e.g., Agisoft Metashape, RealityCapture) to stitch photos into 3D models.
Indexicality: Photogrammetry is highly indexical since it directly derives data from real-world imagery.
Photogrammetry played a crucial role in creating the immersive visuals of James Cameron’s Avatar films. The technology was used extensively to capture the actors’ facial performances and transform them into highly detailed digital characters like the Na’vi, blending realism and creativity. For instance, high-resolution photogrammetry scans of actors’ faces were used to build incredibly lifelike digital doubles. These scans allowed the visual effects teams to capture intricate skin details, pores, and even subtle facial movements, which were then integrated into CG models using advanced rendering software like Maya and Wētā FX’s proprietary tools.
Moreover, photogrammetry wasn’t limited to human characters. The technique was used to scan real-world objects, environments, and even miniature models to create the stunning, photorealistic settings of Pandora. This approach ensured a seamless integration of live-action footage with computer-generated imagery, maintaining the immersive quality of the film.
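Behind packages like Metashape and RealityCapture sits one geometric idea, applied in far more general form: the position of a point is recovered from its appearance in overlapping views. A minimal sketch of the simplest case, two rectified pinhole cameras, with hypothetical rig numbers:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point seen in two rectified views (pinhole model).

    The same surface point appears at slightly different horizontal
    positions in the two images; that shift (disparity) is inversely
    proportional to depth: z = f * b / d.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    return focal_px * baseline_m / disparity

# Hypothetical rig: 1000 px focal length, 0.5 m baseline, 20 px disparity
z = depth_from_disparity(1000.0, 0.5, 620.0, 600.0)
```

Full photogrammetry generalizes this to hundreds of unordered photos, estimating the camera positions themselves along the way, but the depth recovery still rests on this triangulation.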
3. Motion Capture (MoCap)
What It Is: Captures the movement of actors or objects to animate digital characters.
How It’s Captured:
Sensors or reflective markers are attached to the actor’s body or face.
Multiple cameras track these markers in real time.
Indexicality: Partially indexical (movements are real), but data is abstracted into animation rigs.
4. Volumetric Capture
What It Is: Captures 3D volumetric videos of subjects from all angles, creating a 3D video that can be viewed from any perspective.
How It’s Captured:
Multi-camera setups record a subject from all angles simultaneously.
The data is processed into a 3D representation.
Indexicality: Highly indexical as it records reality in volumetric detail.
5. Scanning (Laser or Structured Light Scanning)
What It Is: Captures fine details of objects for 3D modeling.
How It’s Captured:
A laser or structured light scanner projects patterns onto a surface and captures how the patterns deform.
Indexicality: Indexical as it directly maps real-world geometry.
6. Photometric Stereo
What It Is: Captures surface details and textures by analyzing how light interacts with a surface.
How It’s Captured:
Multiple photographs are taken with varying light angles.
Software calculates surface normals and texture maps.
Indexicality: Indexical, as it represents physical surface details.
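The "software calculates surface normals" step can be illustrated for the classic Lambertian case: with three known light directions, the observed intensities determine one vector whose length is the albedo and whose direction is the surface normal. This is a textbook sketch (pure Python, Cramer's rule), not any particular package's implementation:

```python
import math

def surface_normal(light_dirs, intensities):
    """Recover a surface normal from three images lit from known directions.

    Lambertian model: intensity = albedo * dot(light, normal).  With
    three light directions the linear system L @ g = I has a unique
    solution g; the albedo is |g| and the unit normal is g / |g|.
    """
    (a, b, c), (d, e, f), (g_, h, i) = light_dirs
    I1, I2, I3 = intensities
    # Solve the 3x3 system by Cramer's rule
    det = a*(e*i - f*h) - b*(d*i - f*g_) + c*(d*h - e*g_)
    gx = (I1*(e*i - f*h) - b*(I2*i - f*I3) + c*(I2*h - e*I3)) / det
    gy = (a*(I2*i - f*I3) - I1*(d*i - f*g_) + c*(d*I3 - I2*g_)) / det
    gz = (a*(e*I3 - I2*h) - b*(d*I3 - I2*g_) + I1*(d*h - e*g_)) / det
    albedo = math.sqrt(gx*gx + gy*gy + gz*gz)
    return (gx/albedo, gy/albedo, gz/albedo), albedo

# Hypothetical setup: three unit light directions and the intensities
# they would produce for a surface facing straight up, albedo 0.8
lights = []
for v in ((1, 0, 1), (0, 1, 1), (-1, -1, 1)):
    n = math.sqrt(sum(comp * comp for comp in v))
    lights.append(tuple(comp / n for comp in v))
rho = 0.8
obs = [rho * L[2] for L in lights]  # dot(L, (0, 0, 1)) is just L.z
normal, albedo = surface_normal(lights, obs)
```

Running this per pixel yields the normal and texture maps the slide refers to; practical systems use many more lights and least-squares fitting for robustness.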
7. 360° and HDR Imaging
What It Is: Captures spherical images and high-dynamic-range (HDR) lighting for environment mapping and accurate lighting.
How It’s Captured:
Use a 360° camera or a DSLR with a fisheye lens.
Capture HDR by taking multiple exposures of the same scene.
Indexicality: Indexical, as it captures the real lighting and spatial layout of environments.
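The multi-exposure step can be sketched for a single pixel: each bracketed shot estimates scene radiance as value divided by shutter time, and a triangular weight discounts under- and over-exposed readings. A simplified sketch assuming a linear camera response (real pipelines also recover the response curve); all numbers are hypothetical:

```python
def merge_hdr(exposures):
    """Merge bracketed exposures of one pixel into a radiance estimate.

    Each (value, shutter_s) pair estimates radiance as value / shutter_s.
    A triangular ("hat") weight trusts mid-range values and discounts
    pixels that are nearly black or nearly clipped.
    """
    def weight(v):  # v in [0, 1]
        return max(1e-6, 1.0 - abs(2.0 * v - 1.0))

    num = sum(weight(v) * (v / t) for v, t in exposures)
    den = sum(weight(v) for v, t in exposures)
    return num / den

# A scene point of radiance 4.0 shot at 1/8 s, 1/30 s and 1/125 s
R = 4.0
shots = [(min(1.0, R * t), t) for t in (1 / 8, 1 / 30, 1 / 125)]
radiance = merge_hdr(shots)  # recovers the radiance from the bracket
```

The merged radiance values, stored for every direction of a spherical capture, are what lets a CG object be lit by the real location's light.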
Reality Capture is an umbrella term that covers the technologies and practice of 3D scanning.
3D scanning is the process of analyzing a physical object or environment to collect data about its shape, dimensions, and sometimes texture and color. This data is used to create precise 3D digital models that can be utilized for visualization, analysis, and manufacturing.
Trends of 3D Scanning
- Depth-Based Scanning
An example of depth-based scanning using Microsoft Kinect is capturing people or objects to create 3D models. Projects like using Kinect or similar RGB+Depth sensors to reconstruct faces or bodies demonstrate how depth information can be combined with RGB data to create detailed, textured models. A typical project could involve scanning small objects or even creating animations with scanned human models for educational or entertainment purposes.
- Laser Scanning (LiDAR)
A prominent use of LiDAR scanning is in cultural heritage projects, such as documenting ancient structures or historical sites. For instance, LiDAR is often used to capture the intricate details of monuments or landscapes, such as scanning the interior of ancient cathedrals to preserve their designs. A virtual 3D environment allows users to manipulate these scans for study or analysis, such as learning about architectural details.
- Photogrammetry
A project by Pix4D demonstrates photogrammetry in use for mapping a quarry. Using a drone with a high-resolution camera, hundreds of images (photographs) were captured from different angles and processed into a 3D model. This model was used to calculate dimensions, volumes, and other data essential for planning and educational purposes. Photogrammetry is also widely used for creating detailed 3D models of archaeological artifacts.
Practical Applications of scanning in VFX
Environments: LiDAR scanning captures detailed textures and terrains, enabling VFX teams to create lifelike backdrops.
Objects and props: Small to mid-scale LiDAR and structured light scanners allow artists to scan individual props, costumes, or characters.
Human and creature capture: Increasing use of 3D scanning for digital doubles provides a foundation for realistic character animations.
PERSPECTIVE
Perspective is a system used to represent three-dimensional objects on a two-dimensional surface, such as paper or a digital screen. This technique achieves the illusion of depth and space by foreshortening forms and positioning objects according to specific viewpoints.
Types of Perspective
Linear Perspective:
Based on a mathematical system that uses lines to create the illusion of depth.
Types:
- One-Point Perspective: All lines converge to a single vanishing point on the horizon. Common in hallway or road scenes.
- Two-Point Perspective: Two vanishing points are used, often for objects viewed at an angle, like buildings on a street corner.
- Three-Point Perspective: Adds a third vanishing point, typically above or below the horizon, to simulate extreme height or depth.
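All three linear systems above reduce to the same operation: dividing by depth. A minimal sketch of that projection shows why receding parallel lines converge on a vanishing point (one-point case; the road-edge coordinates are invented for illustration):

```python
def project(point, focal=1.0):
    """Project a 3D point (x, y, z) onto an image plane at distance focal.

    Perspective division by depth z is what makes parallel lines that
    recede from the viewer converge toward a vanishing point.
    """
    x, y, z = point
    return (focal * x / z, focal * y / z)

# The left edge of a road (x = -2, y = -1) receding along z:
edge = [project((-2.0, -1.0, z)) for z in (1.0, 10.0, 100.0, 1000.0)]
# successive projections shrink toward the vanishing point at (0, 0)
```

Lines parallel to the viewing axis all project toward (0, 0) here, which is exactly the single vanishing point of a hallway or road scene.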
Atmospheric Perspective (or Aerial Perspective):
Simulates depth by altering color, contrast, and clarity. Objects further away appear lighter, less detailed, and bluer due to atmospheric scattering.
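A simple digital analogue of this effect is exponential distance fog, which blends an object's colour toward the sky tint as distance grows. A sketch with hypothetical colours and a made-up density value:

```python
import math

def aerial_fade(obj_rgb, sky_rgb, distance, density=0.02):
    """Blend an object's colour toward the sky with distance (fog model).

    Uses the common exponential attenuation exp(-density * distance):
    nearby objects keep their own colour, distant ones wash out and
    shift toward the sky tint, mimicking atmospheric scattering.
    """
    keep = math.exp(-density * distance)
    return tuple(keep * o + (1.0 - keep) * s
                 for o, s in zip(obj_rgb, sky_rgb))

# A dark green hill against a pale blue sky, near versus far
near = aerial_fade((0.1, 0.3, 0.1), (0.7, 0.8, 1.0), 10.0)
far = aerial_fade((0.1, 0.3, 0.1), (0.7, 0.8, 1.0), 200.0)
```

The far sample ends up lighter and bluer than the near one, which is the painterly cue atmospheric perspective relies on.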
Isometric Perspective:
Maintains equal proportions and angles, creating a “flat” 3D appearance. Common in technical drawings and video games.
Forced Perspective:
Manipulates the viewer’s perception by placing objects at specific distances or angles to create illusions, often used in photography and film.
Analogue Perspective
Analogue perspective typically refers to non-digital or traditional methods of creating the illusion of depth and three-dimensionality in a two-dimensional representation. Unlike mathematically precise systems like linear perspective, analogue perspective often relies on intuitive techniques or naturally occurring visual cues to suggest spatial depth. Giotto di Bondone (1267–1337) is credited with advancing realistic depth through shading and overlapping in his frescoes; figures such as Filippo Brunelleschi and Leon Battista Alberti later demonstrated and codified linear perspective, marking the transition from analogue methods to mathematically grounded systems.
Digital Perspective
Digital perspective refers to the use of computational methods and digital tools to simulate, manipulate, or analyze perspective in visual media. It plays a crucial role in fields like visual effects (VFX), computer graphics, video games, architecture, and virtual reality, where the illusion of depth and spatial relationships must be represented accurately and flexibly.
What is Pictorial Space?
Pictorial space refers to the illusion of three-dimensional depth and spatial relationships within a two-dimensional artwork or image. It is the perceived “space” created by the artist to represent objects, figures, and their spatial arrangement on a flat surface. Pictorial space is a conceptual construct, often achieved through artistic techniques such as perspective, shading, and compositional elements.
De Pictura (1450) Principles
- Dimensions and proportions of forms
- Theories of Perspective
- Rules of composition
- Light and shading
- Theories of colour
- Representation, aesthetic and moral principles
Perspective Machines
Invented in the period following Alberti’s window device, these machines were later used by Da Vinci, Dürer and others to trace individual visual rays as invisible lines (or sometimes actual wires) from a point on an object to a point on a 2D plane. Renaissance perspective drawing could also involve taking measurements. Kemp writes that the drive to invent “a machine or device for the perfect imitation of nature” (1990, p. 167) existed in Western art from the Renaissance period until the invention of photography.
- Dürer was particularly interested in the foreshortening of objects, shown in his well-known woodcut of draughtsmen plotting points to capture the foreshortening of a lute (1525).
- This woodcut demonstrates the principle of a ray (or string, in this case) passing from a point on the object to an intersection on a 2D plane.
- This 2D point was marked by horizontal (x) and vertical (y) strings, which could then be transferred to paper in a rotating frame.
In a sense, the 3D LiDAR scanner is also a perspective machine.
A Bridge Between Physical and Digital Worlds
- The 3D computer space is what makes it possible to blend real-world scans with CGI, which would otherwise be impossible to achieve with traditional filming alone.
- This digital space serves as a bridge, allowing artists to create scenes that are both grounded in real-world physics and enhanced by digital effects.
So why is this important to 3D scanning?
Digital 3D space provides both a literal and a conceptual framework for Reality Capture data files to reside in. Conceptually, perspective methods and machines are similar to scanners in that they take measurements.
The 3D scanner takes millions of measurements to form digital replicas as point clouds (and in some cases geometry). The measurements are taken digitally, but they could also be taken in a slow, analogue way (think of mapmakers using triangulation), and the 3D scanned objects originate from measurements of actual, existing things.
So, in 3D space you can load or model 3D objects to imitate or simulate something, or, with 3D scanning, you can capture. 3D scans are not just another type of built 3D model: they take their dimensions and form from the world, they are not modelled by hand, they are digital replicas.
Analysis activity: LiDAR Scan vs. Photograph
Find a LiDAR Scan Example: Search for a LiDAR scan image online. Look for scans of landscapes, buildings, or famous landmarks like the Eiffel Tower or forests. Good sources include Sketchfab or scientific/architectural websites.
Describe key visual characteristics:
Point Cloud: LiDAR scans often appear as a collection of dots or points, not continuous surfaces.
Depth and Structure: They show shape and depth well but lack colour and texture details found in photos.
Wireframe Effect: Many scans have a skeletal, wireframe look, emphasising structure over surface details.
Compare to a photograph:
Surface and Texture: Photos have smooth, continuous surfaces with natural lighting, textures, and colours; LiDAR scans lack these.
Realism: Photos capture realistic lighting and shadows, while LiDAR focuses on accurate spatial data, appearing more mechanical or schematic.
This comparison will highlight how LiDAR emphasises 3D structure, while photographs focus on visual detail and realism.
HOMEWORK
Write a Case Study post on Reality Capture.
Reality Capture is a process and technology used to create accurate 3D digital representations of real-world objects, environments, or structures. By capturing spatial data, it enables the creation of models that can be used for various applications, from architecture and construction to virtual reality and gaming. Reality Capture combines different techniques, such as photogrammetry, laser scanning, and LiDAR, to gather precise 3D information and translate it into digital models.
Case Study: Reality Capture for the Restoration of Notre-Dame Cathedral
After the 2019 fire at Notre-Dame Cathedral in Paris, a team of engineers, architects, and preservationists used Reality Capture technology to aid its restoration. The objective was to digitally preserve the cathedral’s intricate details to ensure that reconstruction would honor the original design.
The team used a combination of laser scanning and photogrammetry to create high-resolution 3D models. LiDAR scanning of the interior and exterior captured millions of data points, allowing the team to digitally replicate unique architectural features like delicate carvings and vaulted ceilings, which traditional methods couldn’t easily document.
One of the main challenges was obtaining accurate data from the fire-damaged structure while working around safety risks in the unstable environment. Despite these difficulties, Reality Capture techniques yielded comprehensive and reliable data, allowing the restoration team to simulate the reconstruction process before physical work began. This careful planning minimized errors and helped preserve the integrity of the original design.
The use of Reality Capture technology accelerated the restoration process, keeping the project on schedule. Today, these 3D models serve as a permanent digital record of Notre-Dame, preserving its architectural legacy for future generations.
____________________________________________________________________________________________________________
WEEK 7
***Reality Capture (Photogrammetry) and VFX***
Digital Replicas and Facsimiles
A digital 3D facsimile is a highly accurate, digital three-dimensional reproduction of an object. Using techniques like photogrammetry scanning, a digital 3D facsimile replicates the appearance, shape, texture, and sometimes even the material properties of the original object as closely as possible. Photorealistic digital props and environments are digital facsimiles created from real-world objects or locations. VFX artists can use these photorealistic digital props in digital scenes.
Virtual sets and backgrounds can also be created from detailed 3D scans of buildings, landscapes, or iconic settings that can serve as virtual backdrops or environments. This then allows VFX artists to replicate real locations without needing to transport cast and crew to a physical location.
Digital Double
Actors and stunt doubles are often 3D scanned to create digital facsimiles. Some re-modelling is done afterwards before these models can be rigged and animated.
What was the Digital Michelangelo Project?
The Digital Michelangelo Project was a landmark initiative launched in 1997 by a team of researchers from Stanford University, led by Professor Marc Levoy. The project’s goal was to use advanced 3D scanning technology to create highly detailed digital replicas of Michelangelo’s sculptures and other historical artifacts, capturing them with unprecedented accuracy and precision. The project focused on several of Michelangelo’s masterpieces, including the David, the Pietà, and Moses, as well as smaller pieces like the Prisoners (Slaves). A custom 3D laser scanner was developed, capable of capturing fine details like tool marks and surface textures at a resolution of 0.1 millimeters. Scanning such large sculptures, however, presented significant logistical hurdles: crowded museum environments, the safety of the statues, limited accessibility, and the sheer volume of data collected. The project generated terabytes of data, creating 3D models detailed enough to serve as a virtual backup of the sculptures.
Key Objectives:
- Preservation: Create a high-resolution digital record of Michelangelo’s masterpieces to preserve their form and detail against potential future damage or degradation.
- Study: Provide art historians, researchers, and students with detailed, manipulatable 3D models to better study the artist’s techniques, processes, and creative genius.
- Public Access: Enable broader access to Michelangelo’s works by allowing virtual interaction with his sculptures for those unable to visit the physical locations.
- To create and test Stanford’s “project to build a 3D fax machine” (Graphics Stanford, 1995, para. 1).
The project not only preserved Michelangelo’s works digitally but also set a new standard for cultural heritage preservation using digital technology. It inspired future endeavors in the digitization of historical artifacts and advanced the fields of computer graphics, 3D modeling, and art history.
Laser Stripe Scanning
Laser stripe scanning is a 3D digitization method used to capture the shape and surface details of objects by projecting a thin laser line (or stripe) onto the object’s surface and recording its deformation using cameras. The technique is commonly employed for creating highly detailed digital models of artifacts, sculptures, or industrial components. This method was used to capture fine details in the Digital Michelangelo Project.

[Figure: Motorized gantry positioned in front of Michelangelo’s David; from the ground to the top of the scanner head is 7.5 meters.]
[Figure: The red stripe generated by the laser triangulation scanner sweeps across the face of the David; by analyzing these sweeps, the team digitized the David at a spatial resolution of 0.29 mm.]
[Figure: Computer renderings from a 2.0 mm, 8-million-polygon model of the David; the veining and reflectance are artificial, and the renderings include physically correct subsurface scattering, but with arbitrary parameters.]
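The stripe-sweep principle reduces to simple triangulation: knowing the laser's angle and its offset from the camera, the pixel where the stripe appears fixes the depth. A 2D sketch with invented rig numbers (not the Stanford scanner's actual geometry):

```python
import math

def stripe_depth(pixel_x, focal_px, baseline_m, laser_angle_deg):
    """Depth of a point lit by a laser stripe (2D triangulation sketch).

    The camera sits at the origin looking along +z; the laser sits at
    (baseline, 0) and fires at laser_angle measured from the baseline.
    The depth is where the camera ray through the pixel meets the beam:
    z = b * tan(theta) / (1 + tan(phi) * tan(theta)).
    """
    tan_phi = pixel_x / focal_px              # camera ray slope
    tan_theta = math.tan(math.radians(laser_angle_deg))
    return baseline_m * tan_theta / (1.0 + tan_phi * tan_theta)

# Hypothetical setup: 0.3 m baseline, 45-degree beam, stripe seen
# 100 px off-centre through a 1000 px focal length
z = stripe_depth(100.0, 1000.0, 0.3, 45.0)
```

Sweeping the stripe across the surface and repeating this per pixel of the imaged line is what produces the dense depth profiles the gantry scans collected.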
The Veronica Scanner
The Veronica Chorographic Scanner (referred to as the Veronica) is a bespoke 3D scanner designed by Manuel Franquelo Jr. and built at Factum Arte to record faces and objects within a 50 x 50 x 50 cm range. Originally conceived for the anti-ageing industry, the Veronica is specifically designed to capture the fine surface detail of the human face. The technology is not limited to faces – any object that fits within the range of focus and can remain still for 4 seconds may be captured as well. There are currently two versions of the scanner: a ‘mask’ scanner and a ‘bust’ scanner.
The high-resolution 3D recording system was developed through the contributions of artists, artisans and technicians working under one roof at Factum Arte in Madrid. It is based on the emerging technologies behind composite photography and photogrammetry. These techniques have now enabled the possibility within photographic portraiture of quickly recording 3D form as well as 2D, which can then be viewed on screen or re-materialised through a range of 3D printing and prototyping systems.
The theory behind the Veronica is based on multi-view photogrammetry. Photogrammetry has existed since the birth of modern photography in the 19th c. and is the science of making measurements from photographs. Technically speaking, it is the process of capturing a subject from multiple angles, then subsequently collating and aligning these images based on similar points of reference processed using a range of different visual computing algorithms, in order to create a three dimensional representation of the subject.
A Note On Resolution
There is a great deal of misunderstanding about accuracy and resolution. The way we are using the term resolution refers to the level of detail a 3D file holds. We evaluate the resolution not just by a theoretical or mathematical description of the sensor of the scanner, but by the correspondence between the scanned data and the original surface. Close range scanners have greater correspondence to the surface of the object than long-range scanners.
The main variables that affect the resolution are the lenses, the sensors, the area that is being scanned and the software algorithms that process the data. Somewhere in this mix of elements is the sweet-spot that will result in data that passes the ‘mimesis test’ – If it looks like a sweet that has been sucked it has failed – if in direct comparison with the original it looks the same, then it has succeeded.
Mimesis
Mimesis refers to the process of creating visual elements that imitate real-world appearances, physics, and behaviors to achieve lifelike or believable imagery. It is about replicating the natural world’s intricacies, whether through physical realism, biological accuracy, or emotional resonance, in digital or augmented spaces. This concept is central to making VFX convincing and immersive, blending seamlessly with live-action footage or portraying realistic environments, characters, and phenomena. Mimesis in VFX is not just about imitation but also about storytelling. By creating believable visuals, filmmakers immerse audiences in worlds where the boundaries between the real and imagined blur, enhancing emotional and intellectual engagement.
Mimesis Test
The term “mimesis test” refers to the concept of evaluating how well a visual or digital creation imitates reality, particularly in fields like VFX, AI, or art. In this context, a mimesis test involves assessing the realism and believability of an effect, simulation, or representation.
In layman’s terms, the goal of the mimesis concept is to create an image or object that closely resembles its real-world counterpart. The concept acknowledges that meaning resides in the real things themselves, and the representation’s job is to “mirror” or “copy” that reality.
Limitations of Mimesis
Even the most realistic, “straight” (unmanipulated) images differ from their subjects in fundamental ways:
- Physical Nature: Images are 2D, silent, and static, while the real world is 3D, dynamic, noisy, and complex.
- Artificial Frame: Images, such as photographs, are bounded by rectangular frames, whereas real-world objects exist in an infinite spatial environment.
Optical Realism vs. Actual Likeness
Optical Realism and Actual Likeness are concepts within the framework of mimesis that distinguish between the visual appearance of reality and a deeper, more intrinsic resemblance to the essence or truth of what is being represented.
Optical Realism: This refers to a surface-level imitation of reality, focused on accurately replicating how things look to the human eye. It emphasizes details like texture, lighting, and spatial proportions
Actual Likeness: This goes beyond appearances to capture the inner truth, essence, or character of the subject. It reflects an understanding of what the subject represents rather than just how it looks.
Even highly realistic images (like photographs, film, or video) only partially resemble their subjects. Adding elements like sound and motion in video and film may bridge some gaps but doesn’t eliminate the representational distance from reality.
Mimesis and Indexicality
I think scanning is both indexical and mimetic. It is indexical because it emphasizes a physical or causal connection between the representation and the represented, just as photographs, videos, or recordings serve as traces of reality. Scanning is also mimetic, as the final product becomes an imitation or mimicry of the represented.
Verisimilitude
Verisimilitude, in the context of visual effects, refers to the quality of being believable or having the appearance of truth. It is a measure of how convincingly the digital elements, whether characters, environments, or effects, blend with the real-world footage, or create a realistic world in a fully computer-generated environment. Verisimilitude is less about photorealism and more about achieving a sense of believability: while photorealism focuses on creating visual details as close to reality as possible, verisimilitude ensures the audience perceives the digital creations as consistent within the film’s world. For example, a character like Thanos in Avengers: Infinity War may not look exactly like a real person, but his design and animation must feel believable within the context of the fictional universe.
Difference between Verisimilitude and Mimesis:
- Mimesis is the imitation or replication of nature, life, or reality in art. Originating from Greek philosophy, it’s the idea of art mirroring or copying the world around us.
- Verisimilitude is the quality of seeming true or appearing realistic. In art and film, it refers to how believable or convincing a representation is, not necessarily whether it is a direct copy of reality.
Hyperrealism
Hyperrealism in VFX refers to the creation of visual effects that surpass traditional realism by not only imitating real-world visuals but also enhancing or exaggerating details to make them more visually striking and engaging, often to the point where they appear more “real” than reality itself. It involves using advanced techniques to simulate elements like lighting, textures, physics, and even human emotions in a way that heightens the realism beyond what is typically seen in the natural world. Unlike photorealism, which aims to match real-world visuals closely, hyperrealism often enhances reality by presenting it in an exaggeratedly perfect, flawless, or stylized way. This style is commonly used in film and gaming to create immersive experiences that captivate audiences.
WEEK 8
***Simulacra, Simulation and the Hyperreal***
Simulation And Simulacra
The concept of “Simulacra vs. Simulation” originates from the work of French philosopher Jean Baudrillard, particularly in his book Simulacra and Simulation (1981). What is the difference between simulacra and simulation? Simulation describes a process in which things that appear to be real are not, in fact, real. A simulacrum, on the other hand, is an object that is not real but appears to be real. For example, the plastic figurine popular with children today is a simulacrum of a real person.
The idea of simulation is a little more complex. A simulation is a representation that appears to be real but is not: the representation itself is a real artefact, yet what it depicts is not real. For example, when you watch a movie, you are watching a simulation. In the film, the characters and the world around them seem to be real but are actually fake.
Simulation
Simulation involves creating a representation of a system or environment that mimics the real world but is not actually real. Simulation is most often applied in creating realistic behaviors or dynamics.
Examples include:
- Physics simulations (e.g., simulating water, fire, explosions, or collapsing buildings).
- Crowd simulations (e.g., generating thousands of moving characters in a battle scene).
- Weather or environmental effects (e.g., snowstorms, fog, or rain).
Simulations strive to replicate the real-world processes accurately, often aiming for photorealism or physically plausible results.
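All of the examples above rest on numerical integration: step time forward and update velocities and positions from physical laws. A deliberately tiny sketch, a single particle dropped under gravity with semi-implicit Euler stepping (production solvers are vastly more elaborate, but the loop is the same in spirit):

```python
def simulate_fall(height_m, dt=0.01, g=9.81):
    """Minimal physics simulation: a particle dropped from height_m.

    Semi-implicit Euler integration -- the same time-stepping idea
    that, at vastly larger scale, drives water, fire and destruction
    simulations.  Returns the simulated time until the ground is hit.
    """
    z, v, t = height_m, 0.0, 0.0
    while z > 0.0:
        v -= g * dt          # update velocity from gravity
        z += v * dt          # update position from velocity
        t += dt
    return t

# A 20 m drop; analytically sqrt(2 * 20 / 9.81) is about 2.02 s
t = simulate_fall(20.0)
```

The result mimics reality (it matches the analytic fall time closely) without being real, which is precisely the sense of "simulation" used here.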
Simulacra
Simulacra are copies or representations of things that may not have a true original or are significantly altered from their source. This could refer to hyperrealistic digital recreations of objects, environments, or even people that go beyond merely mimicking reality.
For example:
- Digital doubles of actors.
- Entirely computer-generated worlds (e.g., Avatar’s Pandora).
- Artificially constructed creatures or phenomena (like a dragon or magical explosion).
Simulacra in VFX are often not tethered to a need for realism; instead, they exist to evoke their own reality or meaning, one that viewers accept even if they know it isn’t real.
In his 1981 work, theorist Jean Baudrillard introduces the concepts of simulacra and simulation; for him, both are pervasive in modern society and have had a number of profound effects.
Baudrillard posits that society has moved beyond the point of reality into a world of simulation. That is to say, what we experience is not reality, but rather a copy of reality. In Baudrillard’s view, this copy is not an exact replica of reality, but rather a distorted version of it.
An order of simulacra would be, for example, a representation of a person’s face; a stage of simulacra would be a representation of a person’s name. In Baudrillard’s view, orders and stages of simulacra are two different types of simulacra, and the difference between them is based on how the simulacra are being used. If the simulacra are used to represent the original, then the order and stage of simulacra are the same. However, if the simulacra are used to represent something else, then the order and stage of simulacra differ. Baudrillard states that the stages of simulacra are ‘simulated’ and the orders of simulacra are ‘representations’.
4 Stages of the Sign-Order
Sign-order: Phases of the Image
- It is the reflection of a profound reality;
- It masks and denatures a profound reality;
- It masks the absence of a profound reality;
- It has no relation to any reality whatsoever: it is its own pure simulacrum.
(Baudrillard, 2010, p.6)
1) The first stage is a faithful image/copy, where we believe, and it may even be correct, that a sign is a “reflection of a profound reality” (p. 6). This is a good appearance, in what Baudrillard called “the sacramental order”. For example, a photo of a king on a piece of paper: we have faith that it is the king in question.
2) The second stage is perversion of reality, this is where we come to believe the sign to be an unfaithful copy, which “masks and denatures” reality as an “evil appearance—it is of the order of maleficence”. Here, signs and images do not faithfully reveal reality to us, but can hint at the existence of an obscure reality which the sign itself is incapable of encapsulating. For example, a fake painting, which is made to look like a real painting. The fake painting is capable of communicating something to the viewer, but the viewer is not capable of deciphering what this means.
3) The third stage masks the absence of a profound reality, where the simulacrum pretends to be a faithful copy, but it is a copy with no original. Signs and images claim to represent something real, but no representation is taking place and arbitrary images are merely suggested as things which they have no relationship to. Baudrillard calls this the “order of sorcery”, a regime of semantic algebra where all human meaning is conjured artificially to appear as a reference to the (increasingly) hermetic truth. For example, an advertisement which uses a celebrity to sell a product. The celebrity is not actually endorsing the product, but their image is used in order to make the product seem more appealing.
4) The fourth stage is pure simulation, in which the simulacrum has no relationship to any reality – there is no longer any distinction between the original and the copy. For example, in the movie The Matrix, the characters are actually living in a simulated world, and the world around them is fake.
Orders of Simulacra
According to Baudrillard the world, as we know it now, is constructed on the representation of representations. These simulations exist to fool us into thinking that an identifiable reality exists. Baudrillard’s orders of simulacra exist as follows:
1) The first order of simulacra focuses on counterfeits and false images. In this instance the sign no longer refers to that which it is obligated to refer to, but rather to produced signifieds. In this level, signs cease to have obligatory meanings. Instead the sign becomes more important than the physical. That is to say that the focus is placed on the sign rather than on what it is intended to represent. For example, a picture of a Madonna might be placed in a church. The picture would take on religious meaning even though it is not the real Madonna. In this way, the simulacrum has taken on a life of its own and is no longer bound by its original meaning.
2) The second order of simulacra is when the simulacrum is so far removed from its original that it can no longer be said to be a copy. In this order signs become repetitive and begin to make individuals the same. Signs refer to the differentiation between the represented signifieds, not to reality. For example, a picture of a person might be placed in a gallery and the viewer might see multiple representations of the same person. This is a type of simulacra that is derived from the first order. In this way, the individual is no longer a unique individual but rather a collection of similar individual representations.
3) The third order of simulacra is that which is referred to as the simulacrum of the real. This simulacrum is a form of representation that is not based on the reproduction of the original but rather on the appropriation of the original signified. For example, a person might be represented by a computer graphic image of themselves. This simulacrum is based on the idea that the individual has an essence and is simply represented in the image. This type of simulacrum is an instance of a non-semantic simulacrum.
Baudrillard believed that society had reached a point where the simulacra (i.e. the copies) were more real than the actual reality. In other words, we are more likely to believe what we see in the movies than we are to believe what is actually happening in the world around us. This is because movies often present a more attractive or appealing version of reality than actuality. As such, Baudrillard’s theory can be used to explain why people are often drawn to movies over actual life experiences. In the modern world, as of 2023, Baudrillard’s theory has been extremely influential in the development of internet communication.
Virtual worlds such as Second Life are created to replicate the real world in order to bring people together. The ‘real’ world of Second Life can be seen as a simulacral representation of the physical world. The metaverse, as it is known, is the result of an attempt to simulate reality in a way that is both virtual and infinite, and it can be used to facilitate communication between people.
Reference:
Unearned Wisdom: A Summary of Simulacra and Simulation by Baudrillard (Accessed: 18 November 2024)
Borges Fable and The Precession of Simulacra
Baudrillard uses a story about a map that outlasts the Empire it represents to illustrate how simulations (signs) replace reality. The distinction between original and copy dissolves, leading to simulacra: representations with no original. In VFX, digital models and environments often take precedence over physical counterparts, blurring the line between real and virtual.
- The Empire: Original
- Map of the Empire: Copy/Simulation
Once all evidence of the Empire (the original) has gone, the map, according to Baudrillard, stands as a copy without an original: a simulacrum.
“It is the generation by models of a real without origin or reality: a hyperreal.” (Baudrillard, 2010, p.1)
“The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory – precession of simulacra –” (Baudrillard, 2010, p.1)
Hyperreality
“Baudrillard defined “hyperreality” as “the generation by models of a real without origin or reality”; hyperreality is a representation, a sign, without an original referent. The prefix ‘hyper’ signifies more real than real whereby the real is produced as per model. Baudrillard in particular suggests that the world we live in has been replaced by a copy world, where we seek simulated stimuli and nothing more.” (Santoso and Wedawatti, 2019, p.69)
- Hyperreality: is a state where the distinction between the real and the simulated blurs completely, leading to the creation of a “real” that is generated from models without origin—more real than real.
- Simulacra: representations that stand alone, without an original.
Hyperreality Journal
Objective: List your personal encounters with simulacra and hyperreality.
For me it was the movie The Matrix. Watching it for the first time was mind-blowing. The multiple digital doubles of Agent Smith, the bullet-dodging, the levitation and many other effects were incredible to watch. The environments looked as they would in the real world, and the on-screen interactions were near flawless for 1999. Now, years later, all I see are the CGI effects I was originally blind to.
It was really believable back then because of the amount of detail used in simulating reality in the movie.
The movie is now its own simulacrum, which other movies rely on or try to imitate.
ClassWork 1
Image Analysis based on Baudrillard’s theory of Simulation and Simulacra
Task: On your digital sketchbooks please illustrate Baudrillard’s concept of the four phases of the image and hyperreality.
For example,
- find a famous painting (1st phase),

The painting “Girl with a Pearl Earring” by Johannes Vermeer.
This is the original, the real painting, which faithfully depicts reality.
An astonishing piece of work, I dare say.
- a photo of that painting (2nd phase),
This is an image of the image: a distorted version of the original that still retains some real-world characteristics of the original painting.
- a digitally edited version (3rd phase),


This image depicts the absence of reality in comparison to the original image. The image itself has been changed and modified to create its own ‘real’ image.
- and a fully computer-generated image version with no real-world referent (4th phase)


This image has no original and is itself its own original, as there is no real-world copy of it, making it its own simulacrum.
Categorise which phase of the image each represents, according to Baudrillard.
Writing: After categorising, make notes on how each stage alters perception of the “real” and how this affects visual effects work in media.
ClassWork 2
Image Search and Analysis
Objective: Visually illustrate Baudrillard’s phases of the image and hyperreality.
Task: Find examples online that represent each of Baudrillard’s four phases of the image:
- A faithful representation (e.g., a photograph of a real object).
The original image of the Lamborghini Reventon.
- A distorted representation (e.g., a parody or edited photo).
It looks like a Lamborghini and has elements of the vehicle, like the shape, and space for lights, glass and tyres. But it is not real, and is far removed from the original.
- A copy of something that doesn’t exist (e.g., a digital rendering of a fantasy creature).
This image has wings like an angel and a similar shape to the Lamborghini, but that is as far as it goes. Here it loses the basic reality of the original Lamborghini.
- A simulacrum—an image with no reference to reality (e.g., an entirely CGI world with no basis in reality).
This image is its own simulacrum, as it has no original and no real-world reference. Its shape and features are entirely changed.
Writing: Write a brief explanation of why each image fits the phase you chose, relating it back to Baudrillard’s theory.
Discussion: Present images and justifications to the class, discussing how these images might influence how we perceive reality.
Homework
VFX Breakdown: Exploring the Phases of the Image
Have a look at how visual effects move through Baudrillard’s four phases of the image, from faithful representations to pure simulacra.
Task: Choose a film scene that utilises VFX (e.g., the city folding in Inception or the digital landscapes in Avatar).
Break the scene you have chosen down into elements that represent each of Baudrillard’s phases.
Phase 1: Identify elements in the scene that are direct representations of reality (digitally enhanced but still faithful to the real world).
Phase 2: Highlight areas where the VFX distorts reality (exaggerated physics or surreal effects that look realistic but aren’t true to life).
Phase 3: Pinpoint parts of the scene that create the illusion of something that doesn’t exist in reality, with no original referent (a mythical creature or a sci-fi object).
Phase 4: Identify purely simulated aspects, entirely digital environments with no reference to the real world (full CGI shots).
Reflect on how each phase impacts the audience’s perception of the “real” and how VFX can create hyperreal experiences. Reflect on how filmmakers use these techniques to manipulate reality and generate emotional responses.
Phase 1
The wall he’s climbing and the floor are faithful representations of the real world. The higher he climbs, the farther away the ground gets.
Phase 2
The zooming-out effect behind Spider-Man looks real enough, but it is exaggerated when you notice the ground he covers while climbing the wall versus the rate at which the floor is moving away.
Phase 3
The ground behind him getting smaller and smaller doesn’t actually exist, as a green screen is used, which is later replaced with complementary images.
Phase 4
The man climbing the wall on all fours. In reality he’s crawling on the ground with a green screen behind him as the ground plate; the ground is added in post-production.
____________________________________________________________________________________________________________
WEEK 9
***Virtual Filmmaking***
What is Virtual Production?
Example of Shows
House of the Dragon (2022–present)
This show used virtual production for complex dragon-riding sequences and large-scale environments, allowing the actors to interact with LED screens displaying pre-rendered settings, providing natural lighting and reflections.
Westworld (Season 4, 2022)
This show used virtual production to create futuristic cityscapes and otherworldly environments using LED volumes, enhancing visual fidelity and reducing the reliance on post-production.
Real-Time Rendering: Virtual production relies on game engines like Unreal or Unity to render 3D environments in real-time. This allows filmmakers to see digital sets, backgrounds, or effects as they would appear in the final shot, directly on set.
LED Volume: Instead of shooting against traditional green screens, filmmakers use massive LED walls that display real-time, photorealistic 3D backgrounds. These walls provide realistic lighting and reflections for actors and objects on set.
Motion Capture (MoCap): Motion capture suits or systems track an actor’s movements, allowing their performance to be applied directly to digital characters or avatars.
Camera Tracking: Virtual cameras are synchronized with physical cameras, allowing the digital background to adjust dynamically to the camera’s movement, maintaining realistic perspectives and parallax.
Previsualization (Previs): Directors and crew can plan and visualize scenes in a virtual environment before shooting. This helps in blocking shots, setting up lighting, and refining sequences.
In-Camera VFX: Visual effects are integrated directly into the camera feed during shooting, eliminating the need for extensive post-production compositing.
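The camera tracking described above keeps perspective and parallax correct as the physical camera moves. The principle can be sketched with a minimal pinhole-projection example (a hypothetical illustration with made-up values, not production code): nearby points shift more on screen than distant ones when the camera translates, and the synchronized virtual background must reproduce exactly this behaviour.

```python
# Minimal sketch of the parallax principle behind camera tracking.
# Point positions, camera offsets and focal length are illustrative.

def project(point, cam_x, focal=1.0):
    """Pinhole projection of a 3D point (x, y, z) for a camera
    translated along x; returns the horizontal screen coordinate."""
    x, y, z = point
    return focal * (x - cam_x) / z

near_point = (0.0, 0.0, 2.0)   # 2 units from the camera
far_point = (0.0, 0.0, 20.0)   # 10x farther away

# Move the camera 0.5 units to the right and compare screen-space shifts.
shift_near = project(near_point, 0.5) - project(near_point, 0.0)
shift_far = project(far_point, 0.5) - project(far_point, 0.0)

print(shift_near)  # -0.25  (large shift: close object)
print(shift_far)   # -0.025 (small shift: distant object)
```

Because the distant point barely moves on screen, a flat static backdrop would look wrong the moment the camera travels; tracking the physical camera and re-rendering the LED wall in real time restores the correct per-depth parallax.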
Virtual production is a complex pipeline
- Previsualization (Previs)
- Pitch Visualization (Pitchvis)
- Technical Visualization (Techvis)
- Stunt Visualization (Stuntvis)
- Post-Visualization (Postvis)
- In-Camera VFX (ICVFX)
Benefits of Virtual Production:
Creative Freedom: Directors can experiment with different lighting, angles, and settings without the constraints of physical locations.
Cost and Time Efficiency: Reduces the need for extensive location shoots and post-production VFX work.
Real-Time Collaboration: Teams can make creative decisions on the fly, adjusting environments, props, and effects during filming.
Immersive Actor Experience: Actors can perform with visible backdrops and realistic lighting, enhancing their performances.
Virtual production is used in filmmaking, TV shows, video game cinematics, commercials, and even live events. Notable examples include Disney’s The Mandalorian, where extensive use of LED volumes and real-time rendering revolutionized the production process. This technology is rapidly transforming how stories are told, making production more adaptable and immersive.
LED Screens
More recently, the use of LED screens combined with Unreal Engine has revolutionised virtual production. LED screens, often arranged in large volumes, serve as dynamic backdrops that display real-time rendered environments created in Unreal Engine. This setup allows filmmakers to capture both live-action and virtual elements simultaneously, enhancing realism and, in theory, streamlining scene production.
LED screens with Unreal Engine in virtual production
- Dynamic Backdrops LED screens arranged in large volumes display real-time rendered environments from Unreal Engine.
- Realistic Lighting LED screens provide realistic lighting and reflections on actors and sets
- Seamless Integration Combines live-action and virtual elements, creating a seamless blend between physical and digital worlds.
- Immediate Adjustments Allow for real-time changes to the virtual environment, such as time of day or weather, without extensive post-production.
- Motion Tracking Ensures the virtual background moves in sync with the camera, maintaining correct perspective and scale.
- Creative Possibilities Opens up new creative opportunities, making complex and visually stunning scenes more accessible and cost-effective.
This technology was notably used in The Mandalorian, setting a new standard for virtual production.
Virtual Production Evolution
Virtual production has evolved significantly over the years, starting with techniques like front and rear projection used in the mid-20th century. These methods involved projecting pre-recorded footage onto screens behind actors, as seen in films like North by Northwest (1959) and 2001: A Space Odyssey (1968). Blue and green screens allowed for more complex backgrounds to be added in post-production.
A Few Things to Note
- LED volumes don’t give off hard light, so additional lights have to be used to project hard light for hard shadows.
- Prep work and rehearsals go a long way in real-time production.
- On the day of the shoot, the outcome might change slightly, but the preparation gives room for creativity.
VFX Production has evolved from Chemical to Digital and now Virtual.
____________________________________________________________________________________________________________
2ND ASSIGNMENT CHOICE:
OPTION 3: How do Spectacular, Invisible, and Seamless Visual Effects Influence Modern Filmmaking? Use examples or case studies to analyse one type of effect or compare two or more. In your essay, you may choose to:
- Focus on one type of effect (Spectacular, Invisible, or Seamless) and explore its techniques, impact on storytelling, and contribution to audience experience.
- Compare two or more types of effects to discuss their similarities, differences, and how they align or diverge in their use and purpose.
- In either approach, engage with theories of photorealism and consider how these effects contribute to immersion, narrative development, and visual storytelling. You may wish to examine the role of compositing, photographic manipulation, or the integration of 3D elements with live action.
____________________________________________________________________________________________________________
WEEK 10
***Library Research***
PLANNING YOUR ASSIGNMENT
https://www.uwl.ac.uk/current-students/support-students/study-support/planning-your-assignment
EBSCO
https://research.ebsco.com/c/v5takb/search/advanced/filters?autocorrect=y&expanders=fullText&expanders=concept&isDashboardExpanded=false&limiters=FT1%3AY&q=%28%22virtual+production%22+OR+%22Visual+Effects%22+OR+VFX%29+AND+Seamless+AND+%28Mandalorian+OR+Avatar%29&resetPageNumber=true&searchMode=all&searchSegment=all-results&skipResultsFetch=true
PERLEGO
https://ereader.perlego.com/1/book/4231207/10?page_number=24
____________________________________________________________________________________________________________
WEEK 11 – 12
***My Powerpoint Class Presentation***