Current Trends of VFX

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 1 – Introduction to the module

What is a trend?

In the context of Visual Effects, a trend refers to a prevailing style, technique, or technology that is gaining momentum and being widely adopted in the industry over a particular period. It represents a shift in how visual effects are created, perceived, or applied in media production, often driven by new technologies, audience expectations, creative innovation, or advancements in hardware and software. Trends in VFX can span several aspects, including specific techniques (e.g., particle simulations, motion capture), aesthetics (e.g., photorealistic rendering), or even conceptual themes (e.g., deepfake technology or virtual humans).

How do we know when a trend is emerging?

  • Widespread Adoption: When a specific technique or style begins to appear in multiple high-profile projects (films, TV shows, video games, etc.) over a short period, it’s a strong indication that it’s becoming a trend.
  • Innovative Technology or Software: The introduction of new tools or platforms that enable previously impossible effects can lead to the rapid adoption of a new VFX technique. For example, the rise of real-time rendering with game engines like Unreal Engine has influenced both film and video game industries, allowing for more immersive and interactive visuals.
  • Creative Shift: When artists, directors, or studios begin to experiment with a new visual language or artistic approach that resonates with audiences, it can lead to a wave of similar productions. The growing use of stylized, 3D-animated characters, for example, has been a trend in the animation industry.
  • Audience and Critical Reception: Positive reactions from audiences and industry critics can accelerate the adoption of a specific visual technique, as it signals that the effect resonates well with viewers, which can set a new standard.

What are the current trends of VFX?

Real Time Rendering

Image: “Abandoned streets” real-time render in Unreal Engine 5 (Epic Developer Community showcase)

What it is:
Real-time rendering is the process of generating graphics instantly during the production process, allowing for immediate feedback. Unlike traditional rendering, which can take hours or days for a single frame, real-time rendering displays the visual output as soon as the computation happens. This is largely driven by advancements in game engines like Unreal Engine and Unity.
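
To make the difference concrete, here is a minimal sketch (in Python, with invented numbers) of the fixed frame budget a real-time renderer has to hit every frame; the “render” step is only a placeholder for the real shading, lighting, and compositing work.

```python
# A bare-bones real-time loop: every frame must be produced inside a fixed time
# budget (about 33 ms at 30 fps), unlike offline rendering where one frame can
# take hours. The "render" below is just a stand-in computation.
import time

FPS = 30
FRAME_BUDGET = 1.0 / FPS          # seconds available per frame

def render_frame(frame_number):
    # Placeholder for the real work: shading, lighting, driving the LED wall, etc.
    return sum(i * i for i in range(10_000)) + frame_number

for frame in range(90):                               # roughly three seconds of "footage"
    start = time.perf_counter()
    render_frame(frame)
    elapsed = time.perf_counter() - start
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))      # idle for whatever is left of the budget
```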

Key Applications:

  • Film and TV production: Filmmakers can visualize environments and scenes live on set, integrating CGI elements and interacting with them in real time.
  • Video games: Used for creating immersive, dynamic, and interactive worlds.
  • Virtual events: Live streams, concerts, and sports events are increasingly leveraging real-time rendering for virtual effects.

Example:

The Mandalorian (2019): The use of real-time rendering with Unreal Engine enabled the series to showcase realistic environments on large LED screens during production, instead of relying on post-production CGI.

Virtual Production

What it is:
Virtual production combines digital tools and traditional filmmaking techniques, using real-time VFX to allow for interaction between live-action footage and virtual environments. Filming with virtual sets or environments on LED screens or green screens offers seamless integration between the physical and digital worlds.

Key Applications:

  • Filmmaking: Filmmakers can shoot scenes with virtual sets, landscapes, or backdrops in real-time, making the integration of CGI much smoother.
  • TV Series: The ability to capture dynamic lighting, reflections, and interactions between live actors and virtual elements.
  • Commercials and Music Videos: Cost-effective solutions for environments that would otherwise be difficult or impossible to create.

Example:

  • The Mandalorian (2019): Used the Stagecraft system with large LED screens to project 3D environments around the actors, allowing real-time adjustments to lighting, reflections, and virtual environments, all while the scene was being filmed.

Why it’s a trend:

  • Enhances actor performance and interaction with realistic digital environments.

  • Reduces location shooting and set-building costs.

  • Increases production speed with faster set-ups and less reliance on post-production.

Deepfake Technology and AI-assisted VFX

Image: how to spot a deepfake (Verdict)

What it is:

Deepfake technology uses artificial intelligence (AI) and machine learning to create hyper-realistic alterations to video and images. This technology can be used to swap faces, alter performances, or even de-age actors. It’s being used increasingly in film and advertising, as well as in digital media.
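
As a rough illustration only, the sketch below shows the shared-encoder / two-decoder layout behind classic face-swap models. It is untrained, the sizes and weights are invented, and it exists purely to make the data flow visible: encode a face of person A, then decode it with person B’s decoder.

```python
# Toy, untrained sketch of a face-swap ("deepfake") architecture: one shared
# encoder plus one decoder per identity. All sizes and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
LATENT = 64
PIXELS = 64 * 64                        # a 64x64 greyscale face, flattened

W_enc = rng.normal(size=(LATENT, PIXELS)) * 0.01    # shared encoder
W_dec_a = rng.normal(size=(PIXELS, LATENT)) * 0.01  # decoder for person A
W_dec_b = rng.normal(size=(PIXELS, LATENT)) * 0.01  # decoder for person B

def encode(face):                       # face -> shared latent code
    return np.tanh(W_enc @ face)

def decode(code, W_dec):                # latent code -> reconstructed face
    return W_dec @ code

face_a = rng.random(PIXELS)             # stand-in for a photo of person A
swapped = decode(encode(face_a), W_dec_b)   # A's pose/expression through B's decoder
print(swapped.shape)                    # (4096,) - after training, this would look like B
```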

Key Applications:

  • De-aging of actors: Recreating younger versions of actors (e.g., Robert De Niro in The Irishman).
  • Digital recreation of actors: Bringing deceased actors back to life or continuing a character’s story after the actor is no longer available.
  • Entertainment and advertising: Virtual influencers or marketing campaigns using AI-generated characters.

Example:

  • The Irishman (2019): Used AI-assisted de-aging to digitally rejuvenate its actors, allowing them to portray younger versions of themselves with startling realism.
  • Rogue One: A Star Wars Story (2016): Used digital face replacement and CGI, an approach deepfakes now echo, to resurrect Peter Cushing as Grand Moff Tarkin and to re-create a young Carrie Fisher as Princess Leia.

Why it’s a trend:

  • It allows filmmakers to craft incredibly realistic digital doubles, making characters look younger or appear as different people.
  • Offers an efficient and creative solution for certain storytelling needs.
  • It has become more accessible due to AI advancements and improved VFX software.

Photorealistic CGI characters and creatures

Image: CGI dragon from Game of Thrones

What it is:
Photorealism in CGI refers to creating digital characters or creatures that look indistinguishable from real-life actors or animals. This trend is driven by advancements in rendering technology, such as ray tracing and improved motion capture.
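
Since ray tracing is named as one of the drivers, here is a minimal sketch of the test at its core: checking whether a ray fired from the camera hits a sphere, and how far away. The scene values are invented.

```python
# Minimal ray-sphere intersection: the basic visibility test inside a ray tracer.
import numpy as np

def hit_sphere(origin, direction, centre, radius):
    """Return the distance along the ray to the first hit, or None if it misses."""
    oc = origin - centre
    a = direction @ direction
    b = 2.0 * (oc @ direction)
    c = oc @ oc - radius * radius
    disc = b * b - 4 * a * c            # discriminant of the quadratic
    if disc < 0:
        return None                     # the ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / (2 * a)  # nearer of the two intersection points
    return t if t > 0 else None

ray_origin = np.array([0.0, 0.0, 0.0])
ray_dir = np.array([0.0, 0.0, -1.0])               # looking down the -z axis
print(hit_sphere(ray_origin, ray_dir, np.array([0.0, 0.0, -5.0]), 1.0))   # 4.0
```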

Key Applications:

  • Live-action films: Characters or creatures that seamlessly blend into live-action scenes.
  • Animated films: Creating highly detailed characters with photorealistic textures and movements.
  • Video games: Enhancing player immersion with realistic NPCs (non-player characters) and environments.

Example:

  • Avatar: The Way of Water (2022): Pushed the boundaries of realism in digital characters and creatures. The motion capture of actors, combined with photorealistic CGI, created some of the most lifelike computer-generated creatures and environments in film history.
  • The Lion King (2019): A hyper-realistic version of the beloved animated classic, featuring photorealistic CGI animals that looked almost indistinguishable from real wildlife.

Why it’s a trend:

  • The increasing capability of rendering engines to produce photorealistic visuals.
  • Creates more immersive experiences, especially in films and games.
  • Demands for high-quality visual storytelling that doesn’t sacrifice detail or realism.

Motion Capture and Performance Capture Advancements

Image: Smaug motion-capture session, The Hobbit: The Desolation of Smaug (Warner Bros.)

What it is:
Motion capture (mocap) technology records the movements of live actors and translates them into digital characters, while performance capture (perf-capture) goes further by recording facial expressions, voice, and emotion. This technology has dramatically improved the realism and depth of CGI characters.
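
A toy example of what happens to that data downstream: captured joint rotations are applied to a simple two-bone digital arm using forward kinematics. The angles and bone lengths here are invented; real pipelines stream hundreds of joints per frame, but the per-joint maths is the same idea.

```python
# Applying captured joint rotations (one frame) to a two-bone 2D arm.
import numpy as np

def rot_z(degrees):
    r = np.radians(degrees)
    return np.array([[np.cos(r), -np.sin(r)],
                     [np.sin(r),  np.cos(r)]])

captured_frame = {"shoulder": 30.0, "elbow": 45.0}   # hypothetical mocap angles

upper_arm = np.array([1.0, 0.0])     # bone vectors in the rig's rest pose
forearm = np.array([1.0, 0.0])

shoulder_rot = rot_z(captured_frame["shoulder"])
elbow_rot = shoulder_rot @ rot_z(captured_frame["elbow"])   # child joint inherits its parent's rotation

elbow_pos = shoulder_rot @ upper_arm
wrist_pos = elbow_pos + elbow_rot @ forearm
print(elbow_pos, wrist_pos)          # where the digital elbow and wrist end up this frame
```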

Key Applications:

  • Film and TV: Capturing the movements and expressions of actors to animate digital characters.
  • Video Games: Realistic character movements and emotions in games.
  • Virtual Avatars: Used in VR and AR to create lifelike avatars for digital interaction.

Example:

  • Avatar (2009) and Avatar: The Way of Water (2022): Both films used motion and performance capture to create the Na’vi characters, capturing every subtle nuance of the actors’ movements and facial expressions.
  • The Last of Us Part II (2020): The game’s lifelike character animations were achieved through advanced motion and performance capture, which created an emotional depth rarely seen in video games.

Why it’s a trend:

  • As capture technology becomes more accurate, it’s easier to create characters that respond naturally and expressively.
  • The blending of performance and CGI enables more nuanced storytelling in both films and games.
  • Advances in real-time mocap tools allow more immediate and seamless integration into productions.

Hyper Stylized Visuals in Animation

Image: still from Spider-Man: Into the Spider-Verse (2018)

What it is:
A trend towards distinctive, artistic, and visually unique animation styles that differ from the traditional CGI realism. These animations often combine 2D elements with 3D rendering, bold colors, and unconventional visuals to create a unique look.

Key Applications:

  • Animated Films: Creative visual styles that depart from realism and explore new art forms.
  • Commercials and Music Videos: Eye-catching and innovative styles that grab attention.
  • Video Games: Games using distinct, artistic visual designs to set them apart from photorealistic games.

Example:

  • Spider-Man: Into the Spider-Verse (2018): The film used a groundbreaking combination of 3D animation and 2D comic book-style visuals, creating a vibrant, unique aesthetic that pushed the boundaries of animated films.
  • The Mitchells vs. the Machines (2021): A similar approach, blending 3D animation with hand-drawn textures and expressive designs, creating a visually engaging and original look.

Why it’s a trend:

  • Filmmakers and animators are embracing more creative freedom, exploring new ways to tell stories visually.
  • Audiences are seeking more distinctive, memorable visuals that stand out in a crowded media landscape.
  • Advances in animation technology have made it easier to experiment with new techniques and looks.

Virtual Humans and Digital Doubles

Image: digital double (“digi-double”) example

What it is:
Virtual humans, or digital doubles, are hyper-realistic, computer-generated replicas of real people or entirely fictional characters. They are often powered by AI and machine learning to mimic human behavior and speech in real time.

Key Applications:

  • Advertising and Marketing: Virtual influencers like Lil Miquela have become popular in digital spaces.
  • Video Games and VR: Creating lifelike avatars for interactive media experiences.
  • Entertainment and Films: Digital actors or characters that are indistinguishable from humans.

Example:

  • Lil Miquela: A virtual influencer who has a massive following on social media, creating a new form of digital celebrity.
  • K-pop Group Aespa: The members of this group perform alongside digital avatars of themselves, creating a hybrid virtual/live performance.

Why it’s a trend:

  • The increasing realism of CGI and AI-driven avatars offers new possibilities for marketing, entertainment, and digital engagement.
  • Virtual humans allow for new forms of interaction, creating novel experiences for audiences and consumers.


The age of the image

Why do we photograph things?

We photograph things because it’s a way to capture and hold onto moments, whether for memory, creativity, or connection. It’s like making a small time capsule that we can revisit later, allowing us to remember experiences, people, and places that might otherwise slip away. At the same time, photography is a way of expressing how we see the world, showing others our perspective or emotions through images. Sometimes, we take pictures to communicate something without words—to tell a story or share something important.

For many, photography also has a way of making us more present in the moment. When we stop to take a photo, we often pay closer attention to the details around us, whether it’s the light, the texture, or the feeling of the scene. It’s a way of appreciating the world more deeply, whether we’re capturing something beautiful, meaningful, or just something that feels right to us in that instant.

In a broader sense, photography is a tool for documenting life—our history, our culture, our everyday experiences. It’s how we leave traces of who we are and what we’ve seen, whether for ourselves or for others.

Memory Preservation

Photography allows us to capture moments, preserving them for the future. These images can serve as a way to hold onto important life events, people, and experiences. When we look back at photographs, they act as a visual record of memories, helping us remember details that might otherwise fade over time.

Expression and Creativity

Photography is a powerful form of artistic expression. Through composition, lighting, and subject matter, photographers can convey emotions, tell stories, or explore concepts. For many, photography is an outlet for creativity, offering a way to capture their unique perspective on the world.

Communication and Connection

Photographs can be a means of communicating without words. Whether it’s a family portrait, an image documenting an event, or an artistic representation, photographs convey messages, evoke emotions, and allow us to connect with others. For example, photojournalism captures social, political, or cultural moments that can shape public opinion and raise awareness.

Sense of Control

Taking photographs can offer a sense of control over the environment or moment. By framing a shot, adjusting settings, and choosing subjects, we get to decide what is worth capturing, and how to do so. This control allows us to emphasize certain elements and exclude others, shaping the viewer’s perception of the scene.

Documenting and Recording History

On a broader level, photography plays a critical role in documenting events, places, and cultures. Historical photographs serve as important records for understanding our past. In this sense, photography can act as both an artistic and an archival tool, preserving details for future generations.

Validation and Sharing

In the age of social media, photographs often serve as a way to validate experiences and share personal narratives. People take photos to showcase moments they want to share with others, whether it’s an achievement, a special moment, or just a snapshot of their daily life. This sharing creates connection, feedback, and a sense of community.

 

Why are we addicted to images?

We’re addicted to images because they’re more accessible than ever before and they tap into deep psychological needs. With smartphones and social media, we’re constantly surrounded by images—whether it’s a picture shared by a friend, an ad, or a stunning landscape posted by an influencer. The ease of taking photos and sharing them instantly has made images ubiquitous, and because they’re so easy to access, we can’t help but engage with them constantly.

Our brains are wired to process images quickly, which gives us immediate emotional gratification. Whether it’s a beautiful scene, something funny, or an intense moment, pictures communicate a lot in an instant, triggering strong feelings right away. This instant emotional impact makes images irresistibly engaging. On top of that, the constant availability of images—on social media, websites, and news feeds—creates a cycle of consumption that’s hard to break. The more we scroll, the more we crave that quick satisfaction, and this loop gets reinforced by the dopamine hits we get when we like, share, or see something that resonates with us.

Images also serve as a way to connect with others, gain social validation, and express our lives. Posting photos on platforms like Instagram or TikTok has become a major form of communication, and getting likes or comments becomes a way of measuring approval and connection. This social validation, combined with the visual stimulation, makes it even more addictive.

Moreover, images often present idealized versions of reality, creating feelings of FOMO (fear of missing out) or the desire to keep up with trends. We end up consuming more images to stay in the loop or compare ourselves to others. And because images offer us an easy way to escape or imagine new worlds—whether it’s through travel photos or curated lifestyles—they also provide a temporary escape from the routine of our own lives, making them even more appealing.

In short, the accessibility of images in today’s digital world, combined with their ability to evoke quick emotional responses, create connection, and offer validation, makes them incredibly addictive. They’re not only everywhere but also deeply woven into how we experience and interact with the world.

 

Edgerton, his work and his impact on visual effects

Image: Harold Edgerton

Visual effects in films that could have been inspired by Edgerton’s work:

  • The Matrix – bullet time (“How to Do That Slo-Mo ‘Matrix’ Bullet Time Effect on a Budget”)
  • “The most iconic CGI and VFX scenes in movies” (Fudge Animation Studios)
  • X-Men: Days of Future Past – Quicksilver’s slow-motion scene (WIRED)
  • “The Best War Films to Stream this week on Lockdown” (Manchester’s Finest)
  • Gravity
  • https://youtu.be/FZfOvvGV5Q4?si=1gXh5a9rYgMNuAcI
  • “The Laws of Physics Do Not Apply to Legolas” (WIRED)

Written Post 1 – What do you think Dr James Fox means by his phrase ‘The Age of the Image’?

In Dr. James Fox’s documentary Age of the Image: A New Reality, he introduces the concept of “The Age of the Image,” suggesting that modern-day society has become reliant on and addicted to images. 

Fox points out that every historical era is defined by unique characteristics: the 18th century is recognized as the age of philosophy, and the 19th century is the age of the novel. In this context, he claims our current age is dominated by images. 

While images have existed since ancient times, the past century has seen a massive increase in their quantity and accessibility due to advancements in photography. This shift has transformed how we communicate, express ourselves, and make sense of the world. 

Fox emphasizes that we now document nearly every aspect of our lives through images, significant or not. In the past, photography preserved cherished memories, but with modern technology, it has shifted toward validation and proof. The introduction of the Brownie camera in 1900 revolutionized photography, making it accessible to the masses and turning stiff portraits into snapshots of genuine emotions and everyday moments. In our current age, the ease of capturing images with smartphones has led to photographs holding less meaning. Instead of serving as frozen moments of the past, they often become evidence of experiences. 

In conclusion, Dr. James Fox’s phrase “The Age of the Image” describes a transformative period where images have become central to our understanding of the world. Their accessibility and manipulation have reshaped how we capture, perceive, and share experiences. While images remain vital for expression, they also challenge our understanding of reality and meaning. 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 2 – The Photographic Truth Claim – Can we believe what we see?

Image: illustration of Plato’s Allegory of the Cave

The Allegory of the Cave

The Allegory of the Cave encapsulates several core aspects of Plato’s philosophy in a concise and accessible parable. It explores themes of knowledge, perception, and enlightenment, offering profound insights into human nature and the process of learning.

In the allegory, the prisoners represent common people—individuals who are confined to a limited understanding of the world. They are trapped in a cave, forced to face a wall where shadows are cast by objects behind them. These shadows symbolize the distorted reality that many people experience, as they only know what they can see and perceive, without a true understanding of the world around them. The cave, in this sense, represents the world we live in, filled with illusion and misperception, where individuals often rely on preconceived notions or surface-level experiences.

The prisoners’ ignorance is not necessarily their fault—it is their starting point. They do not know that there is more to reality because they have never seen beyond the shadows on the wall. This represents how people, by nature, are often limited by their own experiences, conditioning, and environment. Enlightenment, or true knowledge, requires breaking free from these limitations, but this process cannot be rushed or forced.

One of the central ideas of the allegory is that you cannot simply impose your point of view upon others. Instead, patience and empathy are key in guiding others toward greater understanding. The philosopher or enlightened individual, upon discovering the truth outside the cave, cannot force others to accept the truth. The allegory suggests that meaningful change comes through cooperative dialogue, where differing perspectives can be discussed openly and critically. By fostering a space for reflection and critical thinking, we encourage individuals to question their assumptions and explore deeper truths.

Ultimately, the allegory highlights the importance of intellectual growth, self-awareness, and the transformative power of knowledge, while also emphasizing the value of patience and dialogue in the pursuit of enlightenment. Through this method, we can stimulate critical thinking and open the path for others to move beyond the shadows of their own limited perceptions.

Plato’s Allegory of the Cave also offers profound insights into perception, reality, and enlightenment, themes that have been explored extensively in visual storytelling and VFX across various forms of media, from early cinema to modern video games and virtual reality (VR). While Plato’s allegory isn’t directly about visual effects, its core concepts have been skillfully represented in the way we perceive and interact with images in contemporary media.


 

Representation of Plato’s allegory in visual storytelling and VFX

In Plato’s allegory, the prisoners are shackled in a cave, only able to see shadows cast on a wall, believing these shadows to be their entire reality. This reflects how we, as viewers, perceive what is presented to us through visual storytelling—whether in film, television, or even video games. In this context, visual effects (VFX) play a key role in shaping these perceptions. Just as the prisoners mistake shadows for reality, audiences are often “tricked” into seeing VFX-generated imagery as real. The powerful illusion of VFX creates a constructed world where the audience accepts what they see on screen as reality, even though they know it is a fabricated representation.

Example:
Early cinema, such as the famous scene in L’Arrivée d’un Train en Gare de La Ciotat (1896), offers one of the earliest examples of cinematic illusion. When the audience saw the image of a train approaching the camera, many viewers, unfamiliar with cinema, were reportedly shocked and believed the train was about to crash into them. This is analogous to Plato’s cave: the viewers were seeing a representation of reality (the train) but mistook it for something they would experience in the physical world.

Similarly, animals watching early television have been known to mistake the moving images on screen for real-life objects, unable to discern the difference between reality and the images projected on the screen. This reflects the way the prisoners in the cave cannot distinguish between the shadows on the wall and the true world outside.

In modern contexts, such as virtual reality (VR) or video games, the allegory’s themes are even more pronounced. In VR, the user is immersed in an entirely different version of reality, which, although artificial, feels entirely real to the participant. Just as the prisoners in the cave are bound by their limited perceptions, VR offers an alternative reality, one that can be manipulated and shaped by designers. These immersive experiences trick the mind into believing that the virtual world is the true environment, reflecting Plato’s notion of the world being a mere representation of deeper truths.

How does VFX relate to Plato’s allegory?

  1. Shadows as Illusions and Crafted Realities: In Plato’s allegory, the shadows cast on the wall represent a false reality. Similarly, VFX artists create illusions—such as fantastical creatures, impossible landscapes, or epic battle scenes—that become perceived “realities” within the context of the film or game. However, much like the shadows, these images are not the truth; they are carefully crafted representations of reality designed to evoke specific responses from the audience.

    • Example: In films like The Matrix or Inception, reality itself is questioned. The visual effects manipulate our perception of the world, showing that what we see may not be the truth, just as the prisoners are fooled by the shadows on the wall.
  2. The Pursuit of Knowledge and Striving for Truth: The prisoner’s escape from the cave and journey toward understanding the truth represents the quest for knowledge in Plato’s allegory. In the world of VFX, this mirrors the evolution of visual effects technology. As VFX tools and techniques advance, filmmakers strive to create increasingly accurate, immersive representations of reality. The more convincing these visual effects become, the more they challenge the audience to question what is real, echoing the allegory’s theme of moving from ignorance to enlightenment.

    • Example: The shift from traditional hand-drawn animation to photorealistic CGI in films like Avatar (2009) shows the progression of visual effects technology toward creating an ever-more detailed version of reality. Audiences, over time, become more sophisticated in their ability to distinguish between what is real and what is an illusion.
  3. Perspective and Perception in VFX: In the allegory, the prisoners are limited by their narrow perspective, seeing only the shadows on the wall. In visual storytelling, perspective plays a crucial role in shaping how the audience interprets the narrative. The way VFX is employed can guide the audience’s understanding of a story and influence their emotional response. VFX can either reinforce the narrative’s deeper meanings or obscure them, just as the prisoners’ view of reality is distorted by their limited perspective.

    • Example: In films like The Truman Show (1998), where the protagonist’s entire world is a constructed set, the use of visual effects highlights how our perception of the world is shaped by the environments around us. The manipulation of the visual world can make us question the authenticity of what we see.
  4. The Role of the Philosopher and the Filmmaker: In Plato’s allegory, the philosopher, having escaped the cave, represents someone who knows the greater truth and has a responsibility to enlighten others. In the realm of VFX, filmmakers and artists can be seen as “philosophers,” using their craft to shape perceptions and lead audiences toward deeper understanding. Their decisions on how to present visual effects can either enhance the story’s meaning or create misleading representations that keep audiences in the dark.

    • Example: In The Matrix (1999), the character Neo represents the philosopher who escapes the cave of illusion and comes to understand the true nature of reality. The film’s VFX, such as the iconic “bullet time” effect, visually represent the breaking of conventional reality and the journey toward knowledge.

VFX and Perception

In both film and video games, VFX manipulates our perception of reality in ways that challenge our understanding of what is true. VFX doesn’t just enhance the visual experience—it actively shapes how we interact with and interpret the narrative. Just as the prisoners in Plato’s cave mistake shadows for the truth, audiences are often tricked into believing in the reality of the images they see on screen, even though they are just carefully constructed illusions.

In scenes that use compositing or simulation techniques, VFX artists manipulate photographs, 3D models, or even real-world footage to create something that feels real, yet is completely fictional. This technique plays into the concept of simulated reality, where the boundaries between what is real and what is not become increasingly blurred. By asking the audience to question what they see, VFX encourages critical thinking about the nature of reality and the power of visual manipulation.

Example: In films like Inception or Doctor Strange, VFX bends and twists reality—buildings fold in on themselves, time and space warp—forcing the audience to question the nature of the world they are experiencing. These effects don’t just serve to impress, but to provoke thought about the distinction between the real world and the fabricated one being shown on screen.

VFX as the modern “cave”

VFX in modern cinema, video games, and VR acts as a metaphorical “cave,” where audiences are immersed in constructed realities that feel real yet are ultimately artificial. Through these mediums, visual storytelling takes the audience on a journey of perception, much like the prisoner who escapes the cave. The increasing sophistication of VFX leads us to question the boundaries between illusion and truth, echoing Plato’s message that our perceptions are not always aligned with reality. Just as the philosopher’s role is to help others understand the true nature of the world, filmmakers and VFX artists have the power to illuminate deeper truths or create elaborate illusions that may leave audiences questioning what is real.

Can you think of any examples of where this is happening?

Where are images becoming more like reality, and reality becoming more like images?

  • Realistic CG animals

3D creature artists use photographs of real animals as reference and then bring them to life digitally, so convincingly that audiences believe the virtual animals are real.

it takes a loop

 

Images: Framestore’s VFX breakdowns for The Crown (befores & afters)

Image: the CG tiger from Life of Pi

Recreating historic events or places through the use of VFX

Images: The Crown, Season 5 VFX breakdowns (Rumble VFX) – final results

Architectural visualisation

 

Social Media – manipulating images

Removing tourists from pictures

Deepfakes

Image: “Deepfakes: Hello, Adele – is it really you?” (ZEIT ONLINE)

 

The Photographic Truth Claim

The index points towards an object that existed before the lens.

Photographs as Indexical signs

A photograph holds a trace, footprint, or fingerprint of reality.

This is referred to as indexicality.

Photographic Index – Andre Bazin

The case for photography as an indexical medium was advanced by André Bazin in his famous paper ‘The Ontology of the Photographic Image’ (1960).

Whilst Bazin did not refer to the term ‘index’, he asserted that the invention of photography was the most important invention of the plastic arts.


Semiotics

  • Icon – a sign that resembles or imitates what it represents. For instance, a photograph of a tree is an icon of a tree because it visually resembles it.
  • Index – refers to a sign that has a direct, causal connection to what it represents. For example, smoke is an index of fire; the presence of smoke indicates that there is fire.
  • Symbol – a sign whose connection to what it represents rests on convention or agreement rather than resemblance or causation. For example, a red traffic light symbolises “stop”, and words symbolise the things they name.

 

Analysing the impact of VFX on Photographic Truth

In this activity, please search for VFX heavy images and analyze how visual effects challenge the photographic truth claim (in the images) using semiotic concepts.

 

Analyze an image

  • Indexicality – Are there any traces of the real world, or is it fully fabricated?

  • Iconicity – Does it resemble reality convincingly, or is it fully fictional?

  • Photographic truth – Does it claim to represent reality, and how does this challenge traditional truth claims?

Image: Velociraptors in Jurassic World

Jurassic World

  • Indexicality – Yes: the environment and the human actor are traces of the real world.

  • Iconicity – The dinosaurs are fully fictional, yet they convincingly represent how real dinosaurs might have looked.

  • Photographic truth – It reads as a photorealistic scene, but the addition of the dinosaurs makes it fictional.

What do you think is meant by Photography Truth Claim?

The “Photographic Truth Claim” is the belief that photography captures reality with high objectivity, distinguishing it from other art forms like drawing or painting, which are more open to interpretation. This idea stems from the way cameras work: they directly record light, giving audiences the impression of an unbiased depiction of the real world. 

 In Tom Gunning’s discussion of the “truth claim,” he brings up the semiotic theory, describing photographs as “indexical signs”—images made through a physical process linked to the objects they depict. This indexicality makes photography seem objective. However, Gunning challenges this view by pointing out that photographs can be manipulated or staged, undermining their reputation as pure reflections of reality.  

Technical choices, like the type of lens or film used and digital editing tools like Photoshop, allow photographers to alter images, sometimes making them more about interpretation than a clear depiction of reality. 

Gunning also highlights that photography has always been an art form that allowed for high levels of creativity. Since its early days, photographers have used techniques like staging scenes, choosing specific angles, and adjusting lighting, showing that photography has never been purely about documenting reality. 

In conclusion, while people have long seen photography as an objective medium, this belief is increasingly challenged. Gunning’s argument shows that photography can be manipulated and crafted to convey emotion or influence rather than just factual accuracy. Its use in art, advertising, and persuasion illustrates that it has always been more than just a straightforward record of reality. 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 3 – Faking photographs: Image manipulation, computer collage and the impression of reality

“One way or another, a photograph provides evidence about a scene, about the way things were, and most of us have a strong intuitive feeling that it provides better evidence than any other kind of picture. We feel that the evidence it presents corresponds in some strong sense to reality and, in accordance with the correspondence theory of truth, that it is true because it does so.” – Mitchell (1998, p. 24)

“Faced with an image on a screen, we no longer know if the image testifies to the existence of that which it depicts, or if it simply constructs a world that has no independent existence.” – Cassetti (2011)

 

The quotes from Mitchell (1998) and Cassetti (2011) highlight a profound shift in how we perceive images, especially in the context of photography and visual media. They touch upon the tension between the truth a photograph is believed to represent and the potential for images—whether photographs or computer-generated visuals—to create realities that may not exist in the physical world.

The Photographic Image as evidence of reality (Mitchell 1998)

In Mitchell’s work, he points out that photographs have long been viewed as reliable evidence of the world as it is, offering a direct connection to reality. People often feel that a photograph is a truthful representation of what was in front of the camera at the moment it was taken. This perception is rooted in the correspondence theory of truth, which asserts that something is true if it corresponds to the way things really are in the world.

For example, when we look at a photograph, we intuitively believe that it provides evidence about the scene it depicts—the way things were at a particular time and place. Photographs, in this sense, seem to offer a window into the past, capturing the “truth” of a moment, unmediated by the manipulation of the photographer (or at least, minimally manipulated). This belief is so ingrained in our visual culture that photographs have a sort of “authenticity” that other forms of imagery, such as paintings or drawings, do not possess to the same degree.

However, this view begins to unravel when we consider the manipulations and constructions behind the creation of any image, whether it’s a photograph or a digitally rendered visual in modern media.

The crisis of certainty in visual media (Cassetti 2011)

Cassetti’s quote introduces a more critical perspective. As technology advances, particularly with the rise of digital media, visual effects (VFX), and computer-generated imagery (CGI), we are faced with images that can no longer be easily trusted as representations of real, existing things. Today, an image might no longer testify to the existence of something in the world. Instead, it may be the product of a constructed world—one that has no real counterpart in the physical world at all.

This idea taps into the postmodern concern about the collapse of distinctions between reality and representation. With VFX and digital technologies, filmmakers and artists can create hyper-realistic worlds, characters, and objects that appear “real,” but are entirely fictional or digitally generated. The illusion of reality can be so convincing (e.g., in films like Avatar or The Matrix) that we start to question whether the images we are presented with correspond to anything outside of the screen.

For example, in VR environments, players might interact with digital objects or people that feel real in the moment but have no tangible existence in the physical world. The same is true for highly realistic computer-generated imagery in movies, where a character might look, move, and behave exactly like a real person, but be entirely made of pixels.

Cassetti’s point is that in such contexts, we face a crisis of certainty: Can we still trust the image to convey truth? Are these constructed worlds simply illusions, or do they represent new forms of reality? As technology progresses, the line between the real and the imaginary becomes increasingly blurred, and the “truth” that images present becomes more subjective, questioning whether an image really serves as a reliable testimony to the world or if it simply constructs a reality that we willingly accept.

The role of photography in the digital age

With the ubiquity of social media and digital photography, many people now have access to advanced editing tools that can manipulate images, further complicating our relationship with photographic truth. The public’s increasing awareness of deepfake technology and image manipulation adds to this uncertainty, as we become more skeptical of whether any image—whether a photograph or a video—accurately reflects reality.

In a digital age, images are often created, modified, and shared without direct reference to actual events. They may represent a constructed version of reality, a curated truth, or a fictional world altogether. This erosion of trust in images as evidence of reality challenges the long-held belief that photographs are an unmediated reflection of the world.

The philosophical implications

The tension between the belief that photographs are truthful and the understanding that images can construct alternative realities invites deeper philosophical questions about the nature of truth and representation. In a world where images are no longer guaranteed to represent reality, we are forced to question how we define truth itself.

  • Is truth simply what we see?
    Mitchell’s perspective aligns with the correspondence theory of truth, where the image corresponds to reality and is accepted as true because it reflects what actually existed. Yet, as technology advances, this model becomes more complicated.
  • Or is truth constructed by the image?
    Cassetti’s view suggests a more constructivist approach: that images, particularly digital ones, are not mere reflections of the world but active constructions that shape our perception of reality. If reality can be constructed so convincingly through VFX and digital media, does it change our perception of what is “real”?

Conclusion

Mitchell (1998) and Cassetti (2011) highlight a profound shift in how we understand and interact with images. The trust we once placed in photographs as truthful representations of reality has been undermined by the increasing sophistication of digital technologies that create images that may or may not correspond to the physical world. This shift challenges our traditional understanding of truth and representation and invites us to reconsider the role of images in shaping our understanding of reality.

In the context of modern visual storytelling, VFX, photography, and digital media play crucial roles in this transformation. The power of images to construct, manipulate, and question reality challenges the intuitive belief that what we see on screen corresponds to the truth. As technology continues to advance, we may have to rethink the very nature of “truth” and “reality” in the visual world.

Hoax Photography

The examples of hoax photography like the Cottingley Fairies and the Loch Ness Monster (Nessie) are powerful illustrations of how photographs and other images can be manipulated to present false or fantastical narratives, creating illusions that people often believe to be true. These cases show how easily visual representations can shape public belief, even when the images themselves are later debunked.

The Cottingley Fairies

The Cottingley Fairies are one of the most famous examples of photographic hoaxes in history. In 1917, two young cousins, Elsie Wright and Frances Griffiths, from Cottingley, England, took a series of photographs that appeared to show them interacting with fairies. The photos featured small, ethereal creatures seemingly flying or standing near the girls. The images were so convincing at the time that they garnered significant attention and were taken seriously by many, including prominent figures like Arthur Conan Doyle, the creator of Sherlock Holmes, who was a strong believer in spiritualism and the supernatural.

For many years, the photos were thought to be authentic, and the idea that the girls had photographed real fairies was widely accepted by some sections of the public. However, in 1983, the two women admitted that the fairies in the photos were paper cutouts, and the entire hoax had been orchestrated for fun. Despite their admission, the Cottingley Fairies hoax had a lasting cultural impact, demonstrating how photographs—whether manipulated or genuine—can influence perceptions of reality. The belief in the photographs’ authenticity was largely driven by their strong emotional impact and the trust placed in photographic evidence.

Key Point: The Cottingley Fairies case highlights how photographs can be misinterpreted as undeniable evidence of reality, even when the images themselves are fabricated. It also illustrates how, once an image is presented as truth, it can be difficult to convince the public otherwise, especially when the visual evidence aligns with pre-existing beliefs.

Images: the Cottingley Fairies photographs

The Loch Ness Monster

The Loch Ness Monster, affectionately known as Nessie, is another well-known example of a photographic hoax that captivated the public’s imagination for decades. The legend of Nessie dates back centuries, but the most famous photograph—often referred to as the “Surgeon’s Photo”—was taken in 1934 by a man named Dr. Robert Kenneth Wilson, a London physician. The photograph allegedly showed a large creature emerging from the waters of Loch Ness in Scotland, with a long neck and humps in the water, resembling descriptions of the Loch Ness Monster.

For many years, the “Surgeon’s Photo” was considered one of the best pieces of evidence supporting the existence of Nessie. It appeared to provide photographic proof of the creature’s existence. However, in 1994, a man named Christian Spurling, who had been involved in the hoax, admitted that the photograph was a staged event. The photo was created using a toy submarine with a model of a creature attached to it. Spurling revealed that the photo had been intentionally manipulated to create the illusion of a large, mysterious creature in Loch Ness, feeding into the growing mythology around the Loch Ness Monster.

Key Point: The “Surgeon’s Photo” is a prime example of how an image can be crafted and manipulated to fuel a narrative, and how it can be widely accepted as evidence of something extraordinary, even when it is later revealed to be a hoax. The belief in the Loch Ness Monster continues to persist in popular culture, demonstrating how powerful visual representations can shape myth and legend.

Image: the “Surgeon’s Photo” of the Loch Ness Monster

Marilyn Monroe and Elizabeth Taylor

Image: a photo of Marilyn Monroe and Elizabeth Taylor

The 1945 photo of soldiers raising the Soviet flag over Berlin’s Reichstag building was staged and then doctored.

Image: “Flying the Flag” – Russian soldiers flying the Red Flag, made from tablecloths, over the ruins of the Reichstag in Berlin (photo by Yevgeny Khaldei/Getty Images)

The Power of Visual Manipulation

Both the Cottingley Fairies and Loch Ness Monster hoaxes illustrate a key concept in visual media and photographic evidence: the belief that photographs are trustworthy representations of reality. People have an intuitive sense that photographs offer truthful depictions of the world, and as Mitchell (1998) suggests, this belief aligns with the correspondence theory of truth, where the image is believed to directly correspond to a real-world event or object.

However, the manipulation of these photographs, whether through staging, trickery, or misdirection, reveals how easily photographic evidence can be falsified. These hoaxes play on the trust in images as an objective form of truth, showing how photographs and visual media can both shape and distort public perceptions of reality.

Cultural and psychological impacts

  1. Cultural Belief: Both the Cottingley Fairies and Loch Ness Monster hoaxes tapped into existing cultural myths and fears. For the Cottingley Fairies, there was widespread belief in spiritualism and the supernatural, while the Loch Ness Monster hoax capitalized on the fascination with mysterious creatures and the unexplored nature of Loch Ness. The photographs provided “proof” that people were eager to believe in, even though the images were fabricated.
  2. Psychological Impact: Once an image is presented as evidence of something extraordinary, it can be difficult to dismiss, even when the truth behind the image is revealed. People’s emotional attachment to the idea of something like the Loch Ness Monster or fairies in the real world can cloud their ability to critically assess photographic evidence. This shows how visual media not only reflects reality but can also create an emotional response that reinforces belief in the image’s truth.
  3. The Role of Trust: The Cottingley Fairies and Loch Ness photographs reveal the psychological power of trust in images. People place a great deal of trust in what they see, believing that photographs cannot lie. Once that trust is breached (as it was in both of these hoaxes), it is often difficult for people to fully accept the fabricated nature of the images. The emotional resonance these images create can override logical thinking.

Conclusion

The Cottingley Fairies and Loch Ness Monster hoaxes underscore the power of images to shape our beliefs and perceptions of reality. The widespread acceptance of these photographic hoaxes reveals how people often rely on visual evidence to form conclusions about the world around them, trusting that photographs offer an unmediated truth. However, as these examples show, photographs—like all media—can be manipulated, and what we see may not always correspond to what is real. These hoaxes serve as a reminder that visual images, whether photographs or digital representations, are not inherently truthful and should be critically evaluated, especially in an age where photo manipulation and VFX can craft entirely new realities.

Digital Fakes

Images: fake viral photos (“30 Fake Viral Photos People Believed Were Real”, Bored Panda), including “Cow Chilling On A Car”

Traditional Photomontage

Definition:
Traditional photomontage is a creative technique where various photographs, images, or parts of images are combined to form a new, unified composition.

Tools Used:

  • Scissors & Glue: Artists physically cut, paste, and arrange photographs or printed images.
  • Physical Photographs or Printed Images: The main materials used to create the collage.
  • Additional Materials: Sometimes, other textures and materials like fabric or paper were incorporated to add depth and texture.

Techniques:

  • Overlapping Images: Layers of images are combined in a way that can create new meanings or perspectives.
  • Hand-drawing or Painting: Artists often added personal touches to the photomontage with hand-drawn elements or painted details.
  • Textural Contrast: The use of different materials like fabric or textured paper added dimension to the artwork.

Transition: Photomontage to computer collage

With the advent of digital technology, traditional photomontage techniques have been adapted for digital creation.

Digital Tools:

  • Software: Programs like Adobe Photoshop, GIMP, and mobile apps enable artists to create and manipulate digital collages.
  • Enhanced Flexibility: Digital tools allow for quick adjustments such as resizing, rotating, and blending layers without compromising the integrity of the original images.
  • Effects & Filters: Unlike physical collages, digital tools offer a broad range of effects and filters to enhance the final artwork, giving more creative possibilities.
  • Layering: Digital collages benefit from the ability to have multiple, adjustable layers that can be independently manipulated, offering greater flexibility and complexity in composition.

While traditional techniques relied on physical materials and manual dexterity, digital tools allow for a more seamless process, increasing the potential for creative expression and refinement.

Digital Collage to Compositing

Digital Compositing:
Digital compositing takes the art of collage a step further, offering unparalleled control over image manipulation.

Key Differences from Traditional Methods:

  • No Degradation: Unlike traditional photographs, digital images do not degrade with repeated copying, ensuring that the quality of the final composition remains intact.
  • Greater Precision & Quality: Digital compositing allows for higher-quality results with more refined details, leading to professional-grade outputs.

Advanced Control:

  • Depth Information: Compositing offers control over the layering of images, including the ability to selectively adjust depth of field, giving the artist the power to decide what appears in front or behind other elements.

Composition Complexity:

  • Filmed & CGI Integration: Composites can now integrate filmed footage with CGI elements, creating seamless blends that combine real-world and virtual components into one cohesive image.

Consistency:

  • For a composite to be convincing, it must match the visual language of the other elements, including light sources, color tones, and depth of field, ensuring a seamless final result.
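
Underneath all of this sits the alpha “over” operation. The sketch below is a minimal version of it, using premultiplied colours and tiny invented images; it shows the general principle, not any particular compositing package’s implementation.

```python
# The Porter-Duff "over" operation: stack a foreground layer over a background
# using its alpha (coverage). Colours are assumed premultiplied by alpha.
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb, bg_alpha):
    out_rgb = fg_rgb + bg_rgb * (1.0 - fg_alpha)
    out_alpha = fg_alpha + bg_alpha * (1.0 - fg_alpha)
    return out_rgb, out_alpha

# Two tiny 2x2 "images": a half-transparent red element over a solid blue plate.
fg = np.full((2, 2, 3), [0.5, 0.0, 0.0])   # red premultiplied by alpha 0.5
fg_a = np.full((2, 2, 1), 0.5)
bg = np.full((2, 2, 3), [0.0, 0.0, 1.0])   # opaque blue plate
bg_a = np.ones((2, 2, 1))

rgb, alpha = over(fg, fg_a, bg, bg_a)
print(rgb[0, 0], alpha[0, 0])              # [0.5 0.  0.5] [1.]
```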

Traditional Matte Paintings

Traditional matte paintings are detailed, hand-painted backgrounds used in film and television to create the illusion of expansive environments. Typically painted on glass or canvas, these artworks are integrated with live-action footage to produce seamless visuals. While the technique was widely used in early cinema, it has largely been replaced by digital methods today, although many principles of traditional matte painting remain relevant in modern filmmaking. Iconic films like Star Wars and The Lord of the Rings showcase the enduring impact of this art form on visual storytelling.

  1. Paint on a large sheet of glass or canvas.
  2. Leave gaps in specific positions.
  3. Film the painting with the gaps blacked out.
  4. Film the live action separately and project it into those gaps.
  5. Combine the matte painting and live action together seamlessly (a digital sketch of the same idea follows these steps).
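
A rough digital analogue of the steps above: a hard matte decides, pixel by pixel, whether the painted background or the live-action plate shows through. The images here are invented stand-ins.

```python
# Hold-out matte combine: painting where the matte is 0, live action where it is 1.
import numpy as np

H, W = 4, 6
painting = np.full((H, W, 3), [0.2, 0.3, 0.8])      # stand-in for the painted environment
live_action = np.full((H, W, 3), [0.7, 0.6, 0.5])   # stand-in for the filmed plate

matte = np.zeros((H, W, 1))        # 0 = keep the painting
matte[1:3, 2:5] = 1.0              # 1 = the "gap" left for live action

final = painting * (1.0 - matte) + live_action * matte
print(final[0, 0], final[1, 2])    # a painting pixel, then a live-action pixel
```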

 

Examples of traditional matte paintings:

Lord of the Rings

Image: matte painting from The Return of the King (Dylan Cole Studio)

Image: matte painting of Rivendell (Wayne Haag)

Titanic

Images: how the Carpathia scene was put together – a matte painting of the Carpathia combined with a live-action shot of lifeboats at sea, then enhanced with digital elements (r/titanic)

Star Wars

Image: matte paintings from the original Star Wars trilogy

Blending reality & VFX seamlessly

Image: still from Jurassic Park

The evolution of special effects in cinema

The development of photomontage techniques has been pivotal in the evolution of film and special effects. Notable contributions include:

  • Le Voyage dans la Lune (A Trip to the Moon, 1902): an early landmark of trick editing and animation, pushing the boundaries of visual storytelling.
  • Model Making: Artists began using physical models for special effects shots, creating visual illusions for cinematic worlds.
  • Back Projection & Blue/Green Screen: Techniques that allow actors to perform in front of a screen while a separate background is projected behind them or composited in later, optically or digitally; these paved the way for more immersive effects in film (a simple keying sketch follows this section).

These innovations laid the groundwork for modern special effects, integrating traditional techniques with digital advances for unparalleled visual experiences.
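
To show the idea behind the blue/green screen point above, here is a deliberately crude green-screen key in Python. Real keyers (Keylight, Primatte and the like) handle spill, soft edges, and noise far more carefully; the pixel values here are invented.

```python
# A very rough chroma key: pixels that look strongly green become transparent
# so a new background can be placed behind the performer.
import numpy as np

def simple_green_key(image, threshold=0.3):
    """Return an alpha matte: 0 where a pixel looks like green screen, 1 elsewhere."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    greenness = g - np.maximum(r, b)                 # how much green dominates the pixel
    return (greenness < threshold).astype(float)[..., None]

frame = np.zeros((2, 2, 3))
frame[0, 0] = [0.1, 0.9, 0.1]                        # green-screen pixel
frame[0, 1] = [0.8, 0.6, 0.5]                        # skin-tone pixel
alpha = simple_green_key(frame)
print(alpha[0, 0], alpha[0, 1])                      # [0.] [1.]
```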

Week 3 Written Post – Fakes or composites?

The Crown

Image: the CG deer from The Crown (Framestore VFX breakdown)

The deer in the shot was created in 3D with attention to realistic fur grooming and detailed textures. It was then composited into a natural background, with the lighting carefully matched to blend the digital deer seamlessly into the environment.

The team at Framestore used depth of field to mimic how a real camera would capture the scene, blurring the background behind the deer. This effect, along with realistic lighting and shadows, made the digital model appear as if it were part of the live-action shot.
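
As a sketch of that “matching depth of field” idea (not Framestore’s actual method), the snippet below softens a stand-in background plate with a simple box filter, the kind of defocus a compositor would dial in before placing a sharp foreground element over it.

```python
# Faking shallow depth of field: blur the background layer before compositing.
import numpy as np

def box_blur(image, radius=3):
    """Mean-filter each channel; a crude stand-in for a real lens-defocus kernel."""
    pad = np.pad(image, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(image)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += pad[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

plate = np.random.default_rng(1).random((90, 160, 3))   # stand-in for a filmed background plate
defocused = box_blur(plate, radius=4)                    # larger radius = shallower apparent focus
print(defocused.shape)
```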

The composition is highly believable, with the deer placed naturally in its environment. The lighting and shadows match the real-world setting, and the depth of the scene gives the shot a lifelike feel. While it’s unclear if the Rule of Thirds was strictly applied, the shot feels balanced and draws attention to the deer in a natural way.

Key elements in the scene include:

  • 3D Deer: Realistically modeled, textured and groomed.
  • Grass and Terrain: Realistic natural environment.
  • Background: A blurred, realistic backdrop matching the lighting.
  • Lighting and Shadows: Consistent with the environment, adding realism.
  • Realistic Textures: The deer’s fur and skin are carefully textured to feel lifelike.

The composite of the deer shot combines live-action plates and CGI elements. The live-action footage provides the background and natural environment, while the CGI elements include the 3D deer model, realistic textures, and added effects like depth of field. The compositing process integrates the digital deer with the live-action scene by matching lighting, shadows, and camera movement. The purpose of this composite is to create a seamless “impression of reality,” making the deer appear as though it truly exists within the natural environment. It achieves this by ensuring the lighting, perspective, and textures are consistent across both elements.

Framestore’s attention to detail in modeling, lighting, and compositing creates a convincing shot where the digital deer feels entirely integrated into the real-world setting, making it hard to distinguish whether it’s fully live action or CGI.

In the shot from The Crown (Season 4), the composite is made up of both live-action footage and CGI elements. The live-action plates include the footage of the Queen on horseback and her entourage, while the CGI components involve digital extensions of the background, a sky replacement, and an enhanced crowd that’s digitally spread out and enlarged to make the scene feel more populated and grand. Additionally, street extensions were added to create a larger, more expansive setting.

The composite works by seamlessly blending these elements together. The live-action footage is extended with CGI to create a bigger environment, and the lighting and shadows are carefully matched between the real and digital components to ensure consistency. The sky replacement is done to set the right mood and tone, while the crowd is enhanced to give the scene a sense of scale and liveliness. The composite also makes sure the depth and perspective feel natural, with everything working in harmony to create a cohesive shot.

The purpose of this composite is to give the impression of reality. By combining live-action with digital elements in a convincing way, the scene feels like it was filmed on location, even though parts of it were digitally created. The digital additions extend the environment and enhance the realism of the shot, making it seem like the Queen and her colleagues are truly riding through a bustling, expansive street.

The key elements in the scene include:

  • Composited Background: Digital extensions of the environment.
  • Sky Replacement: A new sky added to enhance the scene’s atmosphere.
  • Live-action Footage: The Queen, her horse, and colleagues, captured on set.
  • Lighting and Shadows: Carefully matched to ensure realism.
  • Crowd Extension: CGI used to expand the crowd around the royal entourage.
  • Street Extension: The street is digitally extended to create a broader setting.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 4 – Photorealism in VFX

Untold Studios and Andreas Nilsson Wind Up Virgin Media Campaign With a Walrus Whizzer - Motion design - STASH : Motion design – STASH

Virgin Media O2: Walrus Whizzer • Ads of the World™ | Part of The Clio ...

Key Characteristics of Photorealism:

  • Detail and Texture: Photorealistic works feature intricate details, including textures that mimic real-life surfaces—like skin, fabric, and natural elements—captured with precision.
  • Lighting and Shadows: Realistic lighting, including highlights and shadows, plays a crucial role. It often involves complex interactions of light sources, reflections, and ambient lighting.
  • Colour Accuracy: The use of colour is closely aligned with reality, employing a full range of tones and subtle gradations to replicate how colours appear in different lighting conditions.
  • Perspective and Depth: Accurate perspective creates a three-dimensional feel. Depth of field techniques can enhance realism by mimicking how cameras focus on different planes.
  • Complexity in Composition: Photorealistic images often include complex compositions with multiple elements that are meticulously arranged to reflect reality.
  • Attention to Realism in Representation: Unlike stylized or abstract art, photorealism focuses on depicting subjects as they appear in life, without exaggeration or alteration.

Distinguishing Photorealism from other styles of VFX

  • Intent: While many styles may interpret or stylize subjects (like Impressionism or Surrealism), photorealism aims for accuracy and fidelity to reality.
  • Technique: Techniques used in photorealism often involve meticulous planning, layering, and techniques like airbrushing or digital rendering to achieve a high level of detail, contrasting with looser or more expressive techniques in other styles.
  • Viewer Experience: Photorealistic works can evoke different emotional responses, often aiming to impress the viewer with the sheer technical skill and detail, while other styles might prioritize emotional expression or conceptual ideas over realism.
  • Medium Variety: Photorealism can be executed in various media, including painting, drawing, and digital art, whereas other styles may be more tied to specific mediums.

In your own words, define photorealism:

Photorealism is the artistic or technical goal of creating an image or scene that looks as realistic as possible, often to the point where it’s indistinguishable from a photograph. In visual effects and digital art, photorealism involves replicating the fine details of real-world materials, lighting, and textures to mimic how things appear in the physical world. This includes accurate reflections, shadows, surface imperfections, and the way light interacts with different objects and environments.

Achieving photorealism requires a deep understanding of how the real world works—how light behaves, how surfaces react to it, and how objects move or interact with their surroundings. In VFX, this means using advanced techniques like high-quality 3D modeling, realistic texturing, complex lighting setups, and precise rendering. The goal is to create digital scenes, characters, or objects that not only look lifelike but also behave naturally when integrated into a scene, making them feel part of the physical world.

In short, photorealism strives to create an illusion so convincing that the digital elements seem just as real as anything captured on camera, blurring the line between the digital and the real.

Trend of photorealism in VFX

The trend of photorealism in VFX really took off in the early 1990s with groundbreaking films like Jurassic Park (1993) and Terminator 2: Judgment Day (1991). These films proved that CGI could create lifelike effects that traditional methods struggled to achieve, like photorealistic dinosaurs and morphing metal, which could seamlessly interact with live-action elements. Jurassic Park especially showcased how CGI could blend with practical effects to create a fully believable world, which wowed audiences and set a new bar for realism in visual effects.

The late 1990s and early 2000s saw further innovation, with The Matrix (1999) and The Lord of the Rings trilogy (2001–2003) building on the foundation established by earlier films. The Matrix popularized the use of “bullet time” effects and brought hyperrealistic action sequences that felt groundbreaking. Meanwhile, The Lord of the Rings trilogy used motion capture to bring Gollum to life, marking one of the first times that a fully digital, photorealistic character had such an emotional and central role in a live-action film. This demonstrated how CGI could enhance character-driven storytelling, beyond just spectacular visuals.

These successes spurred demand for ever-more realistic effects, as audiences began expecting flawless, photorealistic integration of digital and live-action elements. Studios invested heavily in improving CGI techniques like ray tracing for more realistic lighting and reflections, as well as refining compositing techniques to ensure CGI blended perfectly with filmed environments. This era solidified photorealism in VFX as both a creative and technical goal, driving VFX studios to continuously push the boundaries of realism in film.

Today, photorealism in VFX has become essential, with studios using AI-driven tools, advanced simulations, and high-resolution texturing to create digital worlds that audiences believe are real.

How is photorealism in VFX achieved

Photorealism in VFX is achieved by using a blend of advanced techniques to make computer-generated elements look indistinguishable from real-world footage. It begins with creating highly detailed 3D models of characters, objects, and environments, often based on real-world data like scans or reference imagery. These models are then textured with intricate details, such as skin pores, fabric fibers, and subtle surface imperfections, and shaded to mimic how light interacts with various materials. Lighting plays a crucial role in photorealism, with digital lighting setups using global illumination to simulate realistic light behavior, such as how it bounces off surfaces and diffuses through materials.

Motion capture is used to record the natural movements of actors or animals, which are mapped onto CG models for realistic animation. Additionally, VFX artists simulate real-world physics, like gravity, fluid dynamics, and cloth behavior, to ensure digital elements interact believably with the environment. Once everything is created, compositing software is used to integrate the digital elements with live-action footage, making adjustments to lighting, shadows, and color to ensure everything blends seamlessly. The attention to small details, like reflections, atmospheric effects, and imperfections, finalizes the illusion of realism, creating a world where the digital and physical feel indistinguishable.

Compositing live footage with CGI

Compositing live-action footage with CGI for photorealism involves blending digital elements seamlessly into real-world scenes, making them look as if they were captured together. The process starts by matching the lighting and shadows of the CGI elements to the live-action footage. Artists replicate the direction, color, and intensity of the scene’s light sources so the CGI objects cast realistic shadows and interact with the environment. Camera matching is also crucial—using techniques like camera tracking, the 3D camera in the CGI scene is aligned with the live-action camera to ensure correct perspective and motion.

Color grading helps integrate CGI by adjusting the digital elements to match the color palette of the live footage, while depth of field and lens effects (like motion blur or distortions) are added to ensure the CGI blends naturally with the practical elements. Compositors also merge digital effects with practical ones, like explosions or smoke, so they interact convincingly with the environment.

Finally, subtle details like film grain, motion blur, and small imperfections are added to make the CGI feel as though it was shot with the same camera. When done correctly, these techniques create a seamless fusion of CGI and live-action that feels completely real.
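
The steps above can be sketched very roughly in code. The NumPy example below is a toy illustration only – a straight-alpha “over” merge, a crude exposure match, and added grain – with invented array values and function names; real compositing packages such as Nuke add colour management, premultiplication handling and tracking that this ignores.

```python
# A toy compositing sketch (assumes only NumPy; all values are invented for illustration).
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Straight-alpha 'over': lay the CG element on top of the live-action plate."""
    a = fg_alpha[..., None]                 # broadcast the alpha channel across RGB
    return fg_rgb * a + bg_rgb * (1.0 - a)

def match_exposure(cg, plate):
    """Crude colour match: scale the CG so its average brightness matches the plate."""
    return cg * (plate.mean() / max(cg.mean(), 1e-6))

def add_grain(img, strength=0.01, seed=0):
    """Add subtle noise so the clean CG render shares the plate's film/sensor grain."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, strength, img.shape), 0.0, 1.0)

# Stand-in 4x4 "plates" (float images in 0..1): a grey background and a bright CG element
# whose alpha only covers the centre of the frame.
plate = np.full((4, 4, 3), 0.35)
cg = np.full((4, 4, 3), 0.90)
alpha = np.zeros((4, 4))
alpha[1:3, 1:3] = 1.0

comp = add_grain(over(match_exposure(cg, plate), alpha, plate))
print(comp.shape)  # (4, 4, 3)
```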

Photorealistic CG renders

Creating photorealistic fully CGI scenes involves crafting every element—from environments to characters—entirely in the digital realm, without using live-action footage. It begins with highly detailed 3D modeling, where artists create lifelike digital representations of objects and environments, often based on real-world scans or reference materials. Textures are then applied to simulate realistic surfaces, including subtle details like dirt, scratches, and wear, which make the scene feel authentic.

Lighting plays a central role in photorealism. Artists use advanced techniques like global illumination to simulate how light behaves in the real world—bouncing off surfaces, casting shadows, and interacting with materials. This attention to light is crucial for making the scene feel grounded in reality. Once the lighting is set, rendering engines (such as Arnold or V-Ray) calculate these interactions to produce high-quality images.

Animation also adds realism by ensuring that digital elements move naturally. Motion capture may be used for characters, while physics simulations ensure objects behave realistically in the environment (like cloth movement or water). Finally, compositing techniques are used to add finishing touches, such as depth of field, motion blur, and subtle imperfections, which enhance the scene’s overall believability. When all these elements come together, they create a fully immersive and photorealistic digital world.

Techniques to make fictional creatures look photorealistic often borrow from real anatomy; dragons, for example, mimic lizard and bird anatomy to achieve a convincing look.

The Lion King overtakes Beauty and the Beast at the box office

Transition from 2D to 3D and the rise of “live action” remakes

Example: Lion King 1994 vs Lion King 2019

The transition from the 2D animation of the original Lion King (1994) to the 3D CGI approach in the 2019 version represents a significant evolution in animation technology and storytelling. 

2D Animation (1994)

  • The original Lion King utilized traditional hand-drawn animation, characterized by vibrant colors and expressive character designs.
  • This style allowed for exaggerated expressions and fluid movement, creating an emotional connection with the audience.
  • The animation was complemented by a memorable soundtrack and a more stylized representation of the African savanna.

3D CGI Animation (2019)

  • The 2019 version, while often referred to as “live-action,” is entirely created using CGI. It was developed with the help of visual effects studio MPC, pushing the boundaries of realism in animation.
  • The photorealistic approach aimed to create a more immersive experience, showcasing lifelike animals and environments. This included meticulous detail in fur, skin textures, and natural lighting effects.

“Live Action” Misconception

  • Disney’s marketing described the 2019 film as “live-action,” which can be misleading. While it features realistic visuals, every element is digitally rendered. The term evokes the idea of real actors and physical sets, which is not the case here.
  • The film opens with a shot that mimics a live-action feel, showcasing the landscape in stunning detail. However, from that point on, the entire film relies on CGI, challenging the audience’s perception of what constitutes “live-action.”

Original Lion King (1994)

  • Great reviews: The original film was met with widespread acclaim for its storytelling, character development, and emotional depth. It became a cultural phenomenon and is often regarded as one of Disney’s greatest animated films.
  • Emotional Connection: Audiences connected deeply with the characters, thanks in part to their expressive animation and memorable music. The hand-drawn style allowed for exaggerated emotions, making pivotal moments feel impactful.
  • Nostalgia: Over the years, the film has built a strong nostalgic following, with many viewers holding fond memories of their childhood experiences with it.

Lion King (2019)

  • Mixed Reviews: The 2019 remake received mixed reviews from critics and audiences. While some praised its stunning visuals and technological achievements, others criticized it for lacking the emotional resonance of the original.

  • Photorealism vs. Expression: Many viewers felt that the realistic design of the animals limited their expressiveness. Unlike the original, where characters could convey a wide range of emotions, the CGI versions often appeared more subdued, which impacted audience engagement.

  • Nostalgia vs. Innovation: While some fans appreciated the fresh take on a beloved classic, others were disappointed, feeling that the remake did not capture the magic of the original.

Hyenas | Lion king movie, Lion king art, Lion king

Joshua Cann - Kamari - Lion King 2019

The CGI in the 2019 Lion King is undeniably impressive, showcasing cutting-edge technology that brings a stunning level of realism to the visuals. The attention to detail in the fur textures, lighting effects, and naturalistic movements of the animals highlights the advancements in animation techniques, creating breathtaking landscapes that feel almost lifelike.

However, despite these technological achievements, many viewers felt that the CGI version didn’t capture the magic of the original. The hand-drawn animation of the 1994 film allowed for exaggerated expressions and fluid character movements, which enhanced emotional storytelling. Characters like Simba and Mufasa were portrayed with vibrant personalities that resonated deeply with audiences, thanks to the flexibility of traditional animation.

In contrast, the realistic designs in the 2019 film, while visually striking, often resulted in more subdued expressions and less dynamic character interactions. The animals’ faces, despite being beautifully rendered, lacked the expressive capabilities that could convey a wide range of emotions. This made pivotal moments—such as Simba’s grief or his moments of triumph—feel less impactful than in the original, where the animation could emphasize these emotions more vividly.

Moreover, the original film’s blend of music and animation created an unforgettable emotional experience, with songs like “Circle of Life” and “Can You Feel the Love Tonight” enhancing the storytelling. The CGI version, despite its impressive visuals, sometimes felt more like a showcase of technology rather than a deeply engaging narrative experience.

In essence, while the 2019 CGI Lion King represents a remarkable leap forward in animation, it struggles to evoke the same emotional magic that the original achieved through its artistry and expressive character designs. This contrast highlights how technological advancements, while impressive, do not always translate to a richer emotional experience in storytelling.

 

Non-photorealistic Rendering – NPR

Non-photorealistic rendering (NPR) is a technique used in digital art and animation that prioritizes a stylized or artistic representation over lifelike realism. Instead of trying to replicate the exact details and textures of the real world, NPR embraces bold colors, simplified shapes, and exaggerated features to create a distinctive look.

This approach often draws inspiration from various art styles, such as illustration, painting, or comic books, allowing for creative expression and a unique visual identity. NPR can evoke emotions and convey character traits more effectively by emphasizing certain elements, such as facial expressions or dynamic movements, rather than focusing on realistic details.

Spider-Man: Into the Spider-Verse – NPR in film

Spider-Man: Into the Spider-Verse serves as a great example of non-photorealistic rendering by embracing a visually stunning style that draws heavily from comic book aesthetics. The film combines traditional 2D animation techniques with 3D elements, resulting in a unique look that feels both dynamic and engaging. Bold outlines, vibrant colors, and onomatopoeic text enhance the visual narrative, creating an experience that captures the essence of comic art while pushing the boundaries of animation. This stylization allows for exaggerated movements and expressive character designs, enabling the film to convey deep emotions effectively without relying on photorealism. The rich color palette and innovative visuals not only reflect the origins of the characters but also immerse the audience in a lively, imaginative world, showcasing how NPR can elevate storytelling and resonate with viewers in a powerful way.

Why Spider-Verse has the most inventive visuals you'll see this year! - fxguide

Spider-Man: Into The Spider-Verse

Spider-Man: Into The Spider-Verse

Marvel Rivals – NPR in games

In the case of Marvel Rivals, the game utilizes vibrant colors, bold outlines, and exaggerated features that evoke a comic book aesthetic, which is characteristic of many Marvel properties. This approach allows for expressive character designs and dynamic visuals that enhance the overall gaming experience, making it feel more like stepping into a comic book rather than a realistic environment.

By opting for NPR, the game can emphasize character personalities and actions, creating a fun and engaging atmosphere that resonates with the comic book and superhero themes. This style contrasts sharply with photorealistic rendering, which aims for lifelike details and realism, highlighting the versatility and creativity of different artistic approaches in visual media.

Marvel Rivals - Official Jeff the Land Shark Character Reveal Teaser Trailer | SDCC 2024

All 'Marvel Rivals' characters so far - full roster abilities and lore breakdown

How to play as Jeff the Land Shark in Marvel Rivals

 

Week 4 Weekly Post – Discuss types of photorealism

Photorealism in VFX varies in complexity between fully CG-created scenes and CG composites with live footage. CG composites, where digital elements are layered into live-action scenes, often achieve realism more easily because they draw on actual footage for reference. By matching lighting, shadows, and texture to the filmed environment, VFX artists can seamlessly blend digital elements, helping viewers accept the added effects as part of the real scene. This approach reinforces the narrative’s authenticity, as Martin Lister discusses in New Media: A Critical Introduction, by aligning digital effects with “narrative truth,” giving them a natural feel (Lister et al., 2018). 

On the other hand, fully CG movies can sometimes appear artificial, as every aspect is digitally generated without real-world anchors. Without the natural irregularities and subtle cues found in live footage, fully CG scenes can occasionally seem “too perfect” or stylized, which may pull viewers out of the experience. However, this all-digital environment also provides unmatched creative flexibility. In films like The Lion King (2019), artists could craft every detail, from atmospheric lighting to highly controlled character expressions, allowing for creative freedom and intricate world-building. Barbara Flueckiger notes in Photorealism, Nostalgia, and Style that fully CG photorealism can evoke nostalgia and emotional depth by recreating classic cinematic textures, enriching the digital storytelling experience (North et al., 2015). 

Ultimately, while CG composites may achieve realism more easily, fully CG scenes allow VFX artists to explore new, stylized, or imaginative worlds with a level of control and expression impossible in live-action composites. 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 5 – Bringing indexicality to the capture of movement

Evolution of motion capture

The technique of using real-life movement for animation can be traced back to the early 1910s and the invention of rotoscoping, in which an actor is filmed and then drawn over by animators frame by frame to replicate the motion. Films such as Disney’s Snow White and the Seven Dwarfs (1937) used rotoscoping to create realistic movements for their characters. Moreover, this early motion-capture process helped to streamline production, and Snow White became one of the first feature-length animated films to be released in American theaters.

Disney also participated in the next breakthrough in motion-capture technology. Thanks to the invention of a rudimentary motion-capture suit by engineer and animator Lee Harrison III, Disney patented a system in the mid-1960s to record actors’ movements using potentiometers attached to the performers’ suits. These gauges gathered movement data, which could then be used on animatronics in Disney theme park attractions. However, the technology was overly cumbersome, making it impractical for most productions.

Nevertheless, with the continued development of smaller processing units, the evolution of proprietary software, and the decreasing cost of production elements, by the late 1980s and early ’90s motion capture had come to be seen as a new frontier with real creative potential. Today the technology is used in a wide variety of industries and entertainment outlets. Motion capture’s increased popularity has made the process less time-consuming for creators, and the accuracy of the captured data has given performers a powerful new way of communicating with their audience.

 

Capture Trends

  • Motion Capture
  • Facial Capture
  • Motion tracking, match moving
  • Scans (Lidar, Megascans)
  • Photogrammetry
  • HDRI

Unique examples of ways in which motion capture is being used:

Stray Game – Cat Motion Capture

Cyberpunk Video Game 'Stray' Is the Cat's Meow, Say Chinese Netizens - RADII - Transcend boundaries

Stray – Game

Cat motion capture : r/aww

The Call of the Wild Exclusive Behind the Scenes – Dogs in MoCap (2020) | FandangoNOW Extras

 

Written Post 5 – Compare keyframe animation to motion capture

 

Preparatory Reading: More than a Man in a Monkey Suit: Andy Serkis, Motion Capture, and Digital Realism by Tanine Allison (2011)
Preparatory watching: Andy Serkis Interview for WAR FOR THE PLANET OF THE APES – JoBlo Movie Trailers 

Animation from motion capture is data-driven – keyframe animation is created by hand.

Keyframe animation is iconic – it is animation that represents something (think of a walk cycle as a representation of a walk).

Motion capture animation is indexical – it points towards a walk that has actually happened (the motion capture data would not exist without it).
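
To make that contrast concrete, here is a small Python sketch with invented joint angles and frame numbers: keyframe animation stores a few hand-set poses and lets the software invent the in-betweens, while motion capture stores a dense stream of samples recorded from a performance that actually happened.

```python
# Keyframe vs. motion-capture data, in miniature (all numbers invented for illustration).

def interpolate_keyframes(keyframes, frame):
    """Linear in-betweening: blend between the two hand-set keys surrounding the frame."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] * (1 - t) + keyframes[hi] * t

# Keyframe animation: an animator authors a handful of poses by hand (iconic - it represents a walk).
knee_keys = {0: 0.0, 12: 45.0, 24: 0.0}             # frame -> knee angle in degrees
print(interpolate_keyframes(knee_keys, 6))           # 22.5 - the software invents this in-between

# Motion capture: every frame is sampled from a real performance (indexical - it points back
# to a movement that actually happened), so frame 6 is simply read back, not invented.
mocap_knee = [0.0, 3.1, 7.8, 14.2, 21.0, 27.5, 33.9]
print(mocap_knee[6])                                 # 33.9
```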

Compare Motion Capture to Key Frame Animation. Consider the following:
    • How do you think the two approaches / technologies are the same or different, where do they align and where do they not?
    • Think about the motion data stored and used in motion capture files, do you think it is indexical? How does it bring the real into the animation?
    • Feel free to illustrate your post with example images connected to the subject

Benedict Cumberbatch shoots The Hobbit: The Desolation Of Smaug scenes | Daily Mail Online

Motion Capture and Key Frame Animation are essential techniques in animation, each suited to different creative needs. MoCap, which records live actors’ movements and expressions, is ideal for realistic human motions and authentic emotions, while Key Frame Animation excels in animating stylized or non-human characters with imaginative, exaggerated movements. 

A fantastic example of MoCap’s effectiveness can be seen in the Planet of the Apes, where characters like Caesar convey complex emotions, such as anger and empathy. By capturing subtle details in facial expressions and body movements, MoCap allows audiences to form a genuine connection with non-human characters, adding significant emotional depth and relatability.  

However, for non-human creatures, MoCap may fall short, as shown in Mowgli, where using MoCap for animal characters resulted in unnatural expressions—a phenomenon known as the “uncanny valley.” In these cases, Key Frame Animation is often more effective, allowing animators to exaggerate movements and create more expressive, natural behaviors, avoiding the awkwardness of imposing human motion onto animals.  

A unique example of MoCap used effectively on a completely non-human character can be seen in The Hobbit, where the dragon Smaug was brought to life through combining both techniques. MoCap captured Smaug’s facial expressions, infusing him with personality, while Key Frame Animation controlled his dynamic flights and gestures. This blend—using MoCap for facial details and Key Frame Animation for body movements—struck a balance between realism and fantasy, making Smaug both relatable and vividly fantastical. 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 6 – Reality Capture (LIDAR) & VFX

Reality Capture 

Reality capture is an umbrella term that encompasses a range of technologies and methods used to digitally document and recreate the physical world in three dimensions. This field has gained significant traction across various industries, including architecture, engineering, construction (AEC), film, gaming, and cultural heritage preservation, thanks to advancements in both hardware and software. The main techniques involved in reality capture include laser scanning, photogrammetry, depth sensing, 360-degree imaging, and motion capture, all aimed at creating accurate digital representations of real objects and environments.

What types of reality capture are there?

Lidar 

LiDAR, which stands for Light Detection and Ranging, is changing the landscape of visual effects (VFX) by allowing filmmakers to capture and recreate environments with remarkable precision.

The technology operates by sending out laser pulses that measure distances based on how long it takes for the light to bounce back after hitting surfaces. This capability enables the generation of highly detailed 3D models, which are crucial for achieving realism in visual effects.
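
As a rough illustration of that time-of-flight calculation, the toy Python function below (not a real LiDAR driver; the return time is invented) halves the round-trip travel of a pulse to get the distance to a surface.

```python
# Toy time-of-flight calculation (illustration only, not a real LiDAR driver).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pulse_distance(return_time_s):
    """The pulse travels out and back, so the distance is half the round trip."""
    return SPEED_OF_LIGHT * return_time_s / 2.0

# A pulse returning after ~66.7 nanoseconds hit a surface roughly 10 metres away.
print(round(pulse_distance(66.7e-9), 2))  # 10.0
```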

The process starts with data capture, where LiDAR systems can be used in various ways, such as being mounted on drones or set up on the ground. As these laser pulses scan the surroundings, they create a point cloud—a dense collection of points in three-dimensional space. Each point corresponds to a specific location and includes information about its position and sometimes its color. This point cloud forms the basis for developing digital models, allowing artists to accurately represent everything from landscapes to intricate architectural details.

After the point cloud is created, artists convert it into a polygonal mesh, turning the raw data into a usable format for VFX production. This step is critical, as it transforms the abstract points into a tangible 3D model. Artists then apply textures, often using color data from the LiDAR readings or additional photographs. This meticulous process ensures that the digital models seamlessly integrate with live-action footage, creating a cohesive visual experience.
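
A minimal sketch of that point-cloud-to-mesh step is shown below, assuming the open-source Open3D library and a hypothetical scan.ply file; production pipelines add clean-up, decimation and retopology that are omitted here.

```python
# Point cloud -> polygonal mesh, assuming the open3d package and a hypothetical "scan.ply".
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")    # the dense point cloud from the LiDAR scan
pcd.estimate_normals()                       # surface normals are needed for reconstruction

# Poisson surface reconstruction turns the points into a watertight triangle mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

o3d.io.write_triangle_mesh("scan_mesh.obj", mesh)  # export for texturing in a DCC package
```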

LiDAR is especially beneficial for crafting complex environments, like bustling cityscapes or lush forests. Its high level of accuracy allows for realistic representations of real-world elements, enhancing the overall quality of the visual effects. Additionally, because LiDAR can capture large areas quickly, it streamlines the modeling process, freeing artists to focus more on creative aspects rather than getting bogged down in technical details.

Beyond modeling, LiDAR data is also invaluable during pre-visualization (previs). Filmmakers can leverage this information to plan shots and understand spatial dynamics within a scene before actual filming takes place. This advance planning not only saves time during production but also helps ensure that the final product aligns with the director’s vision.

LiDAR is a powerful tool in contemporary visual effects, blending accuracy with artistic creativity. By equipping artists with detailed and reliable data, it enhances the quality of digital content and contributes to a more immersive cinematic experience. As technology progresses, the role of LiDAR in VFX will likely continue to grow, opening up new avenues for filmmakers and visual creators.

Jurassic World: Fallen Kingdom Review- Fallen Franchise

Example of usage – Jurassic World: Fallen Kingdom

In “Jurassic World: Fallen Kingdom,” LiDAR technology was instrumental in creating realistic environments and enhancing the film’s visual effects. The production team used LiDAR to scan real-world locations, capturing detailed 3D models of the landscapes where scenes were set. This allowed the VFX team to integrate digital elements—like dinosaurs and other CGI effects—more seamlessly into the live-action footage.

By utilizing LiDAR, the filmmakers could accurately represent the varied terrains of the fictional Isla Nublar and Isla Sorna, ensuring that the digital and practical effects matched in scale, lighting, and detail. The high level of precision provided by LiDAR helped create immersive environments that felt authentic and believable, contributing significantly to the film’s overall visual realism.

Additionally, the technology facilitated the creation of complex sequences, such as those involving the volcanic eruption and the destruction of the island, allowing for intricate background elements and dynamic interactions between live-action performances and CGI. The result was a visually stunning film that effectively blended reality and fantasy, showcasing the capabilities of modern VFX technology.

Example of usage – Ancient Invisible Cities 

In “Ancient Invisible Cities,” LiDAR (Light Detection and Ranging) technology plays a vital role in uncovering the secrets of archaeological sites. By using laser pulses emitted from drones or aircraft, LiDAR measures how long it takes for the light to bounce back after hitting the ground. This allows researchers to create incredibly detailed 3D maps of the terrain, revealing features that might be hidden by trees, vegetation, or soil.

One of the major advantages of LiDAR is its ability to penetrate dense foliage, which means it can uncover ancient structures, roads, and landscapes that would be difficult to see from the ground. This capability is especially useful in places like jungles, where overgrowth can conceal significant archaeological remains.

In the series, the data collected from LiDAR is combined with other research methods, such as historical studies and ground surveys, to build a fuller picture of these ancient cities. The resulting visualizations not only help reconstruct how these cities were laid out but also provide insights into their cultural and historical significance. Overall, LiDAR is an invaluable tool that enhances our understanding of ancient urban life, allowing us to explore and appreciate these lost civilizations in new ways.

Photogrammetry

Photogrammetry works by capturing a series of overlapping photographs of a real-world object or environment from multiple angles, which are then processed using specialized software to create a 3D point cloud that represents the structure of the subject. This point cloud is transformed into a mesh, defining the object’s shape, and the original images are used to generate realistic textures that enhance the model’s appearance. Once the detailed 3D model is created, it can be imported into VFX software for animation, lighting, and integration into live-action footage, allowing for seamless blending of digital and practical elements. Photogrammetry is particularly valuable for creating accurate environments, props, and character assets, enabling VFX artists to produce highly detailed and immersive visuals that elevate the storytelling experience in films and games.
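
As a rough outline of that pipeline, the sketch below drives the open-source COLMAP photogrammetry tool from Python. It assumes COLMAP is installed and on the PATH, the folder names are invented, and command flags can differ between versions, so treat it as an illustration rather than a production script.

```python
# Photogrammetry pipeline outline: photos -> matched features -> sparse point cloud.
# Assumes the COLMAP command-line tool is installed; paths are invented for illustration.
import os
import subprocess

IMAGES = "shoot/photos"       # folder of overlapping photographs of the subject
DB = "shoot/database.db"      # COLMAP feature/match database
SPARSE = "shoot/sparse"       # output: camera poses + sparse point cloud
os.makedirs(SPARSE, exist_ok=True)

steps = [
    # 1. Detect features in every photograph.
    ["colmap", "feature_extractor", "--database_path", DB, "--image_path", IMAGES],
    # 2. Find the common points shared between overlapping photographs.
    ["colmap", "exhaustive_matcher", "--database_path", DB],
    # 3. Solve the camera positions and build the sparse 3D point cloud.
    ["colmap", "mapper", "--database_path", DB, "--image_path", IMAGES, "--output_path", SPARSE],
]
for cmd in steps:
    subprocess.run(cmd, check=True)

# Dense reconstruction, meshing and texturing follow the same pattern before the asset is
# exported to a VFX package for lighting and integration with live-action plates.
```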

Avatar (2009) - Decent Films

Example of usage: Avatar 

In “Avatar” (2009), photogrammetry was crucial in crafting the film’s breathtaking visuals, as it enabled the team to capture real-world textures and details from various locations to enhance the digital landscapes of the alien world of Pandora. By scanning actual foliage, rocks, and terrain, the visual effects team was able to create highly realistic 3D models that accurately reflected their real-world counterparts, contributing to the film’s immersive quality. This authenticity in the environment allowed audiences to feel as if they were truly exploring a vibrant and alive alien world. The detailed models created through photogrammetry were seamlessly integrated with computer-generated imagery (CGI), ensuring a cohesive visual experience where digital and practical elements matched in lighting, scale, and detail. Furthermore, while the film utilized advanced motion capture technology, the realistic environments produced by photogrammetry enhanced the believability of character interactions within these settings. Ultimately, photogrammetry played a significant role in setting a new standard for visual effects in cinema, bridging the gap between reality and fantasy and captivating audiences worldwide.

 

Depth Based Scanning

Depth-based scanning is a method used in visual effects to capture three-dimensional information about objects and environments, providing a crucial layer of detail that enhances the realism of digital assets. This technique typically involves using depth sensors or cameras, such as LiDAR (Light Detection and Ranging) or structured light systems, to measure distances between the sensor and various points in the scene.

The process begins with the depth sensor emitting signals—either lasers or infrared light—which bounce back after hitting surfaces. By calculating the time it takes for these signals to return, the system can determine the distance to each point, creating a dense point cloud that represents the shape and structure of the environment or object.

Once this data is captured, it can be processed into a 3D model, similar to photogrammetry, but with a greater focus on capturing depth information. The resulting models are often highly detailed, with accurate representations of geometry that can be used in VFX productions.

Depth-based scanning is particularly beneficial for capturing complex geometries and intricate details in real-time, making it ideal for virtual production and augmented reality applications. By integrating these detailed 3D scans with CGI elements, VFX artists can create more immersive and convincing scenes that seamlessly blend digital and practical effects.

Overall, depth-based scanning enhances the VFX workflow by providing precise spatial information, allowing for better integration of assets and a more realistic portrayal of environments and characters in film, television, and gaming.

Gravity (2013) - IMDb

Example of usage: Gravity (2013)

In “Gravity” (2013), depth-based scanning was crucial for creating the film’s stunning visual effects and realistic portrayal of space environments. The filmmakers utilized a virtual production process that seamlessly combined live-action footage with CGI. By employing depth-based scanning, they were able to create precise 3D models of the spacecraft and space debris, allowing for realistic interactions between the live actors and the digital environment.

Advanced camera tracking techniques captured the movement of the actors in a controlled studio setting, with depth sensors ensuring that the CGI elements aligned perfectly with the live-action footage. This depth information also enabled more sophisticated lighting effects, mimicking how light behaves in the vacuum of space and contributing to the film’s dramatic visuals.

Additionally, the accurate mapping of depth and movement helped simulate weightlessness, making the scenes of floating in space appear more believable. Overall, depth-based scanning was integral to blending live-action with CGI, resulting in an immersive and visually striking cinematic experience.

Find an example of a 3D scanning project of each type. Put an image of each kind in your sketchbook and caption it with the title of the project and a line of description.

Gaining Perspective

Analogue Perspective 

Filippo Brunelleschi, a key figure of the Italian Renaissance, is credited with developing the system of linear perspective, which revolutionized the way space and depth were represented in art. His innovations laid the groundwork for realistic representation in painting and architecture, profoundly influencing the course of Western art.

The Principles of Linear Perspective

Brunelleschi’s system of perspective is based on a few fundamental principles:

  • Vanishing Points: Central to Brunelleschi’s perspective is the concept of a vanishing point, where parallel lines appear to converge in the distance. This point helps to create the illusion of depth on a flat surface.
  • Horizon Line: The horizon line represents the viewer’s eye level. Objects above this line recede upwards, while those below seem to sink downwards, establishing a sense of spatial orientation.
  • Orthogonal Lines: These lines lead to the vanishing point and guide the viewer’s eye into the depth of the composition. By aligning objects along these lines, artists could create a coherent spatial relationship among various elements in the scene.
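
A tiny numerical illustration of these principles (invented coordinates, simple pinhole model): dividing by depth makes two parallel rails converge toward a single vanishing point as they recede.

```python
# Perspective projection in miniature: parallel lines converge to a vanishing point.
def project(x, y, z, focal_length=1.0):
    """Pinhole projection onto an image plane at the given focal length (z must be > 0)."""
    return (focal_length * x / z, focal_length * y / z)

# Two parallel "rails" one unit apart, receding into the distance along z.
for z in (1, 2, 4, 8, 100):
    left = project(-0.5, -1.0, z)
    right = project(+0.5, -1.0, z)
    print(z, left, right)

# As z grows, both rails squeeze toward (0, 0): the vanishing point where the orthogonal
# lines meet on the horizon line (the viewer's eye level).
```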

Brunelleschi’s Experiment

Brunelleschi demonstrated his perspective techniques through a famous experiment involving a painting of the Baptistery in Florence. He created a small painting and used a mirror to reflect it, allowing viewers to compare the painted image with the actual scene. This ingenious method illustrated how his perspective principles created a convincing illusion of three-dimensionality.

Impact on VFX

Brunelleschi’s principles of linear perspective have had a profound impact on the development of visual effects and 3D space software, fundamentally shaping how artists create realistic digital environments. By establishing a systematic approach to depicting depth and spatial relationships, his ideas enable modern software like Maya, Blender, and 3ds Max to simulate real-world camera properties, allowing users to set vanishing points and adjust focal lengths to achieve convincing perspectives. This has led to the creation of immersive experiences in films and video games, where environments feel tangible and three-dimensional. Furthermore, the integration of accurate lighting and shadow techniques, derived from Brunelleschi’s insights, enhances realism in digital scenes. As a result, contemporary artists can craft intricate, dynamic worlds that resonate with audiences, effectively blurring the lines between reality and digital creation. The enduring influence of Brunelleschi’s perspective continues to guide innovations in VFX and 3D modeling, reinforcing the significance of foundational artistic principles in modern digital media.

De Pictura (1450) Principles

“De Pictura,” written by Leon Battista Alberti in 1450, is a seminal text that explores the principles of painting and the representation of space. In this work, Alberti emphasizes the importance of perspective, particularly linear perspective, which allows artists to create the illusion of depth on a flat surface. He introduces the idea of a vanishing point, where parallel lines seem to converge, helping to make paintings look more realistic.

Alberti also discusses how to compose a painting, suggesting that artists use geometric shapes to organize their scenes in a harmonious way. He believed that a well-structured composition could enhance the beauty and emotional impact of a work.

Another key aspect of “De Pictura” is Alberti’s focus on the viewer’s experience. He encourages artists to consider how the audience will engage with the artwork, likening a painting to a window that opens up to another world. This perspective invites viewers to immerse themselves in the scene.

Overall, “De Pictura” provides foundational ideas about perspective, composition, and viewer engagement that shaped the practice of painting during the Renaissance and influenced many artists in the years that followed.

 

The principles outlined in “De Pictura” by Leon Battista Alberti are foundational to the understanding of perspective and composition in painting. 

  • Linear Perspective: Alberti introduced the concept of linear perspective, where parallel lines appear to converge at a single vanishing point on the horizon. This technique creates the illusion of depth and three-dimensionality on a flat surface.
  • The Vanishing Point: The vanishing point is the point in the composition where lines converge, guiding the viewer’s eye into the depth of the scene. It is crucial for establishing spatial relationships in a painting.
  • Geometric Composition: Alberti emphasized the use of geometric shapes to structure a painting. He encouraged artists to arrange figures and objects in a way that creates harmony and balance, using triangles, squares, and circles to organize the composition.
  • Viewer’s Perspective: Alberti highlighted the importance of considering the viewer’s position when creating a painting. He believed that the artwork should invite the viewer to engage with the scene, making the viewer feel like they are peering through a “window” into another world.
  • Proportion and Scale: Maintaining proportion and scale is essential for achieving realism. Alberti advised artists to carefully consider the size of objects in relation to one another and their placement in the overall composition.
  • Light and Shadow: The treatment of light and shadow (chiaroscuro) is important for creating volume and depth. Alberti encouraged artists to observe how light interacts with forms and to replicate that in their work to enhance realism.
  • Emotional Impact: Alberti believed that a well-composed painting should evoke emotion. The arrangement of figures, the use of perspective, and the play of light should all work together to create a compelling narrative or feeling within the artwork.

Figure 3 from Bringing Pictorial Space to Life: computer techniques for the analysis of paintings | Semantic Scholar

Pictorial Space

Pictorial space refers to the way artists create the illusion of depth and three-dimensionality on a flat surface, like a canvas. In “De Pictura,” Alberti explores several techniques that help achieve this illusion.

One key method is linear perspective, where lines converge at a vanishing point, making objects appear to recede into the distance. This creates a more realistic sense of space. Another important technique is foreshortening, which alters the size and angle of objects to suggest they are closer or further away, enhancing the three-dimensional effect.

Overlapping elements is another strategy; when one object overlaps another, it indicates which is in front, helping to establish spatial relationships. Atmospheric perspective also plays a role, where distant objects appear lighter and less detailed due to the effects of the atmosphere, while closer objects are richer in color and detail.

Scale and proportion are crucial as well—by adjusting the size of objects relative to each other, artists can create a convincing sense of depth. Finally, Alberti emphasizes that pictorial space should engage the viewer, inviting them to feel as if they are stepping into the scene.

Overall, pictorial space involves various techniques that work together to turn a flat image into a dynamic representation of a three-dimensional world, making the artwork more immersive and engaging.

Da Vinci Points

Activity: Linear Perspective | Leonardo Da Vinci - The Genius

Perspective Machines

Digital Perspective

Analogue Perspective vs Digital Perspective

 

 

Why does scanned data need 3D computer space?

 

A bridge between physical and digital worlds

 

 

Environments – LiDAR scanning captures detailed geometry for entire environments.

Objects and props – Small to mid-scale LiDAR and structured-light scanners allow artists to scan individual props, costumes, or characters.

Human and creature capture – The increasing use of 3D scanning for digital doubles provides a foundation for realistic character animation.

 

Workshop Activity:

Analysis Activity – LiDAR Scan vs Photograph

Find a LiDAR scan image online. Look for scans of landscapes, buildings or famous landmarks like the Eiffel Tower or forests. Good sources include Sketchfab or scientific/architectural websites.

Describe key visual characteristics:

  1. Point Cloud: LiDAR scans often appear as a collection of dots or points, not continuous surfaces.
  2. Depth and Structure: They show shape and depth well but lack colour and texture details found in photos.
  3. Wireframe Effect: Many scans have a skeletal, wireframe look, emphasising structure over surface details.

Compare to a photograph:

  1. Surface and Texture: Photos capture continuous surfaces with full colour and texture, but contain no explicit depth or structural data.

 

Week 6 – Case Study on Reality Capture Technology in Preserving Ukrainian Cultural Heritage

In response to the ongoing conflict in Ukraine, advanced reality capture technologies, like 3D laser scanning and LIDAR, have become crucial tools for preserving the country’s cultural heritage. These technologies enable the creation of precise digital replicas of historic buildings, monuments, and landmarks, allowing for their preservation in virtual form, even if they are physically damaged or destroyed.

LIDAR works by emitting laser pulses that bounce back from surfaces, allowing it to measure distances and capture millions of data points to create an accurate 3D model of an object or site. This technology excels at documenting complex details, such as architectural features, that may be difficult to capture with traditional methods. When combined with photogrammetry (the use of photographs to add texture to the 3D model), LIDAR can generate a complete, realistic digital representation of the scanned environment. These 3D models are then stored digitally, making them accessible for future restoration or historical research.

A real-world example of this technology in action is the preservation efforts for Ukrainian landmarks like Kyiv Pechersk Lavra and St. Sophia Cathedral. These sites, which face the threat of destruction due to the ongoing war, have been digitally documented by organizations like CyArk and the Ukrainian Cultural Heritage Preservation Fund. The digital models not only help safeguard the sites’ historical value but also provide a tool for reconstruction if needed in the future.

Despite challenges like the high cost of LIDAR equipment, the impact of this technology is profound. It ensures that even if physical sites are lost, their cultural significance can still be preserved digitally for future generations.

Dominican Church, Lviv, Ukraine

Dominican Church, Lviv, Ukraine

Archangel Michael Church, Pidberiztsi

 

Dominican Church, Lviv v1

Sources:

Before the bombs fall: The race to digitize Ukrainian cultural heritage sites | Geo Week News | Lidar, 3D, and more tools at the intersection of geospatial technology and the built world

3D Memory: Scanning Damaged Heritage Sites in Ukraine | Leica Geosystems

(305 words)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 7 – Reality Capture (Photogrammetry) & VFX

Digital Facsimile

A digital 3D facsimile is a highly detailed digital replica of a real-world object, sculpture, or scene, created using 3D scanning technology. It captures the precise shape, texture, and features of the original, turning it into a digital model that can be viewed, studied, or reproduced. This digital version is not just a basic image; it’s a full three-dimensional representation, meaning it can be rotated, zoomed in on, and explored from different angles, just like the physical object itself.

The goal of a digital 3D facsimile is to preserve the original object in a digital form, which can be useful for everything from art conservation to virtual museums, to manufacturing or 3D printing. It’s like creating a “virtual twin” of the object, allowing it to be shared, analyzed, and even replicated without needing to handle the fragile or valuable original.

Digital Facsimile & Photogrammetry

A digital facsimile created through photogrammetry is a highly accurate 3D digital replica of a real-world object, scene, or environment. The process starts by taking multiple photographs of the subject from different angles. The more photos and angles, the more detailed and precise the final model will be.

Once the photos are captured, special software analyzes them to find common points across the images. It then uses this data to create a point cloud—a 3D map of all the surface points of the object. From this point cloud, a digital 3D mesh is formed, which outlines the shape and structure of the object. Textures from the original photos are applied to the model, bringing it to life with realistic color and detail.

The result is a digital facsimile—a virtual twin of the original object or scene that can be explored and manipulated on a screen. Photogrammetry is particularly useful because it’s a non-invasive way to capture intricate details, making it ideal for preserving artifacts, creating digital models for design or manufacturing, or even for visual effects in films. The digital replica can be shared, studied, or even 3D printed without ever needing to handle the original object, which is especially valuable in fields like archaeology or art conservation.
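
The “common points across the images” idea can be illustrated with a small triangulation sketch using OpenCV (assuming the opencv-python and numpy packages; the camera matrices and the 3D point below are invented). Given one matched point seen in two photos with known cameras, its 3D position can be recovered; photogrammetry software repeats this for millions of matches to build the point cloud.

```python
# Triangulating one matched point from two photographs (invented cameras and point).
import numpy as np
import cv2

# Shared intrinsics; the second camera is shifted one unit along x relative to the first.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2 translated on x

def pixel(P, X):
    """Project a homogeneous 3D point through a 3x4 camera matrix to pixel coordinates."""
    p = P @ X
    return (p[:2] / p[2]).reshape(2, 1)

X_true = np.array([0.2, -0.1, 4.0, 1.0])  # a surface point 4 units in front of camera 1
pt1 = pixel(P1, X_true)                    # where it lands in photo 1
pt2 = pixel(P2, X_true)                    # where the matched point lands in photo 2

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
print((X_h[:3] / X_h[3]).ravel())               # ~ [ 0.2 -0.1  4.0]
```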

Quixel Megascans

Digitising the world for photorealism

Quixel boasts global scanning teams:

  • Anything and everything in the world becomes a potential commercial asset to be captured and stored as a digital resource.
  • Catalogued and classified into ready-to-use sets and libraries.
  • Bergman states that photorealism is now incredibly simple because it can be created quickly using 3D scans from Quixel’s library.

 

A paradox?

 

Digital 3D Facsimile

Using techniques like photogrammetry scanning, a digital 3D facsimile replicates the appearance, shape, texture and sometimes even the material properties of the original object as closely as possible.

With digital facsimiles created from real-world objects or locations, VFX artists can use them as photorealistic digital props.

Virtual sets and backgrounds can be created from detailed 3D scans of buildings, landscapes or iconic settings that can serve as …

Digital Doubles

Digital doubles, created through photogrammetry, are highly detailed 3D models of real-life people or objects used in films and video games. Photogrammetry is a process that involves capturing a series of photographs from different angles and then using software to stitch them together into a 3D mesh. The result is a highly accurate digital replica with realistic textures, which can be animated or integrated into scenes as a stand-in for the actual person or object.

In filmmaking, digital doubles are often used for stunts, action scenes, or situations that are too dangerous or impractical for an actor to perform. For example, if an actor is meant to jump from a great height or be in a hazardous environment, a digital double can take over those physical risks. The process captures intricate details, like skin texture and facial features, making these digital doubles incredibly lifelike.

As photogrammetry technology advances, the digital doubles are becoming almost indistinguishable from real people, allowing filmmakers to push the boundaries of what’s possible in storytelling.

It seems the first time a digi-double was used in a film was in Batman Forever (1995), cape and all.

Digital Replicas

Where does this scanning trend come from?

  • Need for Realism: As audiences demand more immersive and lifelike visuals, filmmakers have turned to these technologies to create highly detailed and accurate digital assets. Whether it’s a digital double of an actor or a virtual environment, these techniques allow for unprecedented realism, capturing every detail from skin texture to environmental nuances.
  • Efficiency and Speed: Traditional methods of creating 3D models by hand are time-consuming. Scanning technologies, on the other hand, can quickly capture real-world objects or locations in high detail, significantly speeding up the process of creating complex digital assets.
  • Cost-Effectiveness: Although the initial investment in scanning technology can be high, it ultimately saves money in the long run. By reducing the need for expensive set constructions or the risk of performing dangerous stunts, scanning can lower production costs.
  • Improved Integration with Live Action: Scanned assets, especially with technologies like LiDAR, allow for more accurate matching of digital elements with real-world footage. This makes it easier to blend CGI seamlessly with live-action shots, resulting in more convincing VFX.
  • Advances in Technology: The availability of better hardware (like high-resolution cameras, LiDAR scanners, and processing power) and more advanced software has made scanning more accessible and precise. This has made it a go-to method for creating complex digital assets that look real and fit seamlessly into films.
  • Flexibility in Virtual Production: In modern filmmaking, especially with virtual production techniques, having highly accurate scans of real-world elements allows for greater flexibility. Filmmakers can digitally manipulate scanned environments or characters without having to worry about the physical limitations of sets or actors.

The Digital Michelangelo Project : History of Information

The Digital Michelangelo Project

The Digital Michelangelo Project was an ambitious research initiative undertaken at Stanford University in the late 1990s and early 2000s, aimed at digitally capturing and preserving classical sculptures in stunning detail. Led by computer scientist Marc Levoy, the project used cutting-edge 3D scanning technology to create highly detailed digital replicas of famous sculptures, including works by Michelangelo.

The core goal of the project was to digitally preserve and analyze these works of art, capturing every subtle detail—from the smoothness of the marble to the finest cracks and imperfections—in a way that was previously unimaginable. Using laser-based scanners and photogrammetry, the team was able to create 3D models of sculptures with incredible accuracy, offering new insights into the artist’s process and preserving the sculptures for future generations.

One of the most notable achievements of the Digital Michelangelo Project was its work on Michelangelo’s David and the Medici Tombs. The team scanned these iconic sculptures in Florence, Italy, with such precision that they were able to create virtual models that captured even the texture and contours of the marble. These models could then be analyzed, studied, and reproduced in digital or physical form with extraordinary fidelity.

The project also had a significant impact on both art history and the fields of computer graphics and 3D scanning. By combining high-tech tools with the study of fine art, it opened up new possibilities for art conservation, restoration, and education. It also demonstrated the potential of 3D scanning and modeling in preserving not just historical objects but also creating digital archives of cultural heritage.

While it started with a focus on classical art, the techniques developed during the Digital Michelangelo Project have been adapted for use in a wide range of applications, from VFX in films to archaeological and architectural preservation. It was a pioneering effort that helped push the boundaries of what could be done with digital scanning technology and left a lasting legacy in both the art and tech worlds.

The 3D Fax Machine

The idea of a 3D fax machine was a vision for transmitting physical objects in digital form, similar to how traditional fax machines send 2D images. The concept was that you could scan an object in 3D using specialized technology, then send that data over a network to be received and reconstructed at the other end, either as a digital model on a screen or even a physical object through 3D printing.

In theory, this could revolutionize industries like design, manufacturing, and healthcare, making it easier to share and replicate objects remotely. However, the technology faced several challenges—large file sizes, slow data transmission, and the complexity of accurately reconstructing objects at the receiving end.

While the true “3D fax” never became widespread, its ideas evolved into today’s technologies like 3D scanning, cloud-based sharing, and 3D printing, which now allow us to digitally capture, share, and even create physical objects remotely.

Key aims of the 3D Fax Machine project:

  • Remote Object Transmission: To develop a way to send physical objects over long distances by turning them into digital data. This could allow people to “fax” a physical object, making it accessible remotely.
  • Digital Replication: To create highly detailed digital models of real-world objects that could be transmitted and reconstructed at the receiving end, either as a digital file or a 3D printed object.
  • Efficiency in Sharing: To make it easier and faster to share physical designs, prototypes, or medical data between locations, without needing to physically ship or transport items.
  • New Applications in Various Fields: The technology was intended to benefit industries like manufacturing, healthcare, and art conservation, where sharing and replicating objects accurately and quickly could have major practical uses.

Sybaris Collection © | What is Mimesis in Art?

Mimesis – Mirror Copy of Reality 

The success of a mimetic representation is said to lie in its resemblance to the thing represented. However, even the most optimally realistic images are very different things from the objects they represent or depict. For example, the most realistic, straight, least-manipulated photograph differs from what it represents in obvious ways: it is a rectangular, fragile, silent, 2D object standing in for a spatially unbounded, three-dimensional world.

A 3D scan needs to be like a mirror copy.

Plato’s Idea of Mimesis

In Plato’s philosophy, mimesis refers to the idea that art is a form of imitation—art imitates the world around us, but in a way that is distant from reality. Plato believed that the physical world we experience through our senses is just a copy of a higher, perfect realm of Forms (or ideal concepts). For example, a physical tree is an imperfect representation of the perfect “Tree” in the world of Forms. In this view, art (like painting, sculpture, or poetry) imitates the physical world, which is already an imperfect copy of these higher ideals. Therefore, art is twice removed from the truth.

Plato was critical of mimesis because he felt that art could be misleading. Instead of helping us understand deeper truths, it often appeals to emotions and superficial appearances, distracting people from pursuing wisdom or reason. In The Republic, he even argued that artists could be dangerous, as their work might encourage people to focus on illusion rather than reality.

In short, Plato saw art as a distorted imitation of an already imperfect world, leading people away from the pursuit of truth.

Mimesis & Reality Capture

In the context of reality capture—using technologies like 3D scanning, photogrammetry, and LiDAR—mimesis refers to the process of creating digital replicas of real-world objects, environments, or scenes. It’s like making a high-tech imitation of the physical world, capturing its details in a virtual form.

Plato’s idea of mimesis was that art imitates reality, but it’s already a copy of something imperfect (the physical world), which in turn is just a shadow of the perfect “Forms” he believed existed in a higher realm. So, in reality capture, you’re creating a digital copy of something in the physical world, which is itself a distant reflection of some ideal. From a Platonic view, these digital replicas are twice removed from truth—first by the physical world being imperfect and then by the digital model being a copy of that.

However, unlike traditional art that Plato critiqued for being misleading or emotionally manipulative, reality capture aims to be as accurate as possible, preserving the real world in great detail for practical uses—like saving cultural heritage, building virtual worlds, or creating prototypes. So, while these digital facsimiles are still “copies,” they serve a different purpose: they preserve, analyze, or replicate the real world in a way that’s useful, not deceptive.

Mimesis is a theory of representation focused on imitation, where the goal is to create an image or object that closely resembles its real-world counterpart.

Verisimilitude 

Verisimilitude is essentially the quality of seeming real or true within a story, artwork, or performance. It’s not about being perfectly accurate to the real world, but about creating something that feels believable or authentic to the audience. For instance, in a novel or movie, even if the events or characters aren’t directly based on reality, they might be portrayed in a way that feels plausible and consistent with the world the creator has built. It’s the sense that “this could happen” or “this feels like it could be true,” even if it’s fictional or exaggerated.

In short, verisimilitude is about making things in a work seem convincing enough that the audience can suspend disbelief and engage with it as though it were real.

Verisimilitude vs Mimesis

Verisimilitude is about how believable or realistic something feels in a work of art or literature. It doesn’t mean the thing has to be completely true or lifelike, just that it appears real enough for the audience to accept. For example, a fictional story set in a modern city might include made-up characters and events, but if they act in a way that feels plausible or true to real life, the story has verisimilitude.

Mimesis, on the other hand, is the concept of imitation or representation of reality. It’s about how art mirrors or copies the world—whether that’s nature, human actions, or historical events. In classical thought, art is seen as a way of reflecting the real world, like a painting that tries to look as much like an actual scene as possible.

So, while verisimilitude is about how “real” or “true” something seems within a work, mimesis is about how directly and faithfully the work imitates reality.

It assumes that meaning resides in the real things themselves, and that representation's job is to mirror or copy that reality.

Faithful Copying

In mimetic representation, the success of the representation lies in how faithfully the copy resembles the thing it represents.

Limitations of mimesis

Even the most realistic, straight, unmanipulated images differ from their subjects in fundamental ways:

  • Physical nature: images are 2D, silent, and static, while the real world is 3D, dynamic, noisy, and complex.

  • Artificial frame: images such as photographs are bounded by rectangular frames, whereas real-world objects exist in an unbounded spatial environment.

Optical Realism vs Actual Likeness

Mimesis & Indexicality 

 

 

Indexicality

Framestore delivers most ambitious VFX work to date for 'The Crown' Season 4 | UK Screen Alliance

Photorealism

Photorealism is an art style that strives to reproduce an image as precisely and accurately as possible, mimicking the sharp detail and clarity of a photograph. In photorealism, the artist attempts to create a painting or drawing that is indistinguishable from a high-quality photograph, focusing on the minutiae of light, texture, and surface detail.

What distinguishes photorealism is its commitment to technical precision and its use of photographic reference material. Artists work meticulously to replicate every detail, from the reflections in glass to the play of light and shadow, with the goal of achieving a level of visual fidelity that can often be mistaken for actual photographs.

The focus is on accuracy rather than artistic interpretation, meaning that photorealist artworks often depict everyday subjects—such as landscapes, portraits, or still lifes—in a way that emphasizes the reality and objectivity of the visual world. Unlike hyperrealism, which can exaggerate certain details for emotional or conceptual effect, photorealism aims to capture the world with clinical exactness, celebrating both the inherent beauty of the ordinary and the technical skill required to reproduce it with such precision.

In essence, photorealism is about visual truthfulness—producing art that is not just realistic, but convincingly lifelike to the point where it challenges the viewer’s ability to distinguish between the artwork and a photograph.

A deserted island with interesting alien plants, hyperrealistic VFX render : r/midjourney

Hyperrealism

Hyperrealism is an art style that pushes realism to its limits, focusing on creating an intensely detailed and heightened version of reality. Instead of just replicating what a photograph might show, hyperrealism amplifies small, often overlooked details—like the texture of skin, the way light falls on an object, or the subtle lines on a face—to make them feel more vivid and lifelike than what we see in everyday life.

Artists working in this style use incredible precision, capturing every aspect of a subject with such clarity that even the most ordinary things appear extraordinary. The goal is to draw attention to the beauty in the smallest features, transforming familiar objects or scenes into something that feels fresh and new.

What sets hyperrealism apart is that it often evokes an emotional reaction. The level of detail can be so intense that it creates a sense of awe, and sometimes even unease, because the artwork can look more real than reality itself. It invites the viewer to engage with the subject on a deeper level, almost as if they are experiencing it firsthand.

Hyperrealism can be found in painting, sculpture, and digital art, and its power lies in its ability to make us see the world in a more vivid, intimate way—highlighting the beauty in things we might otherwise ignore, and challenging our perceptions of what’s real.

Hyperrealism vs Photorealism

Hyperrealism and photorealism are both styles of art that focus on creating highly detailed, lifelike images, but they differ in their approach and purpose.

  • Photorealism:
    Photorealism aims to make an artwork look exactly like a high-quality photograph. Artists use reference photos and strive to replicate every detail with technical precision. The goal is to create an image that could easily be mistaken for a real photograph, capturing things like light, texture, and color as they appear in real life. The focus is on accuracy—everything looks just like it does in the photo, with little to no exaggeration or artistic interpretation.
  • Hyperrealism:
    While hyperrealism also emphasizes extreme detail, it goes beyond just replicating reality. In hyperrealism, artists intensify or exaggerate certain elements to make the subject look even more vivid or striking than in real life. The goal is to create an enhanced version of reality, one that can feel more intense, emotional, or surreal. Hyperrealist art might focus on emphasizing textures, lighting, or expressions in a way that makes the subject feel more alive or dramatic than what we actually see.
  • Key Difference:
    The main difference is that photorealism sticks closely to what’s seen in a photo, while hyperrealism takes that level of detail and adds a layer of enhancement, creating an almost “super-real” or exaggerated version of reality. Hyperrealism often aims to evoke a stronger emotional reaction by intensifying details, while photorealism stays more grounded in technical precision and accuracy.

3D Scanners and Scanning

The scanner is a digitiser: it takes millions of precise measurements of points in space, effectively harvesting spatial and image data.
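
To make "millions of precise measurements of points in space" concrete, the sketch below treats a scan as a point cloud: a long list of XYZ positions, often with a colour sample per point, from which basic spatial data can be read straight away. This is a minimal illustration with made-up numbers, not the output format of any particular scanner.

```python
import numpy as np

# A scan is, at its core, a huge list of (x, y, z) measurements, often paired
# with an RGB colour sampled by the scanner's camera (random stand-in values here).
num_points = 1_000_000
points = np.random.rand(num_points, 3) * 10.0                          # positions in metres
colours = np.random.randint(0, 256, (num_points, 3), dtype=np.uint8)   # per-point colour

# Spatial data that can be "harvested" from the cloud straight away:
centroid = points.mean(axis=0)                                 # average position of all samples
bbox_min, bbox_max = points.min(axis=0), points.max(axis=0)    # bounding box of the scan
print("centroid:", centroid)
print("bounding box:", bbox_min, "to", bbox_max)
```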

Cameras, photography and reality capture

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 8 – Simulacra & Simulation

Simulation

 

VR

Game Worlds

Google Street View, Google Maps

The image precedes reality

Simulacra & Simulation

Simulacra and Simulation by Jean Baudrillard is a key work in postmodern philosophy, where Baudrillard examines how modern society has shifted from experiencing reality to living in a world of simulations—copies that no longer refer to anything real. His central argument is that, in the age of mass media and technology, we are increasingly surrounded by “simulacra,” which are representations or images that have no original referent or reality behind them.

Baudrillard outlines four stages of simulacra:

  1. The image is a faithful copy of a real thing.
  2. The image distorts reality, masking a basic truth.
  3. The image pretends to be a reality, but hides the absence of any original.
  4. The image has no connection to any reality, and becomes its own truth (hyperreality).

In hyperreality, simulations—like media, advertisements, and virtual worlds—are experienced as more real than reality itself, to the point where people can no longer distinguish between the two. Baudrillard argues that this shift leads to a collapse of meaning and truth, as we live in a world dominated by images that create their own version of reality.

The book critiques how media and consumer culture shape our perceptions, replacing authentic experience with manufactured representations. Baudrillard’s work challenges the idea that media simply reflects reality; instead, it creates a new reality that people accept as the truth. Ultimately, Simulacra and Simulation explores how this detachment from the real undermines our ability to critically engage with the world.

The map comes before the territory

In Simulacra and Simulation, Jean Baudrillard introduces the idea that “the map comes before the territory” as a way of illustrating how, in contemporary society, representations no longer merely reflect or mirror reality—they precede and construct it. This concept challenges the traditional idea that a map (a representation) is a tool used to depict an existing, real world (the territory). Instead, Baudrillard argues that in the postmodern era, representations, such as media images, advertisements, and digital simulations, no longer just reflect the real world; they actively shape and define how we perceive and experience reality itself.

This shift has profound implications. Baudrillard’s assertion suggests that the “map” (simulacra, or simulations) is no longer subordinate to the “territory” (the real). In fact, the map can become so dominant that it creates its own version of reality—what Baudrillard calls hyperreality. In this state, simulations are experienced as more real than the actual world, and people begin to live in a world of images and representations that define their experience, rather than a world grounded in objective, tangible reality.

Thus, the idea that “the map comes before the territory” reflects Baudrillard’s broader critique of modern society: that we are increasingly unable to distinguish between reality and the representations that shape it. This collapse of the distinction between the real and the simulated, he argues, is one of the defining features of postmodern life.

in vfx digital models and environments often take precedence over physical counterparts blurring the line between the real and virtual

The precession of simulacra

The concept of the precession of simulacra is central to Jean Baudrillard’s Simulacra and Simulation and refers to the idea that in contemporary society, representations (simulacra) no longer simply reflect or distort reality—they actively precede and generate it. In other words, simulacra do not mirror an underlying, real world; instead, they come first and shape what we understand as “reality.”

This precession occurs in stages, as Baudrillard argues. Initially, a representation is a faithful copy of something real, but over time, it becomes disconnected from its original referent, until it eventually creates its own reality. As the simulacra multiply and intensify, they no longer correspond to any external reality at all. They create a hyperreal world, where the distinction between the real and the imaginary collapses. In this state, simulations govern our experience, defining what we perceive as real.

Baudrillard uses the example of media and consumer culture to illustrate this precession. For instance, advertising, television, and social media no longer simply depict reality but actively construct our perceptions of it. In a world dominated by such representations, the images and symbols we encounter daily become more significant than the actual, lived experiences they supposedly represent. As a result, people engage more with the simulacra—the images, narratives, and ideologies—than with any objective reality.

The precession of simulacra, therefore, signifies a profound shift in how we experience the world: rather than reacting to or interpreting reality, we are increasingly immersed in a world of self-referential simulations that create the conditions for our experience and understanding of the world. In this sense, Baudrillard suggests, the simulacrum not only precedes the real, but effectively replaces it.

Elon Musk thinks we're living in the Matrix | Dazed

Simulacra and Simulacrum: Definitions

  • Simulacrum refers to an image, representation, or imitation of a person, object, or experience. The term is often associated with something that is a copy of a copy—distanced from the original and perhaps no longer representing what it once was.
  • Simulacra refers to multiple such copies or representations. In philosophical terms, particularly in the work of Jean Baudrillard, simulacra are seen as images or representations that have lost their connection to any original referent. They become self-referential and no longer correspond to a real world object or idea.

Simulacra in VFX:

In VFX, the concept of simulacra is profoundly relevant because modern digital effects often create realities that are entirely detached from the physical world. Here are a few ways simulacra manifest in VFX:

  1. Creation of Realities that Never Existed:
    With the advent of digital technologies, filmmakers can now create entire worlds that have no physical counterparts—such as alien planets, fantasy landscapes, or abstract virtual environments. These environments and creatures are simulacra, as they mimic the appearance of real objects or places but have no original in the real world. They exist only within the context of the film or virtual space.
  2. Hyper-Realistic Effects:
    Modern VFX often strives for hyper-realism—creating visuals that look indistinguishable from reality, such as photorealistic CGI characters or environments. However, these creations are simulacra in the Baudrillardian sense. Despite their realistic appearance, they are constructed from algorithms, pixels, and synthetic data, not real-life experiences. Thus, they do not “reference” any single real-world object but instead generate a new, self-contained reality.
  3. Endless Reproduction of Images:
    Digital effects allow for the endless reproduction, alteration, and recombination of images. The ability to replicate and manipulate simulations ad infinitum (such as reanimating actors, creating digital doubles, or resurrecting historical figures) means that VFX can construct realities in which there is no “original” to be reproduced. This is akin to Baudrillard’s idea that, in a world filled with simulacra, there is no “original” or “truth” to return to.

Simulacrum in VFX

On the other hand, the concept of simulacrum as a specific instance of representation also plays a crucial role in VFX:

  1. Digital Characters as Simulacra of Real People:
    A common VFX practice is creating digital doubles of real actors or characters (e.g., in action films, animated movies, or even using actors’ likenesses in CGI form). These digital representations, while resembling real people, are not real—they exist as simulations, representations, or “simulacrum.” For instance, the resurrection of a deceased actor through VFX (as in the case of CGI recreations in Star Wars or The Fast and the Furious) produces simulacra that appear to be real but are mere representations without the authentic person.
  2. Virtual Actors & Performance Capture:
    Motion capture (or performance capture) allows actors to “become” characters that are entirely virtual (such as in Avatar or The Lord of the Rings). The resulting digital version of the character is a simulacrum, a copy or imitation of a human actor that may bear little to no resemblance to the original once fully transformed into a 3D model.
  3. The Hyperreality of VFX in Film:
    Films that blend VFX with live-action footage can create a world where the boundary between what is real and what is a simulacrum becomes increasingly blurred. Think of films like The Matrix, where digital elements blend seamlessly with the physical world. These images and sequences do not directly reference the real world but create a reality of their own that becomes more “real” to the audience than the reality outside the screen.
  4. Simulacrum as Identity Construction:
    In the digital era, where VFX can reconstruct human identities (through de-aging technology, for example, or entirely fabricated characters), the idea of simulacrum extends to questions of identity. Are these representations of actors or characters “authentic,” or are they simply digital avatars, simulacra, that serve specific narrative purposes? The audience is presented with versions of people and places that are “real” in a digital sense but remain simulations rather than reflections of an original reality.

The Impact of Simulacra on the Viewer:

The implications of simulacra in VFX go beyond technical considerations. Philosophically, they point to the way we, as viewers, interact with and interpret media:

  1. Blurred Boundaries Between Reality and Fiction:
    As VFX creates increasingly realistic simulations, the distinction between the real and the represented becomes unclear. In films like Inception or Ready Player One, the audience navigates through layers of virtual and real experiences, with VFX playing a key role in creating the simulacra that form these layers.
  2. Doubt About What Is Real:
    Baudrillard suggests that in a world dominated by simulacra, we begin to lose our sense of what is authentic. In VFX-heavy films, this can lead to a situation where we, as viewers, may question the authenticity of what we see on-screen. Are we watching a real event captured on film, or are we observing a digital illusion? As visual effects grow more sophisticated, the distinction becomes increasingly difficult to discern.

Key examples of simulacra and simulacrum in digital world & VFX

The concepts of simulacra and simulacrum in VFX can be observed in various modern digital technologies and visual media. Here’s how these ideas manifest in specific examples like Virtual Reality (VR), Google Maps, and other digital platforms:

1. Virtual Reality (VR):

  • Simulacra in VR: Virtual reality experiences, such as those created for immersive video games or training simulations, are a direct example of simulacra. In VR, users can navigate through entire digital environments that mimic the real world (or fantasy worlds) but are purely synthetic. These environments are simulacra because they represent a “reality” that exists only within the virtual space. They might feel real to the user, but they are not connected to any physical location or tangible object.
  • Simulacrum in VR: When VR simulations create avatars or entire worlds based on real-world objects or places, these become simulacrum. For example, VR applications might simulate famous landmarks, cities, or even people, but the version you experience in VR is an imitation or a copy of the original, not the original itself. The avatar you create in VR represents a version of yourself, but it isn’t your physical self—it’s a simulation of your identity.

2. Google Maps (and Street View):

  • Simulacra in Google Maps: Google Maps, particularly its Street View feature, offers users a virtual version of real-world places. Although these locations are rooted in actual geography, the images presented are digital recreations of physical spaces, filtered through cameras and algorithms to create a virtual map. This is a simulacrum because the map itself doesn’t show the “real” world—it shows a digital copy, one step removed from the reality it represents. This kind of representation can create a hyperreal experience where users may feel as though they are physically present in the place, despite being miles away.
  • Simulacrum in Google Maps: When you navigate using Google Maps, the directions and imagery are representations of the real world—roads, buildings, and terrain—but they are designed to fit a digital context. In the case of Street View, Google uses photographic data to simulate real places through a digital interface, and the resulting experience is a simulacrum: a digital stand-in for a real-world journey or location. As users interact with these digital maps, they might lose touch with the actual reality of the place and engage instead with the simulated version presented to them.

3. Digital Avatars in Gaming and Social Media:

  • Simulacra in Gaming: In many modern video games, players control avatars or characters that may be highly detailed simulations of people or creatures. Games like The Sims, Second Life, and Fortnite are all examples of environments where players interact with and create digital versions of themselves or others. These avatars are simulacra—they represent players, but they do not fully capture the nuances of their real-life counterparts. The characters in these virtual worlds are copies or simulations, creating a separate “reality” from the physical world.
  • Simulacrum in Gaming: Many VR games and augmented reality experiences, such as Pokemon GO, offer experiences where players encounter virtual objects or characters placed in the real world. These digital characters or elements are simulacrum, digital versions of real-world objects or creatures, interacting with the user as if they were part of the physical world. However, these digital elements do not actually exist in the real world—they are simulations created through the game’s technology.

4. Deepfake Technology:

  • Simulacra in Deepfakes: Deepfake technology uses AI to create hyper-realistic videos where people’s faces are digitally swapped or manipulated to portray them doing things they never actually did. For example, a deepfake might show a famous actor delivering a speech they never actually gave or acting in a scene they were never part of. These deepfake images and videos are simulacra—they resemble the original person but are fabricated, often without any direct connection to the actual event or person. The “original” is no longer the reference point; what is presented is an entirely fabricated digital image or simulation of that person.
  • Simulacrum in Deepfakes: When a deepfake is created to represent a specific person, the resulting video or image is a simulacrum. The technology mimics the person’s appearance and speech, but it’s not the actual individual; it’s a synthetic copy. In this case, the simulacrum can be indistinguishable from reality to the viewer, challenging the boundaries between real and constructed images.

5. Augmented Reality (AR):

  • Simulacra in AR: Augmented reality, which overlays digital information or objects onto the real world through devices like smartphones, smart glasses, or AR headsets, also creates simulacra. For example, in an AR game like Pokémon GO, virtual creatures appear superimposed on the real environment through the screen of your phone. These Pokémon are simulacra—while they may appear to be real within the context of the game, they have no existence outside the digital interface.
  • Simulacrum in AR: AR apps that allow you to place virtual objects in your real surroundings (like seeing how a piece of furniture might look in your living room through an app) are presenting a simulacrum. The item is a digital representation of a physical object, and while it looks real in the context of the augmented space, it’s still a simulation, a version of the object that doesn’t physically exist in your space.

6. Synthetic Media and CGI:

  • Simulacra in CGI (Computer-Generated Imagery): In blockbuster films like The Avengers or Avatar, VFX teams use CGI to create entire characters or environments that are digital representations of real-world phenomena or entirely fantastical creations. These CGI characters, like the Hulk or Na’vi from Avatar, are simulacra—they imitate or resemble living beings but are entirely fabricated through digital technology.
  • Simulacrum in CGI: In some films, VFX teams use digital simulations to create replicas of real actors or places (such as the de-aging of actors in movies like The Irishman). These digital doubles or representations are simulacrum because, though they closely resemble the original actors or settings, they are not physically real but digital imitations, constructed for narrative purposes.

In each of these examples, simulacra and simulacrum are at play as digital technologies like VR, Google Maps, AR, deepfakes, and CGI create realities and objects that simulate real-world counterparts. However, they often exist as copies, detached from the physical originals they reference. These digital creations not only blur the lines between the real and the virtual but also force us to reconsider what authenticity and reality mean in an increasingly mediated world. Through VFX and related technologies, we are presented with a hyperreal experience—one that feels real but is, in fact, entirely constructed, constantly reshaping our understanding of the “real.”

Conclusion:

In the realm of VFX, simulacra and simulacrum highlight the shifting relationship between reality and representation. Filmmakers use digital effects to create worlds, characters, and experiences that are increasingly detached from the real world. These visual creations are simulacra, and the digital compositing techniques used in VFX are simulacrum—representations and imitations that have no “original” to refer back to, existing purely as part of a constructed, hyperreal world. This transformation challenges our perceptions of what is real and invites us to reconsider the very nature of authenticity in a media-saturated culture.

Redefining the Fabric of Reality: The Growing Evidence for a Simulated Universe

Hyperreality Journal

List your personal encounters with simulacra and hyperreality.

Write a short journal entry in your digital sketchbook about a time you felt you encountered hyperreality. This could be through a video game, a movie, an advertisement, or an experience that felt more real than real.

Try to use Baudrillard’s terminology – simulation, simulacrum, and hyperreal – to describe the experience.

Discussion – we will share experiences and reflect on how VFX might play a role in crafting similar hyperreal moments.

The Four Phases of the Image

  1. The image is a reflection of a profound reality.
  2. The image masks and distorts a profound reality.
  3. The image masks the absence of a profound reality.
  4. The image has no relation to any reality whatsoever; it is a pure simulacrum.

Disneyland Example:

Disneyland is a theme park designed to present a simulated reality.

The fake castles in Disneyland appear to resemble real castles, suggesting a connection to historical or architectural forms.

However, the castles, streets, and all the artificially created environments within Disneyland are designed to look original and authentic, yet they are not true representations of any real-world counterparts.

In the end, Disneyland itself becomes a hyperreal space, where the distinction between reality and simulation blurs entirely.

Illustrate Baudrillard’s concept of the four phases of the image and hyperreality:

  •  Find a famous painting [1st phase]

  • Find a photograph of that painting [2nd phase]

  • Find a digitally altered version of that painting [3rd phase]

  • Find a fully computer generated version of that painting with no real world referent [4th phase]

Lady with an Ermine

Photographs of Lady with an Ermine

Digitally altered versions of Lady with an Ermine

Fully computer generated version of Lady with an Ermine – Lady with Lego

VFX shot

Inception shifting city

Simulations as Real Entities:

Simulations are not mere representations of reality; they are real in themselves.

They exist as entities rather than as reflections of something else, and like any process of production, simulations can contribute new elements to the world rather than simply representing the pre-existing.

Simulations can create symptoms of illness—symptoms that, in themselves, are perceived as real. This leads to the phenomenon of hyperreality, where the distinction between what is real and what is simulated becomes increasingly blurred.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Week 9 – Virtual Filmmaking

What is Virtual Production?

Virtual production is a modern filmmaking technique that blends physical and digital worlds to create realistic environments during filming. Instead of relying solely on traditional green screens or post-production visual effects, virtual production enables filmmakers to capture live-action scenes with virtual environments integrated in real time. This means that actors can interact with digital backgrounds and settings on set while shooting, allowing for a more immersive and dynamic production process.

At its core, virtual production is about using technology—such as real-time rendering engines (like Unreal Engine), LED screens, and motion tracking—to create virtual worlds that appear to be part of the physical set. This approach makes it possible to shoot scenes in stunning, computer-generated locations without needing to physically travel to those locations or spend extensive time in post-production to add visual effects.

So, virtual production is not just a collection of tools or effects; it’s a new way of making films that integrates these technologies into the shooting process itself, allowing filmmakers to see and adjust their virtual environments in real time.

ArtStation - Avatar: The Last Airbender Virtual Production

What are the benefits of Virtual Production?

Pros of Virtual Production:

  • Enhanced Creative Control: Virtual production allows filmmakers to control the environment in real time, offering immediate feedback on how digital elements interact with live action. Directors can adjust lighting, camera angles, or even the entire background without waiting for post-production. This leads to faster decision-making and more creative flexibility during filming.

  • Immersive Actor Experience: Actors benefit from performing in real-time, fully immersive environments. Unlike traditional green screen setups where actors imagine the surroundings, virtual production places them in interactive, digital worlds, making their performances feel more natural and connected to the scene.

  • Time and Cost Efficiency in the Long Term: While the initial setup for virtual production can be expensive, it often leads to cost savings over time. It reduces the need for extensive on-location shooting, complex set builds, and expensive visual effects in post-production. The ability to film in virtual environments without leaving the studio also cuts down on travel and logistical costs.

  • Seamless Integration of Real and Digital: The real-time rendering technology makes blending live-action and digital environments smoother. It eliminates the discrepancies that often arise when trying to match green-screen footage with digital elements, as everything is captured simultaneously.

  • Sustainability: Virtual production reduces the environmental impact of filming. Since it cuts down on location shoots and large-scale set constructions, it minimizes travel, resource usage, and waste associated with traditional filming.

Cons of Virtual Production:

  • High Initial Setup Costs
    Setting up a virtual production system, especially with LED stages, motion tracking, and real-time rendering software, requires a significant investment. The cost of hardware, specialized equipment, and skilled professionals can be prohibitive for smaller productions or independent filmmakers.
  • Technical Complexity
    Virtual production involves a steep learning curve and a specialized skill set. The technology is still evolving, and the complexity of integrating live action with digital assets in real time can create challenges. This may lead to technical glitches or require a high level of expertise to achieve optimal results.
  • Limited to Certain Genres
    While virtual production is highly effective for sci-fi, fantasy, or action films that require elaborate environments, it may not be as beneficial for simpler, more intimate dramas. For grounded, natural settings, traditional filming methods may still be more efficient and effective.
  • Increased Post-Production Demands
    Although virtual production reduces some post-production work, it can create additional challenges in other areas, such as fine-tuning digital environments and ensuring perfect integration of live-action with virtual elements. Post-production teams still need to handle elements like digital effects or compositing, which can add to the workload.

Virtual production vs Green Screen

While virtual production offers exciting possibilities, traditional green screen remains a cheaper and more flexible option for many filmmakers. Green screen is inexpensive to set up and can be used in nearly any location, making it highly adaptable. It requires fewer resources, with just a green backdrop, some lighting, and compositing software. This makes green screen an attractive choice for smaller productions or those with limited budgets.

The flexibility of green screen also allows filmmakers to shoot in any environment, and later add in any digital background or effect during post-production. It’s a tried-and-true method for creating fantastical or complex environments without needing specialized real-time rendering technology.

However, the tradeoff is that green screen often requires more time and effort in post-production to match lighting, shadows, and camera angles with the virtual elements. Unlike virtual production, where these adjustments can be made during the shoot, green screen footage may look less seamless and natural if not carefully executed in post-production.
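
As a rough illustration of the compositing step described above, a keyer marks pixels that are "green enough" and swaps in the digital background; matching lighting, shadows, and camera angles is the harder work the paragraph refers to. The thresholds and toy frames below are assumptions for illustration only, far simpler than a production keyer.

```python
import numpy as np

def simple_green_key(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Very crude chroma key: wherever a pixel is strongly green, swap in the
    background plate. Real keyers handle spill, soft edges and motion blur far better."""
    r, g, b = frame[..., 0].astype(int), frame[..., 1].astype(int), frame[..., 2].astype(int)
    is_green = (g > 150) & (g > r + 40) & (g > b + 40)   # illustrative thresholds
    out = frame.copy()
    out[is_green] = background[is_green]
    return out

# Toy 4x4 "frames": a green-screen plate and a digital background of the same size.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[...] = (20, 200, 30)                                       # mostly green backdrop
frame[1, 1] = (180, 160, 150)                                    # one "actor" pixel
background = np.full((4, 4, 3), (90, 120, 200), dtype=np.uint8)  # sky-ish digital plate

composite = simple_green_key(frame, background)
print(composite[1, 1], "stays; the green pixels are replaced by the background")
```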

Virtual Film Production Pipeline

Virtual Production Pipeline Vs Traditional Film Production Pipeline

Previsualization (Previs)

Pitch Visualization (Pitchvis)

Technical Visualization (Techvis)

Stunt Visualization (Stuntvis)

Post-Visualization (Postvis)

In-Camera VFX (ICVFX)

 

Traditional Film Production Pipeline

Elements of Virtual Production:

  • Dynamic Backdrops: LED screens arranged in large volumes display real-time rendered environments from Unreal Engine.

  • Realistic Lighting: LED screens provide realistic lighting and reflections on actors and sets.

  • Seamless Integration: Combines live-action and virtual elements, creating a seamless blend between physical and digital worlds.

  • Immediate Adjustments: Allows for real-time changes to the virtual environment, such as time of day or weather, without extensive post-production.

  • Motion Tracking: Ensures the virtual background moves in sync with the camera, maintaining correct perspective and scale (see the sketch after this list).

  • Creative Possibilities: Opens up new creative opportunities, making complex and visually stunning scenes more accessible and cost-effective.
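
As a conceptual sketch of the motion-tracking element flagged above: the tracked pose of the physical camera is copied onto a virtual camera every frame, so the LED-wall background is rendered from the matching viewpoint. The class and function names here are invented for illustration and are not Unreal Engine APIs.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple[float, float, float]   # metres, in stage coordinates
    rotation: tuple[float, float, float]   # pitch, yaw, roll in degrees

def sync_virtual_camera(tracked_pose: CameraPose, virtual_camera: dict) -> dict:
    """Copy the physical camera's tracked pose onto the virtual camera each frame,
    so the background on the LED wall keeps the correct perspective and scale."""
    virtual_camera["position"] = tracked_pose.position
    virtual_camera["rotation"] = tracked_pose.rotation
    return virtual_camera

# One frame of the loop: the tracking system reports where the real camera is,
# and the render engine draws the environment from that same viewpoint.
frame_pose = CameraPose(position=(1.2, 0.4, 1.7), rotation=(0.0, 35.0, 0.0))
virtual_cam = sync_virtual_camera(frame_pose, {"fov": 40.0})
print(virtual_cam)
```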

LED Screens & Unreal Engine

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Assignment 1

Written Post 1 – What do you think Dr James Fox means by his phrase ‘The Age of the Image’? 

In Dr. James Fox’s documentary Age of the Image: A New Reality, he introduces the concept of “The Age of the Image,” suggesting that modern-day society has become reliant on and addicted to images. 

Fox points out that every historical era is defined by unique characteristics: the 18th century is recognized as the age of philosophy, and the 19th century is the age of the novel. In this context, he claims our current age is dominated by images. 

While images have existed since ancient times, the past century has seen a massive increase in their quantity and accessibility due to advancements in photography. This shift has transformed how we communicate, express ourselves, and make sense of the world. 

Fox emphasizes that we now document nearly every aspect of our lives through images, significant or not. In the past, photography preserved cherished memories, but with modern technology, it has shifted toward validation and proof. The introduction of the Brownie camera in 1900 revolutionized photography, making it accessible to the masses and turning stiff portraits into snapshots of genuine emotions and everyday moments. In our current age, the ease of capturing images with smartphones has led to photographs holding less meaning. Instead of serving as frozen moments of the past, they often become evidence of experiences. 

In conclusion, Dr. James Fox’s phrase “The Age of the Image” describes a transformative period where images have become central to our understanding of the world. Their accessibility and manipulation have reshaped how we capture, perceive, and share experiences. While images remain vital for expression, they also challenge our understanding of reality and meaning. 

Fox, J. (2021). The Age of the Image: Redefining Literacy in a World of Screens. Harper, New York. 

Written Post 2 – The Photographic Truth claim: What is it and does it matter? 

The “Photographic Truth Claim” is the belief that photography captures reality with high objectivity, distinguishing it from other art forms like drawing or painting, which are more open to interpretation. This idea stems from the way cameras work: they directly record light, giving audiences the impression of an unbiased depiction of the real world. 

 In Tom Gunning’s discussion of the “truth claim,” he brings up the semiotic theory, describing photographs as “indexical signs”—images made through a physical process linked to the objects they depict. This indexicality makes photography seem objective. However, Gunning challenges this view by pointing out that photographs can be manipulated or staged, undermining their reputation as pure reflections of reality.  

Technical choices, like the type of lens or film used and digital editing tools like Photoshop, allow photographers to alter images, sometimes making them more about interpretation than a clear depiction of reality. 

Gunning also highlights that photography has always been an art form that allowed for high levels of creativity. Since its early days, photographers have used techniques like staging scenes, choosing specific angles, and adjusting lighting, showing that photography has never been purely about documenting reality. 

In conclusion, while people have long seen photography as an objective medium, this belief is increasingly challenged. Gunning’s argument shows that photography can be manipulated and crafted to convey emotion or influence rather than just factual accuracy. Its use in art, advertising, and persuasion illustrates that it has always been more than just a straightforward record of reality. 

Gunning, T. (2017) ‘Plenary Session II. Digital Aesthetics: What’s the Point of an Index? or, Faking Photographs’. Nordicom Review, Vol. 25 (Issue 1-2), pp. 39-49. 

Written Post 3 – Fakes or composites? 

The deer in the shot was created entirely in 3D, with meticulous attention to realistic fur grooming and detailed textures. It was composited into a natural environment, with lighting carefully matched to integrate the digital deer into the live-action setting seamlessly. 

Framestore enhanced realism by using depth of field to replicate how an actual camera lens would focus, subtly blurring the background. Combined with lifelike lighting and shadows, the digital model felt naturally embedded in the scene. 

The composition feels balanced, placing the deer in a way that draws attention while maintaining an organic flow. By aligning lighting, perspective, and textures, the composite achieves a seamless “impression of reality,” making the deer appear as though it truly exists in the environment. 

In The Crown, the shot combines live-action footage and CGI elements to create a seamless and visually stunning composition. The live-action portions include the Queen riding horseback alongside her entourage, while the CGI adds depth and grandeur with digitally extended backgrounds, a replaced sky, and a larger, more vibrant crowd. Digital street extensions further expand the setting, making it feel more open and majestic. 

The magic of this composite lies in how naturally these elements blend. CGI enhances the live-action footage, extending the environment while ensuring lighting and shadows are perfectly matched. The sky replacement sets the mood, while the digitally enhanced crowd adds energy and scale. Aligning depth and perspective ensures every element feels cohesive. 

Written Post 4 – Photorealism in VFX 

In VFX, photorealism can be achieved using two distinct types and techniques, depending on whether a scene is entirely CG or live-action footage blended with CG elements. Composites, where digital elements are layered into live-action footage, achieve realism more easily and convincingly because they use the actual footage as a reference. By carefully matching the filmed environment’s lights, shadows, and textures, VFX artists can seamlessly integrate CG elements into live-action shots, making the composite feel believable to the audience. This approach supports the story’s authenticity, aligning digital enhancements with the “narrative truth” of the world, as discussed by Martin Lister in New Media: A Critical Introduction (Lister et al., 2018).  

In contrast, fully CG movies sometimes risk looking less realistic because every aspect is digitally created, lacking natural imperfections and subtle cues found in real footage. This can sometimes make CG scenes appear too perfect or stylized, distracting viewers from the story. However, this approach provides a high level of creative freedom. In films like The Lion King (2019), artists had complete freedom to design every element, from atmospheric lighting to character expressions, crafting a unique world that would be hard to find in reality. As Barbara Flueckiger notes in Photorealism, Nostalgia, and Style, fully CG photorealism can evoke nostalgia and emotional depth by imitating classic cinematic textures, enriching the digital narrative (North et al., 2015).  

Ultimately, while composites often effortlessly achieve realism, fully CG scenes open up new possibilities for stylized storytelling and imaginative world-building that composites can’t match. 

Lister, M., Dovey, J., Giddings, S., Grant, I., and Kelly, K., 2018. New Media: A Critical Introduction. 2nd ed. London: Routledge, pp. 137–138. 

Flueckiger, B. (2015) ‘Photorealism, Nostalgia and Style: Material Properties of Film in Digital Effects’, in North et al., Special Effects: New Histories/Theories/Contexts. London: Bloomsbury, pp. 78-98.

Written Post 5 – Compare keyframe animation with motion capture 

Motion Capture and Key Frame Animation are two fundamental tools of animation, each serving specific creative purposes. MoCap captures the nuances of human movement and expression, perfect for realistic portrayals, while Key Frame Animation excels in animating stylized or non-human characters with imaginative, exaggerated movements.   

A great example of MoCap’s effectiveness can be seen in Planet of the Apes, where characters like Caesar convey complex emotions, such as anger and empathy. Capturing these subtle details in the actor’s facial expressions and body movements enhances our ability to emotionally connect with non-human characters, which makes them feel more authentic and relatable. 

However, for non-human creatures, MoCap may not be the best option, as shown in Mowgli, where using MoCap for animal characters resulted in unnatural expressions – a phenomenon known as the “uncanny valley.” In these cases, Key Frame Animation is often more effective, as it allows animators to exaggerate movements and create more expressive, natural behaviors, avoiding the awkwardness of imposing human motion onto animals.  

On the other hand, The Hobbit is an interesting example of MoCap being used effectively on a non-human character, where the dragon Smaug was brought to life through combining both techniques. Motion capture brought Smaug’s face to life, allowing every subtle expression to shine through and giving the dragon a unique personality, whilst keyframe animation grounded his movements in realistic muscle dynamics, drawing on inspiration from animals like birds and lizards. This combination ensured that Smaug’s expressions portrayed intense emotions, while his movements stayed true to a creature that could believably exist.

Written Post 6 – Reality Capture Case Study 

In response to the ongoing conflict in Ukraine, advanced reality capture technologies, like photogrammetry and LIDAR, have become crucial tools for preserving the country’s cultural heritage. These technologies enable the creation of precise digital replicas of historic buildings and landmarks, allowing for their preservation in virtual form, even if they are physically damaged or destroyed.  

LIDAR emits laser pulses that bounce back from surfaces, allowing it to measure distances and capture millions of data points to create an accurate 3D model of an object or site. This technology excels at documenting complex details, such as architectural features, that may be difficult to capture with traditional methods. When combined with photogrammetry (using photographs to add texture to the 3D model), LIDAR can generate a complete, realistic digital representation of the scanned environment. These 3D models are then stored digitally, making them accessible for future restoration or historical research.  
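
To make the time-of-flight principle explicit: since each pulse travels to the surface and back, the distance is the speed of light multiplied by the round-trip time, divided by two. The sketch below uses invented return times purely to illustrate that calculation; it is not any specific scanner's processing.

```python
# Time-of-flight: a LIDAR unit times how long each laser pulse takes to return,
# then converts that round trip into a distance to the surface it bounced off.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pulse_to_distance(round_trip_seconds: float) -> float:
    """Distance to the surface is half the round-trip path length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Three made-up return times (in nanoseconds) from one sweep:
for t_ns in (33.0, 66.7, 120.0):
    d = pulse_to_distance(t_ns * 1e-9)
    print(f"return after {t_ns:5.1f} ns  ->  surface roughly {d:5.2f} m away")
```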

Examples of this technology are the preservation efforts for Ukrainian landmarks like Kyiv Pechersk Lavra and St. Sophia Cathedral. These sites, which face the threat of destruction due to the ongoing war, have been digitally documented by organizations like CyArk and the Ukrainian Cultural Heritage Preservation Fund. The digital models not only help safeguard the sites’ historical value but also provide a tool for reconstruction if needed in the future.  

Despite challenges like the high cost of LIDAR equipment, the impact of this technology is profound. Even if physical sites are lost, their cultural significance can still be preserved digitally for future generations. 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Assignment 2 – Essay & Presentation

New Trends of Capture and Real-Time Filmmaking lead us to consider whether the traditional approaches to 3D Modelling, Animation and Rendering are still needed. Discuss.
This question evokes the words of Paul Delaroche in 1840, who upon seeing his first photograph supposedly claimed that from that moment on painting, as a medium, was dead.
Can we say the same for 3D modelling, animation and or rendering? What is the impact of these new technological trends?
Think about how: New and established trends of motion capture, reality capture and virtual filmmaking are impacting how 3D photoreal models are produced, characters are animated, and rendering is constructed, and indeed how we both practically work with and view these areas through a theoretical lens.
With this essay you could choose an approach by focusing on one of the following relationships:
  • Keyframe animation and motion capture.

  • 3D scanning and modelling.

  • Rendering and virtual filmmaking.

In the past decade, rapid technological advancements have significantly transformed digital visual production, especially in the realms of 3D modeling, animation, and rendering. Techniques such as motion capture, 3D scanning, and real-time rendering have introduced a new era of efficiency and realism in visual effects. This evolution raises a crucial question: Are traditional methods of 3D modeling, keyframe animation, and offline rendering still relevant, or are they becoming outdated? By analyzing the influence of these emerging technologies on animation, modeling, and rendering, this essay will explore the changing role of traditional practices in a time of extensive technological development. 

Much like Paul Delaroche’s famous statement in 1840—”from this day on, painting is dead”—made after observing the rise of photography, the rapid advancements in digital visual technologies provoke similar debates. While Delaroche’s words were an exaggeration, they highlight how new technologies can challenge established practices. 

Tom Gunning’s concept of the “cinema of attractions” provides a compelling parallel here. Gunning’s framework underscores how early cinema’s appeal lay not in narrative storytelling but in the sheer wonder of technological novelty. Similarly, modern innovations in digital production often captivate audiences with their technological prowess, prompting creators to rethink their artistic approaches. 

The rise of technologies such as motion capture, 3D scanning, and real-time rendering prompts similar discussions regarding the future of traditional 3D modeling, animation, and rendering techniques, which have long been integral to film, media, and gaming. However, just as photography did not render painting obsolete, these advanced technologies are unlikely to replace traditional methods. Instead, they complement established techniques by enhancing creative flexibility and realism. Furthermore, they streamline workflows, enabling artists to concentrate more on the creative aspects of their work rather than on time-consuming technical tasks. 

Keyframe animation has historically been an essential part of digital animation. In this approach, animators create keyframes to define important moments in a sequence, with software interpolating the frames in between to produce smooth motion. While this technique allows for precise control over the animation, it can be time-consuming, particularly for long, complex sequences. 
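
The sketch below illustrates that in-betweening under the simplest assumption, linear interpolation between two keyed values; production tools also offer spline and ease curves, and the frame numbers and values here are invented purely for illustration.

```python
def interpolate(key_a: tuple[int, float], key_b: tuple[int, float], frame: int) -> float:
    """Linearly blend between two keyframes (frame, value) for an in-between frame."""
    (frame_a, value_a), (frame_b, value_b) = key_a, key_b
    t = (frame - frame_a) / (frame_b - frame_a)   # 0.0 at key A, 1.0 at key B
    return value_a + t * (value_b - value_a)

# Two keyframes an animator might set for, say, an arm's rotation in degrees:
key_a, key_b = (1, 0.0), (25, 90.0)

# The software fills in every frame in between ("tweening"):
for frame in (1, 7, 13, 19, 25):
    print(frame, round(interpolate(key_a, key_b, frame), 1))
```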

Motion capture, on the other hand, has significantly transformed the animation process. By recording the movements of live actors and converting them into digital data, mocap allows animators to capture highly realistic representations of movement, including intricate details such as shifts in body weight and facial expressions. A well-known example of this technique is The Lord of the Rings trilogy (2001–2003), where mocap was used to bring the character of Gollum to life through Andy Serkis’s detailed performance. 

However, despite its advantages in enhancing realism and efficiency, mocap does not entirely replace traditional keyframe animation. While mocap effectively captures authentic movements, it may lack the artistic stylization and exaggerated gestures that characterize conventional animation. For instance, live-action adaptations such as The Lion King and The Jungle Book have been criticized for their perceived lack of expressiveness compared to their animated predecessors, often resulting in an uncanny visual effect. 

Consequently, many contemporary productions have adopted a hybrid approach, combining mocap with keyframe animation. In this method, mocap is used to capture fundamental movements and realism, while keyframe animation refines, exaggerates, and adds an artistic touch to the final product. This integration leverages the strengths of both techniques, ensuring a balance between realism and creative expression and ultimately enhancing the overall quality of animated works. 
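
A hypothetical sketch of this layering idea is shown below: a captured mocap curve is combined with an additive, animator-authored offset. The curve values and the channel they represent are invented purely for illustration and do not reflect any particular studio pipeline.

```python
# Hypothetical sketch of layering keyframed refinement on top of mocap data.
# The mocap curve and the additive offsets below are made-up illustrative values.

mocap_curve = [10.0, 12.5, 15.2, 18.9, 22.1, 19.0]  # captured rotation per frame

# Animator-authored additive layer: exaggerate the middle of the motion,
# leave the start and end untouched.
additive_layer = [0.0, 0.0, 3.0, 5.0, 3.0, 0.0]

# The final curve keeps the realism of the capture, while the keyframed layer
# pushes the pose further than the performer actually moved.
final_curve = [m + a for m, a in zip(mocap_curve, additive_layer)]
print(final_curve)
```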

Traditional 3D modeling involves creating digital objects by manipulating points in a three-dimensional space, granting artists unparalleled creative control. This approach enables the creation of imaginative and highly detailed models, with continuous adjustments and stylistic choices ranging from realistic to stylized designs. However, traditional modeling can be time-consuming, especially when producing realistic assets for films or games. 
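
At its simplest, a polygon model really is just a list of points in space plus faces that connect them, which the artist then pushes and pulls. The cube data below is a minimal, made-up illustration of that idea, not an excerpt from any real asset or modeling package.

```python
# Minimal illustration of a polygon model: points in 3D space plus faces
# that connect them; here, a one-unit cube built by hand.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top four corners
]

# Each face lists the indices of the vertices it connects (quads here).
faces = [
    (0, 1, 2, 3),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (3, 0, 4, 7),  # left
]

# An artist "models" by moving these points: e.g. taper the cube by pulling
# its upper vertices halfway toward the centre line.
vertices = [
    (0.5 + (x - 0.5) * 0.5, 0.5 + (y - 0.5) * 0.5, z) if z == 1 else (x, y, z)
    for (x, y, z) in vertices
]
print(vertices)
```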

3D scanning technologies such as LiDAR and photogrammetry have transformed this landscape by allowing filmmakers and game developers to capture highly accurate digital representations of real-world objects. These technologies use cameras and sensors to convert physical objects into digital models. For instance, in Avengers: Endgame (2019), 3D scanning was employed to create realistic digital doubles of actors for de-aging and stunt work. 
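
The geometry at the core of most scanning pipelines can be illustrated with a short, hypothetical sketch: a single depth reading from a sensor is back-projected through assumed camera intrinsics into a 3D point. The function name, the focal length, and all numerical values are illustrative assumptions; real photogrammetry and LiDAR workflows additionally solve for camera pose and fuse millions of such points into a mesh.

```python
# Simplified sketch of the geometry behind depth-based scanning: turning a
# pixel plus a measured depth into a 3D point using the camera intrinsics.
# All values below are made up; real pipelines also estimate camera pose
# and merge millions of points into a surface.

def back_project(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a depth reading into camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# One hypothetical sample: pixel (800, 450) seen 2.5 metres away by a camera
# with a 1000-pixel focal length and the principal point at the image centre.
point = back_project(u=800, v=450, depth=2.5, fx=1000, fy=1000, cx=960, cy=540)
print(point)  # a single point in the growing point cloud
```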

According to Autodesk (2023), 3D scanning is poised to redefine modeling workflows by automating the creation of high-fidelity digital assets. The article emphasizes that scanning excels at capturing intricate surface details, making it invaluable for replicating objects quickly and accurately. This efficiency is particularly relevant for projects with tight deadlines or those requiring photorealistic accuracy in secondary assets, such as props or natural elements. 

Despite the advantages of scanning, however, traditional modeling remains irreplaceable. While scanning captures surface details precisely, it struggles with internal structures, intricate designs, and the artistic nuances that manual modeling provides. Scanned models often require refinement to meet aesthetic standards, and artists may need to enhance them to align with specific visual styles or to address scanning imperfections. 

Moreover, 3D scanning is insufficient for projects requiring imaginative elements like fantasy creatures or environments that do not exist in the real world. In such cases, 3D artists design original content based on their creative vision and the conceptual designs provided by their teams. Even in photorealistic productions such as The Hobbit or Game of Thrones, where scanning is utilized for many elements, mythical creatures and other imaginative assets must still be modeled and animated by artists to captivate viewers. 

Gunning’s argument that technological innovation often fuels rather than diminishes creative experimentation is evident here. The evolution of 3D scanning expands the toolkit available to creators, enabling a dynamic interplay between capturing reality and sculpting fantasy. 

Looking ahead, 3D scanning will likely become essential for creating secondary assets like small objects and natural elements. However, as highlighted by Autodesk, the future of modeling will involve a collaborative workflow, combining the speed and precision of scanning with the artistry and creativity of traditional techniques. This synergy will enable filmmakers and game developers to achieve both efficiency and creativity, paving the way for richer stories and more immersive visual experiences. 

The final point in this debate is the impact of real-time rendering on traditional rendering, one of the most resource-intensive stages of production. Rendering involves simulating how light interacts with objects to produce realistic images. While technological advancements have introduced significant improvements, traditional and real-time rendering methods present distinct strengths and challenges that shape modern workflows. 

Traditional rendering techniques, such as ray tracing, excel in simulating light reflection and refraction to achieve unparalleled visual fidelity. However, these methods are computationally expensive, often requiring hours to render a single frame on powerful systems or render farms. This limitation has driven the exploration of alternative approaches, particularly in fast-paced industries like gaming and virtual production. 
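
To make that cost concrete, the toy sketch below performs the basic ray-sphere intersection test that an offline ray tracer runs enormous numbers of times per frame, once or more per pixel plus bounce rays for reflection and refraction. The scene values and function are illustrative assumptions, not code from any real renderer.

```python
import math

# Toy ray-sphere intersection: the basic test an offline ray tracer performs
# millions of times per frame. Scene values are arbitrary; real renderers add
# materials, lights, recursion for bounces, and acceleration structures.

def hit_sphere(ray_origin, ray_dir, center, radius):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = [o - c for o, c in zip(ray_origin, center)]
    a = sum(d * d for d in ray_dir)
    b = 2.0 * sum(o * d for o, d in zip(oc, ray_dir))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    return (-b - math.sqrt(disc)) / (2 * a)

# One primary ray fired from the camera straight down the -Z axis.
t = hit_sphere(ray_origin=(0, 0, 0), ray_dir=(0, 0, -1), center=(0, 0, -5), radius=1)
print(t)  # ~4.0: the ray hits the sphere 4 units in front of the camera
```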

Real-time rendering engines, like Unreal Engine and Unity, have quickly transformed production pipelines by generating high-quality visuals at interactive frame rates. This breakthrough enables applications such as virtual filmmaking, where rendered environments replace physical sets. The Mandalorian (2019) exemplifies how real-time rendering is reshaping the industry. Because immersive virtual environments are projected onto LED screens, actors can interact directly with their surroundings during filming, significantly reducing the need for extensive post-production compositing. 
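
What separates real-time from offline rendering is, at bottom, a fixed time budget per frame. The loop below is only a schematic sketch of that constraint, with placeholder functions (update_scene, draw_frame) and an assumed 30 fps target; a real engine such as Unreal Engine or Unity implements this loop internally with far more sophistication.

```python
import time

# Schematic real-time render loop: the defining constraint is a fixed
# per-frame time budget, unlike offline rendering where a single frame
# may take hours. The two functions below are stand-in placeholders.

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS   # roughly 33 ms per frame

def update_scene(dt):
    pass  # placeholder: animation, physics, camera tracking from the stage

def draw_frame():
    pass  # placeholder: rasterise and shade the current view of the virtual set

for frame in range(3):              # a real loop runs for the whole session
    start = time.perf_counter()
    update_scene(FRAME_BUDGET)
    draw_frame()
    elapsed = time.perf_counter() - start
    # Sleep away any leftover budget so output stays locked to the target rate.
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))
```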

However, traditional techniques, such as keyframe animation, 3D modeling, and offline rendering, remained essential for specific visual effects in The Mandalorian. This integration of real-time and traditional methods highlights their complementary nature, enhancing creative possibilities while maintaining the high quality expected in modern productions. 

Virtual production pipelines vastly differ from conventional workflows. Instead of adding visual elements during post-production, much of the work is done before filming, allowing actors to see and interact with their surroundings in real time. While this approach offers unparalleled immediacy and immersion, it limits flexibility for later adjustments, as footage is not captured on a green screen and is harder to manipulate once integrated with live-action elements. 

Despite these innovations, real-time rendering does not yet match the intricate detail, complex lighting effects, and high-fidelity textures possible with traditional offline rendering. As Manovich (2001) argues, digital cinema does not replace traditional workflows but rather redefines them, a concept reflected in the coexistence of real-time and offline rendering techniques. These approaches complement each other, expanding creative options and redefining how visual effects are produced. Traditional methods remain indispensable for scenes requiring the utmost realism, while real-time rendering improves efficiency and flexibility in earlier stages of production. 

Ultimately, while real-time rendering introduces unprecedented speed and flexibility, it cannot yet fully replace traditional techniques. Instead, the two approaches complement each other, with real-time rendering driving innovation in production processes and traditional methods ensuring the highest quality in final outputs. Together, they represent a synergy that continues to push the boundaries of visual storytelling. 

The evolution of digital visual production, fueled by advancements in motion capture, 3D scanning, and real-time rendering, has transformed how we create and experience visual effects. These technologies have introduced unprecedented efficiency, realism, and creative opportunities, challenging the traditional methods of 3D modeling, keyframe animation, and offline rendering. However, as history has shown with photography and painting, technological advancements rarely entirely replace traditional techniques. 

Instead, the synergy between emerging technologies and traditional practices allows for richer, more versatile workflows. Motion capture enhances realism but relies on keyframe animation for artistic stylization. 3D scanning precisely captures the physical world but requires the creative touch of traditional modeling for imaginative designs. Similarly, real-time rendering expedites production but cannot yet match the fidelity of traditional rendering for complex visuals. 

In conclusion, the future of visual effects lies in the harmonious collaboration between tradition and innovation. As tools and techniques continue to evolve, they empower creators to achieve new heights of realism and artistry while honoring the foundational methods that shaped the industry. The question is not whether traditional techniques will survive but how they will adapt to coexist with emerging technologies, ultimately driving the craft forward and opening doors to untapped creative possibilities. 

 

References: 

 

Autodesk, 2023. A look at what 3D scanning means for the future of modeling. [online] Autodesk Inventor Blog. Available at: https://blogs.autodesk.com/inventor/look-3d-scanning-means-future-modeling/ 

 

Gunning, T., 2006. Re-Newing Old Technologies: Astonishment, Second Nature, and the Uncanny in Technology from the Previous Turn-of-the-Century. In: D. Thorburn and H. Jenkins, eds. Rethinking Media Change: The Aesthetics of Transition. Cambridge, MA: MIT Press, pp. 39-60. 

 

Manovich, L., 2001. What is Digital Cinema? [online] Available at: https://www.manovich.net 

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━