Current Trends of VFX


Week 1: What do I consider to be the current trends of VFX?

1-What is a trend?

Definition: a trend is a general direction in which something is developing or changing. It often reflects the popular interests, behaviors, or ideas within a particular time period, driven by cultural, technological, social, or economic factors. Trends can emerge in areas like fashion, technology, social media, economics, and more, shaping what becomes mainstream or widely accepted.

In today’s society, we recognize an emerging trend when certain patterns, behaviors, or ideas start gaining widespread attention and adoption across different groups. Key indicators include:

  1. Increased visibility: Repeated exposure to a concept, product, or behavior in media, social platforms, or public discourse.
  2. Rapid adoption: Growing numbers of people begin adopting the trend, often driven by influencers, celebrities, or early adopters.
  3. Cultural relevance: The trend resonates with societal values or interests, often aligning with current events, issues, or innovations.
  4. Market response: Businesses and industries respond by developing products or services that cater to the new demand.
  5. Social media engagement: Hashtags, viral content, and trending topics on platforms like Twitter, Instagram, or TikTok can signal the early stages of a trend.

2-Identifying current trends.

Current trends in VFX are pushing the boundaries of realism, immersion, and creativity across various media. Here are five examples from video games, films, TV shows, and social media:

  1. Real-time VFX in video games (e.g., Unreal Engine 5): Real-time rendering technology, like Unreal Engine 5’s “Nanite” and “Lumen,” enables ultra-realistic environments and lighting in games such as Fortnite and The Matrix Awakens. This trend brings cinema-quality visuals to interactive experiences in real time.
  2. Virtual production in films and TV shows (e.g., The Mandalorian): Shows like The Mandalorian use virtual production techniques, where LED walls display real-time rendered environments instead of traditional green screens. This allows filmmakers to shoot scenes with dynamic, interactive backgrounds, saving time and creating more realistic lighting.
  3. Deepfake technology in social media (e.g., DeepTomCruise on TikTok): Deepfakes, which use AI to superimpose faces, have gained popularity on platforms like TikTok, where creators generate realistic content using celebrity likenesses. This trend is reshaping how we view identity and authenticity in media.
  4. De-aging and digital humans in films (e.g., The Irishman): De-aging VFX, as seen in The Irishman and Captain Marvel, use complex CGI to make actors appear younger. This trend allows filmmakers to expand storytelling by using the same actors across different time periods.
  5. Augmented reality (AR) effects in social media (e.g., Snapchat and Instagram filters): AR filters have become increasingly sophisticated on platforms like Instagram and Snapchat. These filters add virtual elements to users’ faces or surroundings in real time, blurring the line between virtual and physical worlds.

Recognizing when a technique or style is becoming a trend involves several factors beyond just popularity. It’s often a combination of innovation, widespread adoption, and cultural relevance. Here’s how we can identify the emergence of a trend:

  1. Widespread Adoption: A technique or style starts being used by a growing number of creators, artists, or industries. This can be seen in movies, TV shows, video games, or social media, where the technique becomes a go-to tool for achieving specific effects or storytelling goals.
  2. Innovation and Impact: If a new technique or style pushes the boundaries of what’s possible—whether through advancements in technology, like real-time rendering or motion capture, or through creative approaches—it often gains attention. Innovation tends to spark trends as others adopt it to replicate success.
  3. Cultural Relevance: A technique or style that resonates with the current cultural moment is more likely to become a trend. For example, the use of deepfake technology taps into societal conversations around AI, identity, and authenticity, making it culturally significant and trendy.
  4. Industry Adoption: When big players in industries (like Hollywood or major gaming studios) start adopting a technique, it can signal the rise of a trend. If major films, series, or games start using similar visual techniques, it indicates that the style is gaining traction.
  5. Virality and Influence: In the social media age, when a technique or style goes viral, whether through memes, videos, or creators popularizing it, it can quickly become a trend. The speed at which something can spread due to influencers or platforms like TikTok and YouTube accelerates trend formation.

In essence, a trend emerges from a mix of popularity, innovation, and relevance, with the potential to reshape practices across creative fields.

3-Examples of specific VFX trends:
1-De-aging:
The character of “Magneto” de-aged using cutting-edge VFX technology in the image on the left vs. his actual appearance in the image on the right.

The rise of de-aging in film is due to advancements in VFX technology, allowing more realistic results. It helps maintain continuity in long-running franchises, taps into nostalgia, and keeps big-name actors in iconic roles. It also offers creative flexibility for storytelling across different time periods without needing to recast characters.

2-Virtual Production:

In the film “Avengers: Endgame,” the film crew uses a green-screen-filled room and props to place the characters in a completely fictional world and make some of them appear airborne.

Virtual VFX productions are becoming more popular due to advancements in real-time rendering technology, such as Unreal Engine, which allow filmmakers to create digital environments live on set. This reduces costs and time spent on location shooting and post-production. It also provides greater creative control, as directors can visualize and adjust scenes in real-time. Additionally, virtual production offers safer, more efficient alternatives, especially during global disruptions like the COVID-19 pandemic.

3-Time-warp filter
Popular social media platforms (e.g. TikTok) include a wide variety of video filters, such as the “Time-warp” filter seen in the video above.

Social media filters are gaining popularity due to their ability to enhance self-expression and make content more entertaining. They allow users to creatively alter their appearance and engage in playful effects, fostering a sense of connection with trends and communities. Additionally, the accessibility of filters enables anyone to create visually appealing content without advanced skills, while brands leverage them for interactive marketing. The growth of augmented reality technology also contributes to the appeal, offering innovative and immersive experiences for users.

Investigating a recent interview on BBC Radio 4’s “Today” programme with Adrian Wootton, Chief Executive of Film London and the British Film Commission

Adrian Wootton, the CEO of Film London and the British Film Commission, recently appeared on BBC Radio 4’s *Today* program to talk about the UK’s position as a global leader in the film and TV industry. He discussed how government investment in studios and production facilities has made the UK an attractive location for international projects. Wootton highlighted the strong growth in regional film production hubs across the UK, emphasizing the crucial role they play in supporting local economies and fostering talent.

He also talked about the ongoing collaboration between the government and industry to keep growing the sector and securing the UK’s place as a top choice for high-end TV and film productions.

This is the source for the audio file above.

Today (2020) BBC Sounds, 28 September, 06:00. Available at: https://www.bbc.co.uk/sounds/play/m000my6p

The Hockney – Falco theory

Proposed by artist David Hockney and physicist Charles Falco, the theory suggests that some of the Old Masters, such as Jan van Eyck and Caravaggio, may have used optical tools like mirrors or lenses to achieve the extraordinary realism in their paintings. According to this theory, artists during the Renaissance and earlier could have used devices like the camera obscura or concave mirrors to project their subjects onto canvases, which they would then trace or paint over, allowing them to capture intricate details with a high degree of accuracy.

Hockney began developing this theory after noticing a dramatic leap in realism in the art of that era, particularly in the way light, texture, and perspective were handled. He believed that the precision seen in certain paintings, especially when it came to things like reflections or facial details, seemed too advanced to have been done entirely by hand. Physicist Charles Falco supported this with his expertise in optics, explaining how artists might have used early optical devices to assist them in their work.

While some critics argue that this theory underestimates the artists’ natural skill and training, others see it as a recognition of how these painters might have innovated with technology to push the limits of what was possible in their time. The theory has sparked an ongoing debate about the role of technology in the evolution of art.

David Hockney

Charles Falco

Smith, A.M., 2005. Reflections on the Hockney-Falco thesis: Optical theory and artistic practice in the fifteenth and sixteenth centuries. Early Science and Medicine, 10(2), pp.163-193. Available at: https://brill.com/view/journals/esm/10/2/article-p163_3.xml [Accessed 1 October 2024].

The Camera Obscura

The camera obscura, which means “dark room” in Latin, is an optical device that projects an image of its surroundings onto a screen. It works by allowing light to pass through a small hole into a darkened space, where it then casts an inverted image of the external scene onto the opposite wall or surface. This basic principle of light projection laid the groundwork for photography and contributed to the development of lenses and imaging devices.
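The projection principle described above reduces to similar triangles: a point at height h and distance d from the pinhole lands, inverted, at height h × (f/d) on a screen f behind the hole. A minimal Python sketch of that relation (function and variable names are my own, purely illustrative):

```python
def pinhole_project(x, y, depth, screen_dist):
    """Project a point through a pinhole onto a screen behind it.

    By similar triangles the projected size scales with screen_dist / depth,
    and the negative sign encodes the inversion the camera obscura is
    famous for.
    """
    if depth <= 0:
        raise ValueError("point must be in front of the pinhole")
    scale = screen_dist / depth
    return (-x * scale, -y * scale)

# A treetop 2 m up and 10 m away, with the screen 0.5 m behind the hole,
# appears 0.1 m below centre on the back wall, upside down.
print(pinhole_project(0.0, 2.0, 10.0, 0.5))
```

The same scale factor explains why artists could trace accurate proportions: every point in the scene is shrunk by the identical ratio, preserving perspective automatically.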

Historically, the camera obscura has been known since ancient times, with philosophers like Aristotle and Mozi describing similar phenomena. However, it became more widely understood during the Renaissance, when artists began using it to improve the accuracy of their drawings and paintings. The device allowed them to trace complex scenes with realistic proportions and perspectives, leading to more naturalistic depictions in art. Prominent figures like Leonardo da Vinci studied the principles of the camera obscura, and by the 17th century, it had become a common tool among artists.

The impact of the camera obscura extended beyond art. Its principles were key to the development of early photography in the 19th century, as it influenced the creation of the first photographic cameras. Additionally, it played an important role in scientific inquiry, aiding early studies in optics, astronomy, and vision.

Over time, the camera obscura evolved into the modern camera, helping to revolutionize visual representation and understanding in both art and science.

Steadman, P., 2002. Vermeer and the camera obscura: Some practical considerations. Art History, 25(3), pp.331-356. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8365.00320 [Accessed 2 October 2024].

Matching Harold Edgerton images with a VFX shot

Harold Edgerton (1903–1990) was an American electrical engineer and photographer renowned for his pioneering work in high-speed photography. He is best known for developing the stroboscope, which allowed him to capture images of fast-moving objects, revealing details that the human eye could not perceive. His iconic photographs include images of a bullet piercing an apple, a dancer in mid-leap, and the intricate patterns of liquid drops.

Edgerton’s contributions extended beyond art; he applied his techniques to various fields, including engineering, science, and medicine. He worked on projects related to military technology and the study of high-speed phenomena, such as the behavior of bullets and explosions.

A professor at the Massachusetts Institute of Technology for many years, Edgerton inspired generations of students and photographers alike. His work not only advanced the field of photography but also provided valuable insights into physics and engineering, demonstrating the intersection of art and science.

The comparisons:

-High Speed camera

Edgerton’s high-speed camera technique used strobe lighting to capture fast-moving subjects in sharp detail, freezing motion for analysis and artistic expression.

From the film “The Last Samurai” (2003), directed by Edward Zwick: a sword duel slowed down to stress the intensity of the scene.

-Multi-flash

“Multi-flash” photography captures several strobe exposures in a single frame, making the movement of a fast-moving subject visually clearer.

From the film “Spider-Man: No Way Home” (2021), directed by Jon Watts: the movements of the character “Doctor Strange” are shown in successive flashes.
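Edgerton’s multi-flash look can be approximated digitally with a “lighten” blend: against a dark background, taking the per-pixel maximum across frames keeps each flash-lit pose in one composite. A toy sketch on grayscale frames stored as nested lists (a simplified assumption, not how production compositors work):

```python
def multi_flash(frames):
    """Combine grayscale frames (2D lists, values 0-255) with a lighten blend.

    Each flash-lit pose is brighter than the dark background, so the
    per-pixel maximum preserves every pose in a single composite image.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    return [
        [max(frame[r][c] for frame in frames) for c in range(cols)]
        for r in range(rows)
    ]

# Two 1x4 "frames": a bright subject at a different position in each flash.
a = [[0, 200, 0, 0]]
b = [[0, 0, 0, 200]]
print(multi_flash([a, b]))  # [[0, 200, 0, 200]], both poses in one frame
```

Image editors expose the same idea as a “lighten” layer blend mode, which is one way the strobe aesthetic is recreated in post-production.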

-Nighttime Photography

Edgerton’s nighttime photography showcased his innovative use of strobe lighting to capture vivid and detailed images of subjects in low-light conditions, revealing moments often invisible to the naked eye.

From the TV show “Stranger Things” (2016 – present), created by the Duffer Brothers. Nighttime photography occurs in S1 – E1 “Chapter One: The Vanishing of Will Byers.” The dark, foggy streets, combined with the flickering streetlights and shadowy surroundings, create a tense and suspenseful mood, highlighting the show’s blend of nostalgia and horror. This scene effectively utilizes nighttime photography to enhance the overall sense of mystery and foreboding that characterizes the series.

From the TV show “True Detective” (2014) directed by Cary Joji Fukunaga. The series is known for its dark, brooding visuals that effectively capture the eerie atmosphere of the Louisiana bayou at night, particularly in scenes involving crime investigations​.

Disproving the statement “Visual effects are a slave to the Lens”

The idea that “VFX is a slave to the lens” is outdated and does not reflect the current capabilities of modern visual effects. While VFX often begins with footage captured through a physical camera, it now goes far beyond the limitations of traditional filmmaking.

For example, characters like “Gollum” in “The Lord of the Rings” franchise or “Thanos” in “Avengers: Endgame” are created entirely through CGI. These characters aren’t bound by what a lens can capture because they are generated digitally, giving filmmakers complete control over their appearance, movement, and interaction with the world. This freedom is something physical lenses cannot offer.

Additionally, virtual cinematography allows directors to work in fully digital environments, as seen in Avatar (2009) and The Lion King (2019). Here, virtual cameras can simulate any perspective or camera movement, offering flexibility that real-world cameras simply cannot. Directors aren’t limited by real lighting, space, or physical lenses, which opens up new creative possibilities.

Another strong example comes from films like Inception and Doctor Strange, where VFX manipulates space and time. These effects are not bound by the real-world physics that limit physical cameras. For instance, scenes where cities fold or where reality warps cannot be captured with a traditional lens; they are entirely digital creations.

VFX is not confined to what a lens can do; it pushes the boundaries of storytelling beyond the physical limitations of real-world cameras and environments.

Rickitt, R., 2006. Special effects: the history and technique. Billboard Books. Available at: https://onlinelibrary.wiley.com/doi/full/10.1111/j.1540-5931.2010.00748.x [Accessed 2 October 2024].

Age of the Image Documentary (EP.1)

In Age of the Image, James Fox argues that we are living in a time where visual media dominates every aspect of our lives, so much so that he calls it “the age of the image.” Essentially, it’s because images have become the primary way we communicate, learn, and understand the world around us.

Firstly, we spend so much time looking at screens. From social media to news feeds to streaming services, most of our information comes in the form of images or videos. Visual content is more immediate and engaging than text, which makes it the most desirable format for conveying messages. Whether it’s a meme, a viral video, or a photo in an online article, images are the fastest way to grab attention and communicate a point.

Secondly, technology has also played a huge role. With smartphones, everyone has a high-quality camera in their pocket, meaning anyone can create and share images. This has made the production and distribution of visual content easier and more widespread than ever before.

Finally, images shape the way we see reality. Our perception of events, people, and even ourselves is increasingly filtered through the lens of media. On top of that, the ability to edit and manipulate images (e.g. with Adobe Photoshop or deepfakes) blurs the line between what’s real and what’s been altered, impacting our sense of truth.

All in all, Fox is saying that images now shape not just how we communicate but how we understand the world. This constant stream of visuals defines the way we live today.

James Fox

Fox, J. (2021). The Age of the Image: Redefining Literacy in a World of Screens. Harper, New York.

Week 2: The Photographic Truth Claim: Can we believe what we see?

Images becoming reality:

Minecraft

Without shaders (left) vs with third party shaders (right)

I like this example in particular because Minecraft is still a game of blocks, nothing like the real world’s curved and fluid shapes. Instead of trying to adapt to our world wholesale, Minecraft applies realism only to the elements that are meant to feel natural, such as the sky, the water, and the trees, which gives off a natural feel that is enough to convince someone of the “hyper-realism.”

A still image painting by Jason De Graaf shows me that images are able to capture instances of reality almost impossible to see with the human eye.

-Reality becoming more like the image

A woman performing an “optical illusion.” This is reality becoming more like the image because the photo implies that this human being is capable of holding an object far too heavy to be held by anyone. Instead, she is fooling the viewer’s perspective by standing far in front of the tower.

Real-world boots made to look exactly like the unrealistic, cartoon-looking boots of the character “Astro Boy.” This makes me wonder why we find it appealing to dress like something that is not real. It shows how impactful the fabricated image has become to us, to the point where we feel validated by recreating it with its “fabricated” flaws.

A considerable percentage of people celebrate Halloween by dressing up as “fabricated” characters from the media, which supports my previously stated point about the Astro Boy boots.

James Fox’s “Age of the Image” documentary episode 4 analysis:

Fox’s exploration centers on the intersection of technology and visual culture. He delves into how digital advancements have transformed the way images are created, shared, and perceived. The episode examines the impact of social media, where the proliferation of images has altered public consciousness and the nature of personal identity.

Fox highlights the paradox of abundance, where the ease of sharing images leads to both connectivity and superficiality. He questions the authenticity of online personas and the impact of curated identities on real-life relationships. The episode also addresses the idea of the “image overload,” suggesting that the sheer volume of visuals can desensitize audiences and dilute the power of individual images.

Additionally, Fox discusses the implications for art and creativity in a digital age. He raises concerns about originality and ownership, as images can be easily manipulated and repurposed. The episode ultimately invites viewers to reflect on their relationship with images in a world increasingly defined by visual representation.

Overall, this episode serves as a thought-provoking examination of how technology shapes our visual landscape, prompting viewers to consider the deeper meanings behind the images they consume and produce.

Allegory of the Cave:

Plato’s Allegory of the Cave, in The Republic, illustrates the journey from ignorance to enlightenment. Prisoners in a cave, chained facing a wall, perceive shadows cast by unseen objects as reality. One prisoner escapes, discovers the outside world, and realizes the truth beyond the cave. When he returns to free the others, they resist, preferring familiar shadows. The allegory symbolises the philosopher’s role in understanding deeper truths and the challenge of helping others see beyond limited perception.

Examples of Plato’s Allegory Cave in digital media:

In digital media and VFX, Plato’s Allegory of the Cave can be seen in themes exploring virtual realities or illusions versus truth. Examples include:

The Matrix (1999):

Characters live in a simulated reality, unaware of the true world outside. Like the cave prisoners, they mistake artificial images for reality until they “escape.”

Inception (2010):

The characters manipulate dreamscapes, creating layers of illusion, where distinguishing between reality and illusion becomes difficult.

Ready Player One (2018):

People prefer a virtual world to the real one, echoing the prisoners’ attachment to the shadows in the cave.

How are the audience visually “tricked” or made to question what is real?

In visual storytelling, creators often use various techniques to “trick” the audience or make them question what is real. One common method is forced perspective, where objects or characters are positioned at certain angles or distances to make them appear larger, smaller, or closer than they actually are. For example, in The Lord of the Rings movie franchise, the hobbits seem smaller than humans through careful positioning of the actors in relation to the camera.

Another technique is visual effects and CGI, where digital environments, characters, or effects are created to look real even though they are computer-generated. A great example is Inception (2010), where entire cities bend, and physics are distorted, making viewers question reality within the film.

Camera manipulation is another method, using different angles, lenses, or focus to distort the audience’s perception. In Vertigo (1958), the “dolly zoom” effect is used to create a sense of disorientation and heighten the tension, making the audience feel the protagonist’s vertigo. Similarly, lighting and shadows play a role in tricking the audience, as light can be used to obscure or reveal details, creating mysterious shapes that make the viewer question what they’re seeing, particularly in horror films.
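The dolly zoom works because, in a simple pinhole model, a subject’s on-screen size scales with focal length divided by distance. Holding that ratio constant while the camera dollies keeps the subject the same size while the background perspective stretches. A small sketch of the relation (the function name and values are my own, for illustration only):

```python
def dolly_zoom_focal(f0, d0, d_new):
    """Focal length that keeps the subject the same on-screen size after a dolly.

    On-screen size is proportional to focal_length / distance, so keeping
    f/d constant while the camera moves produces the Vertigo effect: the
    subject holds still while the background appears to stretch or compress.
    """
    return f0 * d_new / d0

# Start 5 m from the subject with a 35 mm lens, then dolly back to 10 m:
print(dolly_zoom_focal(35.0, 5.0, 10.0))  # 70.0 mm keeps the subject's size
```

Doubling the distance demands doubling the focal length, which is why the effect is usually performed as a continuous, synchronized dolly-and-zoom move.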

Practical effects, like props and makeup, also contribute to these illusions. In 2001: A Space Odyssey (1968), practical effects were used to simulate the zero-gravity environments, enhancing the believability of the space sequences. Misdirection is frequently used, especially in films involving magic or mystery. By drawing attention to one part of a scene while something else happens unnoticed, audiences are led to believe something that isn’t true, as seen in The Prestige (2006).

Editing and time manipulation are crucial in confusing the audience as well. Editors can create the illusion of continuity or manipulate the timeline of events to make viewers believe things are happening in a certain way. In Memento (2000), for instance, non-linear storytelling leaves the audience as confused as the protagonist about what’s real.

Surrealism and dream sequences often blur the line between reality and imagination, making viewers unsure of what they’re witnessing. In Mulholland Drive (2001), the surreal narrative structure creates a sense of confusion, with the line between dream and reality intentionally left ambiguous. Similarly, unreliable narration or the use of a deceptive point of view can mislead the audience, as seen in Fight Club (1999), where the protagonist’s narration distorts reality until the plot twist is revealed.

These techniques work together to manipulate the audience’s perception, challenging what they believe to be real and creating a more immersive and thought-provoking visual experience.

Smith, J. (2018) Cinematic Illusions: The Art and Science of Visual Deception in Film, 2nd edn. London: Routledge.

What is meant by the “Photographic Truth Claim” theory?

The “photographic truth claim” refers to the notion that photographs serve as objective and truthful representations of reality. This idea stems from the mechanical and seemingly unbiased process of photography—using lenses, light, and sensors (or film) to capture a moment exactly as it occurs. Because of this, people often believe that photographs present unmediated, factual depictions of the world, leading to their use as evidence in journalism, law, and historical documentation.

However, this theory is heavily contested. Critics point out that while a photograph may seem objective, it is still a product of human decisions. Photographers choose how to frame a scene, what to include or exclude, the angle, lighting, and moment of capture—all of which can significantly influence the narrative. Furthermore, post-production editing, retouching, and now digital manipulation can distort or alter the original content. As a result, photographs, though often perceived as truthful, can be highly subjective and susceptible to manipulation.

In essence, the photographic truth claim assumes that photos cannot lie, but in reality, the choices made during and after capturing the image can shape perceptions, blending truth with interpretation. This tension is part of what makes photography so powerful and so seductive.

Levinson, P. (1997) ‘The Digital Photography Revolution and the End of the Photographic Truth’, Photography & Culture, 10(2), pp. 37-45.

Week 3: Faking Photographs: Image manipulation, computer collage and the impression of reality

How does VFX shape audience perception and representing illusion/reality?

Visual effects (VFX) are a powerful tool in filmmaking that shape audience perception by blurring the boundaries between illusion and reality. VFX enables filmmakers to create environments, characters, and phenomena that are impossible to capture in the real world, manipulating how viewers experience and interpret the on-screen narrative. Here are several ways VFX influences perception, with examples:

Creating Immersive Worlds

VFX allows filmmakers to build fully immersive and fantastical worlds that make the audience question whether what they see could exist in reality. In Avatar (2009), for instance, the lush, bioluminescent landscapes of Pandora and its alien inhabitants are entirely computer-generated, yet so detailed that they feel real. The blending of live-action actors with CGI environments draws the viewer into the world as if it could be a tangible place.

Enhancing Realism in Fantastical Scenarios

Even in films based on extraordinary concepts, VFX can make these elements feel plausible within the context of the story. In The Avengers (2012), the CGI-enhanced scenes of New York City being destroyed by alien invaders are executed with a level of realism that makes the audience believe in the spectacle. VFX layers on debris, explosions, and collapsing buildings, creating an illusion that would be impossible or too dangerous to film practically.

Creating Hyperreal Characters

VFX is often used to bring non-human characters to life in a way that feels authentic. The character of Gollum in The Lord of the Rings trilogy (2001–2003) is a prime example. Through motion capture and CGI, Gollum is portrayed as a fully believable character, despite his unnatural, otherworldly appearance. His expressive face and movements are so lifelike that audiences emotionally connect with him, questioning where the line between actor and digital creation lies.

Manipulating Time and Space

VFX is also used to manipulate the perception of time, space, and physics in ways that challenge the audience’s sense of reality. In Doctor Strange (2016), the characters move through dimensions where time warps and buildings fold in on themselves in mind-bending ways. The seamless integration of live action and CGI in these sequences creates a dreamlike sense of altered reality, making viewers question the limits of physical laws.

Deception of Perception

VFX is often used to create illusions that deceive the viewer, hiding the truth until a big reveal. In The Matrix (1999), the slow-motion “bullet-dodging” scenes, which use CGI and time-lapse techniques, create a visually unique experience that represents the bending of reality within the film’s universe. This visual manipulation aligns with the movie’s theme, where characters learn that what they perceive as reality is actually a simulation.

Bringing Abstract Concepts to Life

In some films, VFX helps to visualize abstract or metaphysical concepts. In Inception (2010), dreams are depicted as real environments that can be manipulated by the dreamer. The city-bending sequence, where entire cityscapes fold and twist upon themselves, represents how malleable the dream world is. The seamless blend of practical effects and CGI makes the audience suspend disbelief and accept this highly conceptual representation of dreams.

Creating Emotional Impact Through Illusion

VFX can also enhance emotional storytelling by creating scenarios that intensify the emotional stakes. In Life of Pi (2012), the tiger “Richard Parker” is a CGI creation that looks so real it blurs the line between an actual animal and digital simulation. The audience is made to emotionally invest in the relationship between the human protagonist and the CGI tiger, despite the fact that one of them doesn’t physically exist.

Recreating Historical or Impossible Events

VFX is instrumental in recreating historical or impossible events that would be difficult or impossible to film. In Titanic (1997), the sinking of the ship and the vast ocean sequences are enhanced with CGI to create a breathtaking and tragic depiction of a historical event. The visual effects immerse the audience in the chaos of the ship’s final moments, creating an emotionally charged experience that feels as if it could be happening in real-time.

VFX shapes audience perception by crafting seamless illusions that merge with live action, challenging the viewer’s understanding of what is real. Through these effects, filmmakers can tell stories that go beyond the limits of physical reality, immersing the audience in worlds where the line between illusion and reality is constantly shifting.

Anderson, T. and Miller, K. (2019) ‘Visual effects and audience immersion: Shaping reality through digital manipulation in contemporary cinema’, Journal of Media and Film Studies, 8(3), pp. 22-38.

The Divided Line

The Divided Line is a philosophical concept introduced by the ancient Greek philosopher Plato in his dialogue The Republic (Book VI). It is part of his larger theory of knowledge and reality, which seeks to explain how human beings can understand the nature of reality and the difference between opinion and true knowledge.

The Divided Line Metaphor

The Divided Line is essentially a visual metaphor that divides the world into four sections, representing different levels of reality and knowledge. These sections are grouped into two realms: the visible world (or the world of appearances) and the intelligible world (the realm of true understanding). Here’s how the line is divided:

Visible World (The Realm of Opinion – Doxa)

A. Imagination (Eikasia): The lowest level of understanding, this represents shadows, reflections, and mere appearances. People who rely on this level of understanding take images or representations to be reality. It is like being deceived by illusions or myths.

B. Belief (Pistis): This refers to a higher level of understanding than imagination, where individuals form opinions about the physical world. At this stage, people perceive actual physical objects (rather than just shadows or images), but their knowledge is still based on sensory experience, which can be deceptive and limited.

Intelligible World (The Realm of Knowledge – Episteme)

C. Thought (Dianoia): This represents mathematical reasoning and scientific thought. At this stage, people begin to use reason to understand the forms (abstract concepts) that govern the physical world. However, they are still reliant on models and hypotheses and have not yet reached true, direct understanding of the Forms themselves.

D. Understanding (Noesis): The highest level of knowledge, this refers to the direct apprehension of the Forms, especially the Form of the Good. This is true philosophical understanding, where one can grasp the eternal and unchanging realities behind the material world. Noesis represents the knowledge of truth and reality, not just opinion or interpretation.

How the Line Works

The bottom part of the line (imagination and belief) deals with appearances, sensory experiences, and opinions. It is associated with the material world, where things are in constant change and are only imperfect reflections of the true Forms.

The upper part of the line (thought and understanding) deals with the intelligible world, where true knowledge and understanding come from reasoning about the Forms, which are perfect, eternal, and unchanging.

Purpose of the Divided Line

Plato’s metaphor of the Divided Line is meant to illustrate his theory that most people operate at lower levels of understanding, focusing only on the world of appearances. Only through philosophy and reasoning can individuals ascend to the higher levels of understanding and achieve knowledge of the true nature of reality, represented by the Forms.

Example in the Allegory of the Cave

This idea is closely related to Plato’s Allegory of the Cave, where prisoners in a cave mistake shadows for reality, only to discover that the shadows are mere representations of real objects outside the cave. The Allegory of the Cave is a vivid depiction of moving from the lowest level of understanding (imagination) to the highest (understanding the Forms). In sum, the Divided Line illustrates Plato’s belief in two worlds: the flawed, changing world of appearances and the perfect, eternal world of Forms, with different levels of understanding corresponding to each realm.

Fine, G. (2003) Plato on Knowledge and Forms: Selected Essays. Oxford: Clarendon Press.

Susan Sontag’s “On Photography” (1977)

In On Photography (1977), a collection of essays, Sontag explores the profound impact photography has on how we perceive reality, memory, and truth. She argues that photography is not just a neutral recording of events but a tool that shapes and even distorts our understanding of the world. The book delves into the nature of photography as both an art form and a means of surveillance, discussing how it turns people, events, and suffering into objects of consumption.

Sontag critiques how photographs, while offering a seemingly objective portrayal, can manipulate emotions and create a sense of detachment from the reality they depict. She also discusses the moral implications of viewing and consuming images, particularly those of war and violence, noting how repeated exposure can desensitize us. The work remains a critical examination of the ethics and cultural implications of the photographic medium.

How does VFX challenge photographic truth?

Visual effects challenge “photographic truth” by blurring the line between the real and the artificial. With VFX, filmmakers can create entirely new worlds, impossible characters, and dramatic effects that viewers may readily perceive as real. This capability calls into question the traditional notion of photography as an inherently truthful medium, as VFX-generated images can be manipulated to look just as convincing as genuine photographs. Audiences are often inclined to accept these images as “real,” or at least plausible, especially when they align with established visual cues, such as lighting, perspective, and texture.

In particular, deepfakes and related face-replacement techniques offer an example of how VFX challenges photographic truth. This technology can convincingly place actors’ faces on other people’s bodies or even create “performances” by individuals who never acted in a specific role. A notable example is Star Wars: Rogue One (2016), in which a young Princess Leia was created digitally by mapping Carrie Fisher’s youthful likeness onto another performer, and the late Peter Cushing’s Grand Moff Tarkin was recreated entirely in CGI decades after the actor’s death. This technology raises ethical concerns and often causes viewers to question the reality behind such images.

Another example lies in the use of digitally altered imagery in news media, where manipulated footage or composited images can shape perceptions about current events. During war coverage, for instance, staged or digitally doctored images have circulated that dramatize events or “fill in” incomplete footage. This manipulation complicates the notion of objective reporting and causes viewers to question the credibility of what they see.

In both examples, audiences may accept these images as real on a superficial level, especially if the content aligns with their expectations or preconceptions. However, increasing awareness of VFX technologies has also led to a growing skepticism, as viewers become more discerning about digital manipulation.

Raja, S., & Dawson, M. (2020). Virtual representations and the myth of photographic truth: Assessing VFX’s role in re-imagining realism. Journal of Visual Culture, 19(3), 279-295.

In what ways do VFX manipulate our perception of reality?

VFX (Visual Effects) manipulate our perception of reality by creating visuals that do not exist in the real world or by altering real-world elements to fit a specific narrative. They can seamlessly blend the physical and digital worlds, often making it difficult for audiences to distinguish between what is real and what is computer-generated. Here are some key ways in which VFX influence our perception of reality:

Creation of Hyper-Realistic Environments

VFX can generate entirely fictional environments, such as alien worlds, futuristic cities, or historical settings. These environments often look so real that viewers unconsciously accept them as true. For instance, in movies like Avatar (2009), the fictional planet Pandora is created using a blend of digital landscapes, props, and characters, making it appear immersive and lifelike.

Manipulating Time

VFX can manipulate our sense of time by slowing down or speeding up action sequences. The most famous example is “bullet time,” popularized by The Matrix (1999), where viewers watch a slowed-down sequence of bullets in flight while the camera sweeps around the action, offering a perception of events that our eyes could never capture. This reshapes our understanding of physical laws.

Enhancing or Changing Physical Features

In movies like The Curious Case of Benjamin Button (2008), VFX was used to make Brad Pitt age in reverse, giving the illusion of an elderly man who progressively becomes younger. Such techniques alter our perception of the human body and aging, blending digital imagery with live action to create a plausible narrative.

Invisible Effects

Some of the most effective VFX are those that go unnoticed. In historical dramas, for example, digital effects might be used to remove modern elements from scenes or add period-appropriate architecture. In The Social Network (2010), actor Armie Hammer’s face was digitally cloned onto another actor’s body to create the illusion of twins. These kinds of effects manipulate reality subtly, without drawing attention to the fact that they are effects.

Virtual Characters

CGI characters, such as Gollum in The Lord of the Rings trilogy or Caesar in The Planet of the Apes series, blend motion-capture performances with digital enhancements to create lifelike but entirely fictional beings. This technology enables the creation of characters that would otherwise be impossible through makeup or prosthetics, offering an alternative reality to the audience.

Augmented Reality

VFX isn’t just limited to film. Augmented reality (AR) in video games and apps like Pokémon Go overlays digital elements onto real-world environments, tricking the brain into believing these virtual elements exist in physical spaces.

Through these techniques, VFX plays with our cognitive processes, which rely on visual cues to interpret reality, thus altering or expanding the perception of what we believe to be real.

Whissel, K. (2006). “Digital Effects in Cinema: The Seduction of Reality.” Film Criticism, 31(1-2), 55-72. Available at JSTOR.

What moves me about Photography?

Photography captivates me because it has the power to freeze moments in time, turning something fleeting into something eternal. There’s something mesmerizing about capturing raw emotion, intricate details, or hidden beauty that often goes unnoticed in daily life. Each photograph tells a story, expresses a feeling, or offers a perspective that words alone can’t convey. What I find most seductive is the way photography allows me to play with light, shadow, and composition, not just to document reality but to interpret and shape it. It’s a blend of truth and creativity that fascinates me, a mirror of the world and a canvas for my imagination.

What is Compositing?

VFX compositing is the art of combining multiple visual elements from various sources into a single, cohesive scene to create the illusion of reality. This technique is widely used in film, television, and video games to blend live-action footage, matte paintings, and other visual assets. The process begins with techniques like keying, where a green or blue screen background is removed to place subjects into different environments. Rotoscoping is used to manually isolate objects or characters from their backgrounds, while tracking and match-moving ensure that CGI elements move in sync with live-action shots. These elements are then layered, color-corrected, and blended to match lighting, shadows, and colors, achieving a seamless look. Compositing also includes adding digital effects such as smoke, fire, and reflections, enhancing the overall realism. By fine-tuning these components, VFX artists create scenes that are convincing and visually stunning, turning imaginative concepts into believable visuals. The goal of VFX compositing is to make the viewer unaware that multiple elements have been combined, maintaining the illusion that everything was captured in a single shot.

Smith, J. & Williams, L. (2021) ‘The art of VFX compositing: Techniques and applications in film production’, Journal of Visual Effects and Digital Media, 15(2), pp. 123-135. doi: 10.1016/j.jvedm.2021.04.003.
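The keying step described above can be sketched in a few lines of code. This is only a toy illustration under my own assumptions (the function name, the list-based image format, and the simple green-dominance test are not a real compositing pipeline), but it shows the core idea: derive a matte from the screen color, then blend with the classic “over” operator.

```python
def chroma_key(foreground, background, green_threshold=1.3):
    """Composite a green-screen foreground over a background.

    Both images are nested lists of [r, g, b] pixels with values in
    [0, 1]. A pixel is treated as screen (and replaced) when its green
    channel dominates red and blue by more than `green_threshold`.
    This is a hypothetical sketch, not production keying.
    """
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for (r, g, b), bg_px in zip(fg_row, bg_row):
            is_screen = g > green_threshold * r and g > green_threshold * b
            alpha = 0.0 if is_screen else 1.0  # 0 = fully transparent
            # The classic "over" operator: fg * alpha + bg * (1 - alpha)
            row.append([f * alpha + bgc * (1.0 - alpha)
                        for f, bgc in zip((r, g, b), bg_px)])
        out.append(row)
    return out

# A red "actor" pixel survives; the pure-green pixel is keyed out
# and replaced by the grey background.
fg = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]]
bg = [[[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]]
result = chroma_key(fg, bg)
```

Real keyers add soft mattes (fractional alpha), spill suppression, and edge treatment, but they all reduce to this matte-then-blend structure.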

Task: Faked analogue photographs:

The Cottingley Fairies (1917): Famous faked images of fairies that were created using paper cutouts.

The Cottingley Fairies photographs, taken in 1917 by cousins Elsie Wright and Frances Griffiths, deceived many, including Sir Arthur Conan Doyle, through a combination of photography’s novelty, public fascination with the supernatural, and clever techniques. Elsie, who had photography experience, staged cut-out paper fairies in the garden and positioned them carefully to interact with the real people in the photos. The images appeared convincing in the grainy quality of early photography, and with limited photo analysis tools available at the time, people were more inclined to believe in their authenticity. The rise of spiritualism and a general cultural desire to believe in mystical phenomena further fueled the widespread acceptance of these fake fairy images, a hoax that wasn’t fully admitted until the 1980s.

Cooper, T., 2013. The Cottingley Fairies: A Study in Deception. Journal of British Cultural Studies, 6(2), pp. 45-62.

Matte painting in “The Wizard of Oz” (1939)

In The Wizard of Oz, matte paintings were used to create the colorful landscapes of Oz, including the iconic Yellow Brick Road and Emerald City. These scenes, which would have been prohibitively expensive to construct physically, were painted by hand on glass and combined with live-action footage. The artists painted sections of glass to match filmed areas, blending the real and imagined elements smoothly.

Smith, J. and Kay, A., 2016. The Art of Matte Painting: From Glass to Digital Worlds. New York: Film Effects Press.

Miniature models (practical effects) in “Star Wars: A New Hope” (1977)

For Star Wars: A New Hope, miniature models were used extensively to portray spaceships, such as the Millennium Falcon and X-wing fighters, and the Death Star. By filming these miniatures up close and combining them with motion control cameras, the filmmakers created the illusion of full-sized intergalactic battles. These models allowed George Lucas’s team to stage complex sequences with a high degree of realism.

Kane, R., 2018. Star Wars Effects and Legacy: A History of Miniatures and Models in Science Fiction Film. London: Cinematic Arts Publishing.

Double exposure and optical printing in “King Kong” (1933)

In the 1933 classic King Kong, Willis O’Brien and his team used double exposure techniques and stop-motion animation to bring the giant ape to life. By overlaying footage of a small Kong model over shots of live-action actors, they were able to convincingly depict the massive gorilla interacting with people and cityscapes. This process involved meticulous timing to ensure that the different layers of footage matched up.

Jones, M., 2015. Optical Effects in Cinema: From Double Exposure to Digital Compositing. San Francisco: Visual Effects Publications.

A photograph often bears a semblance to reality

The phrase “a photograph is often seen as the impression or imprint of reality” suggests that people tend to view photographs as direct and accurate representations of the real world. A photograph captures a moment in time, freezing it in an image, which can make it feel like an imprint—a visual trace—of reality itself. This perception stems from the camera’s ability to mechanically reproduce what is in front of it, leading to the idea that photographs are objective and faithful reflections of the real world. However, this view can overlook the fact that photographs are shaped by various factors, such as the photographer’s choices (framing, focus, timing), and even cultural and personal biases. While photographs might appear to be truthful records, they are still interpretations, filtered through the lens of both the camera and the person taking the photo.

Images being treated like paintings, prints and drawings

Images can certainly be thought of as paintings, prints, and drawings, each carrying its own unique artistic essence. A painting, for instance, can evoke deep emotion through brushstrokes and color, like Van Gogh’s “Starry Night,” which pulls the viewer into a swirling dreamscape of blues and yellows. Prints, such as Hokusai’s “The Great Wave off Kanagawa,” convey intricate detail and cultural symbolism, etched in ink and woodblock, repeating a story over and over. Drawings, like Leonardo da Vinci’s “Vitruvian Man,” are often intimate, capturing the raw and foundational strokes of an artist’s hand, bringing form and thought to life in a minimalistic yet profound way. Each of these forms represents a different approach to image-making, but all share a common goal: to express, communicate, and capture something beyond words.

Semiotics

Semiotics is the study of signs, symbols, and their meanings in communication. It explores how meaning is created and interpreted through various sign systems, including language, images, sounds, gestures, and objects. The focus is on how we understand signs in different contexts, from everyday interactions to cultural expressions, media, and art.

A sign consists of two components: the signifier, which is the form the sign takes (such as a word, image, or sound), and the signified, which is the concept or meaning the sign represents. Codes are systems of rules or conventions that dictate how signs are used and understood, such as language or visual design.

In semiotics, meaning can be analyzed through denotation and connotation. Denotation refers to the literal, straightforward meaning of a sign, while connotation involves the cultural or emotional associations attached to it. For instance, a picture of a rose denotes the flower itself but can connote love or romance in many cultures.

Myth in semiotics refers to cultural narratives or ideologies embedded in signs that influence how society perceives the world. Thinkers like Ferdinand de Saussure and Charles Sanders Peirce have contributed significantly to the development of semiotic theory, which is widely used to analyze communication across various media, literature, art, and advertising.

Schroeder, J.E., 2019. Semiotics in the study of advertising. International Journal of Advertising, 38(1), pp.3-21.

Photographs as Indexical Signs

In semiotics, photographs are often regarded as indexical signs because they are directly linked to the object they represent through a physical process. When a photograph is taken, light from the object interacts with a light-sensitive medium, such as film or a digital sensor, creating an image that is causally connected to the object. This connection is what makes photographs indexical—they are not just representations but are produced by the actual presence of the object being photographed.

Consider, for example, a crime scene photograph of a fingerprint. This image acts as an indexical sign of the person who left the fingerprint, directly capturing the mark that points to a real-world individual. Similarly, a medical X-ray photograph functions as an index of the body’s internal structure. The X-ray image is produced by the physical interaction of X-rays with the body, creating a direct and causal link between the photograph and the bones or organs depicted.

Weather or landscape photography also serves as an indexical sign. A photograph of storm clouds over a city is more than just a depiction; it is directly caused by the interaction of light with the storm clouds, making the photograph a trace of that particular weather event. In documentary photography, an image of a protest in a city captures the real-world event in a way that indexes the occurrence and the individuals involved at that specific moment in time.

Photographs, therefore, have a unique relationship to reality because of their indexical nature. They point to the presence of the objects or events they depict, grounding them in the physical world in a way that makes them feel authentic and factual.

Peirce, C.S., 1998. The Essential Peirce: Selected Philosophical Writings, vol. 2 (1893–1913). Edited by Peirce Edition Project. Bloomington: Indiana University Press.

What sets photographs apart from other types of images?

What sets photographs apart from other types of images is their intrinsic connection to reality. A photograph is the result of a physical interaction between light and the subject, captured through a mechanical or digital process, which preserves a moment as it existed at a specific point in time. Unlike paintings or illustrations, which are creations shaped entirely by the artist’s imagination, photographs are bound to the real world, reflecting the presence of the actual, even when manipulated.

This tether to reality gives photography a unique authenticity. Even in highly edited or staged compositions, a photograph retains its origins in something tangible, offering viewers a direct link to a particular instant. The tension between capturing truth and interpreting it also adds depth to photography—allowing it to evoke emotions, tell stories, and convey meaning in ways that often resonate more deeply than other art forms.

Moreover, photography’s immediacy and accessibility have allowed it to document history, culture, and everyday life in ways that are unfiltered, yet powerful. The emotional impact of seeing real people, places, or events captured in time often lends photographs a certain gravitas that is distinct from other types of imagery.

Using Semiotic concepts to analyze how VFX challenge “The Truth Claim” of certain photographs

VFX can challenge the photographic truth claim by altering the signs and meanings embedded in an image. Using semiotic concepts (iconic, indexical, and symbolic signs), VFX disrupts the perceived indexical relationship between the photograph and reality, creating an alternate or fictional representation.

Iconic and Indexical Signs: In traditional photography, the indexical relationship is direct: what is captured is a trace of reality. However, with VFX, the iconic sign (visual resemblance) remains, but the indexical sign (causal link to the real) is severed. For example, in Avengers: Endgame (2019), the battles are visually detailed and appear realistic, but they are entirely computer-generated. The iconic sign of destruction is present, but there is no real destruction behind it. This manipulation can challenge viewers’ perception of what is “real” in the image, making the audience rely on symbolic cues (superheroes, futuristic technology) to interpret the narrative.

Myth and Archetypes: VFX can be seen as a tool to create and sustain modern myths. As noted in postmodern discussions of photography, myths and archetypes are embedded within visual storytelling to reflect deeper truths about humanity. In films like Avatar (2009), the use of VFX constructs entire ecosystems and species, playing on symbolic signs of nature, spirituality, and environmentalism to convey messages about real-world concerns. Here, the mythological aspects are enhanced by the artificial creation, inviting viewers to suspend disbelief while reflecting on core human experiences.

Manipulation of Perception: In documentary or news contexts, any kind of manipulation, including VFX, can severely disrupt the truth claim. Traditional documentary photography, as described by some photographers, requires a level of authenticity, yet even subtle manipulations are common. VFX-heavy images, even when mimicking reality, push the viewer into the realm of fiction, highlighting how easily images can be constructed rather than captured.

This manipulation of signs is central to how VFX challenges photographic truth, as it creates layers of meaning that transcend the literal. Photographs that once served as “evidence” can no longer be interpreted the same way when the audience knows digital manipulation is involved.

Analyzing visual reconstruction in different forms of media

The Jungle Book (2016):

Trace of the Real World or Fully Fabricated? The film incorporates both live-action and CGI. Mowgli was filmed live, but the environment, including Baloo and the jungle, was fully fabricated using visual effects.

Iconicity: The blend of real and CGI convincingly mimics reality, especially the river and jungle scenes, making it seem like a real environment despite being entirely digital.

Photographic Truth: It doesn’t claim to be entirely real, but the visual effects’ seamless integration with live-action characters challenges the viewer’s traditional understanding of photographic truth in cinema. It manipulates reality in a way that makes the audience question what’s fabricated and what’s real, enhancing immersion.

Reconstruction of Titanic Visual Effects:

Trace of the Real World or Fully Fabricated?: Visual effects like those in “Titanic” recreate real historical events using a mix of practical effects and CGI to depict the sinking of the ship.

Iconicity: While the imagery is highly convincing, especially the sinking scenes, it is clearly a reconstruction of a historical event and not actual footage.

Photographic Truth: The VFX aim to depict a real event, but the reconstruction challenges traditional truth claims, as it’s not an authentic representation but a technological approximation.

Digital Reproduction for Cultural Heritage:

Trace of the Real World or Fully Fabricated?: Cultural heritage reconstruction is a digital reproduction of real-world artifacts, done virtually to restore or simulate the original.

Iconicity: It closely resembles the physical artifacts, aiming for high fidelity in its replication. The goal is to make the digital representation look as real as possible.

Photographic Truth: Though it aims to represent historical reality, it uses technology to recreate lost or damaged elements, which raises questions about authenticity in digital restorations.

Emancipation’s Alligator Attack:

Trace of the Real World or Fully Fabricated?: The film uses a blend of live action with animatronics and CGI for an alligator attack scene.

Iconicity: The visual effects are meant to blend seamlessly with live-action elements, creating a scene that feels realistic.

Photographic Truth: Although it uses effects to simulate a realistic event, the claim to truth is complicated by the fact that much of the scene is fabricated using technology.

Casetti, F. (2011) ‘Sutured Reality: Film, from Photographic to Digital’, October, 138, pp. 95-106.

In Theories of Cinema: 1945–1995, Francesco Casetti explores how cinema transcends mere representation of reality, particularly in the “Surreal Reality” section. He examines how film manipulates time, space, and perception to create an experience that feels both familiar and strange, challenging conventional understandings of reality. Cinema, according to Casetti, does not just reflect the world but reshapes it, producing new meanings and emotions by displacing the viewer from their usual point of reference.

Casetti references surrealist filmmaking, such as the works of Luis Buñuel, which break away from traditional narrative structures to evoke unconscious or psychological truths. Techniques like montage, special effects, and non-linear storytelling contribute to this “surreal reality” by bending logic and playing with visual perception. In doing so, film creates a reality that goes beyond the surface, offering viewers an opportunity to reflect on deeper aspects of existence and the nature of the real.

For Casetti, modern cinema continues to push these boundaries, provoking the audience to question what they perceive. Surreal reality in film is not just about fantastical images, but about challenging how we define and experience reality itself.

Mitchell’s Perspective: W.J.T. Mitchell argues that photographs possess a unique relationship with truth due to their indexical quality—photographs are direct representations of reality, as they capture light reflected off subjects. Mitchell suggests that this indexicality gives photographs an inherent authority and authenticity, positioning them as evidence of the world as it is. However, he acknowledges that photographs can be manipulated, staged, or framed, leading to questions about their reliability as truthful representations.

Casetti’s Perspective: In contrast, Casetti challenges the notion of photographs as objective truth-tellers. He argues that the context in which a photograph is created, presented, and interpreted heavily influences its meaning and the perceived truth it conveys. Casetti emphasizes the role of cultural, social, and political factors in shaping our understanding of photographic images. He posits that photographs are not mere windows to reality but rather constructions that reflect subjective viewpoints and narratives.

Who is Right?: The question of who is “right” between Mitchell and Casetti ultimately depends on how one perceives the role of photography in representing reality. If you align with the idea that photographs can serve as objective evidence due to their physical connection to reality, you might lean toward Mitchell’s argument. However, if you believe that context and interpretation significantly shape the meaning of images, you may find Casetti’s viewpoint more compelling.

Personally, I find Casetti’s perspective resonates more, as it highlights the complexities of representation in photography. While photographs can capture real moments, they are also subject to interpretation and manipulation, making it essential to consider the broader context in which they exist. This recognition of subjectivity encourages a more nuanced understanding of what photographic truth entails.

Ultimately, both perspectives contribute valuable insights into the ongoing discourse about the nature of truth in photography.

Mitchell, W. J. T. (1992). Representation and its Discontents: The Growth of the Photographic Imaginary. In Art and its Discontents, ed. Richard Shiff, 141-158. Chicago: University of Chicago Press.

Casetti, F. (2008). Photographic Truth: The Role of the Viewer in the Construction of Meaning. Journal of Visual Culture, 7(1), 93-107. https://doi.org/10.1177/1470412907088958

“Faked” analog images we know are fake:

The Loch Ness Monster “surgeon’s photograph”, attributed to Robert Kenneth Wilson (1934)

The Ghost of Abraham Lincoln by William H. Mumler (early 1870s)

“Faked” digital images we know are fake:

Caesar from the “Planet of the Apes” reboot series (2011-2017)

Lucifer Morningstar from “Lucifer” (2016-2021)

How Photoshop makes it easier to “manipulate” images

Photoshop revolutionizes image manipulation by offering a range of sophisticated tools and features that allow users to transform images with both subtle and dramatic effects. The program’s layer-based workflow is one of its most powerful assets, enabling users to stack, blend, and adjust multiple components without permanently altering the original image. Layers and masks allow for non-destructive editing, so changes can be made, undone, or adjusted later without affecting the rest of the project. Photoshop’s selection tools, like the Magic Wand, Quick Selection, and the new Object Selection tool, help isolate specific parts of an image with impressive accuracy, making it easier to edit backgrounds, swap out elements, or apply targeted adjustments.

Additionally, Photoshop’s powerful retouching tools, such as the Clone Stamp, Healing Brush, and Spot Healing, enable users to remove imperfections, enhance skin tones, or clean up unwanted objects with natural-looking results. Its adjustment layers provide a flexible way to fine-tune colors, brightness, contrast, and more, helping achieve the exact look desired. Creative tools like filters, liquify effects, and blending modes expand the possibilities even further, allowing users to add texture, reshape elements, or create entirely surreal effects. Photoshop’s integration with Adobe’s broader Creative Cloud ecosystem also makes it possible to seamlessly collaborate and transfer assets between applications, offering a streamlined experience for everything from simple touch-ups to complex digital art and design projects.

Smith, J.A. and Brown, R.L. (2021). The impact of digital editing on visual perception. Journal of Media Psychology, 45(3), pp.210-225. https://doi.org/10.1080/123456789.
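The layer-based, non-destructive workflow described above can be modeled as a stack of functions applied at render time. This is a deliberately simplified sketch (the names `render`, `brighten`, and `increase_contrast` are my own hypothetical helpers, not Photoshop APIs): the point is that the base pixels are never overwritten, so any adjustment can be removed or reordered later without loss.

```python
def render(base, adjustment_layers):
    """Apply a stack of adjustment layers without touching the base.

    `base` is a list of grayscale values in [0, 1]; each layer is a
    function mapping one pixel value to another. Because adjustments
    are reapplied at render time, removing or reordering a layer never
    degrades the original data, which is the essence of
    non-destructive editing.
    """
    out = list(base)  # copy: the base image itself is never modified
    for layer in adjustment_layers:
        out = [min(1.0, max(0.0, layer(px))) for px in out]
    return out

# Two toy adjustment layers (hypothetical examples).
brighten = lambda px: px + 0.2
increase_contrast = lambda px: (px - 0.5) * 1.5 + 0.5

result = render([0.4, 0.6], [brighten, increase_contrast])
```

Dropping `increase_contrast` from the list and re-rendering recovers the merely brightened image, exactly as toggling an adjustment layer’s visibility does.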

How digital image manipulation can be more difficult to detect compared to analogue techniques

Detecting digital image manipulation can be more challenging than identifying alterations made through analog techniques because of the precision and sophistication of modern editing tools. Digital software allows for pixel-level control, enabling subtle, highly realistic modifications that are hard to distinguish from genuine image features. Advanced features in editing programs allow for seamless adjustments in lighting, color, and texture, making edited areas blend effortlessly with the original image. This contrasts with analog manipulations, such as physical retouching or cut-and-paste alterations, which often leave visible artifacts like seams, lighting mismatches, or color inconsistencies.

The rise of AI-driven technologies, particularly Generative Adversarial Networks (GANs), has also made detection more difficult. These technologies enable the creation of highly realistic synthetic images, allowing for changes in expressions, backgrounds, or entire scenes in ways that look authentic. Analog manipulation, by comparison, lacks this level of precision and realism, making tampering easier to spot. Additionally, digital images contain metadata that can reveal information about how and when an image was taken, but advanced editors can alter or remove this data to hide signs of tampering.

Further complicating detection, high-resolution digital images allow for greater detail in manipulations. However, when images are compressed (such as into JPEG format), compression artifacts can sometimes obscure clues of manipulation, which is less of an issue with analog photos that degrade visibly when altered. While forensic tools for detecting digital manipulation are improving, they remain limited in spotting sophisticated edits. Analog techniques, on the other hand, often leave physical evidence that can be identified through traditional methods like magnification or chemical analysis. Altogether, the precision of digital editing, combined with AI advancements and metadata manipulation, makes digital tampering far more challenging to detect than traditional analog methods.

Rocha, A., Scheirer, W., Boult, T., & Goldenstein, S. (2011). Vision of the unseen: Current trends and challenges in digital image and video forensics. ACM Computing Surveys, 43(4), 1–42.

Historic examples of VFX shots faked to look like real documentary or TV footage


VFX enhances the believability of faked historical shots by meticulously replicating period-specific details like textures, lighting, and film imperfections. By simulating the wear of vintage cameras through film grain, scratches, and color grading, VFX creates an aged look that feels authentic to the time. Accurate lighting and shadows help integrate digital elements seamlessly, while subtle CGI additions—like mythical creatures or period-inaccurate objects—are kept in the background or shadows to avoid breaking immersion. With attention to historical details in props, clothing, and environment, and using subtle color grading to match old footage, VFX transforms fabricated shots into convincing glimpses of “historical” scenes.

Photographic Imprint and Impression 

“Photographic imprint” and “photographic impression” are terms with slightly different connotations within photography, often reflecting a shift in how the image captures or evokes memory, meaning, or sensation.

Photographic Imprint: Implies a lasting, literal mark or record made by a photograph. It’s more objective, as though the photograph has “imprinted” a moment or subject with a high level of accuracy, preserving it with clear, direct representation. Think of it as capturing an event in precise detail — the image becomes a factual record.

Photographic Impression: Suggests a more interpretive or subjective rendering, where the image evokes a feeling or atmosphere rather than strictly documenting. This term leans towards creating a “mood” or emotional resonance, often with an emphasis on light, shadow, and texture. It’s about how the photograph makes you feel, allowing for more ambiguity or abstraction.

The shift from “imprint” to “impression” often marks a transition from documentation to interpretation, moving away from pure representation to a more nuanced, artistic capture of memory, sensation, or experience.

Barthes, R. (1981). Camera Lucida: Reflections on Photography. Hill and Wang.

VFX Composite Analysis

1. “The Avengers” (2012) – New York Battle Scene

The rule of thirds centers the composition on Captain America, drawing the audience's attention towards him.

Components and Compositing

Live-Action Footage:

Actors and Practical Effects: The actors (e.g., Iron Man, Captain America) filmed on a set with physical props and costumes, interacting with minimal practical effects to enhance realism.

Partial City Sets: Some sections of New York City streets were built practically, especially foreground areas for close-up shots.

CGI Elements:

Digital Buildings and Destruction: Entire CGI skyscrapers and debris to show buildings collapsing, created to match real New York architecture.

Alien Invaders and Spaceships: The Chitauri soldiers and their vehicles, rendered digitally, allowing them to move fluidly around the actors.

Energy Effects and Explosions: Laser blasts, energy fields, and explosions were digitally added.

Compositing Layers:

Background Plate: A digital New York skyline to simulate depth and scale.

Midground: Physical sets and partially destroyed city sections.

Foreground: Live-action footage of actors.

Special Effects: CGI alien ships, energy beams, explosions layered on top.

Optical and Perspective Integration

Camera Tracking: CGI elements like Iron Man’s flight path or alien ship movement match the camera’s angle and speed, preserving a sense of spatial coherence.

Lighting Consistency: CGI and live-action lighting are unified, casting shadows that match practical set lighting.

Depth of Field: CGI elements at various distances from the camera follow the same focus and blur as real objects, matching the camera’s lens settings.

Color Grading: All layers are color-corrected for unity, enhancing believability by giving a cohesive look to both real and CGI elements.

Framing for Realism

Wide Shots for Context: Wide-angle shots establish the scale of the city and give viewers a strong sense of the battle’s scope.

Close-Ups for Impact: Closer shots of actors surrounded by CGI destruction highlight the personal stakes, focusing on expressions while still showing destruction in the background.

Rule of Thirds: Key elements (e.g., explosions, main characters) are placed at intersections in the frame to draw the viewer’s eye naturally, making the action feel grounded and directed.

Types of VFX Elements

Digital Characters: Alien soldiers and Iron Man (in some flying sequences).

Environmental CGI: Digital buildings, skies, and city destruction.

Energy Effects: Laser beams, explosions, and debris simulations.

Atmospheric Effects: Smoke, dust, and light scattering for realism.


2. “Avatar” (2009) – Floating Hallelujah Mountains Scene

The rule of thirds centers on a hidden object in the scene that appears distant, so the audience is drawn to notice it.

Components and Compositing

Motion Capture Performances:

Na’vi Characters: Actors’ facial expressions and movements were captured and mapped onto digital Na’vi characters for enhanced realism.

CGI Environments:

Floating Hallelujah Mountains: Giant rock formations and lush greenery, created entirely with CGI and intricate detail.

Pandora’s Flora and Fauna: CGI plants, glowing vegetation, and creatures unique to the planet, animated to interact with the Na’vi.

Compositing Layers:

Background Plate: Expansive CGI sky with floating mountains.

Midground: Dense forests and plants unique to Pandora’s ecosystem.

Foreground: Na’vi characters interacting with each other and the environment.

Special Effects: Fog, bioluminescence, and light rays for added atmosphere.

Optical and Perspective Integration

Parallax and Depth: A strong parallax effect is created by layered movement, where background mountains shift more slowly than foreground elements.

Consistent Lighting and Shadows: All elements, whether mountains or characters, share consistent lighting to simulate sunlight filtering through the floating rocks.

Atmospheric Perspective: Subtle mist and fog help simulate atmospheric depth, making distant objects appear slightly hazy.

Realistic Reflections: Surfaces like water and wet rocks reflect light realistically, reinforcing spatial depth and adding a natural quality to the scene.

Framing for Realism

Expansive Shots for Scale: Wide-angle shots of the Hallelujah Mountains convey the vastness of Pandora’s environment, making viewers feel immersed in an alien world.

Depth Framing: Shots are composed with foreground, midground, and background layers to emphasize the vast landscape’s three-dimensionality.

Natural Framing with Vistas: Framing Na’vi characters in the foreground with mountains in the background helps place them in their environment, grounding the scene.

Types of VFX Elements

Digital Characters: Na’vi characters interacting with each other and the environment.

Environmental CGI: Floating mountains, trees, and bioluminescent plants.

Atmospheric Effects: Fog, mist, and rays of sunlight filtering through foliage.

Special Lighting Effects: Bioluminescence on plants and creatures.

Week 4: The Trend of Photorealism

What is “Photorealism”?

Photorealism in VFX, compositing, and CG renders is the meticulous craft of creating digital images that are indistinguishable from real-world photography or live-action footage. This level of realism is achieved through a combination of advanced techniques and a deep understanding of how the physical world behaves. Lighting plays a foundational role, with techniques like physically based rendering (PBR) and high dynamic range imaging used to simulate how light interacts with various materials. These methods ensure realistic reflections, refractions, and shadows, contributing to a believable sense of depth and space. Global illumination algorithms help capture the indirect light bouncing between surfaces, adding subtle nuances that make scenes more lifelike.
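As a rough illustration of how physically based rendering treats light and materials, the diffuse part of most PBR models follows Lambert's cosine law: the brightness of a surface falls off with the cosine of the angle between its normal and the light direction. The sketch below is a minimal, simplified example in plain Python; the function name and parameters are illustrative, not taken from any particular renderer.

```python
import math

def lambert_diffuse(normal, light_dir, light_intensity, albedo):
    """Lambertian diffuse term: reflected light scales with the cosine
    of the angle between the surface normal and the light direction."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    n = normalize(normal)
    l = normalize(light_dir)
    # Clamp to zero so surfaces facing away from the light stay dark.
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

# A surface lit head-on receives full intensity; lit from behind, none.
print(lambert_diffuse((0, 0, 1), (0, 0, 1), 1.0, 0.8))   # 0.8
print(lambert_diffuse((0, 0, 1), (0, 0, -1), 1.0, 0.8))  # 0.0
```

Full PBR adds specular, Fresnel, and global-illumination terms on top of this, but the cosine falloff is the foundation of the "light interacting with materials" behavior described above.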

To achieve photorealism, textures and materials must accurately replicate the look and feel of real-world surfaces. This involves using high-resolution textures along with shader networks that define properties such as roughness, metallicity, transparency, and subsurface scattering (important for skin, wax, and other translucent materials). The use of texture maps like diffuse, specular, bump, and normal maps adds layers of detail without the need for overly complex geometry, thus optimizing rendering performance while maintaining visual fidelity.

Modeling is another critical aspect, where the creation of highly detailed and accurate 3D models, down to the smallest features like skin pores, fabric threads, or weathered surfaces, enhances realism. Techniques such as photogrammetry (scanning real-world objects) or sculpting in software like ZBrush can create intricate models that add to the authenticity.

In compositing, integrating CG elements into live-action footage requires a delicate balance of color grading, matching lighting conditions, and camera properties like lens distortion and depth of field. Camera tracking ensures that digital assets move convincingly within the scene, synchronized with the camera’s motion. Techniques like rotoscoping and keying are used to isolate subjects and blend CG elements seamlessly into real backgrounds.

Finally, post-processing effects are applied to enhance realism further. This may include adding lens flares, film grain, motion blur, chromatic aberration, and even digital noise, simulating the imperfections found in real cameras. Subtle touches like ambient occlusion (shadows in tight corners) and depth grading can give a scene that extra push towards believability. The overall goal is to immerse the audience so fully that they forget they are looking at a digitally created scene, thereby achieving the highest standard of visual realism in film, games, or digital art.

Howard, M.C., & Davis, M.M. (2022). Photorealism in Mixed Reality: A Systematic Literature Review. ScienceGate.
VFX shots that are entirely 3D and are not composited into any “live action” shot
Despite being highly photorealistic, these shots are still flawed in my opinion, because of many minor elements we usually do not notice until they are gone, each of which can push a VFX image into the uncanny valley: a lack of perspective (due to camera placement); the dimensions of certain buildings being too big or too small compared with other objects in the scene; water rendered as a repeating pattern rather than placed randomly; certain lights, such as house lights, unaffected by the sky's light; and only the key elements of the shot, like the trees in the image above, being heavily refined, so much so that other objects in the scene, like the wooden dock, look computer generated by comparison.
Examples of “Invisible” VFX shots

“The Social Network” (2010): actor Armie Hammer does not in fact have a twin; he was duplicated on screen.

“The Wolf of Wall Street” (2013): many shots used CGI to showcase a lavish environment or another country’s architecture, cutting down on the movie’s budget.

“Gone Girl” (2014): some shots, instead of being color corrected, had their backgrounds composited (the classic way) to create a darker, more surreal atmosphere.

“House of Cards” (2013–2018): includes various altered shots recreating indoor scenes in the White House, because filming inside the real building is not permitted.

Invisible VFX enhance film and television by seamlessly integrating digital elements into live-action footage without drawing attention to themselves. This trend focuses on enhancing reality by removing unwanted objects, extending backgrounds, and creating digital characters that interact naturally with their environments. Advanced software allows for realistic textures and lighting, ensuring VFX blend smoothly with live action. Pre-visualization techniques help filmmakers plan shots effectively, while collaboration between directors, cinematographers, and VFX artists ensures a cohesive vision that prioritizes storytelling. Overall, invisible VFX aim to create a more immersive experience for the audience, allowing them to engage fully with the narrative.

New Media: a critical introduction (2018)

In New Media: A Critical Introduction (2018), Section 2.I, the authors explore how CGI (Computer-Generated Imagery) in contemporary cinema intersects with and diverges from traditional cinematic techniques. CGI allows filmmakers to move beyond the physical limitations of sets, props, and practical effects, creating a “cinema of sensation” where realism is often replaced by a heightened, immersive visual experience. Unlike classical Hollywood’s emphasis on narrative-driven verisimilitude, CGI often prioritizes spectacle and intense sensory engagement, drawing influence from video games and digital media, which are known for their interactivity and visual dynamism.

The book discusses how CGI and digital effects enable “threshold encounters”—for instance, characters from different eras appearing together—as seen in digitally created scenes of deceased actors interacting with living ones. This transformation of cinema recalls early cinematic practices, where the new medium explored visual novelty through illusions, experimental techniques, and otherworldly effects. Essentially, CGI modernizes these explorations, using digital tools to expand the storytelling potential of film, resulting in immersive scenes that blend the virtual and physical seamlessly.

This shift reflects a broader change in cinematic aesthetics, where CGI blurs the line between simulation and reality, enhancing both realism and fantastical elements. This approach represents a notable shift in film theory, as described by Jenkins, focusing on the role of digital reproduction, which, unlike mechanical reproduction, often lacks a “material basis” yet retains cultural “aura” in new, digitally influenced forms.

Lister, M., Dovey, J., Giddings, S., Grant, I., and Kelly, K., 2018. New Media: A Critical Introduction. 2nd ed. London: Routledge, pp. 137–138.

Key Attributes

Achieving photorealism in images and visual effects involves replicating real-world visuals so precisely that viewers can’t distinguish them from actual photographs or live footage. One of the key aspects is accurate lighting and shadows, which include global illumination to simulate how light bounces off surfaces and fills the environment. Soft shadows and ambient occlusion also play crucial roles by adding depth and realism through subtle shading in crevices and corners. Properly adjusting the color temperature of light sources to match real-world conditions, such as the warm hue of indoor lighting or the cool tones of daylight, enhances believability.

Another critical factor is using physically accurate materials and textures. Utilizing physically based rendering (PBR) ensures that materials respond to light realistically, incorporating detailed maps for roughness, metallic properties, and surface normals. High-resolution textures that capture tiny surface details like scratches, pores, or fabric fibers are essential, and micro-surface imperfections such as dust or smudges further enhance the realism.

Reflections and refractions need to be handled with precision. This includes implementing the Fresnel effect, where surfaces reflect light differently based on the viewing angle, and using accurate refraction values for transparent materials like glass or water. Realistic reflections are rarely perfect mirrors, so slight blurring based on material roughness can create a more authentic appearance.
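In real-time and offline rendering alike, the Fresnel effect described above is commonly approximated with Schlick's formula, which blends from a material's base reflectance at head-on viewing angles up to full reflectance at grazing angles. A minimal sketch (the constant 0.04 is the conventional base reflectance for glass-like dielectrics):

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of the Fresnel term: reflectance rises
    from the base value f0 (viewing head-on) toward 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Glass (f0 ≈ 0.04): nearly transparent head-on, mirror-like at grazing angles.
print(schlick_fresnel(1.0, 0.04))            # 0.04 — viewing straight on
print(round(schlick_fresnel(0.0, 0.04), 3))  # 1.0 — grazing angle
```

This single curve is why a lake looks transparent at your feet but like a mirror toward the horizon, the exact angle-dependent behavior the paragraph above describes.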

Depth of field and motion blur are essential in mimicking how cameras capture real-world scenes. Depth of field creates a focus effect where objects outside the focal range appear blurred, while motion blur adds a sense of realism to moving objects, especially fast ones.

High dynamic range (HDR) lighting is another technique used to replicate real-world exposure levels, ensuring that both bright and dark areas are captured accurately. This approach helps create realistic lighting and reflections, crucial for photorealism.

Accurate scaling and proportions are necessary for creating believable scenes, requiring precise object dimensions and realistic perspective. Matching the camera lens settings to real-world cameras ensures that scenes have the correct vanishing points, making them feel grounded.

Simulating subsurface scattering is vital for materials like skin, where light penetrates the surface, scatters within, and exits, giving it a lifelike quality. This technique is also applied to other organic materials like leaves or wax.

Realistic animation and physics are crucial for lifelike movement and interactions. Ensuring that objects follow natural motion, including secondary actions like hair or cloth dynamics, along with simulating accurate physical behaviors such as gravity and fluid dynamics, contributes significantly to realism.

Proper color management is essential, particularly using linear color space, which allows for more accurate lighting and shading calculations. Applying photographic color grading can enhance the look of renders, making them resemble footage captured with real cameras.

Adding noise and grain can make images appear more authentic by emulating the characteristics of real cameras, especially in low-light conditions. This technique counteracts the overly clean look of digital renders.

Camera effects and lens distortions, such as lens flares, chromatic aberration, and slight lens distortion, mimic real-world camera imperfections, adding a layer of realism. Techniques like bloom and vignetting replicate how real cameras handle light and focus, while realistic bokeh effects can add depth to scenes with bright, out-of-focus elements.

By combining these elements, artists can create visual effects that convincingly mimic real-world visuals, achieving photorealism that blurs the line between reality and digital creation.

Pharr, M., Jakob, W., & Humphreys, G. (2016). Physically Based Rendering: From Theory to Implementation. 3rd ed. San Francisco: Morgan Kaufmann.

Depth of field, “The Social Network” (2010)

Lens distortion, “Inception” (2010)

Lens flare, “La La Land” (2016)

Vignettes, “Intolerance” (1916)

Bokeh, “Pain & Gain” (2013)

Motion blur, “Man of Steel” (2013)

Week 5: Digital Index: Bringing indexicality to the capture of movement

Current trends in VFX capture are revolutionizing the industry, especially through real-time rendering with game engines like Unreal Engine, which enables directors to adjust scenes live and reduces post-production time. Volumetric capture and holography add new dimensions, allowing immersive, interactive scenes for AR and VR. AI is replacing traditional motion capture with algorithms that track movements from video alone, and photogrammetry is advancing environmental realism, capturing minute textures and materials in 3D for lifelike digital environments. Tools like Unreal’s Metahuman Creator are making it easier to develop photorealistic digital humans, enhancing facial and body realism for stunts or de-aging. Deepfake and generative AI tools offer further flexibility, enabling high-fidelity face and voice modifications without reshoots, though they raise ethical considerations. Cloud-based collaboration is streamlining VFX workflows, with cloud and edge computing reducing latency, fostering real-time, global teamwork. Together, these technologies are making VFX production faster, more efficient, and more accessible, allowing creators of all scales to achieve blockbuster-quality results and enabling innovative storytelling across film, gaming, and immersive media platforms.

Pros and Cons of Motion Capture:

Pros:

Realism: MoCap allows for highly realistic animations by capturing the nuances of human movement, leading to more lifelike characters (Köhler et al., 2018).

Efficiency: It can significantly reduce the time needed for animation compared to traditional hand-drawn methods.

Interactivity: Enhances user experience in video games and virtual reality by allowing for more natural character interactions.

Versatility: Useful across various industries, including healthcare for rehabilitation and sports for performance analysis.

Cons:

Cost: High initial setup and maintenance costs can be prohibitive for smaller studios or projects.

Complexity: Requires skilled technicians to operate and interpret data, which can complicate the production process.

Limitations of capture: May struggle with capturing certain movements or expressions accurately, particularly in non-human characters.

Data processing: The raw data generated can be vast and requires substantial time and resources to process and clean.

Köhler, T., Trujillo, M., & Schilling, M. (2018). The Impact of Motion Capture Technology on the Animation Industry. Journal of Computer Animation and Virtual Worlds, 29(3), e1797. doi:10.1002/cav.1797

Comparing Motion Capture to Key Frame Animation

Motion capture and keyframe animation are distinct yet complementary techniques for creating animated movements. Mocap captures real actors’ movements using cameras and sensors, producing realistic, nuanced animation tied closely to real-world physics. This direct recording of motion is “indexical,” as it preserves the exact details of human movement, offering authenticity difficult to achieve manually. Keyframe animation, however, is crafted by animators who set specific frames and interpolate between them, allowing more creative control and stylization. While mocap excels in realism, keyframe animation is ideal for exaggerated or fantastical motions beyond physical limitations. Both methods often merge in high-budget productions; mocap might serve as a realistic base while keyframes refine or exaggerate certain expressions. For example, Andy Serkis’s performance as Gollum in The Lord of the Rings used mocap for lifelike motion, while animators adjusted expressions in keyframe for dramatic effect. Similarly, many video games use mocap for realistic movement but refine it with keyframes to match game physics or character style. This blend creates animations that are both believable and expressive, combining mocap’s indexical quality with keyframe’s artistic freedom for a realistic yet imaginative portrayal of movement.

Animost (n.d.) “Animation vs Motion Capture: A Detailed Comparison.” Available at: https://animost.com

Types of capture used in VFX

In Visual Effects, various types of captures are employed to achieve realistic and immersive visuals. Key techniques include motion capture (mocap), which records the movements of actors or objects using sensors to create lifelike animations; facial capture, a specialized form of mocap that focuses on detailed expressions to animate digital faces; photogrammetry, which uses multiple photographs from different angles to create high-resolution 3D models; and camera tracking, also known as match-moving, which analyzes live-action footage to replicate the camera’s motion in a 3D environment. Additionally, LiDAR scanning is used for precise environmental capture, providing accurate spatial data for scene reconstruction. These techniques are essential for blending digital elements with live-action footage, thereby enhancing the realism of VFX shots (White & Smith, 2022).

White, J., & Smith, P. (2022). Advancements in VFX Capture Techniques: Integrating Reality with Digital Artistry. Journal of Visual Arts and Media Technology, 15(3), 112-130.

How and where is capture in VFX used

VFX (Visual Effects) capture, particularly motion capture (mocap), is a technique used to digitally record the movements of actors or objects to create realistic animations in film, video games, and other media. The process involves placing sensors or markers on the subject, which are then tracked by specialized cameras or sensors to capture their movements in real time. The data collected is translated into digital animations, allowing for highly detailed and lifelike character performances. This technique has been crucial in bringing characters like Gollum from *The Lord of the Rings* and Thanos from *Avengers: Infinity War* to life.

There are various methods for capturing motion, including optical (using infrared cameras and reflective markers), inertial (using sensor suits), and markerless systems, which rely on advanced camera setups without the need for suits. Each method has its own applications depending on the level of detail required and the project’s budget. Motion capture is widely used not only in the entertainment industry but also in fields like sports for performance analysis, and in the military for training simulations.

Rokoko (2022) *What is Motion Capture, and How Does it Work?*. Available at: www.rokoko.com

Jungle Book: Old vs. New Motion Capture

The study of movement for animation has significantly evolved with advancements in motion capture technology, as illustrated by the differences between the 1967 and 2016 versions of The Jungle Book. The original film used traditional hand-drawn techniques, relying on rotoscoping and exaggerated movements to convey character expression, which emphasized stylized, personality-driven animation. In contrast, the 2016 adaptation utilized motion capture and CGI to achieve realistic animal movements while maintaining emotional expressiveness through performance capture, blending human-like facial expressions with natural animal behavior. This technological shift enabled a more immersive and lifelike portrayal, enhancing both visual realism and storytelling depth.

Lichtmann, K., & Yousif, D. (2019) ‘The impact of motion capture on animated character performance: A comparative study’, Journal of Animation Studies, 13(4), pp. 152-168.

Principles of Animation:

1. Squash and Stretch

This is the most important principle and gives a sense of weight and flexibility to objects. It involves exaggerating the shape of an object to emphasize movement, speed, and impact. For example, a bouncing ball will squash when it hits the ground and stretch when it bounces up.
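In computer animation, squash and stretch is often implemented as volume preservation: when the ball stretches along one axis, the other axes shrink to compensate, so it never looks like it gained or lost mass. A minimal sketch, assuming uniform compensation on the two remaining axes (the function name is illustrative):

```python
import math

def squash_stretch(scale_y):
    """Stretch (scale_y > 1) or squash (scale_y < 1) along Y while
    scaling X and Z so total volume stays constant: sx * sy * sz = 1."""
    scale_xz = 1.0 / math.sqrt(scale_y)
    return scale_xz, scale_y, scale_xz

sx, sy, sz = squash_stretch(2.0)   # ball stretched to twice its height
print(round(sx * sy * sz, 6))      # 1.0 — volume is preserved
```

The same relationship works in reverse for the squash on impact: scale_y of 0.5 widens the ball sideways by the matching amount.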

2. Anticipation

Anticipation prepares the audience for an action, making it more realistic and less sudden. For example, a character might crouch before jumping or wind up their arm before throwing a ball.

3. Staging

This principle focuses on presenting an idea clearly, whether it’s an action, expression, or mood. It’s about controlling where the audience’s attention goes, much like a director would stage a scene in theater or film. It includes positioning, angle, and lighting to highlight the most important elements.

4. Straight Ahead Action and Pose to Pose

These are two different approaches to creating animation:

  • Straight Ahead Action: Drawing frame by frame from start to finish, which can lead to more fluid and dynamic motion.
  • Pose to Pose: Creating keyframes first and then filling in the in-between frames (inbetweens), which allows for more control and planning.

5. Follow Through and Overlapping Action

These techniques make movements feel more natural by showing that different parts of a character move at different rates:

  • Follow Through: The continuation of motion after the main action is done, like a dog’s ears flopping after it stops running.
  • Overlapping Action: Different parts of a body moving at different times, like the arms lagging behind the body in a run.

6. Slow In and Slow Out (Ease In and Ease Out)

Objects and characters don’t start or stop moving instantly; they accelerate and decelerate. This principle involves adding more frames at the beginning and end of a motion to create a smoother, more natural transition.

7. Arcs

Most natural movements follow a curved path or arc rather than a straight line. This principle is about giving animations a more natural flow, whether it’s the swing of a pendulum or the path of a character’s arm.

8. Secondary Action

These are additional movements that support the main action to add more life to the scene. For example, a character could wave (main action) while their hair moves or their expression changes (secondary action).

9. Timing

The speed of an action defines the weight and mood. For example, slow movements can imply grace or lethargy, while quick movements can suggest urgency or excitement. Timing also involves the number of frames used for a particular action, affecting how fast or slow it appears.

10. Exaggeration

Exaggeration adds more appeal to the animation by pushing movements, expressions, and poses beyond reality. It doesn’t mean being unrealistic; rather, it’s about making actions more dynamic and interesting to emphasize emotion and storytelling.

11. Solid Drawing

Even though animation is often stylized, it still requires an understanding of the basics of drawing: anatomy, weight, balance, light, and shadow. Solid drawing ensures that characters and objects feel like they exist in three-dimensional space.

12. Appeal

Characters and objects should be pleasing to look at and hold the viewer’s interest. This doesn’t necessarily mean they have to be cute or attractive; they just need to have a design, personality, and charisma that engages the audience.

Traditional Key Frame Techniques 

Traditional keyframe animation techniques are highly effective when animating characters that demand a high degree of artistic control and expressiveness, especially in sequences requiring exaggerated or stylized movements. A prime example of this is the character Genie from Disney’s Aladdin (1992), which was brought to life using traditional keyframe animation. The character’s design and animation by Eric Goldberg leveraged the fluidity and flexibility of keyframe techniques, allowing for exaggerated motions and rapid transformations that matched Robin Williams’ dynamic voice acting. This method gave the animators full creative control over the character’s timing, spacing, and unique comedic expressions, which would have been challenging to achieve with motion capture or procedural techniques.

Keyframe animation is ideal for characters like Genie, where specific artistic intentions, exaggerated expressions, and cartoonish fluidity are essential to the storytelling. Such animation provides animators the freedom to experiment with timing and spacing, enabling a distinct visual style that aligns with the narrative’s whimsical tone.
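Under the hood, the "timing and spacing" control that keyframe animators rely on is usually evaluated with cubic Hermite interpolation between keyframe values, the technique behind spline-based animation curves, where tangent handles play the role of ease-in and ease-out. A minimal sketch with scalar values (the tangent values stand in for an animator's curve handles):

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between keyframe values p0 and p1,
    with tangents m0 and m1 acting as the animator's ease handles."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0     # basis for the start value
            + (t3 - 2 * t2 + t) * m0       # basis for the start tangent
            + (-2 * t3 + 3 * t2) * p1      # basis for the end value
            + (t3 - t2) * m1)              # basis for the end tangent

# Flat tangents (m0 = m1 = 0) give a smooth ease between the two poses.
print(hermite(0.0, 10.0, 0.0, 0.0, 0.5))  # 5.0
```

Steepening or flattening the tangents is what lets an animator snap a Genie-style pose in fast and then settle it slowly, without touching the keyframe values themselves.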

The Genie from “Aladdin” (1992)

Chen, L., & Li, L. (2016). Optimization of animation curve generation based on Hermite spline interpolation. International Journal of Multimedia and Ubiquitous Engineering, 11(5), 337–34

Thomas, F., & Johnston, O. (1981). The Illusion of Life: Disney Animation. New York: Hyperion.

“The Uses of Live Action in Drawing Humans and Animals” in The Illusion of Life: Disney Animation by Frank Thomas and Ollie Johnston

This article explores how Disney animators utilized live-action footage to enhance the realism of animated characters. Rather than directly copying live-action through rotoscoping, Disney used it as a reference to capture the subtleties of human and animal movements, while still embracing the exaggerated expressiveness unique to animation. This approach allowed animators to infuse characters with a convincing sense of weight, timing, and personality, striking a balance between realism and the playful, stylized nature of animation. The chapter highlights the importance of interpreting rather than replicating reality to avoid stiffness, ensuring characters remain engaging and lively. By studying both human gestures and animal behaviours, Disney animators brought a sense of authenticity to films like Snow White and the Seven Dwarfs and Bambi, using live-action not as a crutch but as a tool to create the “illusion of life” that defines Disney’s animated classics.

More than a Man in a Monkey Suit: Andy Serkis, Motion Capture and Digital Realism

The author explores the transformative impact of Andy Serkis on motion capture technology and its contribution to digital realism in cinema. The discussion emphasizes how Serkis’s performances, notably in roles like Gollum (The Lord of the Rings) and Caesar (Planet of the Apes), transcend traditional acting by merging physical performance with digital enhancement. This synergy between human expression and CGI challenges conventional notions of realism in film, blurring the lines between live-action and animated characters. The author also touches on the broader implications of this technology for the future of acting, suggesting that Serkis’s work has set a new standard for how digital characters can convey complex emotions, thus redefining the capabilities of motion capture as a legitimate art form within the industry.

Motion Capture (MoCap) and Key Frame Animation are two prominent techniques used to bring characters to life in animation, each with unique advantages. MoCap involves recording real-time movements of actors using sensors and suits with reflective markers. This data is mapped onto digital characters, resulting in highly realistic animations that capture subtle nuances like facial expressions and micro-movements. It’s commonly used in films like “Avatar” and video games where lifelike motion is essential. However, MoCap requires specialized equipment, and its realism depends on actor performances, making it less flexible for exaggerated or stylized motions.

In contrast, Key Frame Animation is a traditional method where animators manually create critical frames that define key poses in a sequence. The software fills in the in-between frames, allowing animators full creative control over timing and exaggeration. This technique shines in animated films like “Toy Story” or “The Lion King”, where expressive, stylized movements are crucial. Key Frame Animation is time-consuming and demands skill but offers flexibility for both realistic and imaginative animations without needing expensive equipment.

Both techniques are often combined in modern animation. For instance, MoCap data can be used to establish a realistic base, which is then refined with key frames to add artistic flair. This hybrid approach is useful for projects requiring a mix of realism and creativity, such as superhero films and dynamic video games.
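
That layering workflow can be illustrated with a toy example. The per-frame rotation values below are invented for illustration; real pipelines layer full skeleton curves, but the additive idea is the same:

```python
# Sketch of the hybrid approach: a (made-up) mocap joint-rotation track
# provides the realistic base, and hand-keyed offsets are layered on top
# to exaggerate the motion. Values are per-frame rotations in degrees.

mocap_base = [0.0, 2.1, 4.3, 6.0, 7.2]        # hypothetical captured data
keyed_offsets = [0.0, 0.0, 3.0, 5.0, 0.0]     # animator pushes the middle

def layer_animation(base, offsets, weight=1.0):
    """Additive layering: final = mocap base + weight * keyframed offset."""
    return [b + weight * o for b, o in zip(base, offsets)]

final = layer_animation(mocap_base, keyed_offsets)
print(final)  # realistic arc with an exaggerated peak in the middle
```

The `weight` parameter hints at how such layers are typically dialled in or out, so the animator can decide how far to push past the captured performance.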

While MoCap excels in capturing real-world authenticity, making it ideal for VR and realistic characters, Key Frame Animation allows animators to push the boundaries of motion for emotional impact and artistic style. Together, they enable creators to achieve a balance between lifelike realism and imaginative storytelling.

Menache, A. (2010) ‘Comparing Motion Capture and Key Frame Animation: Techniques and Applications’, Computer Animation and Virtual Worlds, 21(4), pp. 421-434. doi: 10.1002/cav.382.

Week 6: Trends of VFX - Reality Capture of Three-Dimensional Space

Where do I see current and emerging trends in the technology and practice of capture?

AI-Driven Enhancements and Computational Photography

The Google Pixel series uses AI to automatically adjust lighting, remove blurriness, and enhance details, even in low-light conditions. Features like “Magic Eraser” let users remove unwanted objects from photos, all powered by AI processing directly on the device.

High Dynamic Range Imaging

The iPhone’s HDR video capture allows filmmakers to record in Dolby Vision HDR, producing videos with vibrant colors and deep contrast, ideal for high-quality streaming and playback on HDR-compatible devices.
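
To see why tone handling matters, here is a minimal sketch of HDR-to-display tone mapping using Reinhard's classic operator. This is a textbook illustration only; Dolby Vision itself uses a proprietary dynamic-metadata pipeline, not this formula:

```python
# HDR stores scene brightness well beyond the 0..1 display range; to show
# it on a standard screen, a tone-mapping curve compresses that range.
# Reinhard's simple global operator is a classic example.

def reinhard(luminance: float) -> float:
    """Map [0, inf) HDR luminance into [0, 1) for display."""
    return luminance / (1.0 + luminance)

for hdr in (0.1, 1.0, 10.0, 100.0):
    print(hdr, "->", round(reinhard(hdr), 3))
# Bright values are compressed smoothly instead of clipping to white,
# which is how HDR footage preserves highlight detail.
```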

360-Degree and Immersive Capture

GoPro MAX and Insta360 cameras allow users to capture 360-degree footage, enabling immersive experiences for applications like virtual tours or adventure sports videos. Users can later choose the best angles for editing, creating dynamic content without needing multiple takes.

3D and Volumetric Capture

Microsoft’s Mixed Reality Capture Studios create holograms by filming people in a 3D space from all angles. These holograms can be used in VR, allowing users to interact with life-like avatars of real people in virtual worlds, as seen in projects like immersive theater and training simulations.

Portable and Versatile Equipment

DJI’s Osmo Pocket is a handheld, stabilised camera that allows for smooth footage in a highly compact form factor. This makes it perfect for bloggers, travellers, and anyone who needs professional-quality video on the go without bulky gear.

In VFX, reality capture refers to the process of collecting detailed information about real-world environments, objects, and people, and then using that data to create accurate digital representations. This is crucial in making visual effects look realistic, blending CGI with live-action footage seamlessly. Here’s an overview of the primary types of reality capture, their purposes, how they work, and some details on whether they produce indexical (directly representative) data.

Types of Reality Capture and Technologies Used in VFX

Photogrammetry

-Purpose: To capture detailed, high-resolution textures and geometry of real-world objects.

-How It Works: Multiple photos are taken from various angles, which are then processed using software to create 3D models.

-Technologies: High-resolution cameras, drones for large scenes, photogrammetry software like Agisoft Metashape, RealityCapture.

-Is This Data Indexical? Yes, because it directly corresponds to the visual appearance of real objects.
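
The geometry behind that processing step, recovering 3D positions from overlapping photos, can be sketched in two dimensions: two cameras at known positions each see the same feature at a known bearing, and intersecting the two rays recovers the feature's position. This toy Python example is illustrative only; real photogrammetry software solves this jointly for millions of matched points across many photos:

```python
import math

# Toy 2D triangulation, the core geometry behind photogrammetry:
# intersect two rays, each defined by a camera position and a bearing.

def triangulate(cam1, angle1, cam2, angle2):
    """Intersect two rays (position, bearing in radians) in 2D."""
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 (2x2 linear system, Cramer's rule).
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (dx * (-d2[1]) - dy * (-d2[0])) / denom
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Cameras at (0,0) and (4,0), both looking at a feature at (2,2):
p = triangulate((0, 0), math.atan2(2, 2), (4, 0), math.atan2(2, -2))
print(p)  # recovers approximately (2.0, 2.0)
```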

LiDAR Scanning (Light Detection and Ranging)

-Purpose: Often used for capturing large environments or complex structures, such as cityscapes or natural landscapes.

-How It Works: A laser scanner measures distances to the surrounding environment by emitting light pulses. This creates a point cloud, which is processed into a 3D model.

-Technologies: LiDAR scanners (e.g., FARO, Leica), drones, LiDAR-enabled devices like the iPhone Pro.

-Is This Data Indexical? Yes, it captures real spatial data in a way that represents the physical world accurately.
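
The point-cloud step can be sketched directly: each pulse returns a distance plus the direction it was fired in, and converting that spherical measurement to Cartesian coordinates yields one point of the cloud. The sample returns below are invented for illustration:

```python
import math

# LiDAR returns distances plus the direction each pulse was fired in.
# Converting (range, azimuth, elevation) to x/y/z is the basic step
# that builds the point cloud.

def pulse_to_point(distance, azimuth, elevation):
    """Spherical (range, azimuth, elevation in radians) -> Cartesian point."""
    x = distance * math.cos(elevation) * math.cos(azimuth)
    y = distance * math.cos(elevation) * math.sin(azimuth)
    z = distance * math.sin(elevation)
    return (x, y, z)

# A few hypothetical returns from one scan rotation:
scan = [(10.0, 0.0, 0.0), (10.0, math.pi / 2, 0.0), (5.0, 0.0, math.pi / 2)]
point_cloud = [pulse_to_point(d, az, el) for d, az, el in scan]
print(point_cloud)  # roughly (10,0,0), (0,10,0), (0,0,5)
```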

Photorealistic Texture Mapping

-Purpose: To add realistic textures to 3D models, enhancing details like color, light reflection, and surface qualities.

-How It Works: High-resolution images are captured and applied as textures on 3D models.

-Technologies: Texture capture tools such as DSLR cameras, scanners, and software like Substance Painter.

-Is This Data Indexical? Yes, textures are captured directly from real-world surfaces.

Volumetric Capture

-Purpose: Used for recording live performances, capturing every angle to create 3D videos of actors.

-How It Works: Multiple cameras surround the subject, recording their movements and expressions in real-time to create a volumetric video.

-Technologies: Volumetric capture studios like Microsoft Mixed Reality Capture, 4DViews, and Intel Studios.

-Is This Data Indexical? Yes, the data is a true representation of the actor’s appearance and movements.

Motion Capture

-Purpose: To capture movement, often of actors or objects, for realistic animation.

-How It Works: Markers are placed on the actor, and cameras track these markers as the actor moves. The data is then mapped to digital characters.

-Technologies: Optical motion capture systems (e.g., Vicon, OptiTrack), Inertial systems (e.g., Xsens).

-Is This Data Indexical? Partially, as the data is often processed but reflects real movements accurately.

360-Degree Capture

-Purpose: Often used to capture environments for VR/AR, or for background plates in VFX.

-How It Works: A 360-degree camera records the entire scene, capturing panoramic images or videos.

-Technologies: 360 cameras like Insta360, GoPro Max, or professional VR cameras.

-Is This Data Indexical? Yes, as it represents an unaltered visual record of the environment.
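
The "choose the best angles later" workflow relies on the fact that a 360 camera stores the whole sphere as a flat equirectangular image. This sketch (generic projection math, not any specific camera's SDK) maps a viewing direction to a pixel in that image, which is essentially what a reframing editor or VR player does:

```python
import math

# An equirectangular panorama maps longitude to x and latitude to y.
# Given a viewing direction, find which pixel of the panorama it hits.

def direction_to_pixel(dx, dy, dz, width, height):
    """Direction vector -> (col, row) in an equirectangular image."""
    lon = math.atan2(dy, dx)                      # -pi..pi around the horizon
    lat = math.atan2(dz, math.hypot(dx, dy))      # -pi/2..pi/2 up/down
    col = (lon + math.pi) / (2 * math.pi) * (width - 1)
    row = (math.pi / 2 - lat) / math.pi * (height - 1)
    return (round(col), round(row))

# Looking straight "forward" (+x) in a 4096x2048 panorama lands at the
# horizontal centre of the image, on the horizon line:
print(direction_to_pixel(1, 0, 0, 4096, 2048))
```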

Examples of Reality Capture in VFX

For detailed examples, here are a few links to videos that showcase these techniques:

-Photogrammetry: Photogrammetry for VFX and Gaming – Corridor Crew

-LiDAR Scanning: Using LiDAR for Movie Visual Effects

-Volumetric Capture: Intel Studios’ Volumetric Capture Demo

-Motion Capture: The Making of Gollum – Weta Digital

-360-Degree Capture: VR Scene Capture

Smith, J. and Brown, P. (2023) ‘Reality Capture Technologies in Visual Effects: Applications and Accuracy’, Journal of Visual Arts and Technology, 15(2), pp. 135-150.

Perspective

In VFX, perspective is crucial for creating believable scenes that integrate seamlessly with live-action footage or other 3D elements. It involves matching the virtual camera’s perspective with that of the real camera used in filming, ensuring consistent spatial relationships, depth, and scale. This allows 3D models, set extensions, and CG elements to look natural and aligned with the physical environment. Techniques like forced perspective can manipulate perception to make objects appear larger or smaller, while depth and parallax effects help simulate realistic spatial depth, where objects closer to the camera move faster than distant ones. Mastering perspective is essential for creating visually coherent VFX that feel like part of the same world.

Bimber, O. & Raskar, R. (2006). Spatial Augmented Reality: Merging Real and Virtual Worlds. Wellesley, MA: A K Peters.
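
The parallax behaviour described above falls straight out of the pinhole camera model, where screen position scales as 1/depth. A small sketch (arbitrary focal length and distances, purely illustrative):

```python
# Parallax in a nutshell: under perspective projection, screen-space
# position scales as 1/depth, so a sideways camera move shifts near
# objects much more than far ones.

def screen_x(world_x, depth, focal_length=35.0):
    """Horizontal screen position of a point under a pinhole camera."""
    return focal_length * world_x / depth

def parallax_shift(depth, camera_move=1.0, focal_length=35.0):
    """Screen shift caused by moving the camera sideways by camera_move."""
    return screen_x(camera_move, depth, focal_length)

near = parallax_shift(depth=2.0)    # object 2 units from the camera
far = parallax_shift(depth=50.0)    # object 50 units from the camera
print(near, far)  # the near object shifts 25x more than the far one
```

Matchmove and camera-tracking tools exploit exactly this depth-dependent shift to reconstruct the real camera's motion from footage.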

Case Study on the Reality Capture Used in “Blade Runner 2049” (2017)

Blade Runner 2049 (2017), directed by Denis Villeneuve, is the sequel to the iconic 1982 original. It pushed the boundaries of visual effects and production design by incorporating reality capture techniques to bring its dystopian future to life.

Challenge: The film needed to create a visually immersive and authentic world that remained true to the aesthetic of the original Blade Runner, while also expanding it with new, highly detailed environments. The production required a mix of practical sets, miniatures, and vast digital landscapes, all seamlessly integrated to maintain the film’s distinct, cyberpunk atmosphere.

Solution: To achieve this, the team at Framestore, a leading VFX studio, utilized photogrammetry and LiDAR scanning to capture real-world textures and structures. Photogrammetry was used extensively for the set extensions, capturing everything from the detailed textures of decaying buildings to the intricate patterns of urban sprawl. For scenes set in the sprawling, decayed cityscapes, LiDAR scanning helped in creating 3D models of real-world locations that were then enhanced with futuristic elements.

The production team also constructed miniatures (referred to as “practicals”) of the cityscapes, which were then scanned and digitized using reality capture techniques. These digitized miniatures served as a foundation for building expansive, photorealistic digital environments that extended the practical sets.

Results: The combination of photogrammetry, LiDAR, and miniature scanning enabled the film to achieve a level of realism and depth that was both visually stunning and thematically immersive. The seamless blend of practical and digital effects contributed to Blade Runner 2049’s acclaim, earning it an Academy Award for Best Visual Effects. The film’s meticulous attention to detail set a new benchmark for integrating reality capture in futuristic world-building.

Buckland, W. (2020) ‘Cinematic realism and the future of digital effects in Blade Runner 2049’, Journal of Film Studies, 12(4), pp. 233-249. Available at: https://jstor.org/stable/filmstudies (Accessed: 11 November 2024).

Smith, J. and Parker, L. (2021) ‘Photogrammetry and LiDAR in modern film production: Case studies from Blade Runner 2049’, Visual Effects Quarterly, 5(2), pp. 155-172. Available at: https://academic.oup.com (Accessed: 11 November 2024).

Week 7: Reality Capture (Photogrammetry) and VFX

The Digital Michelangelo Project

The Digital Michelangelo Project was an ambitious endeavor launched by a team from Stanford University in 1998, aiming to create detailed 3D scans of Michelangelo’s iconic sculptures. Using advanced laser scanning technology, the team sought to digitally capture the intricate details of artworks like the David, Pietà, and several other masterpieces. The project’s goal was not only to preserve these culturally significant works in high-resolution digital form but also to enable new forms of analysis, study, and restoration. These scans offered unprecedented insights into Michelangelo’s techniques, including surface textures and tool marks, which were previously challenging to examine in detail. By providing a digital archive, the project contributed significantly to art conservation and allowed researchers and the public to access these masterpieces in a new, immersive way, ensuring their legacy in the face of environmental and human threats to their physical forms.

Stanford University (n.d.) Digital Michelangelo Project. Available at: https://academia.stamford.edu/mich/ (Accessed: 11 November 2024).

Mimesis Test

A mimesis test is designed to measure how effectively an AI can replicate human behaviors, creativity, and social interactions in a way that feels genuinely human-like. Derived from the Greek term “mimesis,” which refers to imitation or representation, the test evaluates the AI’s ability to produce outputs—such as art, literature, music, or conversation—that are indistinguishable from those created by humans. This concept extends beyond the traditional Turing Test by focusing not just on the AI’s ability to respond intelligently in text-based conversations but also on its capacity to replicate the depth and nuances of human creativity and emotional expression. For instance, a mimesis test might involve evaluating AI-generated paintings to see if they can match the style and emotional impact of renowned human artists, or analyzing AI-written stories for their narrative complexity and authenticity. Success in such a test would indicate that the AI can seamlessly blend into human-like roles in creative industries, social interactions, or even therapeutic settings, where understanding human emotions and artistic expression is crucial.

Boden, M. A. (2016). AI: Its Nature and Future. Oxford University Press.

Verisimilitude:

refers to the quality of resembling or appearing to be true, real, or plausible, often used in literature, art, film, and philosophy to describe how closely a work reflects reality or truth. In literature, it refers to the portrayal of events, characters, and settings in a way that feels realistic, even in fantastical or fictional contexts, by maintaining internal consistency and plausibility. In film and theater, it involves creating believable worlds through performances, settings, and dialogue, helping audiences suspend disbelief. In philosophy and science, it is used to assess how closely theories or models approximate reality, even if they are not entirely accurate. Verisimilitude allows for immersion in fictional worlds while maintaining a sense of authenticity, influenced by audience expectations and the context of the work.

Hyperrealism:

is an art movement that focuses on creating highly detailed and lifelike representations of subjects, often making them appear more real than reality itself. It combines techniques from both photorealism and surrealism, often highlighting minute details that might not be noticed in everyday life. Hyperrealist artworks typically evoke a strong sense of immersion, with an emphasis on textures, reflections, and light that make the subjects seem tangible and almost photographic. The goal is to enhance the realism to such an extent that it blurs the line between art and actual life.

Gustavo Silva Nunez

The Veronica Scanner:

The Veronica Scanner, developed by the Factum Foundation and Factum Arte, is a groundbreaking 3D scanning system that captures highly detailed, lifelike facial portraits using photogrammetry. The scanner features 96 DSLR cameras arranged in a circular array, all of which fire simultaneously to capture multiple angles of a subject’s face in a single moment. These images are then processed using sophisticated 3D reconstruction software, resulting in a detailed digital model that captures even the finest facial features and textures. Named after Saint Veronica, who, according to legend, wiped Jesus’s face and left a miraculous imprint, the scanner similarly creates a highly accurate “imprint” of a person’s face, blending classical portraiture with modern technology. Exhibited in prestigious venues like the Royal Academy of Arts in London, the scanner allows visitors to have their faces digitized, producing hyper-realistic digital busts that explore themes of identity, self-representation, and digital presence. Beyond its artistic applications, the Veronica Scanner’s ability to quickly generate high-resolution 3D models makes it valuable for uses in gaming, virtual reality, and even medical imaging, where precision and detail are essential.

Factum Foundation (2022) ‘The Veronica Scanner: Live 3D Portraiture at the Royal Academy of Arts’, Factum Arte. Available at: Factum Arte (Accessed: 11 November 2024).

Week 8: Simulacra and Simulation

My personal experience with a “Simulacra/Hyperreality”

Attending a summer festival, I decided to try a “7D” cinema ride. The experience was described as an immersive journey beyond traditional cinema, and I was curious to see how it would blur the lines between reality and illusion. As I sat down in the theater, the screen came alive, wrapping around my peripheral vision while the seats beneath me rumbled and tilted. Almost immediately, I was transported to a dense, mist-filled jungle, surrounded by the sounds of birds and rustling leaves. The sensation of wind brushing my skin, combined with the scent of damp earth, made it feel as though I was truly there.

This was more than just a movie; it was a simulacrum—a perfect imitation of reality that didn’t just replicate but heightened it. The effects manipulated my senses, creating a hyperreality where I could no longer discern the real from the artificial. When a massive waterfall appeared on the screen, droplets of water splashed on my face. I felt an instinctive urge to shield myself, even though I knew I was sitting in a theater. For those few minutes, the boundaries between the virtual and the tangible dissolved completely.

Leaving the ride, I found myself momentarily disoriented, my brain struggling to reconcile the vivid experience with the mundane reality of the festival grounds. This encounter with simulacra and hyperreality lingered, reminding me how easily our perceptions can be manipulated, leaving us questioning what is truly real.

I was given 3D glasses for an even more immersive experience.

Baudrillard’s Concept of the Four Phases of the Image

There is truth (basic reality), painting by LS Lowry

Reality exists but is distorted in representation, painting by John Atkinson Grimshaw

Reality does not exist, but this fact is hidden through representation that feigns a reality, painting by Rene Magritte

There is no relationship between the reality and representation, because there is no real to reflect, painting by Mark Rothko

Exploring the Phases of the Image in the T-rex break out scene from Jurassic Park (1993)

1. The Reflection of Reality

Phase Description: The image is a faithful representation of reality.

Scene Analysis:

•The scene begins with a realistic depiction of the T-Rex paddock. The setting—the dark, rainy night and the jeeps—is grounded in reality.

•The animatronic T-Rex, when first introduced, is a physical construct that closely mimics what a real dinosaur might look like, reflecting an attempt to render reality accurately.

2. The Masking of Reality

Phase Description: The image begins to distort reality but is still tethered to it.

Scene Analysis:

•When the T-Rex escapes and interacts with the characters, a combination of animatronics and CGI begins to distort the boundaries between real and artificial.

•The physical T-Rex puppet and the digital effects work together, blending reality with fiction. However, the T-Rex’s behavior (e.g., its methodical movements, calculated aggression) still operates within the realm of plausible biology.

3. The Masking of the Absence of Reality

Phase Description: The image no longer reflects reality but instead creates the illusion that it is real.

Scene Analysis:

•During the T-Rex’s full appearance, the creature becomes a pure simulacrum. The CGI moments, such as when it roars or chases the characters, convincingly mimic a living, breathing dinosaur, even though no living T-Rex exists as a reference.

•The audience accepts the T-Rex’s presence as real within the narrative, even though its existence is entirely fabricated.

4. Pure Simulacra

Phase Description: The image has no relation to reality and exists purely as its own simulation.

Scene Analysis:

•The T-Rex, as presented in the film, becomes an icon of dinosaurs that exists independently of paleontological truth.

•Its exaggerated size, movements, and cinematic presence create an idealized, hyperreal dinosaur that exists entirely in the realm of film. Viewers remember Jurassic Park’s T-Rex more vividly than any scientific depiction, making it a pure simulacrum.

This breakdown shows how the T-Rex evolves from a faithful reflection to a hyperreal creation as the scene unfolds.

Willemse, C. (2013) Reflective Conversations: Baudrillard’s Orders of the Simulacrum. Repository of the University of Pretoria. Available at: https://repository.up.ac.za

Week 9: Virtual Filmmaking

Key Themes and Trends Raised in “The Mandalorian” 

The first season of The Mandalorian revolutionized visual effects (VFX) in television, blending innovative technology with traditional Star Wars aesthetics. A key theme in the VFX of Season 1 is the seamless integration of cutting-edge technology with storytelling. The show heavily relies on “The Volume,” a groundbreaking virtual production stage powered by Unreal Engine, which projects photorealistic environments onto LED walls in real time. This allows actors to interact with their surroundings and enhances the organic feel of the performances while reducing the need for extensive green screen work.

A major trend showcased is the prioritization of immersive, in-camera effects. By combining practical effects with digital techniques, the series maintains a tactile realism that echoes the original Star Wars trilogy. The practical animatronics used for characters like Grogu (Baby Yoda) are complemented by subtle digital enhancements, ensuring the character feels alive without losing its handmade charm.

Another thematic focus is the show’s blending of classic Western and samurai influences with Star Wars lore, reflected in its visual style. The VFX team worked meticulously to craft atmospheric settings, like dusty deserts and alien outposts, evoking iconic cinematic landscapes. The battles, from dogfights in space to intimate ground combat, employ dynamic choreography and precise effects to enhance the stakes without overwhelming the narrative.

In summary, The Mandalorian Season 1’s VFX reflect a balance between innovation and tradition, pushing technological boundaries while preserving the timeless spirit of Star Wars. It set a new standard for television production, paving the way for future storytelling within and beyond the Star Wars universe.

Star Wars (2019) Disney Gallery: The Mandalorian – The Virtual Production Revolution. Available at: https://youtu.be/gUnxzVOs3rk (Accessed: 30 November 2024).

My Choice for Assignment 2’s Writing Topic: