Week 1:
Can we spot the current trends of VFX?
Integrating CGI with live-action footage can take your videos to the next level. By blending real-world scenes with computer-generated elements, you can create amazing and seamless visuals that aren’t possible with just a camera.
This guide breaks down the process into six easy steps, from picking the right footage and CGI elements to adding those final touches. Follow along to make your scenes look polished and realistic.
What is the Difference between Live-action and CGI?
Live-action uses real people, props, and places. It involves filming scenes with cameras and capturing real actors and settings.
This method feels more natural because everything is real. Actors’ performances and real locations give a sense of authenticity. Physical effects like makeup, costumes, and stunts create visuals.
CGI, or Computer-Generated Imagery, is different. It uses computers to create images and animations.
With CGI, filmmakers can make things that don’t exist in the real world. They can design entire scenes, characters, and effects digitally. This allows for more creativity, as CGI can show things that are hard or expensive to film in real life.
Can we spot any current trends of VFX?
- Anime using CGI (Failure Frame)
- Real rendering
- De-aging (Captain Marvel)
- Surrealism (horror, dreams)
- Deepfakes in film news
The Hurt Locker (2008) bomb explosion:
Mad Max: Fury Road explosion:
The Matrix bullet time:
Wanted (2008)
Sherlock Holmes (2009) fighting scene
The Age of the Image?
The phrase “Age of the Image,” used by Dr. Fox, describes the period in history when pictures became the most powerful way to understand the world. Today, images are not just things we look at; they shape how we see the real world, influence our emotions, and help us understand what is happening around us.
This period is special because images have more influence now than ever before. “Image becomes a reality and reality becomes an image”. We are surrounded by visual elements from social media, movies, news, and art, and they have become a way to experience the world. In the past, words were more important for communication and learning new things, but now images are used as a “social currency for communication and self-promotion”. With modern technologies, it is easier to create and share pictures, making them a huge part of our everyday lives.
The positive side is that images allow everyone to be creative and share their stories. People can express themselves through photos, videos, and digital art in ways that were not possible before. But it becomes challenging to tell what is real and what is not, because images can be faked and edited in many ways, making it harder to know what to believe.
Dr. Fox helps us understand how much power pictures carry. He encourages us to think more carefully and not to believe that everything we see is true.
Week 2:
The allegory of the cave:
Plato’s allegory of the cave is an allegory presented by the Greek philosopher Plato in his work Republic (514a–520a, Book VII) to compare “the effect of education (παιδεία) and the lack of it on our nature”. It is written as a dialogue between Plato’s brother Glaucon and his mentor Socrates and is narrated by the latter. The allegory is presented after the analogy of the Sun (508b–509c) and the analogy of the divided line (509d–511e).
In the allegory, Plato describes people who have spent their lives chained in a cave facing a blank wall. They watch shadows projected onto the wall by objects passing in front of a fire behind them, and they give names to these shadows. The shadows are the prisoners’ reality but not accurate representations of the real world. The shadows represent the fragment of reality we can perceive through our senses, while the objects under the Sun represent the true forms of objects that we can only perceive through reason. Three higher levels exist: natural science; deductive mathematics, geometry, and logic; and the theory of forms.
Socrates explains how the philosopher is like a prisoner freed from the cave and comes to understand that the shadows on the wall are not the direct source of the images seen. A philosopher aims to understand and perceive the higher levels of reality. However, the other inmates of the cave do not even desire to leave their prison, for they know no better life.
Socrates remarks that this allegory can be paired with previous writings, namely the analogy of the Sun and the analogy of the divided line.
The allegory of the cave: Examples in the movies:
Inception (2010)
Inception explores different layers of dreams and reality. Characters delve deeper into subconscious realities, questioning what is real and what is a mental construct, much like the prisoners in the cave mistaking shadows for reality.
Examples of “image becomes reality”:
The Photographic Truth Claim: What is it and does it matter?
In Tom Gunning’s article, the “truth claim” refers to the idea that photographs directly capture reality, making them reliable and truthful representations. This belief is rooted in the indexical nature of photography: light interacts with physical objects to produce an image, similar to a footprint marking where someone has walked. Historically, this connection to reality has given photographs a status of trustworthiness, especially in fields like journalism, law, and science.
However, Gunning highlights that the truth claim has always been fragile. Even in the early days of photography, images could be manipulated, either through darkroom techniques or staging, meaning that photographs could be altered to show something different from reality. With digital photography and editing tools becoming more advanced and accessible, it has become even easier to change images, leading to doubts about whether photos still provide a truthful reflection of the real world.
Despite these challenges, the idea that photographs represent truth continues to hold value in specific areas, such as legal or scientific evidence, where strict processes are followed to ensure the image’s authenticity. Institutions like journalism also attempt to maintain the credibility of photographs, even as manipulation becomes more common.
In short, while the ability to edit photos has weakened the belief that they always tell the truth, the importance of the photographic truth claim remains strong in certain contexts. The ongoing challenge is balancing the ease of manipulation with the need for trustworthy visual evidence.
Week 3:
Find two composite VFX shots, from a movie or TV show, where the effect is not obvious:
Examples:
The Crown:
1. Identify the Components:
The show uses real sets (like rooms or parts of buildings) combined with digital backgrounds showing famous landmarks to recreate historical locations or events.
Weather effects (like rain or clouds) and extended streets or crowds are often created digitally to fit the period.
2. Optics & Perspective:
The shot employs precise historical optics and lighting to replicate natural sunlight reflecting off the actual palace. The alignment of the actors and background is carefully managed to ensure consistent depth, with the horizon line typically at the characters’ eye level, preserving the scene’s realism.
3. Believability in Composition:
Consistent lighting and shadows between practical sets and digital elements help keep the shot believable. For example, when characters walk in front of windows or reflective surfaces, the VFX artists ensure light behaves naturally.
4. Rule of Thirds:
Positioning characters: The main characters are usually placed slightly off-center, with the background (like a palace) filling the other part of the shot. This makes the scene look balanced and interesting.
Balancing the scene: It keeps both the characters and the background visually pleasing without overwhelming the viewer with too much detail.
5. Type of Elements:
The scenes may include real, practical sets for close-up interactions, CGI to extend the buildings or create period-accurate environments, and other enhancements such as weather or lighting.
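The rule-of-thirds placement described in point 4 can be sketched numerically: divide the frame into thirds and position subjects near the resulting intersection points. A minimal sketch in Python, with an illustrative HD frame size:

```python
# Compute the four rule-of-thirds intersection points ("power points")
# of a frame. Subjects placed near these points read as off-centre but
# balanced, as in the palace shots described above.

def thirds_points(width, height):
    """Return the four rule-of-thirds intersection points of a frame."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

# An HD frame: characters might sit near (640, 360) while the palace
# fills the remaining two-thirds of the composition.
print(thirds_points(1920, 1080))
```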
Ripley (2024 Version)
1. Identify the Components:
In “Ripley,” the visual effects (VFX) team might recreate old city scenes by removing modern things and adding period-appropriate details, like old cars or buildings. They may also digitally enhance some places, like making streets longer or adding extra period features.
2. Optics & Perspective:
The show makes sure that digital elements like city backgrounds or street scenes fit naturally with where the characters are. For example, if Tom Ripley is walking, the digital streets will match his movements to keep everything looking real.
3. Believability in Composition:
Lighting is key to making the VFX look believable. When they extend a street or change a building digitally, the sunlight and shadows have to match what’s real. So, in a scene where Ripley walks through a 1950s market, the shadows of both real and digital objects need to blend for it to feel convincing.
4. Rule of Thirds:
Like in the show “The Crown,” “Ripley” uses the rule of thirds to focus attention on the characters while putting the digitally enhanced scenery in the background. For instance, Ripley might be on the right side of the screen, while the city fills the rest.
5. Type of Elements:
The show uses a mix of real sets, digital paintings to extend landscapes, and sometimes CGI for details like old cars or street signs. Some scenes might also use special lighting or weather effects to fit the scene’s mood.
Week 4:
The trend of Photorealism:
The Jungle Book (2016):
Planet of the Apes (2023):
Ripley (2024)
A video generated from a text prompt using OpenAI’s Sora. Prompt: “A young man in his 20s is sitting on a piece of cloud in the sky.”
Photorealism
Photorealism is an art style that aims to make images look just like high-quality photos, capturing tiny details, textures, light, and shadows very accurately. In visual effects, artists use advanced software to mimic real-life lighting, colours, and movement, creating images that appear almost real. There are two main types of photorealism in VFX: composite photorealism and CGI photorealism.
Composite photorealism combines different elements, like live-action footage with CGI, into one realistic scene. The goal is to blend everything so smoothly that it looks like it naturally belongs together, achieving a seamless and realistic look that feels as though it could be a single moment captured in time.
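At the pixel level, blending a CGI element over live-action footage comes down to the standard “over” compositing operation. A minimal sketch in Python; the pixel values and the alpha matte below are illustrative, not taken from any real shot:

```python
# The "over" operation blends a foreground (CGI) value onto a background
# (live-action plate) value, weighted by an alpha matte. Compositing
# software applies this per pixel across whole images.

def over(fg, bg, alpha):
    """Blend foreground over background.
    alpha = 1.0 -> fully opaque CGI; alpha = 0.0 -> the plate shows through."""
    return fg * alpha + bg * (1.0 - alpha)

# One RGB pixel: a bright CGI element over a darker filmed background,
# blended with a soft 60% matte edge.
cgi_pixel = (0.9, 0.7, 0.2)
plate_pixel = (0.2, 0.3, 0.4)
blended = tuple(over(f, b, 0.6) for f, b in zip(cgi_pixel, plate_pixel))
print(blended)
```

Soft alpha values along the matte edge are what make the blend look seamless: a hard 0-or-1 matte produces the jagged outlines that give a composite away.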
CGI photorealism, on the other hand, uses 3D computer graphics to create images that look as close as possible to real photos. This requires careful attention to how light, colours, and textures behave in the real world to ensure the final image or animation appears lifelike. Today’s technology has advanced so much that it can be challenging to tell what is real from what is computer-generated.
A great example of this is Boris Eldagsen’s AI-generated image, which won an award in 2023 and was mistaken for a real photo. This sparked discussions about how AI can now create images that look genuinely real, challenging our ideas of what is authentic in art. Photorealism now goes beyond traditional and digital techniques, incorporating AI into the mix and pushing us to rethink realism and the possibilities for artistic expression in visual media.
Week 5:
Motion capture:
Keyframe animation :
Comparing motion capture and keyframe animation
Motion capture is a technology used to digitally record human facial and body movements. This technique is commonly applied in many fields, like filmmaking, animation, and gaming.
The process usually involves placing markers or sensors on a person’s face or body. These markers track movements in real time and are captured by multiple cameras or sensors positioned around the person. The data collected allows digital characters, objects, or avatars to replicate the recorded movements accurately.
Keyframe animation is a technique where animators create important points, or keyframes, in an animation sequence to define the start and end of a movement. These keyframes mark specific positions of an object or character at certain times. The software then fills in the in-between frames automatically, creating smooth transitions between keyframes.
Motion capture records real human movements using sensors, while keyframe animation relies on animators manually creating key poses. For natural and complex movements like facial expressions, motion capture excels, while keyframe animation suits fantasy or cartoonish styles. Motion capture requires specialized equipment and setup, making it more costly and complex to start, while keyframe animation can often be done with just animation software, making it more accessible. However, motion capture offers less flexibility for changes, as adjustments may require rerecording or intensive editing. Keyframe animation allows greater control, enabling animators to refine details easily without redoing scenes. Motion capture is mostly used in realistic films and video games, while keyframe animation is popular in animated films and games. Motion capture can save time by creating lifelike movements efficiently, but keyframe animation requires more time because each movement must be crafted by hand. In short, motion capture is best for accurate, human-like animation, while keyframe animation offers greater stylization and exaggerated effects.
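The “in-betweening” that animation software performs between keyframes can be sketched in a few lines. Linear interpolation is shown here for simplicity; real packages also offer eased (Bezier) curves, and the keyframe values below are made up for illustration:

```python
# Keyframe animation's core idea: the animator sets key poses at chosen
# frames, and the software interpolates every frame in between.

def interpolate(keyframes, frame):
    """Return the value at `frame` by linearly interpolating between the
    surrounding keys. `keyframes` is a sorted list of (frame, value)."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # 0..1 position between the keys
            return v0 + (v1 - v0) * t

# A character's x-position keyed at frames 0, 12, and 24.
keys = [(0, 0.0), (12, 10.0), (24, 4.0)]
print(interpolate(keys, 6))  # halfway between the first two keys -> 5.0
```

This also shows why keyframing is so editable: moving one key pose changes every in-between frame automatically, whereas mocap data would need rerecording or heavy cleanup.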
Week 6:
What types of reality capture are used in VFX?
Key Takeaways
- 3D scanning process: 3D scanning is a technique for creating digital 3D models of objects, structures, environments, and even people by collecting data about their shape and appearance.
- 3D scanning techniques: There are several different 3D scanning techniques, such as contact, laser, structured light, laser pulse, and photogrammetry, each with its own advantages and disadvantages.
- 3D scanning applications: 3D scanning is used across a wide range of industries, such as entertainment, medicine, architecture, engineering, history, design, and forensics, for various purposes such as prototyping, reverse engineering, analysis, and documentation.
- 3D scanning accessibility: 3D scanning technology is constantly evolving and becoming more accessible to beginners, students, and hobbyists, with affordable and easy-to-use 3D scanners and software available on the market.
Linear perspective is a system for creating an illusion of depth on a flat surface. All parallel lines in a painting or drawing using this system converge at a single vanishing point on the composition’s horizon line.
Linear perspective is thought to have been devised about 1415 by Italian Renaissance architect Filippo Brunelleschi and later documented by architect and writer Leon Battista Alberti in 1435. Linear perspective was likely evident to artists and architects in the ancient Greek and Roman periods, but no records exist from that time, and the practice was thus lost until the 15th century.
The three components essential to the linear perspective system are parallel lines, the horizon line, and a vanishing point. To appear farther from the viewer, objects in the composition are rendered increasingly smaller as they near the vanishing point. Leonardo da Vinci and the German artist Albrecht Dürer are considered some of the early masters of linear perspective. As the limitations of linear perspective became apparent, artists invented additional devices (e.g., foreshortening and anamorphosis) to achieve the most convincing illusion of space and distance.
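The convergence of parallel lines can be demonstrated with a simple pinhole projection, which maps a 3D point (x, y, z) onto a picture plane at distance d as (d·x/z, d·y/z). The scene below (two receding rails) is illustrative:

```python
# As z grows, every line running parallel to the viewing direction is
# squeezed toward the same 2D point: the vanishing point of linear
# perspective.

def project(x, y, z, d=1.0):
    """Pinhole projection of a 3D point onto a picture plane at distance d."""
    return (d * x / z, d * y / z)

# Two rails of a track, 1 unit left/right of centre, receding into depth.
for z in (1, 10, 100, 1000):
    left = project(-1.0, -0.5, z)
    right = project(1.0, -0.5, z)
    print(z, left, right)  # both rails approach the vanishing point (0, 0)
```

Division by depth z is also why nearer objects render larger: the same object at twice the distance projects to half the size.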
Gentle Giant Studios, a US company established over 25 years ago, specializes in 3D scanning, 3D modelling, and fine art fabrication services. The studio works with various industries, including film, television, collectibles, and fine art.
Gentle Giant Studios has been at the forefront of integrating innovative technologies into its processes. It was among the first to utilize 3D scanning in consumer products and offers LiDAR scanning, which involves capturing detailed 3D data of environments, props, and sets to assist in visual effects production. The studio has contributed to numerous high-profile projects, including the Star Wars franchise and the Harry Potter series, and has provided character models and collectibles for various properties of The Walt Disney Company.
Collaborating on Star Wars, Gentle Giant Studios has provided high-precision scans of costumes, props, and sets. These scans capture intricate details of characters, costumes, and practical props, allowing for consistent digital doubles and facilitating CGI scenes. Advanced photogrammetry rigs, often including 150-plus cameras, and LiDAR scanning were used to digitally capture sets and environments, especially expansive ones like those on alien planets.
In summary, among 3D scanning methods, Gentle Giant Studios’ use of LiDAR helps production teams save time while enhancing visual fidelity, making it a critical tool for creating immersive and realistic visual effects in film and television.
References:
https://3dprintingindustry.com/news/3d-printing-for-the-film-industry-insights-from-gentle-giant-studios-jason-lopes-229811/
https://www.youtube.com/watch?v=SuJs8QX1Lfs
Week 7:
Photogrammetry is a technique that uses multiple 2D images to create a 3D model of an object or surface. It involves taking many overlapping photos of an object from different angles and then using software to stitch them together.
Archaeology: Recording underwater archaeological sites and characterizing seafloor features
Engineering: Creating 3D models for construction projects
Film and gaming: Creating CGI for movies and games
Mapping: Preparing topographic maps
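At the heart of photogrammetry is a triangulation step: once a feature is matched in two overlapping photos, its position is recovered by intersecting the viewing rays from the two camera centres. A minimal flat 2D sketch with made-up camera positions; real pipelines solve this in 3D for millions of matched features:

```python
# Intersect two 2D rays p + t*d to locate a feature seen by two cameras.

def intersect_rays(p1, d1, p2, d2):
    """Intersect rays p1 + t1*d1 and p2 + t2*d2 (directions need not be
    normalized). Solves the 2x2 linear system with Cramer's rule."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two cameras at known positions both "see" the same feature point.
cam_a, cam_b = (0.0, 0.0), (4.0, 0.0)
feature = intersect_rays(cam_a, (1.0, 1.0), cam_b, (-1.0, 1.0))
print(feature)  # -> (2.0, 2.0)
```

This is also why the photos must overlap: a feature visible from only one camera gives a single ray, which fixes a direction but not a depth.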
The Digital Michelangelo Project, initiated by Stanford University in the late 1990s, aimed to create detailed 3D digital models of Michelangelo’s sculptures and related architectural works. Utilizing advanced laser scanning technology and digital photography, the project captured the intricate details of ten of Michelangelo’s statues, including the renowned David, as well as two architectural interiors and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome.
The primary objectives were to preserve these masterpieces digitally, facilitate scholarly research, and enhance public accessibility. The project faced significant challenges, notably managing the vast amount of data generated; for instance, the scan of the David statue alone comprised approximately 2 billion polygons and 7,000 colour images, totaling around 32 gigabytes of data.
Beyond preservation, the digital models have been instrumental in art historical research, restoration efforts, and educational initiatives, providing unprecedented access to Michelangelo’s works and contributing to the broader field of digital humanities.
The Mimesis Test is a concept that explores the authenticity and believability of something (such as an artwork, a performance, or a digital simulation) in mimicking reality or aligning with human experiences. The term “mimesis” originates from Greek, meaning “imitation,” and has been widely discussed in philosophy, particularly by thinkers like Aristotle and Plato, in the context of art, literature, and performance.
In contemporary contexts, the Mimesis Test can apply to various domains, including:
- Artificial Intelligence and Robotics:
- Evaluates whether AI or robots can realistically imitate human behavior, emotions, or creativity.
- Similar in concept to the Turing Test, which gauges whether a machine can mimic human intelligence convincingly.
- Art and Media:
- Assesses how well a piece of art or media replicates real-life experiences or emotions.
- For example, realistic CGI in films might be said to pass a mimesis test if viewers perceive it as natural or life-like.
- Philosophy and Aesthetics:
- A measure of how closely an imitation adheres to its original source or ideal form.
- Used to critique whether a representation (in art, writing, or other forms) faithfully captures the essence of the subject.
- Virtual Reality and Simulation:
- Tests the realism of virtual environments and simulations in creating immersive experiences that align with users’ expectations of the real world.
Examples
- A hyper-realistic painting or photograph might pass the Mimesis Test because it convincingly imitates the subject it represents.
- In gaming or VR, the smoothness of character movements and environmental details contribute to passing the Mimesis Test for a truly immersive experience.
The Mimesis Test is less formalized than something like the Turing Test but serves as a conceptual benchmark for evaluating how closely something mirrors the essence of its inspiration or reality.
Hyperrealism
An art style that creates a realistic illusion by using high-resolution images to create a convincing depiction of reality. Hyperrealist art can be so lifelike that it can trick the viewer, a technique known as trompe l’oeil. Hyperrealist artists use digital imagery, lighting, contrast, and sharpness to create a more vivid depiction of reality.
Verisimilitude
The appearance of being true or real. In fiction, verisimilitude is a technical problem that involves creating a logical web of cause and effect within the text to reinforce the plot’s structural logic. For example, an author might include photographs in a book to lend verisimilitude to the story.
The Mandalorian (2019):
Background of the Company
The company behind these groundbreaking virtual production techniques for The Mandalorian is Industrial Light & Magic (ILM), a division of Lucasfilm. ILM is renowned for its visual effects and was instrumental in developing the virtual production tools used on the series. One of their key innovations was the use of a new technology called Stagecraft, a state-of-the-art virtual production stage developed in collaboration with Epic Games, the creators of the Unreal Engine.
Virtual Production Techniques:
Real-time Rendering with Unreal Engine:
Unreal Engine, the popular video game engine, was used to render realistic, real-time virtual environments. This allowed the actors to interact with their surroundings on set, making it more immersive and reducing the need for post-production compositing.
This real-time rendering also allowed for seamless integration of visual effects during filming, enabling directors and actors to see final shots on set, as opposed to relying on visual effects teams to piece them together after the fact.
Virtual Cameras:
Virtual cameras were used to simulate traditional filmmaking techniques, but with the added benefit of creating complex environments in real-time. The movements of the virtual camera could match the physical camera movements, allowing filmmakers to interact with the virtual world as if it were tangible.
In-Camera Visual Effects (ICVFX):
Unlike traditional green screen work, where the background is added in post-production, the actors were placed in fully-realized, interactive virtual environments that existed in-camera. This allowed them to react to the surroundings in real time and made the experience feel more authentic.
The in-camera VFX approach helped eliminate the need for extensive green screen work and tedious post-production compositing.
LED Wall Technology:
The use of an LED wall as opposed to traditional green screen technology offered many advantages. The LED wall provided a more natural lighting environment for the actors, as the light emitted by the screen reflected onto them, creating realistic lighting and reflections.
The LED walls were also able to display backgrounds that could change according to the camera’s position, offering a dynamic environment that traditional green screen setups couldn’t provide.
Solutions & Outcomes:
Efficiency in Production:
Virtual production allowed The Mandalorian to shoot in a controlled, indoor environment with high levels of flexibility. This helped overcome the limitations of location shooting, especially during the COVID-19 pandemic when on-location shooting was difficult. The ability to adjust the backgrounds in real-time meant fewer reshoots were necessary, as directors and actors could immediately see how the visual effects integrated with the scene.
Creative Freedom:
Directors and cinematographers were able to shoot complex scenes with virtual backgrounds that felt just as real as physical locations. The use of the LED wall meant that the cast and crew could experience the environment first-hand, making it easier to capture their reactions and interactions. For instance, many scenes in The Mandalorian were filmed on an indoor set, with the actors surrounded by real-time digital landscapes like deserts, forests, and space environments.
Groundbreaking Visual Effects:
The integration of virtual production created a more seamless blending of practical effects with CGI, which led to a visually stunning final product. The series benefited from enhanced realism, particularly in scenes involving dynamic environments like the desert planets or space battles.
Impact on the Industry:
The success of The Mandalorian and its use of virtual production has had a profound influence on the film and television industry. The techniques demonstrated by The Mandalorian have been adopted by other major productions, and the demand for virtual production is growing, with companies and studios exploring Stagecraft and similar technologies. Disney and ILM have since expanded the use of this technology, and virtual production has gained significant momentum across Hollywood, with its benefits for both creative flexibility and efficiency.
How would you describe the relationship between Visual Effects and the photographic image?
Photorealism is important to visual effects, but what is it exactly? Using examples, can you define it?
Visual effects play a massive role in contemporary filmmaking, allowing creators to produce visuals that go beyond the limitations of live-action filming. Whether in films, television, video production, or gaming, visual effects blend computer generated imagery with live footage to bring enhanced, realistic, or imaginative visuals to life. They allow filmmakers to craft scenes that would otherwise be impossible, from fantastic creatures and futuristic cities to dynamic explosions and breath-taking landscapes.
Photography and visual effects are inherently interconnected, working in tandem to produce visually compelling imagery. Stephen Prince, in his book Digital Visual Effects in Cinema, describes visual effects as a form of “trick photography” that must remain seamless and undetectable to the viewer. While photography captures tangible, real-world scenes, visual effects manipulate or enhance these visuals to introduce new elements, often blurring the line between reality and illusion. For instance, integrating computer-generated imagery into live-action scenes requires meticulous attention to detail, ensuring that lighting, texture, colour, and movement align perfectly.
Charles Finance and Susan Zwerman, in their book The Visual Effects Producer: Understanding the Art and Business of VFX, emphasize the critical role of lighting in achieving realism. They note that “Lighting is essential to integrating a digital model into a live-action scene and making it look like the two were photographed together.” Achieving this level of realism is no simple task. Compositing, the process of combining various visual elements into a single cohesive image, demands both technical precision and artistic vision. As Finance and Zwerman explain, compositing requires artists to seamlessly blend layers, ensuring that the final image feels natural and believable. The most skilled compositors possess a “great eye”—an intuitive ability to make digital elements appear indistinguishable from real-life footage.
Visual effects are an art form as much as a technical process. By marrying computer technology with creative vision, visual effects artists craft immersive experiences that captivate audiences and expand the boundaries of visual storytelling.
The realism in visuals is what makes it difficult to tell real from fake. Photorealism is crucial for achieving realistic-looking visuals. It is an art style that emerged in the 1960s in Europe and the United States, defined by its precise attention to detail and reliance on photographs as key visual references. Artists working in this style strive to produce images so lifelike that they are indistinguishable from actual photographs. Eran Dinur, in his book The Complete Guide to Photorealism for Visual Effects, says that “VFX artists generally strive for the highest level of photorealism.”
Some artists achieve extraordinary levels of realism, while others face the challenge of the “uncanny valley” effect. This phenomenon occurs when certain elements of movie characters—such as overly flawless or insufficiently detailed skin textures, eyes that fail to reflect light naturally, awkward movements, stiff or exaggerated facial expressions, and incomplete simulations of human traits—make them appear unsettling rather than lifelike.
Artists strive to avoid the “uncanny valley”, as their aim is to create realistic human characters and immersive environments. As Eran Dinur notes, Robert Zemeckis’ 2004 film The Polar Express exemplifies this issue. Despite being a landmark achievement in motion capture technology and pushing the boundaries of computer-generated imagery realism for human characters, the film was criticized for its disconcerting portrayal. The characters seemed almost lifelike but felt slightly “off”, like viewing people through a fogged window: realistic yet unnervingly surreal. For example, the filmmakers tried to recreate Tom Hanks’s appearance to look humanlike, but the skin texture and artificial facial movements make the result uncomfortable to watch. Angela Tinwell mentions in her book The Uncanny Valley in Games and Animation that the Conductor’s (the name of the character) motion was described as puppet-like, and the audience was critical of a lack of human-likeness in his facial expressions, which did not match the emotive qualities of his speech. The expressions also appeared out of context with a given situation, as he presented an angry expression and a cold personality when interacting with the child characters in the film.
While lifeless objects do not typically evoke the same emotional reactions as human characters, a subtle uncanny effect can still emerge when digital visuals come close to being photorealistic but lack the authenticity needed to feel entirely convincing. This might be experienced when watching a film with impressive yet slightly “off” visual effects, or when observing an architectural render that appears hyper-realistic yet overly sterile or unnaturally perfect. On the other hand, some computer-generated or matte-painted environments are purposefully designed to avoid fully realistic-looking visuals, precisely to sidestep the trap of the uncanny valley.
It is hard not to notice the remarkable technological advancements in visual effects, along with the significant improvements in quality over the past years. Visual effects artists can now achieve realistic-looking visuals mixed with things we have never seen or experienced.
Photorealism involves observing and replicating the world around us—its light, surfaces, and atmosphere. But what happens when these elements behave entirely differently in the fictional setting artists aim to create, blurring the line between reality and illusion? With today’s digital software, we can change every rule, crafting worlds where physical laws are reversed and visuals replace logic. However, immersing the audience in such environments is no easy task. Examining leading science fiction and fantasy films reveals that the most believable visuals often retain some grounded, earthly qualities. This thoughtful blend of realism and imagination, coupled with a meticulous focus on detail, transforms an otherwise inconceivable environment into one that feels entirely believable. Eran Dinur, in his book, discusses the 2010 film Inception, noting that its “folding Paris streets are a great example of a successful visual balance between fantasy and familiar realism.” Once you add a touch of illusion to something real, it feels more convincing and authentic to the audience.
Photorealism plays a particularly significant role in visual effects, ensuring that the visual elements blend seamlessly into live-action footage, making it difficult for the audience to distinguish between real and computer-generated imagery. It allows viewers to focus on the story and characters without being distracted by unrealistic effects. The relationship between visual effects and the photographic image is one of collaboration and mutual enhancement. Photography serves as the foundation for visual effects, providing the reference and framework for achieving realism. In turn, visual effects expand the possibilities of photographic imagery, enabling filmmakers to transcend the limitations of the real world. Photorealism, as a guiding principle, ensures that visual effects remain believable and emotionally engaging.
By combining technical precision with artistic vision, visual effects artists craft immersive experiences that captivate audiences and push the boundaries of visual storytelling. Whether through the seamless integration of computer-generated imagery elements, the meticulous replication of real-world details, or the imaginative creation of fantastical worlds, visual effects continue to redefine the possibilities of cinematic expression. As technology advances, the relationship between visual effects and the photographic image will only grow stronger, paving the way for even more innovative and compelling visual narratives.
Reference:
Prince, S. (2012) Digital Visual Effects in Cinema: The Seduction of Reality. Available at: https://www.google.co.uk/books/edition/Digital_Visual_Effects_in_Cinema/GFwFl3xFgZsC?hl=en&gbpv=1&printsec=frontcover pp. 150–155 (Accessed: January 13, 2025)
Finance, C. and Zwerman, S. (2010) The Visual Effects Producer: Understanding the Art and Business of VFX. Available at: https://ereader.perlego.com/1/book/1605828/8?page_number=28 pp. 29–35 (Accessed: January 13, 2025)
Dinur, E. (2021) The Complete Guide to Photorealism for Visual Effects, Visualization and Games. Available at: https://www.perlego.com/book/2529506/the-complete-guide-to-photorealism-for-visual-effects-visualization-and-games pp. 14–19 (Accessed: January 13, 2025)
Tinwell, A. (2014) The Uncanny Valley in Games and Animation. Available at: https://www.perlego.com/book/1603212/the-uncanny-valley-in-games-and-animation pp. 10–20 (Accessed: January 13, 2025)