We have been asked to research new trends in the VFX industry. Here is a trend I saw at a seminar this summer that I believe is very interesting and has a lot of potential: a company called Chaos, which develops the V-Ray renderer for 3ds Max.
Chaos Group is a rendering and simulation software developer. It was founded in 1997 by Peter Mitev and Vladimir Koylazov. Chaos is best known for the development of V-Ray. It is headquartered in Karlsruhe, Germany and Sofia, Bulgaria.
They made the film with a fusion of old-school and cutting-edge Hollywood technology. The virtual production elements of the shoot used Chaos’ innovative Project Arena virtual production technology to create a convincing vision of the Wild West. Thanks to these techniques, the team was able to bring on legendary cinematographer and six-time ASC president Richard Crudo to lens in-camera VFX shots without the frustrations of a game-engine-based pipeline.
In my research, the main new trends are definitely:
- Real-Time Rendering:
- Virtual Production
- Augmented Reality (AR) Integration
- AI-Driven Visual Effects
- Hyper-Realistic CGI
- Immersive Experiences (VR)
Task 2. Match four Harold Edgerton images each with a VFX shot
Sherlock Holmes: stop-motion fight
Dr. Strange
The Matrix
Dredd (2012)
*Homework* – What do you think Dr James Fox means by his phrase ‘The Age of the Image’?
I think there are good questions raised by this episode; one of them is: how can we tell if the images we see are real, at a time when picture-editing software is more widely available than ever? Art historian James Fox explores the significant influence that images have had in creating the modern world in the first episode of Age of the Image. He considers how images, now more than ever, affect how we perceive reality, truth, and history, and challenges us to think critically about the production and use of visual media.
Also, James Fox highlights how, in contrast to the previous century, the use and manipulation of images have dramatically increased in the modern world. He makes the point that photography was seen as a kind of art or documentation when it was relatively uncommon in the early century. But the development of digital technology—particularly photo-editing software like Photoshop—has made the process of creating and modifying photographs more accessible and commonplace.
Fox emphasizes how the distinction between fact and illusion has become more hazy due to the boom in picture production and manipulation. Compared to the first century of photography, we now take and use more pictures in a single day than ever before. This deluge of photographs is paired with advanced editing methods.
HOW DO WE TRUST WHAT WE SEE?
WEEK 2 : The Photographic Truth-Claim
Analyse the image
Is Reality Real?
Between the fire and the cave walls, there is a road, and people walk along this road, carrying various objects: models of animals made of stone and wood, human statuettes, and other things. The people who walk along the road, and the objects they carry, cast shadows on the cave walls.
The people who are chained in the cave and facing the wall can only see the shadows of the people (and the objects they carry): never the actual people and objects walking past behind them. To the people chained up in the cave, these shadows appear to be reality, because they don’t know any better.
The logical distinction between what is imaginary and what is real tends to disappear. Every image is to be seen as an object and every object as an image. Hence photography ranks high in the order of surrealist creativity because it produces an image that is a reality of nature, namely, an hallucination that is also a fact. The fact that surrealist painting combines tricks of visual deception with meticulous attention to detail substantiates this.
ICONS – Signs where the signifier resembles the signified
The icons we use in digital interfaces are all signs, but not necessarily icons as defined by semiotics. Icons, as discussed here, are one possible form a sign might take: an icon is meant as a direct imitation of the object or concept.
SYMBOL – where the relation between signifier and signified is purely conventional and culturally specific
Symbols are at the opposite end from icons. The connection between signifier and signified in symbols is completely arbitrary and must be culturally learned. The letters of an alphabet are a good example of symbols. The shape of each letter and the sound it represents bear no physical connection to each other.
INDEXES – Signs where the signifier is caused by the signified
With an index, the signifier cannot exist without the presence of the signified. For example, smoke is an index of fire. Dark clouds are an index of rain. A footprint is an index of a foot. In each case, the presence of the former implies that the latter exists.
The Weekly Homework:
What do you think is meant by the theory: The Photographic Truth-Claim?
Particularly in light of digital technology, the theory questions the idea of historical progress, arguing that it frequently moves unsolved problems around instead of addressing them. Both film-based and digital photos can be altered, although digital images are easier and faster to manipulate. Digital technology is changing photography, especially its creative identity and its truth claims. The theory makes the case that art photography has always been free to experiment creatively, using a variety of approaches such as multiple negatives and aesthetic choices, and that photography has never been limited to straight documentation.
Digital alteration doesn’t essentially change photography’s status as an art form, even while it opens up fascinating new possibilities. Truth claims have historically been disregarded by many photography techniques, especially in commercial settings like politics or advertising.
If digital alteration were to totally destroy photography’s connection to reality, there would be no reason to take pictures since they would be indistinguishable from other visual arts like painting or sketching.
The theory also connects traditional photography and digital media, implying that although digital technologies are different, they are related to historical forms such as dioramas and panoramas that produce fake realities. It makes the case that photography should be viewed within this tradition, rather than only through the prism of indexical truth claims, highlighting its visual sharpness.
It also addresses the idea that digital media should be referred to as “post-photographic”, claiming that although the move to digital representation is revolutionary, it is similar to earlier changes in photography. Photographs will keep their photographic identity despite these modifications, which will change how they are made and used. The social discourse surrounding photography’s truth claims will also change while maintaining its relevance, despite worries that digital manipulation would compromise these claims.
Ultimately, the theory highlights the complex relationship between photography, truth, and representation, suggesting that while photographs may assert a claim to truth, they are not infallible and are shaped by various cultural and social factors.
Week 3:
These are some analogue photographs by a friend of mine, niavaah (niavaah) | Photos | PHOTO FORUM (photo-forum.net). She makes a lot of analogue fake photographs in her studio; I witnessed how she does this with old film cameras, processing the negatives with chemicals, then cutting the images and piecing them together to make an artwork.
Here are also some historical examples of fake photography:
Digital Fake photos
Fake documentary history in movies
Pompeii
Homework: What is Digital Compositing
Digital compositing is the process of combining visual elements from various sources into a single cohesive image. It involves layering different images, often from video, photography, or CGI, and using techniques like masking, keying, and color correction to blend them seamlessly. The goal is to make the final composite appear as if it was naturally captured as a single shot. This technique is widely used in film, television, and digital media to create realistic or stylized effects that would be impossible or impractical to capture in real life.
Layering, Masking (Rotoscoping), Keying, Color Correction and Grading, Tracking and Match Moving, Stabilizing, Blending Modes, Depth and 3D Compositing, Edge Blending and Anti-Aliasing, Warping and Morphing, Time Manipulation (Time Remapping), Grain Matching and Noise
Digital compositing is a sophisticated art form that combines multiple techniques, from keying and masking to color correction and tracking, to create a unified, realistic image. Mastering these techniques allows artists to manipulate reality, seamlessly blending live-action, CGI, and other visual elements to produce stunning visual effects.
Digital compositing is a balance of technical precision and creative vision, with tools and techniques evolving as technology advances.
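The "layering" at the heart of compositing is usually the Porter-Duff "over" operation. Here is a minimal NumPy sketch of it (my own illustration, not a Nuke internal; it assumes premultiplied RGB pixels and a single alpha value per layer):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Porter-Duff 'over': lay a premultiplied foreground on top of a background."""
    return fg_rgb + bg_rgb * (1.0 - fg_alpha)

red = np.array([1.0, 0.0, 0.0])    # foreground pixel
green = np.array([0.0, 1.0, 0.0])  # background pixel

print(over(red, 1.0, green))        # opaque foreground hides background: [1. 0. 0.]
print(over(red * 0.5, 0.5, green))  # 50% foreground blends: [0.5 0.5 0. ]
```

This same operation, applied per pixel, is what a Merge node does when layering a keyed or rotoscoped element over a plate.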
At the beginning of compositing:
The summer of 1857 marked an important milestone in the history of photography, thanks to Swedish-born photographer Oscar Gustave Rejlander. Rejlander is most famously known for creating one of the earliest examples of composite photography, a technique that involved combining multiple negatives into a single image. His groundbreaking work, The Two Ways of Life, exemplifies this method.
To create this complex image, Rejlander combined 32 individual negatives into a single photograph. At that time, photography was still in its infancy, and exposures were long and cumbersome. Capturing a single image with such a large group of people would have been nearly impossible without composite techniques.
VFX SHOTS from films :
LIFE OF PI
Game of Thrones
The Walking Dead
Week 4 : The Trend of Photorealism
What I believe makes a VFX shot photorealistic is all the passes we get from the 3D render (AOVs) and the way we implement them in the shot:
shadows, reflections, refractions, depth of field, relighting, specular, diffuse, emission
Arthur
Paddington
Batman
Dune
Homework:
Photorealism in CG (Computer Graphics) renders is the art and science of creating digital images that are indistinguishable from real photographs. Achieving photorealism involves not only highly detailed modelling and texturing but also convincing lighting, shading, and rendering techniques. On high-stakes VFX or integration projects, we need to ensure that all rendered elements fit seamlessly together.
The ingredients are fine detailing, high-resolution texture maps, realistic shading and material properties, and lighting and global illumination (HDRI lighting, ray tracing and path tracing). To composite them we use techniques such as:
Depth of Field (DOF) and Motion Blur
Lens Distortion and Chromatic Aberration
Color Grading and Exposure
Matching Real-world References
That means photorealism is making sure the final render has natural depth, perspective, and color harmony when integrating it with live-action elements. CG isn’t just about raw detail but involves a whole approach to creating the best visual experience possible.
It’s an industry standard to use the software called Nuke, where we work with AOVs (Arbitrary Output Variables), also known as render passes, that are layered in a specific order to compose a photorealistic final image. These passes allow for fine-tuned control over different aspects of a render, such as lighting, shadows, reflections, and materials. Here’s a breakdown of how to use render passes in Nuke and the general order of compositing them for a realistic look:
- Diffuse Pass
- Purpose: Represents the base color of the objects without any lighting or shading.
- Usage: This pass is often combined with lighting passes like diffuse lighting and GI to get the base surface information.
- Order: Typically, start by laying down the Diffuse Pass as it forms the foundation of your composite.
- Diffuse Lighting (Direct and Indirect)
- Direct Lighting: This pass captures light that directly hits objects.
- Indirect Lighting (Global Illumination): Captures light that bounces around the scene.
- Usage: Multiply these lighting passes over the Diffuse Pass to simulate realistic lighting effects.
- Order: Combine with the Diffuse pass next to establish the basic lit scene.
- Specular Pass
- Purpose: Captures the reflections and highlights from light sources on shiny surfaces.
- Usage: Add the Specular Pass to enhance reflections and add realistic shine to surfaces.
- Order: After adding the diffuse lighting, the specular pass is generally added (screen or add blend mode).
- Reflection Pass
- Purpose: Contains only the reflections seen on reflective surfaces.
- Usage: Add or screen the reflection pass over the composite, adjusting opacity if needed to avoid overpowering other details.
- Order: Following the specular pass, add the reflection pass for reflective materials.
- Refraction Pass
- Purpose: Shows the light that passes through transparent or translucent materials.
- Usage: Add or screen this pass on top, adjusting it based on how much transparency you want in your glass or liquid materials.
- Order: After reflection, if there are glassy or transparent materials, add refraction.
- Ambient Occlusion (AO) Pass
- Purpose: Enhances shadowing in crevices and contact areas.
- Usage: Multiply the AO pass over the composite to deepen shadows in small areas and add realism.
- Order: Apply toward the end to maintain shadow details without interfering with lighting passes.
- Emission/Incandescence Pass
- Purpose: Captures self-illuminating areas, such as screens or lights in the scene.
- Usage: Add this pass as a screen or add blend mode, giving the effect of objects emitting light.
- Order: Add last or nearly last to control intensity and integrate it with the overall lighting.
- Z-Depth Pass (Optional)
- Purpose: A grayscale pass that represents the distance of objects from the camera, useful for adding depth of field.
- Usage: Use as a mask in the ZDefocus node for depth-of-field effects.
- Order: This is typically used separately as a mask, not added to the composite itself.
General Workflow:
- We start with the Diffuse Pass and add Diffuse Lighting (direct + indirect).
- Then we add Specular and Reflection passes to introduce highlights and reflections.
- Refraction if necessary for glassy materials.
- Ambient Occlusion for extra shadow detail.
- Emission to simulate glowing objects.
- We also use Z-Depth separately to add depth-of-field effects.
This ordering ensures that light and material interactions are layered logically, helping achieve photorealism while maintaining full control over each component.
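The layering order can also be expressed numerically. The sketch below is my own simplified NumPy illustration, not an actual Nuke script: the pass names and the plain multiply/add maths are assumptions, and in a real comp each step would be a Merge node with the blend mode noted above.

```python
import numpy as np

def rebuild_beauty(p):
    """Recombine AOV passes into a beauty image, following the order above."""
    # 1-2. Diffuse base colour modulated by direct + indirect lighting (multiply)
    img = p["diffuse"] * (p["direct"] + p["indirect"])
    # 3-6. Specular, reflection, refraction, emission are laid on top (add/screen)
    for name in ("specular", "reflection", "refraction", "emission"):
        img = img + p.get(name, 0.0)
    # 7. Ambient occlusion darkens crevices and contact areas (multiply)
    return img * p.get("ao", 1.0)

shape = (2, 2, 3)  # a tiny 2x2 RGB "render" with made-up flat values
passes = {
    "diffuse":  np.full(shape, 0.5),
    "direct":   np.full(shape, 0.8),
    "indirect": np.full(shape, 0.2),
    "specular": np.full(shape, 0.1),
    "ao":       np.full(shape, 0.9),
}
beauty = rebuild_beauty(passes)
print(beauty[0, 0])  # (0.5 * (0.8 + 0.2) + 0.1) * 0.9 = 0.54 per channel
```

The Z-Depth pass is deliberately absent: as noted above, it is used as a mask for defocus, not summed into the image.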
CGI photorealism is the art and science of creating digital images that look like real-life photographs or footage. It combines high-quality 3D modeling, texturing, shading, and realistic lighting techniques to replicate the physical world’s intricate details and its lighting behavior.
Week5
Homework
Consolidate: Write a post comparing Motion Capture to Key Frame Animation
Motion capture excels in capturing realistic human movements quickly, while keyframe animation allows for detailed control and artistic expression. Understanding their strengths and weaknesses can help animators choose the right approach for their projects, often leading to a hybrid method that leverages both techniques effectively.
Motion capture involves recording real-life movements using sensors and cameras, translating those movements into digital data that animators can apply to characters. It gives a high level of realism but can be less flexible when it comes to fine-tuning specific movements.
With keyframe animation we manually set key positions at specific points in time and let the software fill in the in-between frames. We have complete control over the movement, allowing for more stylized or exaggerated animation. It is more time-consuming for difficult sequences but can be more efficient for simpler ones.
The indexical nature of motion capture data (facial data taken directly from the actor) connects the animation to real human movement, allowing a seamless blend of realism and artistry. This direct representation is good for animated performances, making them resonate more with audiences and grounding fantastical elements in recognizable human behaviour.
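What "letting the software fill in the in-between frames" means can be sketched in a few lines of Python. This is a deliberately simple linear version; real animation packages interpolate with splines (e.g. Bézier curves) and give the animator tangent controls:

```python
def interpolate(keyframes, frame):
    """Linearly interpolate a value at `frame` from sorted (frame, value) keys."""
    ks = sorted(keyframes)
    if frame <= ks[0][0]:        # clamp before the first key
        return ks[0][1]
    if frame >= ks[-1][0]:       # clamp after the last key
        return ks[-1][1]
    for (f0, v0), (f1, v1) in zip(ks, ks[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # 0..1 position between the two keys
            return v0 + t * (v1 - v0)

# key poses at frames 0 and 10; the software fills frames 1-9
keys = [(0, 0.0), (10, 100.0)]
print(interpolate(keys, 5))  # 50.0
```

Motion capture, by contrast, effectively delivers a key on every frame, which is why it is realistic but harder to fine-tune by hand.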
Keyframe
Motion Capture
Week 6: Reality Capture (trends of capture in VFX)
Types of LiDAR
What is linear perspective (scientific perspective)? A system of creating an illusion of depth on a flat surface. All parallel lines (orthogonals) in a painting or drawing using this system converge in a single vanishing point on the composition’s horizon line.
The Renaissance architect Filippo Brunelleschi made the first known drawing, in 1415, that used the mathematical system of linear perspective to create the illusion of a building receding towards the horizon line. But he wasn’t the first to attempt it – in fact, the Greeks and Romans used angular lines to convey space in their art.
An illusion of reality on a 2D surface, where the eye level of the figure sits almost on the horizon line.
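The convergence of parallel lines to a vanishing point falls straight out of the perspective divide. A small sketch (a pinhole camera with the image plane at distance `focal`; all values are hypothetical):

```python
def project(point, focal=1.0):
    """Perspective-project a 3D point (x, y, z) onto an image plane at z = focal."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# two parallel rails at x = -1 and x = +1, receding into the distance:
for z in (1.0, 10.0, 1000.0):
    left, right = project((-1.0, 0.0, z)), project((1.0, 0.0, z))
    print(z, left, right)
# as z grows, both rails squeeze toward the vanishing point (0, 0) on the horizon
```

Dividing by depth is exactly what makes equally spaced orthogonals bunch up near the horizon line in a perspective drawing.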
Homework: What is Reality Capture, how does it work, and where is it used? A case study.
Reality Capture is a technological process that digitally captures and represents the physical world by creating very accurate 3D models from real-world data. It typically involves capturing environments, objects, or structures in three dimensions using methods like photogrammetry, laser scanning (LiDAR), and structured-light scanning. Once we collect the needed data, we process it in specialized software, for example RealityCapture (https://www.capturingreality.com/).
We can use reality capture in architecture, engineering, and construction, but also to create realistic environments and objects for visual effects, game assets, immersive experiences, etc.
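At the core of photogrammetry is triangulation: the same feature seen from two camera positions shifts by a disparity that encodes its depth. A minimal sketch of the stereo case, with made-up camera numbers (baseline, focal length, and disparity are all hypothetical):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Stereo triangulation: depth = (baseline * focal length) / disparity."""
    return baseline_m * focal_px / disparity_px

# hypothetical rig: two cameras 0.5 m apart, 1000 px focal length,
# and a feature that shifts by 25 px between the two photos
print(depth_from_disparity(0.5, 1000, 25))  # 20.0 metres
```

Photogrammetry software generalizes this to hundreds of photos at once, solving for all the camera positions and feature depths together (bundle adjustment).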
Here is a short video example
I found a very interesting company from Finland that I will write about:
tietoa.fi
Tietoa is a Finnish company specializing in data modeling and visualization services, focusing on enhancing real estate and construction project efficiency. Founded in 2000, the company provides a range of high-quality visualizations and training services. Tietoa’s approach leverages advanced tools like laser scanning, photogrammetry, and aerial imagery to generate precise, initial data that minimizes design errors and costs during construction.
They use their visualization expertise in urban planning projects to create immersive 3D animations and VR simulations that allow the public to experience proposed developments before construction. By making those simulations, they also help the architects make decisions, improving public engagement and support for large infrastructure projects.
Laser scanning allows architects to create a digital twin of an existing structure without physical alterations for historical renovations. The scan helps teams understand complex details in older buildings, facilitating precise designs that respect the original structure.
I found that laser scanning, often referred to as LiDAR (Light Detection and Ranging), is a technique that uses laser beams to capture detailed, accurate 3D data of real-world objects or spaces. It’s commonly used in construction, architecture, and infrastructure projects to create precise models of existing buildings, terrains, or project sites.
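The "ranging" part of LiDAR is a time-of-flight measurement: the scanner times a laser pulse out and back, and halves the round trip. A tiny sketch of that arithmetic (the pulse timing value is a made-up example):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Time-of-flight ranging: the pulse travels out and back, so halve the trip."""
    return C * round_trip_seconds / 2.0

# a pulse that returns after roughly 66.7 nanoseconds hit a surface ~10 m away
print(round(tof_distance(66.7e-9), 2))
```

Scanners repeat this measurement millions of times while sweeping the beam, which is how the dense point clouds used by companies like Tietoa are built up.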
Tietoa’s involvement in the renovation of St. John’s Church focused on providing high-quality, photogrammetric data to guide the facade restoration. This initial data collection included creating detailed, colorized point clouds and high-resolution projection images of the church’s exterior. The images helped accurately document the facade’s current state, forming a precise base for restoration planning.
Particularly in exposed areas, like the church’s statues and intricate facade details, replicas were made to maintain the original aesthetic for parts that were too deteriorated, while other areas were carefully restored onsite. Tietoa’s data allowed planners to manage these delicate tasks effectively and avoid unexpected challenges in the renovation process.
This project illustrates how Tietoa’s expertise in photogrammetry and data modeling supports the preservation of historical structures. The resulting data provides architects and conservationists with precise, manageable insights, essential for maintaining heritage sites with historical accuracy and cost efficiency.
3D Scan of the National Theatre
3D Scan of a cathedral
I have also tried reality capture from my phone with LiDAR.
I took around 100 photos like this,
used the phone’s LiDAR with an app called RealityScan,
and produced these 3D renders.
Week 7: Reality capture (photogrammetry in VFX)
The digital Michelangelo scanning:
As an application of this technology, I and a team of 30 faculty, staff, and students from Stanford University and the University of Washington spent the 1998-99 academic year digitizing the sculptures and architecture of Michelangelo. During this time, we scanned the David, the Unfinished Slaves, and the St. Matthew, all located in the Galleria dell’ Accademia in Florence, the four allegorical statues in the Medici Chapel, also in Florence, and the architectural settings of both museums. In the months ahead we will process the data we have collected to create 3D digital models of these works.
They describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. The system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merging, and viewing scanned data. As a demonstration of this system, they digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. The largest single dataset is of the David – 2 billion polygons and 7,000 color images. The paper covers the challenges they faced in building this system, focusing in particular on the unusual design of the laser triangulation scanner and on the algorithms and software they developed for handling very large scanned models.
This was done to create a digital archive.
Laser scanner gantry positioned in front of Michelangelo’s David. From the ground to the top of the scanner head is 7.5 meters.
The Veronica Scanner
A new interactive exhibition at the Royal Academy explores the next chapter in the evolution of 3D imaging: the art of photogrammetry in the 21st century. For ten days in September, the RA will host the Veronica Chorographic Scanner, a custom-designed 3D head scanner. In just seconds, the scanner uses eight cameras to take 96 high-resolution photographs of the human head from every angle, capturing even the most intricate surface details. These images are then transformed into a digital 3D model, faithfully replicating every feature of the face.
They can be made into stunning digital portraits. The processed 3D models have been uploaded to an online gallery, allowing the public to explore them.
Mimesis is a concept found in literary criticism and philosophy with a broad set of meanings. It can refer to imitation, a form of resemblance that isn’t based on sensory experience, the capacity to receive and reflect things, or the act of representing and mimicking. It’s often associated with the idea of art or literature reflecting reality. However, its interpretation can vary depending on the context. It can mean simple imitation (like copying something), or it might refer to a deeper kind of resemblance that doesn’t depend on direct sensory experience. In some cases, it involves a receptive process where an artist or work absorbs and reflects aspects of the world, or it can be about the act of representing or mimicking something else. In short, it’s a flexible term used to describe different kinds of relationships between art, reality, and representation.
Mimesis in cinema and VFX (2D/3D) is about the replication or reimagining of reality. Whether through photographic realism in live-action filmmaking or through digital techniques in animation and VFX, mimesis helps build worlds that connect with audiences emotionally or intellectually by mirroring, distorting, or inventing new versions of reality.
Verisimilitude is the quality by which a work of fiction – whether literature, film, or another medium – appears “true to life” or believable, even when the events or elements of the story are not actually real. The term literally means “similar to truth” and comes from the Latin for likeness or similarity to truth.
In film and VFX, verisimilitude is crucial because it determines how immersive and convincing the audience finds the story, characters, and world seen on screen. It’s not necessarily about presenting a “realistic” or strictly accurate representation of the world, but about creating a narrative and visual experience that feels plausible and consistent within its own logic, even if it involves fantastical or unrealistic elements (such as in sci-fi, fantasy, or animation).
Hyperrealism is a technique that takes realism in animation to an extreme level, making digital creations look almost indistinguishable from real-life footage. It involves creating animations, textures, lighting, and details with such precision and depth that they appear lifelike, sometimes even rivalling the visual quality of real-world video.
Week8: Simulacra, Simulation and the Hyperreal
The precession of simulacra.
Baudrillard’s Simulacra and Simulation challenges the distinction between reality and its representations, showing how modern culture is increasingly dominated by hyperreal simulations that obscure, replace, or even create new versions of reality itself. The map, or the simulation, becomes the reality we experience.
Baudrillard suggests that in modern society, the map (or simulation) no longer merely reflects the territory, but actually becomes more real to us than the territory itself. In other words, we no longer perceive the world directly; instead, we interact with and understand it through representations, which can be manipulated and detached from any authentic reality.
Disney’s theme parks are designed to create the illusion of a perfect world that aligns with visitors’ desires. This is achieved by encouraging guests to experience the park with a childlike sense of wonder and by promoting itself as a place where “dreams come to life.” Central to this experience is the idea of escaping the constraints of physical reality—visitors can transcend time, space, and physical laws. Attractions offer surreal experiences, such as floating through the human body or traveling through time and space, while thrill rides defy gravity and challenge the limits of what seems possible.
Disney World offers visitors an escape from societal flaws and personal limitations by creating an idealized vision of American capitalism and history. The park immerses guests in a world of perpetual celebration, filled with parades, fireworks, costumed performers, and endless fun, much like living in a constant holiday where negative emotions are excluded.
In this way, Disney World provides a fictionalized version of humanity’s deepest desire: transcendence. It allows visitors to escape the mundane world, offering a symbolic journey through imaginative, weightless realms where ordinary constraints vanish. The experience reintroduces a sense of wonder, countering the “disenchantment” of modern life as described by sociologist Max Weber, who noted the decline of religion and the rise of science. Disney World and similar cultural simulations aim to re-enchant the world, blending art and technology to create new mythologies of space, time travel, and adventure.
Disney World serves as a modern holy place that allows visitors to escape the limitations of reality, offering them a world of fantasy, wonder, and transcendence. Through its carefully crafted experiences, it provides an idealized vision of society, inviting guests to leave behind the complexities of daily life and enter a space where imagination, celebration, and limitless possibility. In doing so, Disney World taps into a deeper cultural desire, allowing for a temporary but powerful sense of liberation and joy.
Disney World, Simulation and Postmodernism
analysis based on Baudrillard’s Simulacra and Simulation theory
1. The idea of us wearing rings
2. A magical text is added to it
3. Making of the magic ring
4. The ring is in our hands
5. When we have it, it changes our destiny and simulates a new world for us. A different reality.
VFX Simulation
1. We see the real world in the ships, ocean, fog
2. Matte painting for the hills and the board seen by the pirate
3. Creature and illusion
4. Full CGI shot of Pirates of the Caribbean
For the assessment I have chosen OPTION 1: How would you describe the relationship between Visual Effects and the photographic image?
For me this is a very interesting topic, as the relationship between Visual Effects (VFX) and the photographic image is all about how the two work together to improve and change what we see. VFX builds on or adds to the photographic image, turning it into something more dynamic, creative, or even unreal. It can enhance the original shot, transform it into something completely different, or bring together the real and the imagined in a way that feels seamless.
Week9: Virtual Filmmaking
The core idea behind virtual production—filming actors in front of a massive LED screen showing far-off or imaginary locations—actually traces back to the front- and rear-projection techniques used widely in the 20th century. In these methods, a film projector would project pre-recorded footage onto a screen behind the actors.
The issue with blue and green screens is that they can reflect those colors onto the actors. For example, if the screen is blue and the background is a clear sky, it might work fine, but in most cases, it leads to unwanted color spill onto the talent. This requires precise lighting and post-production work to remove the blue or green reflection.
Since then, LED volumes have been featured in major productions, with many other shows using them for specific scenes. The next big innovation in this technology is yet to be seen.
With a traditional linear production pipeline, 20 percent of a typical film budget is spent on reshoots. Compare this with a virtual production pipeline, which saves both time and money, as it turns your pre-vis environment into the final frame on set, making filmmaking a far more agile and iterative process.
This is a photo I took at a BSC filmmakers’ event last year. I tried this myself, and it was a great experience.
I believe there are a lot of benefits to filming in virtual production.
We have the ability to bring any actor into an environment built for imagination, or a virtual replica of the real thing. We can also perform camera moves that are difficult or next to impossible otherwise. It’s good for keeping the entire production crew where you need them and cutting back on most of your location filming and travel (which saves on the cost of the final production).
The benefits definitely include:
- Natural scene lighting
- Complete design freedom
- Full environmental control
- Building virtual worlds
- Encouraging natural performances
- Minimizing cost and avoiding budget blow-outs
- Realizing the vision
Even if we use virtual production LED displays, we still need camera rails and different camera operators.
We can also face different challenges that we have to find solutions for when we film:
- Color Shift – Due to both the way that LED panels are currently built and the fundamentals of how light behaves, a color shift of the content might appear when looking through the camera versus what you see with your eyes. The shift can impact the whole LED wall evenly, or just a portion of the wall. The shift can be static or it can vary over time.
- Color Banding – Color banding occurs when an inaccurate representation of the color of the content generates abrupt changes between shades of the same color, instead of gentle gradients. This is commonly caused by a low bit depth of the video pipeline and/or of the content.
- Content Color Mismatch – There are a number of elements in the image chain that can affect what the content will look like on the LED wall. Almost always, the way it looks through the camera is not what we expect. This can be caused by two major elements: 1) a misleading expectation of what the content should look like, or 2) a failure in how the color pipeline is set up (color pipelines are designed to make the content look as expected on camera).
- Optical Moiré Patterns – A moiré pattern is an image artifact that is generated when two fine patterns interfere with each other. When shooting LED walls, the pattern created by the LED on the LED panels might conflict with the camera sensor photosite pattern and create visual artifacts such as color banding and multi-colored stair-stepping artifacts (a form of aliasing).
- Image Aliasing Artifacts – Similar to moiré patterns, there are a number of image-related artifacts that can affect content. Some of these fall under the category of aliasing artifacts. These appear as jagged/stair-stepping lines around the edges of some elements of your content or between high-contrast lines and/or when you notice indistinguishable signals in your content that should be different. These can be caused by either sampling issues (up or down scaling) within the image pipeline, especially within the content player or from the content player to the LED screen (specifically within the image processor), or from poor content creation: poorly captured images or of low quality, low capture/render resolutions or inefficient scaling filter.
- Compression Blocking & Noise – Similar to banding, these artifacts are caused mostly by compression and appear as visible “squares” on the content.
- Playback Lag – Issues playing content back in real-time (or at the desired frame rate) can have many causes, usually related to the content playback server and/or the media. Common causes include choice of codec, resolution, or graphics card capabilities.
- Flickering (Multiplexing) – Video artifacts such as these are mostly represented as visible changes in brightness between cycles on the LED wall. These can be seen as dark static bands that lay vertically or horizontally across the wall, they can also exist as scrolling bands at various speeds, or as bright bands.
- Halo/Rim-Like Artifact (Edge Diffraction) – When shooting on an LED volume, a halo-like edging may appear around the edge of objects in front of the wall; this could be the talent or set pieces. This is an optical effect called edge diffraction. Edge diffraction occurs when narrow-band light sources that are highly parallel (like the kind emitted from an LED wall) pass an object and suffer very little disruption on their path to the camera lens, remaining highly parallel. This phenomenon is not solely related to LED walls and can occur with any narrow-band spectrum light source emitting in a highly parallel manner.
- Reflections – Some LED screens have matte surfaces, while others are shiny. The latter can cause your actors, props, and lighting to reflect on the screen and possibly make the blacks look milky.
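The bit-depth cause of colour banding is easy to demonstrate: quantizing a smooth 0-1 gradient to 8 bits leaves far fewer distinct shades than 10 bits, and the eye reads the resulting jumps as bands. A minimal sketch (my own illustration of the principle, not a real LED processor pipeline):

```python
def quantize(value, bits):
    """Quantize a 0-1 signal to the given bit depth, as a video pipeline would."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

# a smooth gradient sampled at 1000 steps:
gradient = [i / 999 for i in range(1000)]
for bits in (8, 10):
    distinct = len({quantize(v, bits) for v in gradient})
    print(f"{bits}-bit pipeline: {distinct} distinct shades out of 1000 samples")
# fewer distinct shades -> abrupt steps (bands) instead of a gentle gradient
```

This is why keeping a 10-bit (or higher) signal path from content player to LED processor matters for sky and smoke gradients on the wall.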
But there is always a solution for this.
DON’T WORRY – we will FIX it in POST 🙂
One of the latest productions shot on LED screens is Netflix’s Avatar series, but we can also see a lot of real-life effects being used, like smoke and practical lights.
Week 10-11 (presentation)