Week 1: Introduction to the Module. Topic: The Age of the Image and the Trend of the Lens
What are the current trends in VFX?
The current trends in Visual Effects (VFX) include a focus on realistic character animation, virtual production with real-time rendering, integration of machine learning and AI, immersive experiences in AR and VR, interactive VFX in gaming, advanced simulations, AI-driven content creation, and environmental effects. Sustainability and data security are also growing concerns in the industry.
Real-Time Rendering and Virtual Production:
Real-time rendering technologies, powered by engines like Unreal Engine and Unity, are gaining significant traction. Virtual production, which involves creating and integrating digital elements in real time on set, is becoming increasingly popular in the film and television industry.
AI is not only streamlining various production processes but also offering creative possibilities by generating content and assisting filmmakers in making data-driven decisions. However, the creative input of human professionals remains essential in filmmaking, and AI is typically employed as a supportive tool in the process.
Can AI Make a Star Wars Film?
What impact have lenses had on cinematography practices?
In cinematography, an art form that depends on many components to produce visually striking frames, lenses are essential. Here I examine how lenses help camera operators maintain picture composition and balance elements within the frame. This control leads to powerful and visually pleasing cinematic compositions.
Lenses are fundamental to filmmaking. Acting as the camera's eyes, they gather light and shape how a scene looks before it is recorded. By choosing the right lens, filmmakers can alter the look of an image in distinctive ways, strengthening the story the film tells. In short, lens choice makes a significant difference to how a film looks and feels.
Manipulating Depth of Field for Visual Impact
Lens Flares
Macro lenses for extreme close-ups
Dolly Zoom
What’s real and what’s photoreal
The term “real” refers to actual, untouched things or live-action photography. “Photoreal” or “photorealistic” indicates a level of realism in computer-generated imagery (CGI) that is so high that it is almost impossible to tell apart from actual images. The goal of photorealistic graphics is to faithfully reproduce the visual characteristics of real-world objects, such as texture, lighting, and detail, and to depict them so convincingly that they resemble real images or video footage. The aim of photorealism is to construct an image that is so lifelike that it makes it difficult for viewers to tell what is real and what is artificially created.
Harold Edgerton
Harold Edgerton took some very creative, peaceful, fantastic, funny, and lovely pictures. They show how art and science came together in a beautiful and striking way, changing both disciplines for the better.
1) What does Dr. James Fox mean by his phrase ‘The Age of the Image’?
The term “The Age of the Image,” coined by Dr. James Fox, a well-known art historian and cultural commentator, defines the period in which visual imagery has become the dominant form of expression and communication in modern society. Images today have a profound impact on how we communicate information, develop identities, and see the world.

This era is characterized by the universality of visual culture. As a result of the development of digital technology and the internet, we are constantly exposed to a wide variety of pictures, whether on social media platforms, websites, advertising, or conventional media. Smartphones with high-quality cameras are everywhere, democratizing the creation and sharing of images and enabling people to actively engage in this visual dialogue.

Social media and digital platforms play a major part in this visual revolution. They have developed into vital tools for self-expression and communication as people curate and share the photographs that represent their lives. Trends and movements can spread quickly and internationally because of the virality and shareability of visual material. Moreover, images are now widely used for capturing memories, documenting historical events, and shaping collective memory; the visual archives produced in this era help to build historical narratives and preserve cultural heritage.

Images have a special ability to communicate, expressing complicated stories, emotions, and ideas succinctly and effectively. They offer a quick and efficient means of communication at a time when attention spans are shortening. Advertisers, politicians, artists, and activists all use this power to sway public opinion, elicit responses, and change cultural attitudes. Given the sheer quantity of visual content, however, critical visual literacy becomes crucial.
It is crucial to have the capacity to evaluate visual content critically, recognizing intended meanings, potential biases, and socio-political implications. This discernment is essential for navigating a world where pictures may inform and mislead, empower and manipulate.
Harvard Reference: BBC iPlayer (2020) Age of the Image. Available at: https://www.bbc.co.uk/iplayer/episodes/m000fzmc/age-of-the-image (Accessed: 21 September 2020).
Week 2: The Photographic Truth Claim: Can we believe what we see?
The Allegory of the Cave
Brief explanation of Plato’s cave.
In Plato’s Allegory of the Cave, prisoners are locked up in a dark cave, able to see only shadows flickering on a wall. They consider this to be the only reality. The philosophical prisoner who breaks free and finds the sunlight outside represents the road to knowledge and truth. When he returns to guide the others, he encounters opposition, since they cling to their narrow world. The escape represents the philosopher’s search for reality, while the cave stands for ignorance and the false world of images. The shadows and the fire stand for the mediated reality created by society. The story highlights the difficulty of awakening people to a greater awareness of reality, and the struggle involved in accepting it.
Semiotics
Icons:
Icons are signs that resemble or imitate the thing or idea they stand for, whether in appearance or in some other sensory quality. In other words, there is a clear and obvious relationship between an icon and what it represents.
Indexes:
Indexes are signs with a direct, causal relationship to the thing or idea they stand for: smoke indicates fire, and a footprint indicates that someone has passed. Unlike symbols, the relationship between the sign and its meaning is natural or physical rather than conventional.
Symbols:
Symbols are signs whose relationship to the concepts or objects they represent is arbitrary. To communicate meaning, they depend on shared agreements or cultural understandings. There is no cause-and-effect relationship or natural resemblance between a symbol and what it signifies; rather, the link is established by common linguistic, cultural, or social conventions.
2) What is meant by the theory: The Photographic Truth-Claim?
What’s the Point of an Index? or, Faking Photographs
TOM GUNNING
In this essay Gunning argues, first, that the photographic index, the quality of photographs we call indexical, is not the same thing as the truth claim of photographs. Second, he argues about the relationship between pre-digital and digital photography: the two are not as different as theorists have made them out to be. To understand these two larger arguments, we need to grasp two smaller ones: that digital photographs are no less indexical than analog photographs, and that analog photographs were never straightforwardly indexical anyway.
Tom Gunning explains that, having tried to untangle the idea of visual accuracy from simple indexicality, he would now like to consider the “truth claim” of photography, which relies on both indexicality and visual accuracy but includes more, and perhaps less, than either of them. He uses the phrase “truth claim” because he wants to emphasize that this is not simply a property inherent in a photograph, but a claim made for it, dependent, of course, on one’s understanding of its inherent properties.
“Truth claims” are statements that we make about photographs. Photographs are indeed special kinds of objects that often get roped into truth claims, but photographs themselves are not truthful: “bereft of language, a photograph relies on people to say things about it or for it.” “Truth” is a function of speech, and photographs don’t speak; people do. A photograph does not speak, yet it is often used as evidence for statements, so the question becomes how a photograph is being used to support a particular statement. The image itself does not make a truth claim; truth claims are always made by people.
Week 3: Faking Photographs: Image manipulation and computer collage
Famous faked analogue photographs
Cottingley Fairies (1917): Elsie Wright and Frances Griffiths
The Cottingley Fairies were a series of five photographs shot by 16-year-old Elsie Wright and her 10-year-old cousin Frances Griffiths in Cottingley, England, and are arguably one of the most renowned examples of faked photographs. The girls appeared to be playing with fairies in the images. Considering the images to be real, Sir Arthur Conan Doyle, the man behind Sherlock Holmes, had them printed in The Strand Magazine in 1920. The fairies were eventually confirmed to be paper cutouts in the 1980s.
Loch Ness Monster (1934): Robert Kenneth Wilson
The “Surgeon’s Photograph” is maybe the most well-known image allegedly depicting the Loch Ness Monster, a monster rumored to live in Loch Ness in Scotland. The image, which was captured by London-based gynecologist Robert Kenneth Wilson, looked to depict the head and neck of a dinosaur-like monster emerging from the water. In 1994, it was discovered that the image was a fraud made with a toy submarine that had been given a sculpted head.
Digital faked analogue photographs
“Olympic Rings” UFO Photo (2012): Martin Rickett.
During the London Olympics in 2012, a picture that appeared to show a UFO flying in front of the Olympic rings went viral. It was later discovered to be a digitally edited version of an original picture taken in 2004 by the photographer Martin Rickett. The modified image was produced using image editing tools.
3) A definition of VFX compositing. What is it and how does it work?
- How does the composite work?
- What is the composite’s purpose?
- Think about what the composite needs to do to create an ‘impression of reality’
In visual media like film, photography, and computer graphics, compositing is a technique used to merge multiple elements or images into a single, complete scene or frame. It is an essential step in several fields, including graphic design, animation, and filmmaking. Using compositing, filmmakers and artists can construct complex, dynamic, and visually fascinating scenes that might not be possible in a single shot or frame.
Compositing in filmmaking combines various elements such as live-action footage, CGI, and backgrounds. Using specialist software such as Nuke, these components are layered, masked, and modified in post-production. Techniques like keying, color correction, and motion tracking ensure they blend seamlessly. Compositing is crucial for visual effects: it lets filmmakers depict elements that would be dangerous or impossible to shoot for real. It improves storytelling by creating a consistent, exciting visual design, ultimately enhancing the entire cinematic experience.
Compositing is a challenging process that requires great attention to detail and artistic judgment in order to give the feeling of reality. To achieve this, the various elements need to blend smoothly within the scene. It is crucial to have consistent lighting; all components, including shadows and reflections, must share the same lighting conditions. Matching perspective and camera angles makes objects or characters appear to belong in their surroundings. Realistic reflections and shadows help to ground objects in the picture and increase the sense of presence. Careful color grading and color correction unify the color palette, ensuring that the visual elements align with the intended mood and atmosphere.
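At the heart of layering these elements is the “over” operation, which blends a foreground onto a background using a matte (alpha). As a minimal sketch of the idea only (the function name `over` is my own, and real compositing packages such as Nuke add premultiplication, color management, and edge handling):

```python
import numpy as np

def over(fg, fg_alpha, bg):
    """Straight-alpha 'over' operation: where the matte is 1 the
    foreground wins, where it is 0 the background shows through,
    and in between the two are mixed per pixel."""
    # fg, bg: float RGB values in [0, 1]; fg_alpha: matte in [0, 1]
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# One-pixel example: half-transparent red layered over pure blue.
fg = np.array([1.0, 0.0, 0.0])
bg = np.array([0.0, 0.0, 1.0])
print(over(fg, 0.5, bg))  # [0.5 0.  0.5]
```

Because the matte varies per pixel, the same formula handles soft edges (hair, motion-blurred outlines), which is exactly where a convincing blend with the background is won or lost.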
Week 4 The Trend of Photorealism
In reaction to non-representational art forms, the art movement known as Photorealism first appeared in the late 1960s. The genre is distinguished by its painstaking attention to detail in recreating settings and objects with an incredible degree of accuracy, to the point where the artwork frequently looks almost exactly like a high-resolution photograph. Using methods such as layering, airbrushing, and precise brushwork, photorealist painters create a realism that mimics the characteristics of photography. Portraits, still lifes, landscapes, and cityscapes are common subjects. To create photorealist art, photographs are used as references, and artists carefully examine and replicate the subtle visual aspects of their source material. Famous artists in this genre include Richard Estes, Robert Bechtle, Audrey Flack, and Chuck Close. With remarkable technical expertise that blurs the boundaries between painting and photography, Photorealism remains a significant and inspiring style in modern art.
Richard Estes Artwork
One of the founders of Photorealist painting, Richard Estes, is renowned for his flawless urban landscapes. His paintings, such as “Telephone Booths” and “Ansonia,” capture every detail of city scenes, from neon-lit diners to reflective glass surfaces, with amazing accuracy. He displays his technical mastery in “Double Self-Portrait,” where he depicts his reflection in mirrored windows. His painting “Amsterdam Avenue and 106th Street,” from 2002, skillfully captures the dynamic urban environment of New York by showcasing buildings, cars, and pedestrians. Through a seamless combination of painting and photography, Estes’ art invites viewers to discover the beauty of ordinary city life through the lens of hyper-realism.
Robert Bechtle Artwork
Renowned Photorealist painter Robert Bechtle is well-known for his tranquil suburban landscapes. His 1974 masterpiece “Alameda Gran Torino” captures the quiet neighborhood of California with remarkable realism. The finely detailed painting of a parked Gran Torino in brightly lit streets is a perfect example of Bechtle’s ability to capture ordinary moments. His 1977 work “Potrero Avenue” emphasizes the modest beauty of suburban life by precisely showcasing a residential street. Through his distinctive fusion of hyper-realism and painterly subtlety, Bechtle’s art allows viewers to appreciate the commonplace and the calm of urban existence while retaining a sense of nostalgia and tranquility.
Audrey Flack Artwork
One of the leading figures in the Photorealist movement, Audrey Flack is renowned for her exquisitely detailed compositions. A notable example of her work is “Wheel of Fortune” (1977–1978), which is a reflective chrome wheel decorated with commonplace items, showcasing her exacting skill and precision. Her ability to capture the essence of a deck of playing cards in “Queen” (1976) is a testament to her extraordinary attention to detail. Through a blend of hyper-realism and artistic sensibility, Audrey Flack’s artwork demonstrates her ability to give seemingly unremarkable subjects life and gives viewers a deep appreciation for the beauty found in the everyday.
Chuck Close Artworks
A well-known name in the field of contemporary art, Chuck Close is renowned for his elaborate and monumental portraits. His famous “Big Self-Portrait” (1967–1968), for which he painstakingly painted each cell of a grid to capture his likeness in great detail, signaled a turning point in his career. His later portraits, such as those of the model Kate Moss, demonstrate his commitment to capturing subtle facial expressions. He is renowned for using a grid-based technique to capture the essence of his subjects with amazing accuracy. Chuck Close’s creative output is evidence of his inventive methods and his capacity to bring his subjects’ complexity and individuality to life, elevating everyday faces to extraordinary heights.
Computer generated photorealism
Richard Parker, the Bengal tiger that accompanies Pi on his oceanic journey in the movie Life of Pi, is mostly a result of CG wizardry. The digital character took about a year to build and featured 10 million computerized hairs. Four real tigers were used in the production, for reference and motion capture, as well as for actual pivotal scenes. Suraj Sharma, who played Pi, was never actually in the boat with a live tiger.
4) Photorealism in VFX
The aim of photorealism in Visual Effects (VFX) is to produce digital or computer-generated visuals and sequences that are so lifelike that viewers cannot tell the difference between them and live-action footage or real-life environments. VFX artists and studios strive to achieve photorealism by using advanced computer graphics techniques to simulate real-world phenomena, lighting, textures, and physical properties in a highly convincing manner. Additionally, Barbara Flueckiger’s chapter in Special effects: new histories, theories, contexts (2015, pp. 81–98) describes how VFX artists can achieve photorealism by using effects like grain, scratches, dust, noise, depth of field, motion blur, diffusion, lens flare, and vignetting to enhance reality effects. The most fascinating thing I learned was about motion blur and the significant role it plays in bridging live action and CGI. Flueckiger’s perspective highlights the importance of motion blur in enhancing photorealism in movies. Motion blur is an artificial technique that mimics the way our eyes and cameras naturally blur fast-moving objects or scenes. This effect contributes to the smooth transition between live-action film and computer-generated imagery (CGI): when motion blur in CGI matches that of the live-action shots, it minimizes the perceptual gap between real and digital elements, enhancing photorealism. Motion blur is also essential for expressing depth and speed. It enhances the sense of speed and motion in fast-paced scenes by making movement seem more natural and dynamic. Motion blur may also be used artistically to evoke feelings and mood, particularly in slow-motion scenes, where excessive blur can produce a strange and powerful atmosphere.
Reference
Flueckiger, B. (2015) Special effects: new histories/theories/contexts. Edited by D. North et al. London: Bloomsbury, pp. 78 – 86.
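A camera blurs motion because its shutter integrates light over time; renderers imitate this by averaging several sub-frame samples of a moving object. A toy sketch of the principle (the function name `motion_blur` and the 1-D "image strip" are mine, not from the source, and real renderers sample in 2D with many more samples):

```python
import numpy as np

def motion_blur(frames):
    """Average sub-frame samples, as a shutter integrates light
    over the exposure: a moving point becomes a streak."""
    return np.mean(frames, axis=0)

# A bright one-pixel dot crossing a five-pixel strip during one frame,
# sampled at three sub-frame positions:
samples = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
])
print(motion_blur(samples))  # the dot smears into a dimmer three-pixel streak
```

Matching this simulated streak length to the live-action plate’s shutter angle is what lets a CG element sit convincingly in real footage.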
Week 5: Digital Index: Bringing indexicality to the capture of movement
What types of capture are used in VFX, and why?
Motion Capture (MoCap):
Why: Motion capture records an actor’s or object’s movements and actions so that they can drive visual effects animation. It is commonly used for character animation in video games, films, and other media. Mocap is essential for creating realistic character movements and interactions with digital elements.
3D Scanning:
Why: 3D scanning produces highly detailed 3D models of actual objects, sets, or even actors. These scanned models can be used to accurately recreate objects or characters in the virtual world, which is necessary to guarantee realism and consistency in VFX.
Green Screen/Chroma Key Capture:
Why: Filming actors or objects against a bright green or blue background is known as green screen or chroma key capture. Post-production later replaces this background with a different scene or picture. It makes it possible to create composite shots in which digitally created backgrounds or objects are combined with live-action elements.
How VFX Captured Data Is Captured:
Many visual effects in movies and television shows are built upon Visual Effects (VFX) captured data, also known as “plate photography” or “live-action footage.” The following is the procedure for capturing and integrating VFX data into the post-production workflow:
Pre-Production Planning:
The VFX team and filmmakers plan the scenes that call for visual effects prior to recording any live-action material. Choosing the kind and quantity of VFX needed is part of this process.
On-Set Filming:
Cameras and, if necessary, extra equipment like motion capture systems, green screens, or tracking markers are used to record the live-action footage on a movie set or location. The particular VFX requirements determine how these tools are used.
Tracking Markers and Reference Data:
Tracking markers, reference points, and measurements are frequently positioned within scenes that will later include visual effects. They help monitor camera movement and guarantee that digital components are aligned correctly.
Motion Capture:
Motion capture suits and equipment can be used to record the exact movements and expressions of actors in order to capture complex movements or characters (such as creatures or aliens).
Green and Blue Screens:
When backgrounds or other elements will be added after production, green or blue screens are utilized. A colored screen is used to film the actors or objects, and in post-production, digital content takes its place.
Green and blue screens, also known as chroma key screens, are used in filmmaking and television production to replace the background of a shot with a different image or scene during post-production. Green screens are typically chosen when there is no green in the subject, while blue screens are used when green is prevalent. The reason for using these colors is that they are the farthest from human skin tones, making it easier to separate the subject from the background. This process allows for the insertion of different backgrounds or visual effects, enhancing storytelling and creating fantastical or impossible environments.
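The separation step described above can be reduced to a simple per-pixel test: a pixel belongs to the screen when its green channel strongly dominates the others. As a heavily simplified sketch (the function name `green_screen_mask` and the threshold are my own; real keyers work in more perceptual color spaces and produce soft, despilled edges):

```python
import numpy as np

def green_screen_mask(image, threshold=0.3):
    """Mark a pixel as 'green screen' when its green channel exceeds
    the larger of red and blue by more than `threshold`."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (g - np.maximum(r, b)) > threshold  # True = background

# Two pixels: bright screen green, and a skin-like tone (the subject).
pixels = np.array([[[0.1, 0.9, 0.1], [0.8, 0.6, 0.5]]])
print(green_screen_mask(pixels))  # [[ True False]]
```

This also makes clear why the screen color is chosen far from skin tones: the dominance test only works when the subject never satisfies it.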
Rotoscoping:
To isolate actors or objects from the background in intricate scenes, rotoscoping may be required. This entails drawing detailed mattes around the particular areas to be separated.
Matchmoving:
To replicate the 3D space and track the movement of the camera, matchmoving software is utilized. Accurate placement of digital elements in the scene is contingent upon this.
Rendering:
To produce the final VFX shot that is incorporated into the entire film or television project, the final composite is rendered.
Where VFX Captured Data Is Used:
Film and TV: The entertainment industry uses VFX captured data extensively to create special effects, digitally modify environments, add creatures and characters, and simulate both realistic and fantastical scenarios.
Video Games: The creation of video games depends heavily on VFX data. It improves the immersive gaming experience by producing realistic environments, dynamic visual effects, and lifelike character animations.
Marketing and Advertising: VFX is used by marketers to produce visually striking ads and marketing materials. Subtle touch-ups or complex computer-generated imagery can be used to promote products and convey messages.
5) Compare Motion Capture vs Key Frame Animation
Motion capture is the process of capturing real-time actor or object movements. This results in highly realistic and natural movement because it records the nuances and subtleties of real-world motion. Because animators don’t have to produce each frame of animation manually, it can be a time-efficient method; the motion is captured as data that the character or object uses within the animation software. It is frequently used in video games and film production to generate realistic character animation, since it is ideal for capturing complex and subtle movements such as human or animal motion. There are limitations as well: motion capture can be costly to set up and operate, and it may struggle with stylized or exaggerated motions. It also requires skilled actors, advanced equipment, and a controlled setting.
On the other hand, keyframe animation gives animators full artistic control over every aspect of the animation. Animators manually create each keyframe, which allows for highly stylized and exaggerated movements. Both 2D and 3D animation can be produced using keyframe animation, making it a flexible technique. It is frequently used in stop-motion, abstract, and non-realistic animation, as well as cartoons. Unlike motion capture, keyframe animation does not require special equipment or a controlled setting; animators can work with simple hardware and software setups. Its main limitation is that it can be time-consuming, especially for complex and detailed movements, since the positions, rotations, and timing of every keyframe must be specified manually.
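The division of labor in keyframing is simple: the animator sets values at key times, and the software computes the in-between frames by interpolation. A minimal sketch of linear in-betweening (the function name `interpolate` is mine; production tools use spline curves with adjustable tangents rather than straight lines):

```python
def interpolate(keyframes, t):
    """Linear in-betweening: keyframes is a sorted list of
    (time, value) pairs set by the animator; values between two
    keys are computed automatically."""
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # 0 at the first key, 1 at the second
            return v0 + u * (v1 - v0)
    raise ValueError("t outside keyframe range")

# A ball's x-position keyed at frames 0, 24 and 48:
keys = [(0, 0.0), (24, 10.0), (48, 10.0)]
print(interpolate(keys, 12))  # 5.0, halfway between the first two keys
```

Mocap, by contrast, effectively delivers a keyframe on every single frame, which is exactly why it captures nuance but resists stylization.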
Week 6: Reality Capture (LIDAR) and VFX
Gathering and analyzing real-world data to produce accurate digital representations, like 3D models or maps, is known as reality capture. It takes exact images of physical environments by utilizing technologies such as laser scanning, LiDAR, photogrammetry, and structured light scanning. Applications for this approach can be found in many other domains, including urban planning, archaeology, and architecture and construction. Reality capture is a useful tool for professionals in many different kinds of industries as it improves productivity and accuracy in design, research, and simulation by producing realistic and detailed digital renderings.
VERY NEW DEVELOPMENTS IN SCANNING
LIDAR Scanner
Light Detection and Ranging, or LIDAR for short, is a remote sensing technology that measures distances using laser light and creates highly accurate, in-depth three-dimensional maps of the surrounding area. The length of time it takes for light to bounce back after striking a surface is measured by the LIDAR scanner, which shoots laser pulses. LIDAR systems are able to generate accurate and thorough point clouds that capture the topography and shape of objects and landscapes by utilizing the speed of light to calculate distance.
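The range calculation described above is a direct time-of-flight formula: the pulse travels to the surface and back, so the one-way distance is half the round trip times the speed of light. A minimal sketch (the function name `lidar_distance` is my own):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_distance(round_trip_seconds):
    """Time-of-flight ranging: halve the round trip because the
    laser pulse covers the distance twice (out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return pulse arriving after ~66.7 nanoseconds corresponds to
# a surface roughly 10 metres away.
print(lidar_distance(66.7e-9))
```

Repeating this measurement millions of times per second across a sweeping laser is what builds the dense point clouds described here.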
The uses for this technology can be found in a number of industries, such as urban planning, archaeology, environmental monitoring, and autonomous cars. LIDAR scanners are essential for autonomous vehicles because they generate accurate, up-to-date maps of their environment, which facilitate navigation and obstacle detection. Due to its extraordinary accuracy in mapping the terrain, LIDAR is used by archaeologists to uncover hidden features and landscapes. Furthermore, LIDAR offers useful information for evaluating the features of the terrain and vegetation structure in forestry and environmental sciences. LIDAR scanners are a vital tool in contemporary mapping and sensing, revolutionizing our understanding of and interactions with the physical world thanks to their accuracy and versatility.
Portable 3D Scanners
Portable 3D scanners are becoming increasingly popular due to their ease of use and ability to capture 3D data quickly and accurately. These small, lightweight scanners are ideal for field work and on-site applications. With the ability to scan objects on the go, portable 3D scanners have opened up new possibilities for industries such as architecture, engineering, construction, and manufacturing.
Polycam is a 3D scanning phone app
With the help of the iOS app Polycam, you can turn your smartphone into a portable and easily accessible 3D scanner. It makes use of photogrammetry, a method that builds a comprehensive 3D model by analyzing several images. Users use the camera on their device to take a sequence of pictures of an object or scene from various perspectives. After processing these photos, the app creates a three-dimensional depiction of the subject. Because of its intuitive interface, Polycam is well-suited for a wide range of applications, from professionals creating 3D models for design or documentation to hobbyists taking pictures of objects.
Lidar, on the other hand, is a technology that makes precise 3D maps of environments by measuring distances using laser light. Lidar sensors are built into some smartphones and tablets, though they aren’t apps per se. This allows apps that make use of this technology to function. Lidar apps make use of the sensor’s depth data collection capabilities to provide precise measurements, improved augmented reality experiences, and, in certain situations, 3D scanning capabilities. These applications profit from Lidar’s ability to measure distances quickly and precisely, which makes them useful instruments for a variety of uses, such as interior design, gaming, and navigation.
Google Maps use 3D photogrammetry
It is true that Google Maps uses 3D photogrammetry to produce three-dimensional depictions of buildings and landscapes. Photogrammetry is the process of extracting 3D information from 2D images. In the case of Google Maps, this procedure is applied to street-level, aerial, and satellite imagery.
Google Maps creates its 3D models by examining several photos of a location shot from various angles. After that, photogrammetric processing is applied to these pictures to produce precise and comprehensive 3D models of the surrounding area, structures, and other features. A more realistic and immersive depiction of the mapped environment is the end result.
Although the main 3D modeling technology used by Google Maps is photogrammetry, it’s important to remember that other technologies, like LiDAR (Light Detection and Ranging), may also be used in certain situations or environments to improve the precision and detail of the mapping data. Nonetheless, a crucial element of Google Maps’ methodology for generating three-dimensional depictions of the globe is still photogrammetry.
What is perspective?
In the visual arts, perspective describes the method used to depict three-dimensional objects on a two-dimensional surface in order to give the impression of depth. Mathematically, linear perspective uses convergent lines that meet at a vanishing point on the horizon. By using this technique, objects appear smaller as they get farther away, which helps to portray spatial relationships realistically. Two-point perspective, which is appropriate for scenes viewed from different angles, consists of two vanishing points as opposed to one in one-point perspective, which only has one.
Often referred to as atmospheric perspective, aerial perspective creates depth by depicting far-off objects with less color saturation, contrast, and detail. Due to atmospheric conditions, objects that are farther away may appear hazier and acquire a bluish tint. These methods are essential to the visual arts, architecture, and design because they allow artists to accurately depict depth and space on flat surfaces. By enabling viewers to sense the spatial depth and dimensionality of the created imagery, perspective knowledge and application enhance the realism and coherence of artistic and design endeavors.
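Linear perspective has a precise mathematical core: dividing by depth, which is why objects shrink toward the vanishing point as they recede. A small sketch of the pinhole projection (the function name `project` and the unit focal length are my own choices):

```python
def project(point, focal_length=1.0):
    """Pinhole/linear perspective: divide by depth z, so distant
    points land nearer the image centre (the vanishing point for
    lines receding straight from the viewer) and appear smaller."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# The same point, 2 units to the right, at depths 2 and 8:
print(project((2.0, 0.0, 2.0)))  # (1.0, 0.0), large on screen
print(project((2.0, 0.0, 8.0)))  # (0.25, 0.0), four times smaller
```

Aerial perspective, by contrast, is not geometric at all: it comes from atmospheric scattering, which is why it reduces contrast and saturation rather than size.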
6) Case Study post on Reality Capture
The Dream Life of Driverless Cars – The New York Times
Matthew Shaw and William Trossell, the geniuses behind ScanLAB Projects, are showcased for their innovative work in an article by Geoff Manaugh in the New York Times Magazine on November 11, 2015. This case study looks into the relationship between art, technology, and the rapidly developing autonomous-car landscape, shedding light on how ScanLAB’s creative methodology captures the unexplored areas of the urban experience.
Shaw and Trossell’s passion for 3-D scanning led to the founding of ScanLAB Projects in 2010 (ScanLAB Projects), which went on to become a leading London design studio. Their early work defied standard scanning conventions by pushing the boundaries of laser-scanning technology into unexpected areas, capturing blurry landscapes and Arctic ice floes.
At the heart of ScanLAB’s game-changing idea is mapping London through the “robot eyes” of autonomous vehicles. Shaw and Trossell highlight the shift from human to machine vision as vehicles grow more autonomous. Their objective is to investigate the “sideline stuff,” the “peripheral vision” that autonomous cars incidentally observe.
By intentionally deactivating some sensors, ScanLAB found unrealized creative potential in the errors and misreadings that arise from the sensory limitations of autonomous vehicles. Whereas standard self-driving development treats these faults as bugs to fix, Shaw and Trossell welcomed them. This unintended data evolved into a new kind of design work that offers a rare glimpse into the machine “dreams” of driverless cars.
The resulting digital representations of London defy the ordinary. The Houses of Parliament repeat and stutter, and time-stretched double-decker buses become featureless mega-structures that block roadways, producing a strange and unsettling scene. Trossell describes these artefacts as “mad machine hallucinations,” a fitting name for an imaging technology that has, in a sense, unleashed a monster.
The project by ScanLAB proposes a significant change in perception and experience of the contemporary landscape. It introduces a radically inhuman viewpoint on the built environment, challenging the conventional concept of perception. There’s talk of a new Romanticism where self-sufficient machines become urban landscape photographers.
Harvard Reference
Manaugh, G. (2015) ‘The Dream Life of Driverless Cars’, The New York Times Magazine, 11 November. Available at: https://www.nytimes.com/2015/11/15/magazine/the-dream-life-of-driverless-cars.html
Rushil_21551976_Assighment_01
Week 7: Reality Capture (Photogrammetry) and VFX
How are Cameras and Photography connected to Reality Capture?
Photogrammetry: Photogrammetry uses photographs taken from multiple angles to create a 3D model of an object or environment. The software analyzes the photographs to identify common points and calculates the distance between them to create a model.
Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure or space and converting them into 2D or 3D digital models.
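The core of photogrammetry is that a point matched in two photographs taken from different positions shifts between the images, and the size of that shift (the disparity) encodes depth. A minimal sketch of the simplest two-camera case is below, assuming an idealised side-by-side stereo pair; the pixel positions, focal length, and baseline are invented for the example:

```python
def stereo_depth(x_left, x_right, focal_length, baseline):
    """Estimate the depth of a point matched in two photos taken by
    side-by-side cameras -- the simplest photogrammetric setup.

    disparity = how far the matched point shifts between the images;
    a larger shift means the point is closer to the cameras.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_length * baseline / disparity

# A feature matched at pixel x=120 in the left photo and x=100 in the
# right photo, with a 1000-pixel focal length and cameras 0.5 m apart:
depth = stereo_depth(120, 100, focal_length=1000, baseline=0.5)
print(depth)  # 25.0 metres
```

Real photogrammetry software repeats this kind of triangulation for thousands of matched points across many photographs, with calibrated camera poses, to build up the full 3D model.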
Examples
Verisimilitude
A movie is said to have verisimilitude when it feels realistic: when its subjects, characters, and settings resemble actual life or reproduce its significant and essential aspects.
Hyperrealism
Hyperreality in film is essentially a visual language, since images are the most effective way to give the viewer a hyperreal experience. Visuals that represent a person’s needs and desires transport them into a hyperreal setting. The study asks whether the world shown in the media is reel or real.
A simulation is an imitation or representation, over time, of a real-world system or process. It entails building a model that replicates the original system’s behavior under certain conditions. Simulations may serve a wide range of purposes, such as training, testing, analysis, or entertainment.
A variety of tools, including computer programs, mathematical models, physical models, and combinations of these, can be used to implement simulations. The main idea is to simulate a system’s key components so that, without actually interacting with the system, one can study its behavior, forecast its future, or obtain insights into how it works.
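The idea of replicating a system's key components in a model can be shown with a toy example: stepping a simple physical rule forward in time instead of observing the real process. This is only an illustrative sketch; the cooling model, rate, and temperatures are assumptions chosen for the example:

```python
def simulate_cooling(temp, ambient, rate, steps, dt=1.0):
    """Simulate Newton's law of cooling with simple Euler time steps.

    Each step, the temperature moves toward the ambient temperature
    in proportion to the current difference -- a model standing in
    for the real physical process, which we never have to touch.
    """
    history = [temp]
    for _ in range(steps):
        temp += -rate * (temp - ambient) * dt
        history.append(temp)
    return history

# A 90 degC coffee cup in a 20 degC room, losing a quarter of the
# temperature gap each minute:
curve = simulate_cooling(90.0, 20.0, rate=0.25, steps=3)
print(curve)  # [90.0, 72.5, 59.375, 49.53125]
```

The point is the one made above: by running the model, we can study the system's behavior and forecast its future without interacting with the real thing.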
Google Maps
The idea that we can perceive reality through simulations like Google Maps raises interesting questions about how we interact with the outside world. Google Maps displays details about places, streets, and landscapes, acting as a digital version of the real world. Users interact with a simulated version of reality as they move through this virtual representation.
Utilizing Google Maps can be thought of as a kind of pre-experience or sneak peek at actual situations. People can prepare and acquaint themselves with an environment before visiting it in person by virtually exploring it. This pre-experience gained through simulation can affect one’s expectations, judgment, and general understanding of the real world.
Not only that, but the concept of looking at the world through a simulation goes beyond navigation. Other ways to experience reality include augmented reality, virtual reality, and different digital interfaces. The line separating simulated and real experiences gets increasingly hazy as technology develops. Engaging with digital simulations not only modifies our viewpoints but also invites contemplation on the dynamic interplay between the virtual and physical facets of our existence. Essentially, the virtual worlds we interact with could serve as a prism through which we perceive and understand the outside world before going there.
Sign-order: Phases of the Image
1) It is the reflection of a profound reality:
It means that there is an image or representation that reflects a deeper, more significant reality. What is shown here is an accurate representation of something more profound or significant.
Example: A theme park may be designed to mimic a real city, with streets, buildings, and various attractions. In this sense it reflects the appearance and elements of an actual city, as Disneyland does.
2) It masks and denatures a profound reality:
In this case, the representation is understood to both reflect and fake the true character of the actual reality. It hides the true nature of what it stands for, almost like a mask.
Example: A theme park can mimic the exterior features of a city, but it lacks the true detail, heritage, and organic growth of an actual city. For the purpose of amusement, it simplifies and distorts the realities of urban life, as Disneyland’s castles do.
3) It masks the absence of a profound reality:
This suggests that the representation is meant to hide the fact that it is based on a false reality. In actuality, there is no depth or meaning; the image or representation gives the impression that there is.
Example: It’s possible that the theme park hides the fact that it’s not a true city with real people, true cultural dynamics, and the complicated problems towns face. Without providing any real substance, it gives the appearance of urban life.
4) It has no relation to any reality whatsoever: it is its own pure simulacrum:
Taking a more extreme stance, this statement suggests there is no connection whatsoever between the representation and any underlying reality. It becomes its own reality and continues to exist on its own as a simulation, a copy without an original.
Example: Taken to the extreme, the theme park becomes a self-contained universe disconnected from any actual city. It no longer refers to the real urban experiences it once mimicked; it exists purely for entertainment, much as Disneyland Paris presents its own image of America.
Week 9: Virtual Filmmaking
This week, we concentrated on virtual production, the final topic of the module.
Virtual production is a technique used in film and television production to create realistic environments and effects on a virtual set using technologies such as motion capture, augmented reality, and computer-generated imagery (CGI).
The Virtual Production of The Mandalorian Season One by ILM
We viewed a video that The Pulse at Unreal had created, titled “Filmmaking in real time: The shift to virtual production.”
Unreal Engine use in Filmmaking
Unreal Engine has revolutionized filmmaking by providing a powerful, user-friendly platform for producing realistic environments and visual effects. Its real-time rendering capabilities let filmmakers view sharp images immediately, eliminating the need for drawn-out post-production. The engine’s large asset library and intuitive interface make building detailed environments easier and more efficient, while its Blueprint visual scripting system and sophisticated lighting and shading tools let filmmakers create intricate interactions and realistic scenes without heavy coding. Ultimately, Unreal Engine’s revolutionary influence lies in democratizing tools once reserved for expensive productions, enabling artists to realize their visions with unprecedented ease and speed.
Unreal Engine use in Gaming
Unreal Engine, a powerful game development framework, has become a mainstay of the gaming industry. Renowned for its cutting-edge graphics, real-time rendering, and scalability, it lets developers create visually stunning and immersive gaming experiences. Its flexible toolkit, which includes the Blueprints visual scripting system, allows both experienced developers and novices to create intricate game mechanics and interactions.
The cross-platform capabilities of Unreal Engine enable the development of games for a variety of platforms, including PCs, consoles, and mobile devices, while its asset management and content creation tools simplify the design process. The Unreal Engine has been instrumental in influencing the visual style of contemporary video games, establishing new benchmarks for realism and interactivity with its emphasis on photorealistic graphics and dynamic environments. The fact that Unreal Engine is being used by so many game developers attests to its influence, as the engine fosters innovation and raises the standard of gaming across the board.
Week 10
Assignment 02 Question
Presentation slides
Assignment 2 Essay