Investigative Study

Week 1:

CGI is for Losers!

“Why do some filmmakers say no to CGI?”

Some filmmakers, such as Christoph Waltz, are against the use of CGI in films because filmmaking began as an art form centred on acting. This meant that even if an early film did not look especially aesthetically pleasing, it could still be well recognised as long as its characters were well acted.

As more CGI was introduced to cinema, for example in the Marvel Cinematic Universe films, the demand for it increased drastically. The idea of a superhero is fictional: the elements used to bring these “superheroes” to life are not real, so filmmakers need CGI to create them, such as a laser blast emitted from a character’s eyes. Originally, a character with this ability was written for comic books, a medium where acting does not dictate how good the story is, unlike film. In a comic book it works; in a film, adding this laser to a character can overshadow their acting ability, because the character being played would not work without the laser. Hence, some filmmakers are disappointed that CGI is becoming the main driving factor in how well a role comes across.

The “loser” label Waltz uses implies that CGI is lazy, in the sense that an actor becomes heavily dependent on their character’s CGI abilities within the film’s world and can sacrifice some of their acting skill, because the CGI takes the spotlight as the main driving factor in how good a scene or character appears to an audience.

What impression might these comments give to an audience?

Claims like the one Waltz made are intended to drive audiences away from CGI-heavy films. Following the logic I explained earlier: if CGI is “lazy” and “for losers” because actors do not have to improve their acting to be featured in CGI scenes, then practical effects would force an actor to improve, because they need to interact with a physical environment. So if a film can only be good because of the acting, CGI would be rendered useless.

However, films like “Gravity” (2013) rely heavily on CGI in a way that still requires actors to be aware of their non-practical, digital environment. Space scenes, for example, are impossible to film for real today and therefore have to be made digitally. This counters the idea that using CGI is lazy.

Moreover, human beings are curious by nature. What I mean is that an audience would eventually get bored of purely non-fictional media; we need bigger stories, stories that are alien to us, to really keep us entertained. I believe filmmakers should not sacrifice a good story simply because acting is not the main driving factor. Even today, films with CGI elements, such as “Star Wars” with its lightsabers, build custom training grounds to teach actors how to adapt to a fake element, which does involve their acting skills.

Potential topics for my investigative study FMP

1- How close an actor is to a specific VFX based role.

Actors such as Robert Downey Jr. and Hugh Jackman are heavily associated with their CGI-based character roles, Iron Man and Wolverine. Most audiences tend to refer to them by their roles instead of their names. By contrast, actors like Robert De Niro and Al Pacino are referred to by their names, and when they are brought up in conversation, how good their acting career has been is the centre of the conversation, rather than any one of their roles. Even when an actor plays a non-VFX role multiple times, they are still not associated with that role the way VFX roles are. My question is: what makes audiences give a new identity to these VFX-based actors?

2- Human vs AI production: who can make the better production within a given timeframe?

I will conduct two live industry briefs based on the same idea: one where I predominantly use AI and one where I do not use AI at all. Afterwards, I will compare the results and see where I worked better. For example, I could have someone learn how to create a non-VFX production using AI from scratch and compare how quickly they learned that skill with me starting a VFX production from scratch.

3- Virtual production vs green screens

Virtual production has rapidly emerged as a game-changing alternative to traditional green screen filmmaking. Using LED walls powered by real-time rendering engines like Unreal Engine, filmmakers can display digital environments directly on set, allowing actors to perform within immersive worlds instead of in front of a blank backdrop. This topic relates to the “lazy” element I discussed earlier in “CGI is for Losers!” because it concerns how actors should interact with the CGI rather than having it added around them in post-production.

Deepfakes

This video shows a computer-generated likeness of Queen Elizabeth II delivering a Christmas message. It looks and sounds exactly like her, but it is not actually her. This technology, called a “deepfake”, uses artificial intelligence to create realistic but fake videos, making it seem like someone is saying or doing things they never did. In this particular video, the deepfake Queen talks about current events and even performs a TikTok dance, highlighting how convincing these fakes can be.

Comparing “The Mandalorian” deepfake with Queen Elizabeth’s deepfake

The Queen Elizabeth and Luke Skywalker deepfakes highlight the technology’s divergent paths of social commentary versus cinematic storytelling. Channel 4’s “Deepfake Queen” was a transparently fake satire created by a professional studio to deliberately warn the public about the dangers of misinformation, using the spectacle to deliver a message about trust. In contrast, the deepfake of a young Luke Skywalker in The Mandalorian was a narrative tool intended to be accepted as “real” within the story, famously showcasing how a fan’s version could surpass the official studio’s initial CGI efforts and lead to a new era of digital characters in film. While the Queen’s deepfake was designed to make you question what you see, Luke’s was created to make you believe in it.

Ethicality

The primary ethical implication of deepfake technology is its profound ability to erode trust and manipulate reality, threatening both individuals and society. For individuals, this manifests as a gross violation of consent and identity, enabling malicious acts like the creation of non-consensual pornography, targeted harassment, and sophisticated financial fraud through impersonation. On a societal level, deepfakes pose a critical threat to democracy and social cohesion by fuelling political disinformation, allowing bad actors to fabricate “evidence” of events that never happened or, conversely, to dismiss genuine evidence as a “deepfake”—a phenomenon known as the “liar’s dividend.” This undermines the integrity of journalism, the justice system, and our shared sense of reality, forcing us to confront a world where we can no longer instinctively trust what we see and hear.

Unreal: The VFX Revolution podcast

The first episode of Unreal: The VFX Revolution introduces listeners to the origins of modern visual effects, focusing on the 1970s moment when George Lucas, John Dykstra, and a small group of innovators founded Industrial Light & Magic in a Van Nuys warehouse. Drawing on interviews with VFX pioneers such as Robert Blalack, Richard Edlund, Dennis Muren, and Douglas Trumbull, the episode explores how early film artists combined ingenuity, engineering, and risk-taking to create the ground-breaking effects of Star Wars and other classics. By situating these achievements against the backdrop of pre-digital, mechanical, and optical methods, the programme highlights the leap that VFX represented at the time, while foreshadowing the digital transformations to come.

This episode not only documents technical milestones but also emphasises the collaborative and experimental spirit that defined the birth of the VFX industry. As the series opener, it establishes both a historical baseline and a narrative arc that will carry through subsequent episodes, underscoring how creative ambition and technological innovation intersected to redefine cinematic storytelling.

BBC Radio 4 (2023) Unreal: The VFX Revolution, Episode 1: A long, long time ago… [Podcast]. BBC. Available at: https://www.bbc.co.uk/programmes/m001nvyb (Accessed: 30 September 2025).

Week 2:

AI PHOTOGRAPHY

Photographer admits prize-winning image was AI-generated

The article highlights several key debates sparked by Boris Eldagsen’s AI-generated image. These include questions about the definition of photography and whether an image created without a camera can truly be called a photograph; concerns about authenticity and transparency, as many argue artists should clearly disclose AI use; and issues of fairness in competitions, since AI images may have an advantage over traditional photographs. It also raises questions about creativity and authorship, exploring who deserves credit—the human or the machine—and the broader role of AI in art, with some seeing it as a new medium and others as a threat to artistic integrity. Finally, the case touches on public trust and truth, as AI-generated “photos” risk undermining confidence in photography as a faithful representation of reality.

Key debates:

  • Definition of photography: Should AI-generated images be considered photography if no camera was used?

  • Authenticity and transparency: Should artists be required to clearly disclose when AI tools are used in creating images?

  • Fairness in competitions: Is it fair for AI-generated works to compete against traditional photographs taken by humans?

  • Creativity and authorship: Who deserves credit for an AI image — the human who prompted it or the machine that produced it?

  • Role of AI in art: Is AI a legitimate new artistic medium or a threat to traditional artistic skills?

  • Public trust and truth: Do AI-generated “photos” undermine photography’s reputation as a truthful representation of reality?

AI is transforming visual effects production by making complex tasks faster, cheaper, and more accessible. It can automatically generate realistic environments, enhance motion capture, de-age actors, and even create lifelike characters or crowd scenes that once required large teams and long hours. This automation allows artists to focus more on creativity rather than repetitive technical work. However, challenges remain—AI can produce inconsistent or uncanny results, raising concerns about quality control, artistic originality, and the displacement of human jobs. There are also ethical and legal issues, such as the use of AI-generated likenesses and questions of ownership over AI-created content. Despite these challenges, the integration of AI offers major advantages, including greater efficiency, reduced costs, and expanded creative possibilities in film and media production.

Prompt photography (AI-generated images) and traditional camera photography differ in how they create and represent visual reality. Traditional photography relies on capturing light through a lens to record real moments, blending artistic vision with technical skill in composition, lighting, and timing. In contrast, prompt photography uses text or data prompts to generate images entirely through algorithms, creating visuals that may look photographic but are synthetic rather than captured from the real world. Yet, the two overlap in their artistic intent—both aim to evoke emotion, tell stories, and explore visual aesthetics. Technologically, both depend on digital tools and editing software, though AI extends creative control beyond what a camera can see. In terms of visual culture, AI-generated imagery challenges how society defines authenticity, originality, and artistic authorship, pushing the boundaries of what “photography” means in the digital age.

 

Week 4

Essay proposal