Investigative Study

WEEK-01

Deepfake

  • Write a brief description – what is a deepfake?

Deepfakes are synthetic media in which images, video, or audio taken from one person are edited or generated using artificial intelligence (AI) and applied to another person.

  • How does it work? How is it believable?

Deepfake systems typically train two AI models against each other. The first model scans audio and video of the target person and generates tampered images or video; the second model compares the output with real footage and tries to spot the differences. The process repeats many times until the second model can no longer tell the fake apart from the real person, which is what makes the result so believable.
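To make this two-model loop concrete, below is a minimal, illustrative sketch of generative adversarial training in PyTorch. It uses random tensors as stand-ins for aligned face crops and tiny networks; real deepfake pipelines add face detection, alignment, and much larger models, so this is only a sketch of the idea, not an actual deepfake tool.

```python
import torch
import torch.nn as nn

# Toy stand-ins for aligned face crops: random 3x32x32 tensors in [-1, 1].
real_faces = torch.rand(256, 3, 32, 32) * 2 - 1

latent_dim = 64

# Generator: maps a random latent vector to a 3x32x32 "face".
generator = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, 3 * 32 * 32), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    batch = real_faces[torch.randint(0, len(real_faces), (32,))]

    # 1) Train the discriminator to tell real faces from generated ones.
    z = torch.randn(32, latent_dim)
    fake = generator(z).view(32, 3, 32, 32).detach()
    d_loss = loss_fn(discriminator(batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    z = torch.randn(32, latent_dim)
    fake = generator(z).view(32, 3, 32, 32)
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

When the discriminator's accuracy drops towards chance, the generator's output has become hard to distinguish from the real material, which is the loop described above.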

  • List some examples with images.
Anderson Cooper 4K Original/(Deep)Fake Example

 

Deepfake of Volodymyr Zelenskyy

 

Original/Deepfake Elon Musk

 

Mark Zuckerberg

Source: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them

You Won’t Believe What Obama Says In This Video!

 

The following TED Talk explains deepfake technology and ways to detect fake videos.

Fake videos of real people — and how to spot them | Supasorn Suwajanakorn

 

  • List keywords associated with the topic.
  • Core Term: “Deepfakes” or “Deep fake”
  • Technical Terms: “Generative adversarial networks (GANs)”, “Synthetic media”, “AI-generated content”, “Face swapping technology”
  • Related Issues: “Media manipulation”, “Digital ethics”, “Misinformation”, “Disinformation”, “Deception in digital media”, “AI in media production”
  • Contextual Fields: “Digital identity”, “Privacy concerns”, “Political deepfakes”, “Media authenticity”

We have discussed the industries that utilize deepfake technology and artificial intelligence (AI).

Film Productions

Race to the bottom – the cheapest bid or pitch on a project wins. Is this the economic model of VFX?

“Fix it in post” – VFX can over-promise, but it can also bring innovation.

AI offers benefits in VFX, such as automation and a reduction in entry-level tasks like roto and cleanup. However, many negatives were raised, for example the impact on creativity and the hyperreal look of AI imagery. What are the positives?

AI 3D modelling

Some experiments with https://www.meshy.ai/discover?via=aianimation

Ethics

Bringing past actors back from the dead. Is actors’ data (e.g. motion capture) safe?

Uncanny valley aesthetics in VFX

CG animals vs real animals in movies (Nat Geo)

Photorealism – degrading or correcting the CG image so it appears as if captured by a real camera.

WEEK-02

Boris Eldagsen’s award-winning picture

This photograph won the creative category of the Sony World Photography Awards 2023. However, no camera took the picture: it was an AI-generated image by Boris Eldagsen, a 53-year-old German photographer. He said he entered it “as a cheeky monkey” to test whether competitions were prepared for the beginning of the AI era. After winning, he explained how he had generated the image and declined the prize, although Sony still listed him as the winner. Eldagsen calls this kind of work “promptography” (photograph-like images generated from text prompts).

Fane Saunders, T. (2023). What AI means for the arts. The Daily Telegraph, 27 May, pp.4–6.

WEEK-05

WRITERS’ STRIKE

 

Los Angeles Times

https://www.latimes.com/entertainment-arts/business/story/2023-09-24/writers-strike-over-wga-studios-reach-deal-actors
Proposal IDEAS

How can character models developed using artificial intelligence (AI) technology feasibly be used in industry animation workflows?

 

Investigative Study Research Proposal 

Research Question 

How can character models developed using artificial intelligence (AI) technology feasibly be used in industry animation workflows?

Introduction

The visual effects and animation industry relies heavily on skilled labour and time-consuming traditional techniques. In this approach, artists use professional software such as Maya and ZBrush to model, texture, and rig characters from scratch. Traditional character artists have complete control over the model and can modify and adjust it to suit the needs of the project. Working in the industry as a 3D/2D artist requires a considerable amount of knowledge, a broad skill set, and years of practice and experience. An animated feature, for instance, requires many skilled artists to complete the work, so the production budget is high.

Implementing AI-based software will affect the profession in general, as well as individual work and careers. AI-generated 3D modelling reduces costs by automating much of the labour-intensive work. While studios can produce more content at reduced cost, initial investment may be required to develop or acquire AI tools and technologies, and this shift could put many professional careers in the industry at risk.

I will conduct a Turing-style test to evaluate an artistic AI’s ability to create 3D models. If the system can automatically produce 3D models that meet industry standards without human intervention, and reviewers cannot reliably tell them apart from artist-made models, it will pass the test.
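As a minimal sketch of how such a blind test could be scored (plain Python; the render IDs, labels, and the stand-in reviewer are hypothetical placeholders, not part of any existing tool):

```python
import random

# Hypothetical pool of renders: each entry is (render_id, true_source).
renders = [("r01", "ai"), ("r02", "human"), ("r03", "ai"), ("r04", "human")]

def run_blind_test(renders, ask_reviewer):
    """Show renders in random order and record whether the reviewer
    can tell AI-generated models from artist-made ones."""
    shuffled = renders[:]
    random.shuffle(shuffled)
    correct = 0
    for render_id, true_source in shuffled:
        guess = ask_reviewer(render_id)  # expected to return "ai" or "human"
        if guess == true_source:
            correct += 1
    accuracy = correct / len(shuffled)
    # Accuracy near 0.5 means the reviewer is effectively guessing,
    # i.e. the AI models are indistinguishable in this sample.
    return accuracy

# Example usage with a stand-in reviewer that always guesses "human".
print(run_blind_test(renders, lambda render_id: "human"))
```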

This research explores the feasibility of integrating AI-generated 3D character models into the animation industry’s workflows.

Image from: https://www.reddit.com/r/blender/comments/11ur31k/stumbled_upon_a_helpful_visual_guide_for_3d/?rdt=43878

 

The 3D production pipeline is an industry-standard process for creating 3D content for various mediums, such as movies, video games, and advertisements. It is divided into three main stages: pre-production, production, and post-production.

In the pre-production stage, the focus is on meticulous planning and conceptualization. Teams work on developing the story, creating storyboards, and producing concept art to define the visual style of the project. Designs for characters, environments, and props are finalized, and technical setups for the pipeline are prepared.

Creative ideas take shape in production. Artists build 3D models of characters, objects, and environments, adding detail through sculpting in Maya, ZBrush, or other 3D software. Textures are applied using tools like Substance Painter, while shaders define how materials interact with light. Characters are rigged with skeletons to enable animation, which can be done manually or with motion capture. Scenes are lit to create the appropriate atmosphere, and effects like fire or water are simulated using specialized software such as Houdini. Once everything is ready, rendering processes the 3D data into 2D visuals, producing the final images or videos.

The final stage, post-production, involves polishing the output. Compositing combines rendered layers into a seamless whole, and the editing department ensures the narrative flows coherently throughout the story.

Sound design enhances the project with sound effects, music, and voiceovers, while colour grading adjusts tones to create the desired mood. The project is finalized after extensive testing to resolve any issues. This process blends creativity and technology, streamlining the workflow to bring complex 3D projects to life.
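To keep this stage breakdown in view when discussing where AI fits, here is a small illustrative sketch in Python. The task lists and the "AI candidate" flags are my working assumptions for this study, not established facts about any studio's pipeline.

```python
# Illustrative map of the 3D production pipeline described above,
# flagging tasks that AI tools (such as Meshy) could plausibly assist.
PIPELINE = {
    "pre-production": {
        "story development": False,
        "storyboarding": False,
        "concept art": True,      # e.g. text-to-image exploration
        "design lock": False,
    },
    "production": {
        "modelling": True,        # e.g. image/text-to-3D generation
        "texturing": True,
        "rigging": True,
        "animation": False,
        "lighting": False,
        "fx simulation": False,
        "rendering": False,
    },
    "post-production": {
        "compositing": False,
        "editing": False,
        "sound design": False,
        "colour grading": False,
    },
}

def ai_assisted_tasks(pipeline):
    """List every task flagged as a candidate for AI assistance."""
    return [task for stage in pipeline.values()
            for task, flag in stage.items() if flag]

print(ai_assisted_tasks(PIPELINE))
```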

 

 

Methodology

The main methodology for this essay will be practical research, supported by secondary research through articles and books related to the topic. Through this research I will learn about different aspects of the traditional and AI-driven workflows, and I will compare the output of a ZBrush character model with that of a Meshy AI-generated model.

  • Explore industry reports on animation workflows, AI integration, and emerging technologies.
  • Look into case studies of animation studios already integrating AI, such as Pixar, DreamWorks, or independent game studios.
  • Compare the workflows before and after AI implementation – time taken, quality of output, and creative freedom (see the comparison sketch after this list).
  • Test the compatibility of AI-generated assets with industry-standard animation software, such as Unreal Engine or Maya.
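Below is a minimal sketch of how the before/after comparison could be recorded and summarised in Python. The metric names are choices I intend to measure during the experiments, and the values shown are placeholders, not measured results.

```python
from dataclasses import dataclass

@dataclass
class WorkflowMetrics:
    """Measurable points recorded for one character model."""
    workflow: str          # e.g. "ZBrush (traditional)" or "Meshy (AI)"
    hours_to_model: float  # time spent to reach a usable base mesh
    triangle_count: int    # density of the delivered mesh
    cleanup_passes: int    # manual fixes needed before rigging

def compare(a: WorkflowMetrics, b: WorkflowMetrics) -> None:
    """Print the two workflows side by side for the write-up."""
    print(f"{'metric':>16} | {a.workflow:>20} | {b.workflow:>20}")
    for field in ("hours_to_model", "triangle_count", "cleanup_passes"):
        print(f"{field:>16} | {getattr(a, field):>20} | {getattr(b, field):>20}")

# Placeholder values only - to be replaced with the numbers measured
# during the practical experiments.
traditional = WorkflowMetrics("ZBrush (traditional)", 0.0, 0, 0)
ai_generated = WorkflowMetrics("Meshy (AI)", 0.0, 0, 0)
compare(traditional, ai_generated)
```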

Sources

  • Lev Manovich (1994). https://manovich.net/ (Accessed: October 24, 2024).
  • Runway | Tools for human imagination. (2018). https://runwayml.com/ (Accessed: October 22, 2024).
  • Meshy (2023). https://www.meshy.ai/discover (Accessed: October 24, 2024).
  • HSE ART AND DESIGN SCHOOL (2020) Lev Manovich: ‘Artificial Intelligence, Aesthetics, and Future of Culture.’ https://www.youtube.com/watch?v=6t6ZpNHYa5M (Accessed: October 25, 2024).

Keywords

  • AI in Character Modelling
  • Artificial Intelligence in Animation
  • AI-generated Models
  • Deep Learning for Animation
  • Generative Adversarial Networks (GANs)
  • Neural Networks for Character Creation
  • Automated Character Modelling
  • Digital Character Design
  • Machine Learning in Animation
  • 3D Model Generation
  • AI Animation Tools
  • Character Rigging with AI
  • Realism in Digital Characters
  • AI-enhanced Animation Pipelines
  • AI-driven Workflow Optimization
  • Creative Automation
  • AI Tools for Digital Artists
  • Animation Production Techniques
  • Ethical Considerations in AI Animation
  • Human-AI Collaboration in Animation

Bibliography

  • Manovich, L. (2021) ‘Artificial Aesthetics, Chapter 1: “Even an AI could do that”,’ Medium, 30 November. https://medium.com/@manovich/artificial-aesthetics-chapter-1-even-an-ai-could-do-that-b75a6266da03. (Accessed: October 22, 2024)
  • Fane Saunders, T. (2023). What AI means for the arts. The Daily Telegraph, 27 May, pp.4–6. (Accessed: October 22, 2024)

Figures

I carried out the following experiment with Meshy AI: I uploaded my picture to Meshy AI and tried to develop a 3D model from it.

Image to 3D by Meshy.ai – https://www.meshy.ai/discover

Meshy AI allows the model to be exported as an FBX file, which can then be opened in Maya for further editing.
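As a minimal sketch of this import and inspection step, the following could be run in Maya's Script Editor (Python, maya.cmds). The file path is a placeholder for the Meshy export, and the FBX plug-in (fbxmaya) is assumed to be available in the Maya installation.

```python
# Run inside Maya's Script Editor (Python tab).
import maya.cmds as cmds

fbx_path = "C:/exports/meshy_character.fbx"  # placeholder path to the Meshy export

# Make sure the FBX plug-in is loaded before importing.
if not cmds.pluginInfo("fbxmaya", query=True, loaded=True):
    cmds.loadPlugin("fbxmaya")

# Import the AI-generated model into the current scene.
imported_nodes = cmds.file(fbx_path, i=True, type="FBX", returnNewNodes=True)

# Basic checks on the imported meshes: triangle count and UV sets,
# which give an idea of how much clean-up is needed before rigging.
for mesh in cmds.ls(imported_nodes, type="mesh"):
    tri_count = cmds.polyEvaluate(mesh, triangle=True)
    uv_sets = cmds.polyUVSet(mesh, query=True, allUVSets=True) or []
    print(mesh, "triangles:", tri_count, "UV sets:", uv_sets)
```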

Other AI tools

 

 

Books

http://manovich.net/index.php/projects/artificial-aesthetics

https://medium.com/@manovich/artificial-aesthetics-chapter-1-even-an-ai-could-do-that-b75a6266da03

 

Lev Manovich | Essays: What is Digital Cinema?

The Language of New Media – Lev Manovich

Cinema and Digital Media – Lev Manovich

New Media: A Critical Introduction (Second Edition)

The Digital Image and Reality: Affect, Metaphysics and Post-Cinema – Dan Strutt

 

Industry Reports and White Papers

Case Studies and Practical Articles

Ethical and Social Considerations

 

2. How have advancements in real-time rendering changed the VFX industry?

Explain what real-time rendering is and how it differs from traditional rendering methods.

Explore how game engines (Unreal Engine, Unity) are being used in VFX for films, TV shows, and live events.

Analyse case studies where real-time rendering was used in production (e.g., “The Mandalorian” with Unreal Engine).

 

Discuss the future implications of real-time rendering for virtual production and live events.

 

Bubbl.us – Create Mind Maps | Collaborate and Present Ideas