Wednesday, 17 November 2010

Technologies used in the “Avatar” movie

Writer-director James Cameron has made cinema history again with the breathtaking 3D technology in his latest sensation “Avatar”, which has swept the world and given a new dimension to cinema. Cameron helped develop technology that could revolutionize film-making by pushing 3D and CGI to new heights, mixing real footage with motion-captured CGI in an immersive 3D presentation. A brief review of the technologies that powered some of the movie's most stunning effects is given below:

Performance Capture and CGI

Cameron used computer-generated imagery (CGI) extensively in Avatar. He had already been using it since earlier films such as Terminator 2: Judgment Day, but in Avatar he employed a new technique called “image-based facial performance capture”, which required the actors to wear special headgear fitted with a small camera. As the actors performed, the camera recorded their facial movements, which were then mapped onto the virtual characters. Their body movements were relayed to a connected array of systems as they acted out their scenes on a ‘performance capture’ stage six times bigger than anything previously used in the industry. The result is a remarkable emotional authenticity in the movie's characters. Around 70% of the movie's footage is CGI, including its female lead.
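To get a feel for what “driving a virtual character from captured facial data” means in practice, here is a rough Python sketch of the general idea. This is my own simplification, not Cameron's or Weta's actual pipeline, and all the point names and numbers are made up: tracked facial points are compared against a neutral pose, and their displacement is turned into blendshape weights on the digital character.

# Minimal, illustrative sketch: mapping the displacement of tracked facial
# points from a head-mounted camera onto blendshape weights of a digital
# character. All names and numbers here are hypothetical.

import numpy as np

# Neutral (rest) positions of a few tracked facial points, in camera space.
NEUTRAL = {
    "mouth_corner_l": np.array([-0.030, -0.020]),
    "mouth_corner_r": np.array([ 0.030, -0.020]),
    "brow_l":         np.array([-0.025,  0.045]),
    "brow_r":         np.array([ 0.025,  0.045]),
}

# Each blendshape listens to one point, one direction of movement, and a
# displacement that counts as "fully on".
BLENDSHAPES = {
    "smile_l":   ("mouth_corner_l", np.array([-0.5, 1.0]), 0.02),
    "smile_r":   ("mouth_corner_r", np.array([ 0.5, 1.0]), 0.02),
    "brow_up_l": ("brow_l",         np.array([ 0.0, 1.0]), 0.015),
    "brow_up_r": ("brow_r",         np.array([ 0.0, 1.0]), 0.015),
}

def solve_weights(frame_points):
    """Convert one frame of tracked points into blendshape weights in [0, 1]."""
    weights = {}
    for shape, (point, direction, full_range) in BLENDSHAPES.items():
        delta = frame_points[point] - NEUTRAL[point]
        # Project the displacement onto the direction this shape responds to.
        amount = float(np.dot(delta, direction / np.linalg.norm(direction)))
        weights[shape] = float(np.clip(amount / full_range, 0.0, 1.0))
    return weights

# One captured frame: the performer smiles slightly and raises the left brow.
frame = {
    "mouth_corner_l": np.array([-0.034, -0.004]),
    "mouth_corner_r": np.array([ 0.033, -0.006]),
    "brow_l":         np.array([-0.025,  0.056]),
    "brow_r":         np.array([ 0.025,  0.046]),
}
print(solve_weights(frame))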

Digital Animation

All of the movie's animation was rendered by Weta Digital, Peter Jackson's digital-effects studio, where a huge team of artists worked for over a year to turn the renderings into photo-realistic images. Every minute detail was taken care of: every tree, leaf and even rock was rendered individually, using innovative rendering, lighting and shading methods that consumed over a petabyte (1,000 terabytes) of hard-disk storage.





Cameron pioneered a specially designed camera built into a 6-inch boom that allowed the facial expressions of the actors to be captured and digitally recorded for the animators to use later. 



The virtual camera system in use on the set of the film. The motion-capture stage known as "The Volume" can be seen in the background.

Reference:
http://en.wikipedia.org/wiki/Avatar_%282009_film%29#Visual_effects

Tuesday, 16 November 2010

Stop motion animation:

Stop motion (also known as stop action or frame-by-frame) is an animation technique to make a physically manipulated object appear to move on its own. The object is moved in small increments between individually photographed frames, creating the illusion of movement when the series of frames is played as a continuous sequence. Clay figures are often used in stop motion for their ease of repositioning. Stop motion animation using clay is called clay animation or claymation.
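As a quick illustration of the principle, the following Python sketch “photographs” a ball in slightly different positions and plays the stills back as one clip. In real stop motion the frames would come from a camera, but the illusion works the same way; the frame count and timing here are arbitrary.

# Sketch of the stop-motion idea: an object is moved a tiny amount between
# individually captured frames, and playing the frames back in sequence
# creates the illusion of motion. Here the "photographs" are drawn with
# Pillow instead of taken with a camera.

from PIL import Image, ImageDraw

FRAME_COUNT = 24
WIDTH, HEIGHT = 320, 240

frames = []
for i in range(FRAME_COUNT):
    # "Reposition the object slightly" between shots: the ball creeps right.
    x = 20 + i * 10
    frame = Image.new("RGB", (WIDTH, HEIGHT), "white")
    draw = ImageDraw.Draw(frame)
    draw.ellipse([x, 100, x + 40, 140], fill="red")
    frames.append(frame)

# Play the stills back at a fixed rate (here ~12 fps) as one continuous clip.
frames[0].save(
    "stop_motion.gif",
    save_all=True,
    append_images=frames[1:],
    duration=1000 // 12,   # milliseconds per frame
    loop=0,
)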



1980s to present:

In the 1970s and 1980s, Industrial Light & Magic often used stop motion model animation for films such as the original Star Wars trilogy: the chess sequence in Star Wars, the Tauntauns and AT-AT walkers in The Empire Strikes Back, and various Imperial machines in Return of the Jedi are all stop motion animation, some of it using the Go Motion variant. Many shots in Raiders of the Lost Ark, including the ghosts, and the first two feature films in the RoboCop series use Phil Tippett's Go Motion version of stop motion.
Stop motion was also used for some shots of the final sequence of the first Terminator movie, as it was for the scenes of the small alien ships in Spielberg's Batteries Not Included (1987), animated by David W. Allen. Allen's stop motion work can also be seen in feature films such as The Crater Lake Monster (1977), Q - The Winged Serpent (1982), The Gate (1986) and Freaked (1993). Allen's King Kong Volkswagen commercial from the 1970s is now legendary among model animation enthusiasts.


Reference:
http://en.wikipedia.org/wiki/Stop_motion

Thursday, 11 November 2010

Task 2: Research proposal

Hey,

For Task 2, I have chosen animation as my field of interest in Autodesk 3ds Max. I am doing a Synoptic Assessment, i.e. I am combining Darren's module and Shane's module together. The following model was made in 3ds Max 2009 and will be animated.



  
In order to animate this mechanical beast, I have to look into quadrupedalism. Quadrupedalism is a form of land animal locomotion using four limbs or legs. An animal or machine that usually moves in a quadrupedal manner is known as a quadruped, meaning "four feet" (from the Latin quad for "four" and ped for "foot"). The majority of walking animals are quadrupeds, including mammals such as cattle, dogs and cats, and reptiles such as lizards.


Computer animation of quadrupedal locomotion

To animate this mechanical beast, I researched the anatomy of a tiger as a reference, along with the various movements tigers are able to perform. Because of the way quadruped skeletons are structured, tigers walk on what would be our toes and fingers, with the 'wrist' and 'ankle' held higher above the ground. This creates a rather different motion from that used by bipeds.
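Before keyframing anything in 3ds Max, I found it helpful to think of a quadruped walk as four copies of the same step curve offset in phase. The Python sketch below is my own rough simplification (not a 3ds Max script, and real tiger gaits are more subtle than this): each foot is planted for part of the cycle and swings for the rest, with the four legs offset by a quarter of the cycle.

# Rough sketch of blocking out a quadruped walk cycle: each leg repeats the
# same step curve, offset in phase. The 0.25 spacing approximates a
# lateral-sequence walk; the numbers are arbitrary.

import math

# Phase offsets for left-hind, left-fore, right-hind, right-fore legs.
PHASE = {"LH": 0.00, "LF": 0.25, "RH": 0.50, "RF": 0.75}
DUTY = 0.6          # fraction of the cycle a foot stays on the ground
STEP_HEIGHT = 15.0  # cm, arbitrary
CYCLE_FRAMES = 40

def foot_height(leg, frame):
    """Height of a foot above the ground at a given frame of the cycle."""
    t = (frame / CYCLE_FRAMES + PHASE[leg]) % 1.0
    if t < DUTY:
        return 0.0                      # stance: foot planted
    swing = (t - DUTY) / (1.0 - DUTY)   # 0..1 through the swing phase
    return STEP_HEIGHT * math.sin(math.pi * swing)

for frame in range(0, CYCLE_FRAMES, 5):
    heights = {leg: round(foot_height(leg, frame), 1) for leg in PHASE}
    print(frame, heights)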

 Steampunk Cheetah

 

References:

http://www.youtube.com/watch?v=YiEqRgCWa58&feature=player_embedded 

 

Task 1 : Technology Research

Hi,
For my Task 1, I have chosen motion capture technology, to highlight its aspects and weaknesses and how this technology may be advanced within the short to medium term.
 


Motion capture

Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement on to a digital model. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics. In filmmaking it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture.
Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt, or dolly around the stage driven by a camera operator while the actor is performing, and the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.
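The following Python sketch shows the basic idea behind replaying a motion-captured camera: recorded pan/tilt/position samples from the physical rig are turned into a per-frame view matrix for the virtual camera, so the CG elements are rendered from the same viewpoint as the live footage. The sample values and the matrix convention here are illustrative only, not any particular system's format.

# Simplified sketch: replaying captured camera pan/tilt/position samples on
# a virtual camera by building a world-to-camera matrix per frame.

import numpy as np

def view_matrix(position, pan_deg, tilt_deg):
    """Build a 4x4 world-to-camera matrix from captured pan/tilt/position."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    # Rotation about the vertical axis (pan), then the camera's x axis (tilt).
    r_pan = np.array([[ np.cos(pan), 0, np.sin(pan)],
                      [ 0,           1, 0          ],
                      [-np.sin(pan), 0, np.cos(pan)]])
    r_tilt = np.array([[1, 0,             0            ],
                       [0, np.cos(tilt), -np.sin(tilt)],
                       [0, np.sin(tilt),  np.cos(tilt)]])
    rotation = r_tilt @ r_pan
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = -rotation @ np.asarray(position, dtype=float)
    return m

# A few captured samples: the operator dollies forward while panning left.
captured = [
    {"position": [0.0, 1.7, 0.0], "pan":   0.0, "tilt": 0.0},
    {"position": [0.0, 1.7, 0.5], "pan":  -5.0, "tilt": 2.0},
    {"position": [0.0, 1.7, 1.0], "pan": -10.0, "tilt": 4.0},
]

for frame, sample in enumerate(captured):
    m = view_matrix(sample["position"], sample["pan"], sample["tilt"])
    print(f"frame {frame}: camera matrix\n{np.round(m, 3)}")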




Advantages

Motion capture offers several advantages over traditional computer animation of a 3D model:
  • More rapid, even real time results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation. For example: Hand Over
  • The amount of work does not vary with the complexity or length of the performance to the same degree as when using traditional techniques. This allows many tests to be done with different styles or deliveries.
  • Complex movement and realistic physical interactions such as secondary motions, weight and exchange of forces can be easily recreated in a physically accurate manner.
  • The amount of animation data that can be produced within a given time is extremely large when compared to traditional animation techniques. This contributes to both cost effectiveness and meeting production deadlines.
  • Potential for free software and third party solutions reducing its costs.

Disadvantages

  • Specific hardware and special programs are required to obtain and process the data.
  • The cost of the software, equipment and personnel required can potentially be prohibitive for small productions.
  • The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion.
  • When problems occur it is easier to reshoot the scene rather than trying to manipulate the data. Only a few systems allow real time viewing of the data to decide if the take needs to be redone.
  • The initial results are limited to what can be performed within the capture volume without extra editing of the data.
  • Movement that does not follow the laws of physics generally cannot be captured.
  • Traditional animation techniques, such as added emphasis on anticipation and follow through, secondary motion or manipulating the shape of the character, as with squash and stretch animation techniques, must be added later.
  • If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, over-sized hands, these may intersect the character's body if the human performer is not careful with their physical motion.
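The last point above is essentially a retargeting problem. A common workaround is to copy bone directions from the capture but use the character's own bone lengths instead of the raw positions; the toy Python sketch below shows the idea with a single arm. The skeleton names and numbers are made up, not any particular tool's rig.

# Toy retargeting sketch: if the character's proportions differ from the
# performer's, copy each bone's *direction* from the capture but use the
# character's own bone lengths, rather than copying joint positions directly.

import numpy as np

# Captured arm joints of the performer (shoulder -> elbow -> wrist), in metres.
performer = {
    "shoulder": np.array([0.00, 1.40, 0.0]),
    "elbow":    np.array([0.30, 1.40, 0.0]),
    "wrist":    np.array([0.55, 1.40, 0.0]),
}

# The cartoon character has a longer upper arm and forearm.
character_bone_lengths = {"upper_arm": 0.45, "forearm": 0.50}

def retarget_arm(perf, lengths, shoulder_target):
    """Rebuild the arm on the character using captured directions only."""
    out = {"shoulder": shoulder_target}
    upper_dir = perf["elbow"] - perf["shoulder"]
    upper_dir /= np.linalg.norm(upper_dir)
    out["elbow"] = out["shoulder"] + upper_dir * lengths["upper_arm"]
    fore_dir = perf["wrist"] - perf["elbow"]
    fore_dir /= np.linalg.norm(fore_dir)
    out["wrist"] = out["elbow"] + fore_dir * lengths["forearm"]
    return out

retargeted = retarget_arm(performer, character_bone_lengths,
                          shoulder_target=np.array([0.0, 1.6, 0.0]))
for joint, pos in retargeted.items():
    print(joint, np.round(pos, 3))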


Facial motion capture




Facial Motion Capture or Performance Capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce CG (computer graphics) computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, it results in more realistic and nuanced computer character animation than if the animation was created manually.
A facial motion capture database describes the coordinates or relative positions of reference points on the actor's face. The capture may be in two dimensions, in which case the capture process is sometimes called "expression tracking", or in three dimensions. Two-dimensional capture can be achieved using a single camera and low-cost capture software such as Zign Creations' Zign Track. This produces less sophisticated tracking, and is unable to fully capture three-dimensional motions such as head rotation. Three-dimensional capture is accomplished using multi-camera rigs or laser marker systems. Such systems are typically far more expensive, complicated, and time-consuming to use.
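As a concrete (and entirely hypothetical) example of what such a database might hold, the short Python sketch below stores 2D marker positions per frame relative to a stable reference point, so that head translation in the image does not get recorded as an expression change. The marker names and values are invented, not any particular product's format.

# Sketch of a 2D facial-capture record: per-frame marker coordinates,
# re-expressed relative to a stable reference point (here the nose bridge).

REFERENCE = "nose_bridge"

def relative_frame(raw):
    """Re-express one frame of (x, y) marker positions relative to the reference."""
    rx, ry = raw[REFERENCE]
    return {name: (x - rx, y - ry)
            for name, (x, y) in raw.items() if name != REFERENCE}

# Two captured frames: the whole head shifts right, and the mouth corner lifts.
frames = [
    {"nose_bridge": (320, 200), "mouth_corner_l": (290, 260), "brow_l": (300, 160)},
    {"nose_bridge": (340, 200), "mouth_corner_l": (310, 252), "brow_l": (320, 160)},
]

database = [relative_frame(f) for f in frames]
for i, entry in enumerate(database):
    print(f"frame {i}: {entry}")
# Only the mouth corner's relative y changes (60 -> 52); the head shift
# cancels out because every marker moved together with the reference point.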
Facial Motion Capture is related to body motion capture, but is more challenging due to the higher resolution requirements to detect and track subtle expressions possible from small movements of the eyes and lips. These movements are often less than a few millimeters, requiring even greater resolution and fidelity and different filtering techniques than usually used in full body capture. The additional constraints of the face also allow more opportunities for using models and rules.

Marker-based

Traditional marker-based systems apply up to 350 markers to the actor's face and track the marker movement with high-resolution cameras. This has been used on movies such as The Polar Express and Beowulf to allow an actor to drive the facial expressions of several different characters.
Active LED Marker technology is currently being used to drive facial animation in real-time to provide user feedback.



Markerless

Markerless technologies use the features of the face, such as nostrils, the corners of the lips and eyes, and wrinkles, and then track them. These vision-based approaches can also track pupil movement, eyelids, occlusion of the teeth by the lips, and the tongue, which are obvious problems in most computer-animated features. Typical limitations of vision-based approaches are resolution and frame rate, both of which are becoming less of an issue as high-speed, high-resolution CMOS cameras become available from multiple sources.
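As a generic illustration of the vision-based approach (not any studio's actual tracker), the sketch below uses OpenCV's Lucas-Kanade optical flow to pick strong features in the first frame and follow them from frame to frame; "face_video.mp4" is just a placeholder path.

# Generic markerless tracking sketch: detect well-textured points (in
# practice, lip corners, nostrils, wrinkles) and follow them with
# Lucas-Kanade optical flow.

import cv2

capture = cv2.VideoCapture("face_video.mp4")   # placeholder input clip
ok, first = capture.read()
if not ok:
    raise SystemExit("could not read the input video")

prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
# Detect up to 100 strong corner features to follow.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=8)

while points is not None and len(points) > 0:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track each point from the previous frame into the current one.
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                        points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    print(f"tracking {len(points)} facial features")

capture.release()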

References:
http://en.wikipedia.org/wiki/Avatar_%282009_film%29#Visual_effects
http://en.wikipedia.org/wiki/Motion_capture
http://en.wikipedia.org/wiki/Facial_motion_capture