
Wanted: Bending VFX for a Killing Machine July 17, 2008

Posted by farhanriaz in 3D, Movies, Review, VFX.
1 comment so far

Wanted required more than 800 visual effects shots, including a train derailment sequence on which almost half of Framestore's crew worked solely. Courtesy of Framestore. All images © 2007 Universal Studios.

In 2006, the Russian movie Night Watch made a strong impression on American producers. Director Timur Bekmambetov’s innovative use of visual effects created a truly unique movie experience, enough so to land him his first American directorial gig with Wanted (which opened June 27 from Universal). In this adaptation of a comic book created by Mark Millar, a young “nobody” (James McAvoy) finds out that he is the son of a legendary assassin. He enters a mysterious fraternity where he is trained to become a perfect killing machine, a human being able to bend the laws of physics and gravity to his own advantage.

The movie required more than 800 visual effects shots, a massive effort initially supervised and produced by Jon Farhat. However, during post-production, Farhat fell very ill and had to be replaced by Visual Effects Supervisor Stefen Fangmeier. “I had directed Eragon, and, at that time, I was exploring new directing opportunities,” Fangmeier says. “I didn’t want to get into a vfx assignment that would tie me up for too long. This project was perfect in the sense that they only needed somebody for four months to come in and finish up.”

Stepping into a colleague’s shoes is never easy, but on Wanted, Fangmeier ended up facing many other challenges. “When I came in, most of the shots had already been assigned to a variety of vendors. The majority of the visual effects were being created by Bazelevs Studios, Timur’s own company in Moscow. They did almost 500 shots encompassing a very large range of effects. We also had Hammerhead, Hydraulx, PacTitle, Hatch FX and CIS Hollywood, as well as Framestore in London. So, I had to delve into shots that someone else had conceived, with visual effects already well underway and with key creative people based in Moscow. Since Bazelevs were in charge of two thirds of the shots, I primarily focused on the work that was being done there.”

Adaptation
The Moscow-based studio had produced some spectacular shots for Night Watch and Day Watch, but Wanted was their first American production. This new experience didn’t go without difficulties, as Hollywood doesn’t do things quite the same way as a Russian director employing his own company on his projects.

“They really have a strong talent pool there, and they also have the software, but they didn’t have any experience interfacing with a major studio, dealing with constant editorial changes, meeting a schedule of deadlines, etc.,” Fangmeier observes. “For instance, when I got involved, they only had 12 final composites out of 500 shots, and we were already close to the original deadline (the movie was initially due for release in March 2008). One of their issues was getting the director to sign off on concepts and shots. By that point, they had done many different versions of quite a few shots. At some point, somebody needed to make decisions and get the shots done. So, part of my job was to establish priorities, to select the 50 or 70 shots that could be completed each week, and to push them forward. For the remainder of the post schedule, we needed to finalize 45-50 shots per week in order to meet the deadline! It definitely put a lot of pressure on everybody… So, this project was a creative challenge on one hand, but on the other also a significant production challenge.”

After his first three days in Moscow assessing the production, Fangmeier requested that American Visual Effects Producer Steve Kullback join him in order to wrangle the production management side of things. VFX Producer Juliette Yager had already been brought on board by production. “If there is one thing I appreciate after 15-and-a-half years at ILM, it is the importance of very rigorous production management!”

Moscow's Bazelevs Studios created the sequence in which assassins ride a train rooftop for a clear view of their target in an office building. CG is used extensively. Courtesy of Bazelevs Studios.

Time Manipulations
Bazelevs used an NT-based pipeline that included SOFTIMAGE and Maya for 3D, RenderMan and mental ray for rendering and Nuke or Fusion for compositing. The company was responsible for a great variety of shots: CG rats, CG bullets, digital doubles, the assassin POV effect, fluid simulations, etc. Some of their key effects included the many speed changes that Bekmambetov had envisioned for his movie. The shots were filmed at very high speed, and then digitally altered to modify the frame speed, sometimes from normal to very slow to ultra fast to normal again, all within a single shot. 2D artists worked from templates that the film editors had designed in Avid. Using time flow algorithms, they changed the speed of the shots while trying to retain the original image quality. That was a task not as easy as it sounds, as the time warp process generates a lot of artifacts.

The speed changes allowed the camera to follow a bullet up to the point where it hit its target. Bazelevs created the bullet in Maya and added reflections and shadows on the environment to better integrate it. “In one shot, a bullet flies around Angelina Jolie’s head in full close-up,” notes Fangmeier. “When the bullet passes by her, you can see its shadow on her face and then her hair slightly moving, and finally her eye blinking. We also worked hard on the depth of field to keep the bullet realistically in focus, while Angelina would go from blurred to sharp to blurred again. Since the shot was in very slow motion, we needed all those subtle details to sell it…”
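The retiming idea behind those speed ramps can be sketched in a few lines: the output frame rate stays constant while a speed curve decides how far through the high-speed source footage each output frame samples. The Python below is an illustrative sketch only, not Bazelevs' actual tooling; the `ramp` curve and frame counts are invented, and a real system would blend between (or optical-flow across) the fractional source frames it computes.

```python
def retime(speed_curve, n_output_frames):
    """Map output frames to source frames by integrating a speed curve.

    speed_curve(t) returns playback speed at output frame t
    (1.0 = normal, 0.1 = 10x slow motion, 4.0 = 4x fast forward).
    Returns fractional source-frame positions; a compositor would
    interpolate neighbouring source frames at each position.
    """
    src = 0.0
    positions = []
    for t in range(n_output_frames):
        positions.append(src)
        src += speed_curve(t)
    return positions

# A made-up ramp: normal speed, into ultra slow motion, then fast out.
def ramp(t):
    if t < 10:
        return 1.0      # normal
    elif t < 30:
        return 0.1      # ultra slow motion
    else:
        return 4.0      # fast
```

Running `retime(ramp, 40)` shows how twenty slow-motion output frames consume only two source frames, which is why the plates had to be shot at very high speed to begin with.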

Moving VFX
One of those stylized shots forms the climax of a sequence in which the two lead assassins travel on a city train rooftop in order to get a clear view of their target in an office building. “The actors inside the building were shot in an interior office set. We then added a CG exterior, a CG window, CG glass debris, and the city background. For the exterior shots on the train, James McAvoy and Angelina Jolie were filmed on a partial train rooftop set in front of a greenscreen. We then extended the set in CG to get a complete train. In order to create the cityscape, five cameras were bolted on top of a real elevated train traveling through downtown Chicago. The plates were then tiled together to create a cyclorama, and later combined with the foreground elements. We also added CG cars in the background, and created an entire bridge that the train goes under. It was a fairly complex combination of 2D and 3D elements.”

Another major speed change occurs during the opening scene where a sniper gets shot in the head — with the camera following the bullet exiting the victim’s forehead in gory slow motion. The actor’s face was extracted from the plate and re-projected on a CG head that was deformed by the CG bullet animation. Fluid dynamics made the CG blood follow the bullet’s motion. “The movie is gory, but those shots are so stylized that the audience understands this is not reality. After all, the story is based on a comic book and we had to preserve that quality.”
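Tiling five camera plates into a cyclorama is, at bottom, angular bookkeeping: each camera's horizontal field of view must overlap its neighbour's so the plates can be blended. A toy Python sketch of that coverage calculation (the camera count, FOV and overlap values here are illustrative, not the production rig's specs):

```python
def cyclorama_ranges(n_cameras, fov_deg, overlap_deg):
    """Angular coverage for cameras mounted in a fan: camera i covers
    [start, start + fov], with each plate overlapping its neighbour by
    overlap_deg so the seams can be blended into one cyclorama."""
    step = fov_deg - overlap_deg
    return [(i * step, i * step + fov_deg) for i in range(n_cameras)]
```

With five hypothetical 60-degree lenses and 10 degrees of overlap, the rig would cover a 260-degree sweep of the city, which the compositors could then project behind the greenscreen rooftop plates.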

One of the most memorable images in the movie features a character jumping through a window, with thousands of glass shards sticking to his body. In the longer shots, the actor was shot against greenscreen and a tracked CG double was animated through a CG window, setting off a rigid body simulation created in Maya. A close-up of the same action was produced using an entirely CG head.
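A rigid body simulation of this kind typically starts from a ballistic pass: each shard inherits the body's velocity plus a random kick, then free-falls under gravity. The sketch below is a minimal Python illustration of that first pass, not the Maya setup used on the show; collisions, rotation, and the shards sticking to the body would be layered on top, and all the numbers are invented.

```python
import random

def simulate_shards(n, body_velocity, gravity=-9.8, dt=1/24, frames=24, seed=1):
    """Ballistic pass of a glass-shard sim: each shard inherits the
    character's velocity plus a small random kick, then free-falls.
    Positions and velocities are [x, y, z]; gravity acts on y only."""
    rng = random.Random(seed)
    shards = [{"pos": [0.0, 0.0, 0.0],
               "vel": [body_velocity[0] + rng.uniform(-1, 1),
                       body_velocity[1] + rng.uniform(-1, 1),
                       body_velocity[2] + rng.uniform(-1, 1)]}
              for _ in range(n)]
    for _ in range(frames):
        for s in shards:
            s["vel"][1] += gravity * dt            # gravity pulls shards down
            for axis in range(3):
                s["pos"][axis] += s["vel"][axis] * dt
    return shards
```

After one second of simulated time, every shard has carried forward with the jump and dropped below its starting height, which is the baseline motion a full rigid body solver refines with collisions.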

A character jumps through a window with thousands of glass shards sticking to his body. The actor was shot against greenscreen and a tracked CG double was animated through a CG window. Courtesy of Bazelevs Studios.

Individual VFX sequences
Meanwhile, in America, other vendors were hard at work on specific, isolated sequences. “Hydraulx was responsible for a complex effect that was meant to go completely unnoticed,” Fangmeier says. “Colin Strause and his team built a CG weaving machine for a sequence in which our hero needs to slow time down in order to find an object that is attached to one of these tens of thousands of threads. At Hammerhead, Jamie Dixon supervised the car chase and created the scene in which a CG Ford Mustang flips over a live action limo. As for Hatch FX, Deak Ferrand and his team did two extensive matte paintings for the prologue.”

In London, Framestore was called in to create the climactic train crash sequence that takes place on a bridge. In-house Visual Effects Supervisor Craig Lyn oversaw the challenging assignment. “There were several obstacles that we had to overcome,” he notes. “The first being the tight production schedule that we worked under. We completed over 117 shots in a three-and-a-half-month period. This included our pre-production phase for the build and look development of our digital assets, which included both a CG train as well as a full digital environment of a gorge. While pre-production was going on, we had to lock animation, which took place over a three-week period. Due to the compressed schedule, lighting of the shots occurred concurrently with the digital environment build, a less than ideal situation.”

The biggest challenge was a shot that ran more than 40 seconds: a train carriage falls down into the gorge, impacts a rocky outcrop, and then scrapes down the side until it comes to a rest. At the end of the shot, the CG train has to seamlessly transition into a live action plate of the carriage. The shot ran the full length of Framestore’s production schedule. By the end of the show, almost half of the crew was dedicated solely to delivering this one shot.

Framestore’s software pipeline was predominantly Maya-based for animation, lighting setup and digital environments. Effects work, which involved dust, smoke, debris and rigid body dynamics, was done in both Maya and Houdini. On the rendering front, the team utilized a hybrid solution of both RenderMan and mental ray. The compositing work was done entirely in Shake.

Framestore had to work on a tight schedule to create the climactic train crash sequence that takes place on a bridge. The studio completed more than 117 shots in a three-and-a-half-month period. Courtesy of Framestore.

Full CG Environment
The team built the train from assets supplied by production, and then detailed it out based on reference photography. The digital environments turned out to be a much tougher challenge. “We had to create a fully CG gorge that was seen from any number of angles,” Lyn observes. “The gorge was built using low resolution meshes in combination with higher resolution ones for the more detailed areas. Texture maps and matte paintings were then projected onto the surfaces from multiple camera angles, and we were able to reuse many of the common angles for multiple shots. The break-off pieces for both the train and the gorge were a combination of several techniques. Hero debris was animated traditionally, while the smaller pieces were done using rigid body simulations from both Maya and Houdini. The deforming shapes of the bridge being ripped apart, and the train being squashed, were sculpted by our modeling crew, and then used as blend shapes.”

Framestore’s rendering pipeline was HDRI-based, with reflections and heavy ray-tracing done in mental ray. That data was then passed back into RenderMan for the final renders.
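Blend shapes of the kind Lyn describes are per-vertex linear interpolation between the base mesh and a sculpted target: at weight 0 the bridge is intact, at weight 1 it is fully ripped apart. A minimal Python sketch, with meshes reduced to bare vertex lists for illustration:

```python
def blend_shape(base, target, weight):
    """Linear blend shape: move each vertex of the base mesh toward the
    sculpted target by `weight` (0.0 = undeformed, 1.0 = fully deformed).
    Meshes are lists of [x, y, z] vertices in matching order."""
    return [[b + weight * (t - b) for b, t in zip(bv, tv)]
            for bv, tv in zip(base, target)]
```

Animating the weight over the course of the shot is what turns a pair of static sculpts into the bridge tearing or the carriage crumpling on screen.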

The team also built low resolution digital doubles for Jolie and McAvoy. “The trickiest bit was Angelina’s hair, which was supposed to be long and flowing,” Lyn explains. “We didn’t want to go to the trouble of a CG hair build and groom, since she was only in a couple of shots. Instead, we did a simple bluescreen shoot in an alley behind one of our buildings of a wig on a broom handle, and then tracked it in 2D!”

This outrageous sequence concludes a movie filled with unique moments and imagery. Indeed, Fangmeier was repeatedly impressed and surprised by some of the concepts that Bekmambetov came up with. “Timur has a great imagination for this type of thing. There are some really good moments in the film where you feel: ‘Wow! This is a neat idea. I’ve never seen it before!'”

~ farhanriaz


The Forbidden Kingdom: VFX and the Chi Energy Effect April 21, 2008

Posted by farhanriaz in Movies, Review, VFX.
2 comments

The tale of the Monkey King is as much a part of Chinese culture as Mickey Mouse is to American life, and the chance both to tell this story and have martial arts superstars Jackie Chan and Jet Li appear together in the same film for the first time was just too good an opportunity for director Rob Minkoff to pass up.

“The chance to interpret the character in this film, get Jet Li to play it and then kind of present this character to the West, it’s almost like the story of the movie,” says Minkoff, who directed both Stuart Little movies and co-directed The Lion King.

The Forbidden Kingdom (opening April 18 from Lionsgate) begins with American teen Jason Tripitikas (Michael Angarano), a martial-arts movie geek who is beaten up by local bullies and wakes up in mythical China. Tasked with returning a mystic weapon to the Monkey King, who’s been imprisoned by the Jade Warlord (Collin Chou) for more than 500 years, Jason is aided in his quest by kung fu master Lu Yan (Chan), the Silent Monk (Li) and the beautiful Golden Sparrow (Liu Yifei).

But bringing to life Forbidden Kingdom required a lot of work in a very short timetable, especially when it came to using visual effects to mix the film’s Hong Kong-style martial arts action with the storybook fantasy of the original myth.

Minkoff says he wanted the visual effects to evoke the feel of classic Hong Kong films, while also balancing the story’s storybook fantasy with realism. “The audience is a little more sophisticated, so some of the fog-machine effects with the dry ice obviously weren’t going to cut it with us,” he adds. “We obviously wanted something that was slightly more contemporary.”

Minkoff says the effects work ended up staying largely in Asia, thanks to Exec Producer Raffaella De Laurentiis, who was impressed by the high quality and low cost of some work done by a Korean house. “She thought that would be an interesting option for us,” continues Minkoff. “It’s a Chinese story, Asian-themed, and would require a sensitivity that might be a natural fit with Korea.”

Work ended up being spread around a number of vfx studios, with a trio of South Korean houses leading the charge: Macrograph, DTI and Footage.

But first, the film had to go through a short, eight-week prep and then into a tight, 101-day shooting schedule in China. Ron Simonson came onto the project about a month into shooting as the senior visual effects supervisor, and says the biggest challenge was getting up to speed on what was being shot on a set full of green safety pads and wire rigs, and making sure it would work for the visual effects artists later on.

Bringing the vfx to life in Forbidden Kingdom required a lot of work in a very short timetable. Hong Kong-style martial arts is mixed with the original Monkey King myth. All images © Lionsgate Ent.

“It was ‘how to rig the wires and the pads so it works best for us’ and make sure we get all the pieces shot to replace all those things,” says Simonson. “Basically, it’s just kind of like, ‘OK, this is what you want to do. Can we maybe move this a little bit that way and move the camera a little bit that way cause that’ll work better?'”

Simonson worked closely with stunt coordinator Woo-Ping Yuen and Minkoff on shots, using an on-set previs team to quickly test ideas for changes in or additions to scenes in the edit.

Shooting in China had the benefit of being authentic and costing much less than other locations, though there were some differences and issues with technical requirements. Minkoff says they had some concerns about the ability to hang high-quality greenscreens that were solved with DP Peter Pau’s suggestion of covering all four walls of the stage with plywood and green paint.

The effects work ended up staying largely in Asia because of the high quality and low cost of some work done by a Korean house. Work was spread around a number of houses.

Of course, having a good greenscreen stage also upped the ante. “The number of sequences that we ended up setting and shooting on the greenscreen stage just made the numbers shoot up,” confirms Minkoff.

Minkoff’s background in animation also helped meet the tight deadlines. “Rob being able to articulate what kind of effect we were talking about and then having it worked up there and dropped into the edit, into the Avid, and see how it worked, really expedited the process,” Simonson says.

Simonson was on set with a crew of about 10-12 visual effects artists, including previs and postvis artists who were essential in planning and executing shots. “Some of the bigger shots were created quite late in the game,” Simonson says. “We’d be looking at the edit and Rob would say, ‘We need some way to get from here to there,’ and we would draw it up and it would save a lot of time getting it to the animators.”

The film also called for a lot of diverse visual effects that made it difficult to, as Simonson suggests, achieve some economy of scale. “There’s big environment stuff, there’s water, we have fire, we have CG weapons, we have the chi energy effect,” he reveals. “All that separate stuff all took a lot of R&D.”

Most of the work was split up between the three lead houses in South Korea, with Asia Legend in Hong Kong doing a lot of wire and object removal, Simonson says. Other contributing houses were Xing Xing in Beijing, Frantic Films of Vancouver, and Illusion Arts Digital, Stingray VFX, Svengali and Digiscope of Los Angeles.

Coordinating all this was difficult, Simonson says, as the locations of the houses crossed the International Date Line as well as language and cultural boundaries.

Working on a Hollywood film also had benefits for the Korean houses, who emphasized in bidding on the project their ability to work hard and produce quality work even if it meant the crew went without sleep for months on end, says Minkoff. While he says they definitely didn’t want anyone to work that hard, “it seemed like there was a sympathetic kind of attitude about their ambition, which was to break out of the Korean market into the larger Hollywood market,” he says.

While much of the vfx work was along the lines of wire and object removal, Simonson cites as a favorite the movie’s opening scene in which the camera swoops through the sky until the figure of the Monkey King appears to be standing atop a cloud. The Monkey King, played by Li, then proceeds to fight his opponents as they stand on the very tops of mountains protruding through the clouds.

“It took a lot of look development to get a level of realism, but also stay in the kind of storybook land vision that that scene is,” he says. “I opted to shoot real clouds for the fly in and all the mid-ground mountains, background mountains, mist and everything else was CG.”

A battle sequence set in a field of cherry blossom trees featured no real trees — just sticks in the ground with every blossom glued to them, Simonson offers.

The film boasts a lot of diverse visual effects that strained the budget. There's big environment stuff, water, fire, CG weapons and the chi energy effect.

The film doesn’t entirely take place in ancient China, and replicating modern Boston — where Jason begins his journey — required a combination of real plates and stitching together still photography in Nuke to create the cityscape.

While Chan and Li are formidable weapons in their own right, the script also featured a powerful staff and the witch-like Ni Chang, played by Li Bing Bing, who uses a whip and even her own hair as a prehensile weapon in battle.

“That was again a lot of coordinating with the fight guys on set and coming up with ways to mimic the hair and the whip so (the actors) could react to it,” Simonson says. “We used ropes and lines attached to Bing Bing so that when Jackie’s grabbing it they could later replace the rope with the hair.”

The hair in particular was a difficult effect to work out. “Everyone was worried over whether the hair was going to work,” says Simonson. “We were trying to figure out alternative things to shoot in case the hair didn’t work while we were doing the R&D. But, fortunately, we got the test done early enough that the director was comfortable with how the hair was going to work and they went from five or six shots of the hair to 35 shots with the hair once they were comfortable with it.”

It took a lot of development to balance the realism and the storybook land vision. Real clouds were shot but all the mid-ground mountains, background mountains and mist were CG.

Simonson says that the various houses worked on about 900 shots overall, though with some sequences getting cut from the film the final on-screen tally is around 750. “About 25% of that is wire removal, rig removal. There’s a lot of background cleanup. In all these beautiful Chinese exteriors, there’s power lines and stuff in every single one of them, so all that stuff had to be removed.”

While shooting in China was different in many respects, Minkoff also says there were fewer hoops to jump through than when making a movie in North America or Europe. It also lent authenticity to the story of the Monkey King.

“That was the attraction of doing it,” he says. “If you have to go shoot China in Palmdale, what fun would that have been?”

~farhanriaz

Method Creates VFX Magic with Houdini March 2, 2008

Posted by farhanriaz in 3D, Review, Software, VFX.
1 comment so far

Artist Andy Boyd crosses the Pond to Create

High-end VFX for Method Studios in Los Angeles

Working in England, where he was Head of 3D Commercials for Framestore CFC, Andy Boyd developed a passion for creating high-end visual effects. His portfolio includes two well-received Rexona commercials that feature digital animals running wild in the urban jungle. One of these creatures was even featured on the cover of 3D World #89, along with an article highlighting Andy’s furring technique.

In the summer of 2007, Andy set off for Los Angeles and a new job working at Method Studios. In six short months, Andy has tackled several high profile projects that range from a Hummer commercial to a high-profile Super Bowl ad for Bridgestone. In each project, Andy uses Houdini to help him meet tight deadlines while creating effects that can be easily revised in response to client and director feedback.

Particle Splashes

The first advertisement Andy worked on at Method Studios was a car commercial. A digital Hummer was being driven through a pool of water and Andy needed to supply the splashes. Given the tight timeline, he decided to use Houdini’s particles instead of a full blown fluid simulation because the water didn’t need to settle. He used Houdini’s particle fluid surfacer to create the splash geometry which he then rendered in Mantra using Houdini 9’s physically-based rendering.
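A particle approach like this boils down to emitting points at the impact with mostly upward velocities, integrating them under gravity, and killing them once they fall back into the pool; a surfacing pass (such as Houdini's particle fluid surfacer) then skins the survivors into renderable splash geometry. The Python below illustrates the idea only; it is not Method's Houdini setup, and the emission counts and speeds are invented.

```python
import math, random

def splash_step(particles, emit_rate, impact_point, speed, dt, rng):
    """One step of a splash-style particle sim: emit new particles in a
    fan above the impact point, advance everything under gravity, and
    kill particles once they fall back below the water surface (y < 0)."""
    for _ in range(emit_rate):
        angle = rng.uniform(0, 2 * math.pi)
        particles.append({
            "pos": list(impact_point),
            "vel": [speed * math.cos(angle) * 0.3,
                    speed * rng.uniform(0.7, 1.0),   # mostly upward
                    speed * math.sin(angle) * 0.3],
        })
    for p in particles:
        p["vel"][1] -= 9.8 * dt                      # gravity on y
        for axis in range(3):
            p["pos"][axis] += p["vel"][axis] * dt
    # remove particles that have fallen back into the pool
    particles[:] = [p for p in particles if p["pos"][1] >= 0.0]
```

Because the splash never had to settle into calm water, killing particles at the surface like this is far cheaper than solving a full fluid simulation, which matches the trade-off described above.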

Little Minx Project

Andy’s next project was a short film called As She Stares Longingly at What She Has Lost by director Phillip Van. Set to melancholy music, the film is part of the ‘Exquisite Corpse’ project launched by Little Minx in partnership with RSA Films. Andy worked with a team of talented artists to create an entire forest, a waffle cloud, a waterman, and vines.

For this project, actors were shot against blue screen and all the environments were created digitally. The trees needed to be highly detailed in order to give an ominous feeling to the scene. Realizing that he would have to manage all this detail as efficiently as possible, Andy took advantage of Mantra’s Delayed Load feature. Trees were set up as scattered points on a grid with parameters that would populate each tree with details such as branches and vines at render time.

As Mantra rendered the scene, the geometry needed for each section was loaded in. Then as Mantra moved on to another section, the geometry was removed and new pieces were loaded. This approach allowed Andy to put as much detail into the scene as he needed without any memory limitations. At one point in production, he created vines that would creep up the trees but this shifted the focus away from the characters and did not get used in the final film.
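The pattern behind delayed loading can be sketched generically: the scene holds cheap proxies, full geometry is expanded only for the section currently being rendered, then freed before the renderer moves on. The Python class below is a hypothetical illustration of that lifecycle, not Mantra's API; the names and the polygon-count return value are made up.

```python
class DelayedLoad:
    """Sketch of render-time delayed loading: heavy geometry is described
    by lightweight seed points plus an expansion function, and is only
    built when the renderer reaches its section, then freed."""
    def __init__(self, seed_points, expand_fn):
        self.seed_points = seed_points   # cheap placeholders (e.g. tree roots)
        self.expand_fn = expand_fn       # builds full detail on demand
        self.loaded = {}

    def render_section(self, section_id, points_in_section):
        # expand only the geometry this section actually needs
        for p in points_in_section:
            self.loaded[p] = self.expand_fn(p)
        polys = sum(len(g) for g in self.loaded.values())
        # ... the renderer would shade this section here ...
        # free everything before moving to the next section
        self.loaded.clear()
        return polys
```

Because memory only ever holds one section's worth of expanded trees, the total scene detail can vastly exceed what would fit if everything were loaded up front, which is the benefit described above.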

By choosing Mantra, Andy was able to add motion blur, depth of field and volumetrics without significantly impacting his rendering time. Many of his colleagues at Method Studios were used to adding depth of field later using compositing techniques and were impressed that he could combine camera effects in one render pass. All of the shadows were created using the new deep shadow technology so that raytracing would not be needed.

Going to the Super Bowl

The ads that play during the Super Bowl have become as much of a spectacle as the game itself. Super Bowl ads are scrutinized in the press and companies pay a lot of money to showcase their products on the big day. For Andy, a Bridgestone ad called Scream would be his introduction to the world of Super Bowl ads.
In this spot, Method Studios had to create a digital squirrel that almost becomes road kill as he retrieves a fallen acorn. The squirrel, a number of other forest animals and a female passenger all scream out in fear while the driver, confident in his Bridgestone tires, easily swerves around the frightened animal. Working under a tight six-week schedule, Andy would need to help create a number of digital animals including the squirrel that would be cut against a live-action squirrel. To make things even more challenging, the squirrel’s scream would be a close-up shot in HD that would leave nothing to the imagination.

Andy’s experience creating furry animals at Framestore CFC came into play with one key difference. In England he was rendering with RenderMan and had access to programming talent to build all the fur procedurals needed to achieve a realistic look. In Houdini, Andy needed to create his own system using the Mantra fur procedural. Luckily the grooming features of the fur could be created using Houdini’s CVEX language instead of coding in C. This was a time saver because the CVEX didn’t need to be compiled every time a new feature was added.

Andy needed to add lots of detail to the squirrel because of the HD broadcast. He imported the animated squirrel into Houdini and fixed smoothing problems using Houdini’s procedural modeling tools. He then assigned and groomed guide hairs that would be used by the fur procedural to create the final fur. These curves were then run through a Wire dynamics simulation for added realism. The procedural was then used to generate about 1.5 million hairs – all at render time. Andy also used a CVEX shader to set up clumping and painted a number of different attributes on the squirrel’s skin to control the final look of the fur.
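Generating a million-plus render hairs from a few groomed guides usually means interpolating each hair from its nearest guide curves, with a painted clump attribute pulling hairs toward the closest guide. The Python below sketches that interpolation idea only; it is not the CVEX shader Andy wrote, and it assumes at least two guide curves with matching point counts.

```python
def interpolate_hair(root, guides, clump):
    """Build one render hair from the two nearest guide curves.

    `guides` maps a guide-root position (tuple) to its curve points;
    `clump` (0..1, e.g. painted on the skin) pulls the hair toward its
    nearest guide, producing the clumped look."""
    # rank guides by squared distance from the hair root to the guide root
    ranked = sorted(guides, key=lambda g: sum((a - b) ** 2 for a, b in zip(root, g)))
    g0, g1 = ranked[0], ranked[1]
    w = 0.5 + 0.5 * clump            # clump=1 -> follow nearest guide only
    return [[w * p0 + (1 - w) * p1 for p0, p1 in zip(pt0, pt1)]
            for pt0, pt1 in zip(guides[g0], guides[g1])]
```

Running this per scattered root at render time is what lets the groom stay light (a handful of guide curves plus painted attributes) while the final fur is dense.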

Andy’s confidence in the fur tools he created in Houdini helped him deliver such a high level of realism on such a tight schedule. Being able to have complete control without relying on programming talent showed that even smaller shops can create film-quality work while respecting client budgets.



Flying Free

The tools and techniques used to create fur for the Super Bowl squirrel were quickly put to use on Andy’s next project. In a commercial developed for Washington Mutual Bank, digital hair was needed for a bald man who imagines driving a convertible along the coast as his hair grows back in front of our eyes. The tools from the fur project were easily re-purposed, except that the guide hairs required more styling control and the wire dynamics were much more dramatic.

“The flexibility of Houdini’s approach makes it easy to start from an existing solution instead of building every project from the ground up,” says Andy. “When working with tight deadlines, this gives us more time to focus on the creative needs of the project. For example, Jack Zaloga, a junior TD, was able to pick up the fur system from Scream and right off the bat was rendering hair blowing around without any prior fur/hair experience.”

These projects demonstrate how far commercial VFX have come: they are a real test-bed for tools and techniques that must achieve feature-film quality in the new HD world. Tight deadlines rule the day and artist ingenuity is a critical part of the process. One can only imagine what Andy and Method Studios will pull off over the next six months.

[Related Links]
http://www.sidefx.com/

~ by farhanriaz

The Mill uses Houdini February 2, 2008

Posted by farhanriaz in 3D, Review, Software, VFX.
add a comment

Time Saved is the Real Tipping Point

In the end, the final look of the Tipping Point ad is a testament to the creative vision and skill of the team, which resulted in a commercial that is spellbinding in its impact on viewers. That achievement is all the more remarkable when one considers the tight twelve-week time frame in which the production was completed while bringing the director’s vision to life.

“In the past, I had always thought about making a jump to Houdini because of its reputation as a powerful CG animation tool,” said Bares. “After using it, I realize how on certain projects I have been throwing away time writing scripts to solve production problems. Now I see that Houdini’s node-based approach is great to use, period. I expect we will be seeing a steady stream of work from The Mill that uses Houdini.”

A Growing Appreciation for a Powerful Tool

The team at The Mill was pleased to see that the ramp-up time to learn Houdini was modest. An experienced Houdini artist was brought in to do the initial TD work based on pre-production plans. At the same time, he worked with the other team members to quickly get their Houdini skills production-ready.

As familiarity with Houdini grew, initial plans to use it for only a small part of the pipeline were adjusted to give it a bigger role in order to better meet production requirements.

“I initially wanted to do the inner structure of the pint shot in Softimage XSI. I then realized that it was much faster to set things up in Houdini before transferring to XSI for geometry tweaking,” said Bares. “We built a very sophisticated proprietary import/export system so we could transfer geometry, particles, fluids, and hair from Maya to Houdini and back again, and from XSI to Houdini and back again. This lets us work with whichever tool is best suited to the job at hand.”
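The Mill's import/export system is proprietary, but the core idea of round-tripping data between packages can be sketched with a neutral cache format. This minimal version (the file format and function names are illustrative, not The Mill's actual pipeline) serializes point positions to JSON so that either application could read them back:

```python
import json

def export_points(points, path):
    """Write point positions to a neutral JSON cache any package can parse."""
    with open(path, "w") as f:
        json.dump({"format": "pointcache", "version": 1,
                   "points": [list(p) for p in points]}, f)

def import_points(path):
    """Read the cache back as a list of (x, y, z) tuples."""
    with open(path) as f:
        data = json.load(f)
    assert data.get("format") == "pointcache"
    return [tuple(p) for p in data["points"]]
```

A real pipeline would carry far more (topology, attributes, fluids, hair curves) and would use a binary format for speed, but the principle is the same: a package-agnostic on-disk representation lets each stage of the shot live in whichever tool handles it best.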

“From the beginning, we anticipated scrutiny from the client, director and agency when it came to creating the look of a Guinness pour using a tower of books,” said Jordi Bares, Joint Head of 3D at The Mill’s London-based studio. “We needed to manage more than 60,000 CG objects, each controlled by multiple variables, while maintaining the ability to respond to client feedback.”

Using Houdini meant that the team could accommodate changes such as making the pint taller, adding more books, or creating more pages per book. For example, the ability to shift timing and speed in specific rows played a major role in creating the stop-and-start characteristic of the sequence that imitated the Guinness two-part pour.

“We found out very soon that we had to do the final scene in 3D. Tests involving our CG team, a concept artist and matte painter showed us that the shot was complex but do-able,” said Bares. “By rigging up the whole structure in Houdini, we could make very accurate changes. For example, Houdini let us set up specific controls for the number of pages per book and the speed at which they turned. This had a tremendous impact on the overall feeling of the shot.”

“We were able to tie the start and stop pattern of the pour into the rig by adding per-row control. Each book would find out which row it was in, read the right parameters for that row’s control and act according to instruction,” said Bares. “Even though this was one of the most difficult jobs I have ever worked on, using Houdini gave us control over timing and speed and ultimately made the project pretty easy to manage.”
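The per-row rig Bares describes — each book looks up its row's timing parameters and acts on them — can be sketched procedurally. This toy version (parameter names are illustrative, not The Mill's actual setup) computes each book's page-flip start frame from per-row delay and cadence controls, which is enough to produce the stop-and-start pattern of the two-part pour:

```python
def flip_start_frames(rows, books_per_row, row_delay, row_speed):
    """Return {(row, book): start_frame} for a tower of flipping books.

    rows:          number of rows in the tower
    books_per_row: books in each row
    row_delay:     dict row -> extra frames to hold before that row starts
                   (used to create the pause in the two-part pour)
    row_speed:     dict row -> frames between successive books in that row
    """
    starts = {}
    frame = 0
    for row in range(rows):
        frame += row_delay.get(row, 0)   # hold before this row begins
        step = row_speed.get(row, 1)     # cadence within the row
        for book in range(books_per_row):
            starts[(row, book)] = frame + book * step
        # next row picks up where this row's last book started
        frame = starts[(row, books_per_row - 1)]
    return starts
```

Because every book derives its timing from shared per-row controls rather than hand-keyed animation, a client note like "pause longer between the two pours" becomes a single parameter change instead of thousands of keyframe edits.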

By taking advantage of Houdini’s procedural architecture, the team was able to explore creative ideas throughout the production process without ever writing a single script. They were able to load over 60,000 objects and easily manipulate them in 3D. The team also used Houdini’s Mantra renderer for the shot and built a fast, effective rendering process that made sequence visualization quick and detailed.

“Manipulating this many objects would have been a huge challenge in either Maya or XSI,” said Bares. “And Mantra was able to render the whole lot in no time, which made me very happy.”

The Mill uses Houdini to deliver the Perfect Pour in Guinness “Tipping Point” Ad

Getting the perfect pour on a pint of Guinness is considered part art, part science. Animators at The Mill faced a similar challenge when asked to create a Guinness pour out of thousands of books with flipping pages for the commercial “Tipping Point.”

Set in an Argentinean mountain village, the commercial follows an elaborate domino project that starts in a small house, then grows to include everything from wheels to cars to flaming bales of hay. The action culminates in a three-dimensional tower of books whose pages flip in sequence to create a dramatic working model of the classic Guinness pour.

Built-in Flexibility

The team had initially planned to shoot the first few books on location, then use CG to complete the shot. After a few days on set, the team quickly realized that the high altitude of 3,200 m made it virtually impossible to create a real working model of the structure. The complete effect would therefore need to be created using CG. To establish a flexible process capable of meeting creative demands, the team chose to model, rig and render the Guinness tower shot in Houdini 9.

[Related Links]
The Mill
Tipping Point
SideFX

~ by farhanriaz

Cloverfield: Reinventing the Monster Movie January 23, 2008

Posted by farhanriaz in 3D, Movies, Review, VFX.
3 comments

From the moment a mysterious little teaser attached to Transformers hit theaters last July, an Internet obsession was born. Nameless and featuring no recognizable stars, the minute-and-a-half tease slowly fleshed out the basic concept of a movie shot hand-held, featuring some attractive twenty-somethings throwing a goodbye party for a friend. It was all rather Felicity-like until the tease kicked into overdrive with a Manhattan explosion and the head of the Statue of Liberty rocketing onto the streets of Brooklyn. That money shot alone was powerful enough to send fanboys flocking to the web for answers.

In the seven months that followed, some mysterious and cryptic websites were found (Slusho.jp and http://www.tagruato.jp), but little more of note was revealed beyond a few facts: the film was produced by J.J. Abrams (Mission: Impossible III) and his creative team at Bad Robot, it was a disaster movie in the style of The Blair Witch Project, and its title was Cloverfield. Aside from those creative names, Paramount and team Abrams were maddeningly able to squelch just about every other detail, leaving everyone asking up until opening day (Jan. 18): “Just what is attacking New York City? Is it a monster?!”

Damn right, it’s a monster, and, as Abrams has stated in press interviews, Cloverfield finally gives America its very own Godzilla. Freakishly huge, impervious to standard munitions and rather pissed off for some inexplicable reason, this brand-new monster lays one hell of an 85-minute smack down on the Big Apple.

While this sounds like the makings of a summer blockbuster, Cloverfield is not. It’s a winter experiment, if you will, with a fraction of the budget of a summer movie, no stars and a visual gimmick that is literally sending some audience members running for their barf bags. Yet it broke records with the biggest box office ever for a January opening (an estimated $46 million for the four-day MLK holiday weekend), and a lot of that has to do with the monster. The creature was created by artist Neville Page and Tippett Studio, and Visual Effects Supervisor Kevin Blank had the fun job of helping to facilitate the making of a monster, both literally and figuratively.

A long-time member of Abrams’ Bad Robot family, Blank was brought in while working on Lost. “Cloverfield was J.J.’s idea and then he hired [Lost scribe] Drew Goddard to write the script… J.J. was doing creature design and sketches four or five months before I was involved…Then they brought in Neville, who was doing design work for Avatar [and later Star Trek]…He knows such a breadth of zoology and every type of creature in existence and bringing together a hybrid of lots of different types of reality-based life. So the process of getting to what the creature [looked like] was very, very developed when I showed up. What transpired after I showed up was more skin coloration and style of eyes. There were a few design details that never really manifested, but I think they will come out in the toy,” Blank teases.

Considering the style and budget limitations on the film, Blank says from the beginning the visual effects were always about getting the most bang for the small bucks. “The trick was how do you provide this amazing experience and show enough of a really big event, but then get away from that event and don’t hang on that event? There is an ode to Jaws and an ode to Aliens where what you see less of is scarier and that’s very, very much played to. Also, a big inspiration piece for this is 9/11. We think of the monster as an event rather than a tangible thing like 9/11, which was this horrific day. When you look at lots of YouTube footage [from 9/11], this is where director Matt Reeves started. He kept saying, ‘keep it real, keep it real, keep it real.’ When you look at that [9/11] footage, there might be a camera pointed at a building coming down and then the camera hangs there for a second, like the person is in shock, but then they run and get behind a car. Then the camera is looking at a foot or a door jamb or maybe underneath a car looking across the street to smoke, but the noise and the description is so compelling and drama-driven that it’s seeing that piece of drama that really gave the project its soul. The visual effects were just about giving large scale payoffs.”

One of the key factors in launching the buzz for Cloverfield came from the teaser trailer that ran in the summer of 2007. The striking shot of the Lady Liberty’s head landing, scratched and decapitated on the streets of New York really promised something exciting to come. Blank reveals that trailer was literally the start of shooting for the entire project. “One of the biggest challenges of the whole project is that we started [without a script]. There was an outline so we knew the basic beats, but there was an element of the process of discovering locations and that was what [the scene] had to be because it all happened so quickly. The really stressful part for myself was while the movie was being prepped, and being prepped kind of on the fly because it’s hard to prep without a formal script, was to do this trailer. We basically had about two-and-a-half weeks to do it from the moment we filmed it to the moment it had to be attached to Transformers.

“In terms of sheer momentum it created, that was amazing. But it’s one thing to be prepping a movie that quickly and it’s another thing to be prepping a movie and delivering something as high scale as that trailer… Taking things from previs to shot execution and development of the model was really fast and a tough juggling act. It came out great and created a lot of buzz. We went back and tweaked the shots after, so the shots in the movie have evolved from what was seen in the trailer. Mostly, there is a better model of the Statue of Liberty head. The full trailer actually shows the new Liberty head to compare.”

While the gimmick of a first person POV witnessing a monster attack is compelling, Cloverfield’s success lies in the execution and visuals of the monster. Blank says they were gratefully given enough money to get the right vendors to do the job. “Even though the movie was low budget, the visual effects budget we had was a good size. We were dealing with big movie vendors and we hired Double Negative in London [under the supervision of Mike Ellis] and Tippett Studio [under the supervision of Eric Leven]. Tippett has a terrific reputation as a creature house and they made the monster. They brought it to life. But the thing I was trying to do, though, is that I’ve always had a philosophy of matching the talent with the task. Tippett is a full-service visual effects company capable of doing lots of things and, obviously, we went to them for their creature work but they ended up doing a lot more. With Double Negative, I was really impressed with their work on Batman Begins and Children of Men.”

But the tight budget also meant that more vfx had to be utilized to fill the production gaps. “We were trying to shoot on a small set with a bunch of greenscreen and make everyone believe it,” Blank continues. “I give a lot of credit to Production Designer Martin Whist because he had the least amount of resources to produce something believable. We kept saying to him, give us the front 10 or 20% in front of camera for real and we’ll do the rest. A lot of times you would expect on a movie like this for the set to comprise 50 to 60% of what is going on and visual effects is completing the lower half. But visual effects were doing a lot more than that. For example, we had a very large sequence on the Brooklyn Bridge. What was created was basically a 150-foot stretch for the board planks, a few benches and then lighting fixtures were in place where they would be on the bridge, but the railing, the lamps and everything is CG. In New York, we shot helicopter plates on the side of the Brooklyn Bridge to make the environment, but the actual structure of the bridge was 99% visual effects. The only thing that was not was the ground these people were walking on.”

It was so much work that Blank confirms it’s not really even quantifiable. “The one thing about this movie is that it’s basically a big monster movie done in The Blair Witch style, so there is no traditional camera coverage. You can have shots that go on and on for a minute and within one shot you can have three-dozen visual effects going on. Roughly there were 150 plates in play, but in terms of actual quantifying how many effects, I’m the wrong person to ask,” he chuckles, and then pleads that it’s the breadth that really counts here.

Blank adds that, unlike traditionally filmed movies, Cloverfield found the bulk of its vfx work in adding elements rather than subtracting them. “We had about 32 days of shooting and a few days of additional shooting and about 10 of those were on a greenscreen stage. What Martin Whist created was very minimal, it was great, but visual effects were adding a crazy amount of additional stuff. Everything we saw looked great, but it was not as much as you would expect to see. So the amount of resources that was given to production was spent really wisely.”

Of course the pièce de résistance of the film is the actual monster itself and Blank says he is thrilled with the end results and the process of getting him there. “I am really proud of the creature from a design perspective, so a lot of props to Neville Page and for Tippett Studio for realizing something really amazing looking. But the other big thing was there was some shared material between Double Negative and Tippett because they are houses that use similar pipelines — as they basically use Maya and Shake for everything. That was factored into the decision [to hire them] because it happened so quickly, so sometimes you couldn’t think, ‘Well, I’ll give this here and that there.’ I knew there was going to be some shifting. It created a situation where the people were all using the same [systems], so it might be a case of Tippett generating a little piece of a creature but then giving it to Double Negative to put into a broader-based environment piece. Tippett did all the creature work [overseen by Animation Supervisor Tom Gibbons], but they did some environment work too. Double Negative did more shots on the show than Tippett, and I know [it will all be about] ‘the monster, the monster, the monster,’ but a lot of people will be unaware of the extent of the environment creations going on in the film. Big credit goes to both houses.”

With all the hype said and done, Blank says he knows the movie delivers. “I think everyone will have a wild ride…[and] rather than the monster having a personality [like Godzilla or King Kong], it’s more of an entity or an event. This movie is more like a fantastical 9/11 re-imagining. It is a monster movie but an experiential one. I think it is going to be viewed in a unique way and in some ways it may be difficult to compare. Ultimately, there are 60-some creature shots and that’s not a ridiculous, crazy amount and many of them are cheating. But trust me: you’ll get a good look at him,” he laughs.

And after you do, you’ll certainly be able to appreciate Page’s invaluable contributions, as well as Tippett’s. Funny enough, Page says that Abrams initially approached him anonymously by e-mail while he was working on Avatar, mentioning how he adored Page’s Gnomon Workshop training DVDs. Page assumed he was a young student. “Felt a touch clueless, to say the least. I blame J.J., however, for the misinterpretation. His e-mail was so personable and matter of fact that it did not feel like a major director wanting to collaborate on a movie. The moral to this story is pretty obvious.”

And naturally what was initially pitched to Page by the filmmakers was short on creature details. “They wanted it big. They wanted it to be something ‘new.’ It had to adhere to some story points, but it was wide open. I listened; I took notes. I couldn’t pass this up. I accepted.”

But coming up with something new, especially on the heels of The Host, was an extra challenge. “Whenever I’m asked to design something that is ‘completely new,’ ‘fresh’ and ‘that has never been seen before,’ I get nervous. I have a long philosophy on this, but I will say that ‘new’ things need to be familiar as well. If not, then they are maybe too difficult to understand and comprehend. The hardest thing, in a way, was to not repeat any of the stuff that I did on previous films. The good news was that Cloverfield’s parameters lent themselves to developing something ‘new.’ In other words, the original creators (J.J., producer Bryan Burk and screenwriter Drew Goddard) set the tone and we all developed it together. Furthermore, I was afforded the opportunity to hire a great talent, Tully Summers, to help me out. He is such a treat to work with. And he was an invaluable resource of ideas and execution on both the Big Guy and his parasitic friends. I had heard about The Host during the development of Clover, but did not see anything until I was done with the design. I dug The Host. I thought that it was such a success in so many ways. Some people are drawing conclusions that Clover and The Host are similar in design. They are in that they ravage and seem to originate from the water, but the end results are quite different. However, when I finally saw some of the concept art, there were some very obvious similarities. But then again, I think that we were both channeling similar biological possibilities.”

Page suggests that understanding the monster’s motivations is key, and that doing so requires researching as many aspects as possible of the life you are creating. And he starts the design process more as an actor than as a visual artist.

“My preference for doing most design is to start with pencil and paper. Rough sketches. Again, none of us really knew what it was going to be, so I went for the shotgun approach. Generate as many design variations as possible and see which ones get closest to the target. I did floating gasbag tentacular things, sea serpenty things, arthropods, whatever. But, what guided us were the narrative needs. Which is great, because nothing was to be superfluous. I prefer when things are purposeful. Utilitarian, if you will. As for how many sketches it took to get to the center of this tootsie pop? Never enough. I love the process, the drawing, the sculpting, but I had so little time to do ‘cool’ art. So, I really had to be very efficient with time and process: Maybe 80 sketches to establish a direction, six clay sculptures to assist and then many, many hours of digital sculpting to finalize the design. In terms of efficiency, I try to make every moment count in my days, especially when on multiple projects. The sketchbook is always with me.”

Page’s design process begins with slowing down and trying to think clearly. But no drawing until the mental images start to flow. “Sometimes I start with big gestural silhouettes, other times with loose, gestural lines. Either way, I am looking for interesting forms. While in this mode, I am tapping into all of the research I have done and keeping in mind all of the pertinent story points and, of course, all of the client’s desires and comments. I may do some of these drawings digitally using Photoshop on either a Wacom tablet or a Cintiq. Sometimes I will bust out a lump of clay and explore some ideas there and, other times, I may sculpt digitally using ZBrush. In the end, ZBrush was used for all final development and the final sculptures for use by Tippett Studio.”

Not surprisingly, Page insists that he did everything to avoid comparisons to Godzilla: no dragons or lizards in this creature’s DNA. “Granted, it is huge, comes out of the water, has a tail and ravages Manhattan, so there were some major elements that kinda screamed Godzilla. But the design and biology and history are very different. For me, one of the most key moments in our collective brainstorming was the choice to make the creature be something that we would empathize with. It is not out there, just killing. It is confused, lost, scared. It’s a newborn. Having this be a story point (one that the audience does not know), it allowed for some purposeful choices about its anatomy, movement and, yes, motivations. The hardest thing to accept, in terms of making a truly plausible creature like this, is its scale. Nothing would look like this at that scale [the size of a skyscraper], and that is to assume that anything could ever really be that scale as a living organism on land. Other movies that had gigantic monsters have helped pave the way to the ‘suspension of disbelief.'”

Tara DiLullo Bennett is an East coast-based writer whose articles have appeared in publications such as SCI FI Magazine, SFX and Lost Magazine. She is the author of the books 300: The Art of the Film and 24: The Official Companion Guide: Seasons 1-6.

~by Tara DiLullo Bennett