By IAN FAILES

In many television series it can be prohibitively expensive to build a complete set of the desired location. There are also, at times, the logistical challenges of shooting in an actual location and filling it with crowds or period-accurate set dressing and vehicles.

Enter the digital backlot, a common methodology used in TV (and film) for filming scenes largely against blue or greenscreen and then filling out the action with digital or augmented environments.

On season 3 of Amazon’s The Man in the High Castle, VFX studio Barnstorm VFX was called upon to create several such environments to help tell the story of an alternate-reality, post-World War II America in which the U.S. is divided into the Greater Nazi Reich and the Japanese Pacific States.

Watch Barnstorm VFX’s The Man in the High Castle Season 3 breakdown, which includes a number of digital backlot shots.

“For the New York City streets scenes, for example, we knew we couldn’t go to New York City to shoot this. And then we couldn’t really find a good location in Vancouver to shoot it because there were just too many things – bike lanes, trees, things that just would never work. So very early on with those sequences we decided that it would be much easier to have a completely blank slate to work with.”

—Lawson Deming, Co-founder, Barnstorm VFX

Kinds of digital backlots

In the show, each digital backlot approach differed slightly, as Barnstorm co-founder Lawson Deming outlines. “For the New York City streets scenes, for example, we knew we couldn’t go to New York City to shoot this. And then we couldn’t really find a good location in Vancouver to shoot it because there were just too many things – bike lanes, trees, things that just would never work. So very early on with those sequences we decided that it would be much easier to have a completely blank slate to work with.”

Barnstorm VFX co-founder Lawson Deming.

In that case, Barnstorm created a largely CG environment. Other scenes, meanwhile, involved a military character landing by helicopter on the deck of a battleship. While the ship itself was always going to be a digital creation, the helicopter was initially intended to be practical.

“The chopper they were going to film with was a firefighting helicopter,” says Deming, “and they were going to put vinyl skin on the outside to make it look like a military helicopter, and then there were a bunch of fires happening and the helicopter had to go work and fight fires. So on fairly short notice we changed to the idea of doing the helicopter digitally and the actor had to ‘emerge’ from that – he was just shot against greenscreen.”

Original plate for the helicopter arrival shot.

Final shot, with a CG helicopter.

“The chopper they were going to film with was a firefighting helicopter … then there were a bunch of fires happening and the helicopter had to go work and fight fires. So on fairly short notice we changed to the idea of doing the helicopter digitally and the actor had to ‘emerge’ from that – he was just shot against greenscreen.”

—Lawson Deming, Co-founder, Barnstorm VFX

Re-creating (and creating) the environment

Knowing that what is typically filmed in digital backlot shots is just a placeholder for the real thing, Barnstorm had to ensure that they had accurate measurements of the sets being filmed, along with similarly accurate surveys of the environment, vehicle or place they were creating digitally.

“For the helicopter scene,” notes Deming, “we had measurements. We knew how high it was off the ground. We knew the height of the lip of the opening. We knew the size of the opening. There’s a moment there where the actor puts his hand on the side of the door as he’s getting out of the helicopter. We made sure that we had an opening in the green box that had been built that was just the right size that he could actually do that, and his hand then would be physically touching the digital helicopter that we built.”

The New York City street shot had the challenge of taking plate photography that did not have tall buildings in it (which impact the plate lighting) and re-creating the scene appropriately. “Obviously, there’s only so much you can do about that,” says Deming, “but we shot intentionally in overcast lighting to the best that we could and then we try to match.”

To get the look of a New York City that has actually been ‘re-worked’ a little by the Nazis, Barnstorm referenced old imagery of the city landscape, then added in construction cranes and more brutalist architecture.

An important step, too, was making sure anything the studio added digitally matched the physical components crafted by the show’s art department – “items like telephone booths,” identifies Deming. “The art department built telephone booths and fire hydrants and various sorts of street furniture for New York for the scenes. So we made digital versions of all those things as well.

“And then,” adds Deming, “we just went crazy on details, details on everything. Lane markings for traffic, signage on the sides of the streets. We created all sorts of things, down to – and I don’t think anybody sees it – but there’s discarded cigarettes, cigarette wrappers and stuff like that in the gutter. Not too many, because it’s a fascist state and they would fine you at the very least for that! But all those kinds of details we sort of peppered in there.”

Shots on the deck of the battleship were filmed against greenscreen.

The final scene, with digital ship and environment.

By TREVOR HOGG

Concept art of the Mi-8 helicopter flying with the money net over the coca valley. 

Former Delta Force soldiers join together to rob a Colombian drug lord and undertake a harrowing Mi-8 helicopter flight through the Andes with their precious cargo in Triple Frontier, a Netflix original action film directed by J.C. Chandor (A Most Violent Year) and starring Ben Affleck, Oscar Isaac, Charlie Hunnam, Garrett Hedlund and Pedro Pascal. Production VFX Supervisor Mark Russell (Deepwater Horizon) reunited with Chandor and recruited DNEG to digitally enhance the pivotal and signature action sequence.

The mantra for Chandor and Russell was that everything had to be grounded in reality and told from the characters’ point of view. “We went on location for every single scene and therefore had something to work off of,” states DNEG VFX Supervisor Chris Keller. “There were bluescreen and greenscreen on set, but more often than not we didn’t end up using them. In fact, all of the helicopter interiors were shot on a white screen which was a strategic choice; it generated a natural light that was in line with the entire look of the film.” The helicopter buck was shot outside of a warehouse studio and placed on a hydraulic gimbal with six axes of motion. “That entire sequence of them flying up to the mountain and then crashing down was heavily previs’d so we could program the buck to simulate the same actions.”

Chris Keller, VFX Supervisor, DNEG

Concept art of the Mi-8 helicopter crashing in the coca field. 

“That entire sequence of them flying up to the mountain and then crashing down was heavily previs’d so we could program the [helicopter] buck to simulate the same actions.”

—Chris Keller, VFX Supervisor, DNEG

Aerial background plates were combined with the foreground plates shot on the buck in a backlot, a few full CG shots, and practical helicopter footage to produce the final sequence. “The helicopter background plates were shot in the Sierra Mountains and were so high up that the sun was always out,” remarks Keller. “Hawaii, which covered a huge part of the journey through the Andes, was muggy, foggy and rainy, so a lot of that is overcast.” No sky replacements were inserted into the scenes. “We had to go with what was shot.”

A 3D environment was created for the entire valley, with nine different variants of coca plants digitally constructed and rigged for simulations. “Whenever we have exterior shots of the helicopter flying and approaching the village [constructed by the art department], about half was CG plants and jungle. When we’re not close to the village, that was all CG. Once the actors are down in the field, the vegetation around them was always real, as was the crashed helicopter. But anything behind the helicopter was CG coca plants and jungle built on a real plate.”

BEFORE: Hawaii, where principal photography took place, does not have the snowy mountains required for the film.

AFTER: The snowy mountains were added digitally based on photography taken of the Andes in Argentina. 

“Hawaii doesn’t have snowy mountains, which was something we needed to tell the story that they’re in the middle of the Andes. Mark had put a list together of locations in Argentina that he wanted us to cover, and we sent out a unit for a week that captured plenty of photography of the Andes. We were able to dress those in the background.”

—Chris Keller, VFX Supervisor, DNEG

BEFORE: The helicopter buck was shot outside of a warehouse studio and placed on a hydraulic gimbal with six axes of motion.

AFTER: The white screen, which was chosen because it generated a natural light, was replaced with plate photography.

“Hawaii doesn’t have snowy mountains, which we needed to tell the story that they’re in the middle of the Andes,” continues Keller. “Mark had put a list together of locations in Argentina that he wanted us to cover, and we sent out a unit for a week that captured plenty of photography of the Andes. We were able to dress those in the background.”

An actual Mi-8 helicopter was used during the production. “We tried to make the Mi-8 fly the general beats that were happening in the story. Because the Mi-8 is such a beast of a machine, it couldn’t do all of the necessary dramatic flying that was required. There were some shots that we could get away with adding smoke to the real helicopter, and in other ones we completely replaced it. When the full coca valley is revealed for the first time, that ended up being 90% CG. That establishing shot was based on a plate with a real flying helicopter, which was a great basis for the visual effects work; it allowed us at any moment to compare with the plate and to make sure that we were 100% photoreal.”

BEFORE: A grey model of the terrain as seen from the helicopter. 

AFTER: The overcast skies were kept from the plate photography. 

“There were bluescreen and greenscreen on set, but more often than not we didn’t end up using them. In fact, all of the helicopter interiors were shot on a white screen which was a strategic choice; it generated a natural light that was in line with the entire look of the film.”

—Chris Keller, VFX Supervisor, DNEG

BEFORE: An actual Mi-8 helicopter flies with the money net. 


By KEVIN H. MARTIN

Aladdin (Mena Massoud) in Disney’s live-action Aladdin, directed by Guy Ritchie. (Images courtesy of Disney Enterprises.)

Disney’s wildly successful and critically acclaimed 1992 animated feature Aladdin, co-directed by Ron Clements and John Musker, and starring Robin Williams as Genie, was merely one of many animated dramatizations of the classic story produced since the 1926 feature The Adventures of Prince Achmed. The Disney adaptation drew on character and story elements from 1940’s live-action The Thief of Bagdad (itself a remake of 1924’s Douglas Fairbanks silent-era spectacular), a visual extravaganza that featured pioneering bluescreen work in a color film by Special Effects Director Larry Butler, which earned the film an Oscar for its effects.

The success of the Disney feature led to a pair of direct-to-video sequels and a TV series, plus a Broadway adaptation. After some gestation, a live-action adaptation was announced, with Guy Ritchie signed to direct. A veteran of the Ritchie-directed Sherlock Holmes films, VFX Supervisor Chas Jarrett was tasked with both technical and creative control on the project, which included a major contribution from Industrial Light & Magic, as well as work from Hybride Technologies, Base VFX, Magiclab, Host VFX and One of Us. DNEG provided stereo conversion for the 3D release, while Ncam and Nvizage handled on-set tracking and virtual camera functions.

Jarrett, an Oscar nominee for Poseidon and BAFTA nominee for Charlie and the Chocolate Factory, who won a VES Award for the first Sherlock Holmes feature, began his career at MPC. After a decade there, he became an independent supervisor, and in recent years, he headed up VFX efforts on both Pan and Logan. “Our story is very faithful to Disney’s original animated feature,” Jarrett readily acknowledges. “There were many inspirations we could take from it, but it was never our intention to lean too closely towards such an iconic film [for the live-action film]. It was important for everyone on the project that we stood on our own feet.”

Towards that end, pre-production commenced with what Jarrett characterizes as a blank slate. Production designer Gemma Jackson, an Emmy winner for John Adams and Game of Thrones, who had recently collaborated with Ritchie on his King Arthur: Legend of the Sword, put her team in the art department to work immediately by sourcing volumes of reference material and beginning visualization for all aspects of Agrabah, the thriving desert city featured in the film.

“At the same time, we began developing storyboards, animatics and previs with an internal VFX team of around 35 people, who ultimately produced around 40 minutes of animatics and previs for the film,” Jarrett continues. “This was mostly used to establish the more choreographed musical sequences in the film. We also brought on board a large team of traditional animators who were instrumental in developing character ideas and performances.” Proof Inc. also contributed to both previsualization and postvis efforts.

Another major issue during prep revolved around vendor selection. “Choosing the right team for a project like this is crucial,” he emphasizes, “in part, because it’s such a smorgasbord of creative and technical challenges. So my first port of call was to ILM and two of their VFX supervisors, Mike Mulholland and David Seager. Mike was the Overall ILM Supervisor while David joined me on set as our 2nd Unit VFX Supervisor, and then headed back to Vancouver to manage the team there. Also on board early was Animation Supervisor Tim Harrington, joined later by Steve Aplin – two fantastically creative collaborators.” Other supervisors include ILM London’s Mark Bakowski, Daniele Bigi, Animation Supervisor Mike Beaulieu and Jeff Capogreco.

Aladdin meets the larger-than-life CG Genie (played by Will Smith). Genie was realized by the digital character and FX simulation teams at ILM.

Aladdin and Jasmine (played by Naomi Scott) flee from palace guards on the streets of Agrabah, the thriving desert city featured in the film. (Photo: Daniel Smith)

“[Director] Guy [Ritchie] was never inclined to dwell on ‘magical’ effects for long, so they tend to be quick and percussive.”

—Chas Jarrett, VFX Supervisor

Aladdin’s sidekick Abu, who was in the original feature animation, was entirely digital in the live-action version, and based on a Capuchin monkey.

Aladdin, the street rat with a heart of gold, and Genie. (Photo: Daniel Smith)

Aladdin stands by as Genie is about to make his appearance from a mini tornado of blue vapor.

An extensive backlot set served as the backstreets of Agrabah and the main parade ground in front of the palace gates.

Given Guy Ritchie’s trademark gritty approach to filmmaking, it was no surprise that this fairytale would be one with an edge. “Guy was clear from the outset that the film had to take place in a viably real world that felt tangible and authentic,” states Jarrett. “For us, this meant that while there’s a strong fantasy element to the story, the world needed to feel grounded with environments and characters that were plausible. Guy is very open to trying new technical methodologies in his films, and we certainly pushed those boundaries for this project, but working in real sets and locations as much as possible was always his preference. Where we used digital sets and extensions we took great care to base our work on scans and plates of real places to ensure that we stayed grounded in ‘reality’ – which meant the environments were inspired by real locations. So Giles Harding, my on-set supervisor, oversaw LIDAR and photogrammetry scanning of locations in Morocco and Jordan, as well as a hugely detailed capture of all our amazing sets.”

Marwan Kenzari is the powerful sorcerer Jafar. (Photo: Daniel Smith)


By TREVOR HOGG

Machine-learning software was trained to capture micro-movements that made for a more believable facial performance. In front is Chris Evans, who plays Steve Rogers/Captain America. (Images courtesy of Framestore and Walt Disney Pictures/Marvel Studios.)

Along with ILM, Framestore was responsible for creating Smart Hulk, the peaceful coexistence of scientist Bruce Banner and his volatile, green alter ego, who makes his first appearance in Avengers: Endgame.

“ILM started developing Smart Hulk probably during Infinity War,” states Framestore Visual Effects Supervisor Stuart Penn. “Before the beginning of Endgame they turned over the base model, textures and some of the face shapes. From that, we developed our own muscle system, added to the face shapes, built our own face rig, and were provided reference by the client of turntables and facial workouts that ILM had done to make sure our facial performances were matching.”

Stuart Penn, VFX Supervisor, Framestore. 

Smart Hulk was the most complex facial rig created by Framestore. The two key areas to translate were the movement of Mark Ruffalo’s mouth and the way his eyes moved. 

“It was down to the level where [Mark Ruffalo] has specific muscle structures in his face, and it was translating those movements into Hulk. The two key areas were the movement of his mouth and the way his eyes move. It was the essence of Mark’s performance coming through on quite a different face.”

—Stuart Penn, Visual Effects Supervisor, Framestore

Machine-learning software was trained to achieve certain keyframes in specific shots.  “That allowed us to generate a quick first-pass animation for the entire sequence,” explains Penn. “We would quickly render it out and send it back to the client so they had something quickly for their edit. We then moved to keyframe animation on the facial stuff to get what we weren’t getting from the solves. The solves also gave us a micro-movement pass that we could extract from the machine-learning data.”

Because Smart Hulk is a hybrid of Hulk and Bruce Banner, there was a heavier reliance on the performance capture of Mark Ruffalo. “It was down to the level where he has specific muscle structures in his face, and it was translating those movements into Hulk. The two key areas were the movement of his mouth and the way his eyes move. It was the essence of Mark’s performance coming through on quite a different face.”

Smart Hulk is slightly less green in Endgame, as the desire was to have a more human subsurface skin coming through underneath. At right is Scarlett Johansson, who plays Natasha Romanoff/Black Widow, and at left is Chris Evans.

“It was the most complex facial rig that we’ve ever done. It had many more shapes and additional controllers for the animators to get that fine level of performance, and then all of the dynamic stuff and machine-learning micro-movement that we then plugged in on top.”

—Stuart Penn, Visual Effects Supervisor, Framestore

“It was the most complex facial rig that we’ve ever done,” continues Penn. “It had many more shapes and additional controllers for the animators to get that fine level of performance, and then all of the dynamic stuff and machine-learning micro-movement that we then plugged in on top. The body rig was similar to other rigs that we’ve done before, but we didn’t have a full, working muscle system in there, which we needed.  Marvel was keen that he’s muscly and quite bulky, so when he moves and flexes his arms you actually see those muscles moving and sliding underneath the costume. We had to overdo the muscles to read them through the cloth of the costumes.”

Smart Hulk needed to be able to convey a wide range of emotions. “The hardest thing was that he was doing comedy performances quite a bit. Comedy is all about timing and subtle changes in expressions. We would always go back to watch what Mark did to see what it was about his performance that told you what his emotional state was.”

A more finessed approach was adopted for the animation of Smart Hulk since he is capable of graceful and delicate movements. At right is Chris Evans.

Smart Hulk (Mark Ruffalo) was rigged to believably handle a wide range of emotions, including comedy, like in this moment with Chris Evans.

“Marvel was keen that he’s muscly and quite bulky, so when he moves and flexes his arms you actually see those muscles moving and sliding underneath the costume. We had to overdo the muscles to read them through the cloth of the costumes.”

—Stuart Penn, Visual Effects Supervisor, Framestore

Unlike his unruly predecessor, Smart Hulk is capable of graceful and delicate movements. “You wouldn’t have caught him picking up a pencil and poking the keyboards, but now he is,” states Penn. “These are precise actions, rather than Hulk smashing, grabbing a car and throwing it. It was a different level of animation. It was much more finessed.”

Having Smart Hulk interact with the console was straightforward. “We found and filmed people with chunky hands to see the way all of the muscles and tendons moved and simulated that on top as well. We had him eating ice cream as well. We were trying to work out what sort of instrument does he use? Does he use a big or little spoon? His hands are disproportionately big compared to his mouth. It was tricky finding the right size spoon for him to hold in his fingers without it looking uncomfortable.”

The entire hangar environment was built in CG along with the exterior view of the Avengers’ compound. 

“You wouldn’t have caught him picking up a pencil and poking the keyboards, but now he is. These are precise actions, rather than Hulk smashing, grabbing a car and throwing it. It was a different level of animation. It was much more finessed.”

—Stuart Penn, Visual Effects Supervisor, Framestore

The time-travel suits were a digital creation. 

“Our animators love to perform and filmed each other acting out things so they could then use that as reference beyond what we had,” adds Penn. “We got the designs of the costumes and built real copies of them. We dressed up the biggest and chunkiest in the office and filmed them to give us reference as to how the costume might move. Mark isn’t anywhere as heavy as Hulk and doesn’t move in the same way. His footsteps are nowhere near as big as Hulk, so it’s a definite interpretation of Mark’s body performance into something much bigger. Trying to keep it alive and weighty was definitely a challenge.”

Smart Hulk is slightly less green in Endgame, according to Penn. “There was a desire that we feel the human subsurface skin coming through underneath. We put a pinkish, subsurface glow underneath. We pushed that quite a lot, especially in the cheek area and around the ears, to give him more of a human, fleshy surface underneath the green that’s on top.”


By IAN FAILES

New ways for actress Emilia Clarke to ride Drogon were developed, including pre-animating a hydraulic buck for convincing motion. (All Game of Thrones images copyright HBO.)

Visual Effects Supervisor Joe Bauer has a very honest answer when asked what it was like working on the multiple award-winning and now stunningly completed HBO series, Game of Thrones.

“It was an ongoing panic,” admits Bauer. “You know the dreams you used to have, where you dreamed you were on the school bus and you looked down and you’re in your underwear? It felt like that.”

Similarly, Visual Effects Producer Steve Kullback – the other half of the crack VFX team that shepherded the groundbreaking work in the David Benioff and D.B. Weiss show – recalls the monumental task that lay before them each season, including this past final eighth season.

“When we look back on what’s been done,” states Kullback, “obviously we’re all enormously proud. It’s an amazing team of insane overachievers that have just pushed and pushed to deliver on a vision that Dan and David had.”

Visual Effects Supervisor Joe Bauer (left) with Visual Effects Producer Steve Kullback scouting in Bardenas Reales, Spain for “The Spoils of War” episode. (Image courtesy of Steve Kullback)

On the set of “Battle of the Bastards” in Saintfield, Northern Ireland, with (from left) Kristofer Hivju, Fabian Wagner, Joe Bauer and Steve Kullback. (Image courtesy of Steve Kullback)

Daenerys rides Drogon in “The Spoils of War.”

The dragons of Game of Thrones grew in size as each season was released.

Ice Dragon. Things turn dangerous when the Night King is able to commandeer one of Daenerys’ dragons.

“[Showrunners] Dan [Weiss] and David [Benioff] wanted to max out and go for the gusto. While we were not doing anything we hadn’t really done before, we were doing so much of it, and so much of it is so complex that it was really frightening. Ultimately their support and participation has been the linchpin to us being able to do our jobs, because they give us an enormous amount of creative freedom, and the freedom to execute it in the way that we think is the right way to do it.”

—Steve Kullback, Visual Effects Producer

The dragons make their mark in the frozen lake battle.

Real photography, CG imagery and effects simulations helped bring part of The Wall down.

A wight bear attack was one of the highlights of Season 7.

To accomplish skeletal wight effects, visual effects artists took live-action actors with makeup and prosthetics and removed parts of the bodies and faces digitally.

The city of Braavos, one of the many environments required to be built for Game of Thrones.

WHY GAME OF THRONES WAS DIFFERENT

There’s no shortage right now of feature-film-quality visual effects work in the world of television. So what do Kullback and Bauer think lies behind the success of Game of Thrones’ visual effects, both in terms of accolades (including multiple Emmys and VES Awards) and in helping to bring large audiences to the show?

One aspect Kullback highlights is their own take on shooting everything that can possibly be shot photographically. “It gives us not only an insane number of elements that go into our shots,” he says, “but also insane reference to inform those parts that have to be CG. I think that has probably single-handedly upped the bar, and it’s something we hadn’t been accustomed to even striving for before, in television and a lot of times in features, too.”

Indeed, this final season’s visual effects were certainly ‘upped,’ with Bauer suggesting that “any one of our complex shots would have been the highlight of any previous season. Most of our shots were complex shots this year, with more than two and sometimes eight layers of photographed elements.

“The reason we went so photography-heavy,” adds Bauer, “was the concern of the post-production time, in that with a feature you’ll have much more time to develop your assets and the sim work – all the stuff that you would otherwise shoot. We didn’t have the time to do that, and I wanted to go into post with photography that covered all the bases. Then the CG was all about holding it together. We’ve stuck to that because we really liked the way it looks. The thing is, as the shots get more complex, you end up needing to shoot more elements if you’re going to follow the same philosophy.”

As the VFX work became more complex season over season, Kullback (who joined the show in Season 2) and Bauer (who started in Season 3) found themselves investing more time in the planning stages. They also learned they could delegate more via a large team of virtual and previs supervisors, and an army of on-set effects supervisors working with different units, including motion control. “Once upon a time we’d be the people running the little dragon heads on sticks around the set,” says Bauer.

The duo says their approach to the visual effects work took a major leap in Season 3 for a moment in which Daenerys orders one of her dragons to kill the slaver Kraznys. “It is the first time that a dragon roasted somebody on camera,” says Bauer. “And that was the first time we made the argument to production to set up an on-set fire stunt rather than doing CG fire.”


By IAN FAILES

Performers wearing Vicon’s motion-capture suits act out a scene in a capture volume.

If you’re looking to bring to life a CG human, character or creature with the aid of any kind of performance capture, there’s now a bevy of options at your disposal. Among those are many different motion-capture suits – ranging from optical to inertial systems, as well as ‘faux mocap’ tracking suits – and facial-capture setups.

VFX Voice asked several mocap vendors and visual effects studios about their various suit and head-cam offerings.

OPTICAL SYSTEMS

In optical motion capture, infrared cameras pick up the light reflected back from retroreflective markers on the suits (in a ‘passive’ system). ‘Active’ systems allow cameras to sync up to strobing markers on the suits. Each camera sees a marker from its own 2D perspective, and when all of those 2D views are reconstructed together, the marker’s 3D position in space can be calculated.
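
That reconstruction step – combining each camera’s 2D view of the same marker into one 3D position – is at heart a triangulation problem. The sketch below illustrates the principle with a simple linear (DLT-style) least-squares solve; it is not the solver any particular vendor ships, and the function name, inputs and calibration matrices are illustrative assumptions.

import numpy as np

def triangulate_marker(projection_matrices, pixel_coords):
    # Hypothetical illustration only: estimate one marker's 3D position from its
    # 2D detections in several calibrated cameras via a linear least-squares solve.
    #   projection_matrices: list of 3x4 camera matrices (intrinsics x extrinsics)
    #   pixel_coords: list of (u, v) detections of the same marker, one per camera
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_coords):
        # Each camera contributes two linear constraints on the homogeneous point X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The best solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)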

Vicon motion-capture suits include a hat, gloves, overshoes and markers that can be glued or taped onto the suits – there are facial markers, too. “The suit has been designed to be comfy to wear, capable of dealing with stunt work and covered in Velcro so we can attach markers to it,” outlines Vicon VFX product manager Tim Doubleday. “The reflective markers are essentially a molded base covered in reflective scotch tape. They can vary in sizes and are small enough to go on fingers.”

Motion-capture performers stage a fight scene in Vicon’s motion-capture suits and Cara helmets.

A performer stands in a T-pose to calibrate the Vicon gear.

OptiTrack’s newly released motion-capture suit.

Meanwhile, OptiTrack also offers a full range of motion-capture and tracking solutions, including a new suit. “The new suits were designed to do what the older suits do – only better,” says OptiTrack chief strategy officer Brian Nilles. “They are now antimicrobial and more breathable than before, offering a better fit and more flexibility, which provides performers with exceptional freedom of movement and comfort over long recording sessions. Our popular X-base markers adhere much better to the new suits, making them nearly impossible to knock off during performance capture, and it allows the performers and the mocap technicians to focus on the performance rather than the tech.”

Also providing optical systems is Fox VFX Lab, which was formerly Technoprops. They’ve made significant innovations in virtual cameras and simul-cams, something Technoprops founder Glenn Derry (now team leader, Fox VFX Lab and VP, Visual Effects, at Fox Feature Films) helped pioneer on Avatar. The Fox VFX Lab suits, gloves and marker patches are made by 3 x 3, with the markers themselves from MoCap Solutions. The shoes are Nike with mocap fabric sewed on. “The mocap suits with the gray fabric and colors make it easy to pick out which actor you are looking at when using reference video to make performance selects,” says Derry. “If everyone is wearing solid black the editor’s job is more difficult. Plus, who doesn’t like festive colors?”

The head rigs from Fox VFX Lab are designed and built in-house. “The original version of our head rigs was designed and manufactured during the production of Avatar, though at the time the camera was singular and standard definition,” describes Derry. “The head rigs have been continuously improved upon feature-wise for the last 13 years. We use them every day on productions ourselves under the most demanding conditions in the business.”

Close-up on OptiTrack’s suit fabric and optical markers.

Two performers captured together in Fox VFX Lab’s capture volume.

Fox VFX Lab’s motion-capture suit and head rig.

Xsens’ MVN Link (right) and MVN Awinda (left) suits.

Mocap Websites

Dynamixyz: www.dynamixyz.com

Faceware: www.facewaretech.com

Fox VFX Lab: www.foxvfxlab.com

Optitrack: www.optitrack.com

Perception Neuron: www.neuronmocap.com

Rokoko: www.rokoko.com

Vicon: www.vicon.com

Xsens: www.xsens.com

INERTIAL SYSTEMS

Inertial, or magnetic, motion-capture systems use magnetometers, accelerometers and gyroscopes, all within a contained cable system that tends to zip into some kind of lycra suit. No calibrated volume is required.
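
Because there are no cameras involved, each sensor’s orientation has to be estimated by fusing its gyroscope, accelerometer and magnetometer readings over time. The toy sketch below shows the basic principle with a simple complementary filter for pitch and roll; commercial suits such as those from Xsens or Rokoko use far more sophisticated fusion algorithms, and the function name, inputs and blend factor here are illustrative assumptions.

import numpy as np

def complementary_filter(gyro_rates, accel_samples, dt, alpha=0.98):
    # Toy sensor-fusion sketch for one body-worn IMU: blend integrated gyroscope
    # rates (smooth but drifting) with the gravity direction seen by the
    # accelerometer (noisy but drift-free) to estimate pitch and roll over time.
    #   gyro_rates: iterable of (pitch_rate, roll_rate) in rad/s
    #   accel_samples: iterable of (ax, ay, az) accelerometer readings
    #   dt: sample period in seconds
    pitch, roll = 0.0, 0.0
    estimates = []
    for (gp, gr), (ax, ay, az) in zip(gyro_rates, accel_samples):
        acc_pitch = np.arctan2(-ax, np.hypot(ay, az))  # gravity-based reference
        acc_roll = np.arctan2(ay, az)
        pitch = alpha * (pitch + gp * dt) + (1 - alpha) * acc_pitch
        roll = alpha * (roll + gr * dt) + (1 - alpha) * acc_roll
        estimates.append((pitch, roll))
    return np.array(estimates)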

Xsens offers suit, sensor/tracker and software solutions. MVN Link and MVN Awinda are Xsens’ main offerings, with the suits using 17 sensors embedded or wirelessly strapped to the body of the performer. “The MVN Link is a full-body, camera-less mocap suit that is connected to a wireless data link and used for high-dynamic movements, like fighting scenes and fast maneuvers,” explains Xsens product manager Hein Beute. “MVN Awinda is the fully wireless version with wireless sensors built into it. It has greater data collection capabilities and can also be used to capture multiple subjects at once.

“The defining characteristic that sets our solution apart from others is the magnetic immunity,” says Beute. “It is the only system that provides magnetic immunity to this level, because of the sophisticated sensor fusion algorithms which took us many years to develop. We can also do multi-level motion capture, which is hard for most other inertial motion-capture systems.”

Perception Neuron also offers a full-body wireless motion-capture system using inertial measurement unit technology. Their Neuron suit comes with a network of straps that house the inertial sensors, known as ‘neurons.’ Full hand and finger tracking is part of the set-up for the 2.0 option. “The system includes the Axis Neuron software, which allows for up to five actors at a time to be recorded or streamed live, and all of the hardware including sensors and straps,” says Perception Neuron’s chief motion-capture technologist, Daniel Cuadra.

“Perception Neuron offers accessibility to all levels of users from professionals to beginners, and is also used by many educational institutions due to its versatility and simple set up,” states Cuadra. “Perception Neuron is also the only motion-capture solution to include full-finger tracking at no extra cost.”

One of the newer entrants in the inertial space is Rokoko, which makes a wireless motion-capture body suit that is currently pitched at indie developers and smaller studios who might not have had access to mocap previously. The Smartsuit Pro contains 19 sensors that are part of a sports textile and mesh suit, with straps positioned wherever there are sensors. Using the gyro, accelerometer and compass tech in the sensors, the system solves the capture data onto a body model on a hub built into the suit, and the result is transmitted over Wi-Fi to a computer or smart device.

“We wanted it to be this personal mocap system,” notes Rokoko founder and CEO Jakob Balslev, “one that you could almost write your own name on, and have there to use anytime. Above all, we wanted something that was intuitive to everybody, something that one person would be able to set up, one person could operate and get started.”

FACIAL CAPTURE

Two of the main players focusing on facial-capture hardware (and which also have software solutions) are Faceware and Dynamixyz. Head-mounted camera systems from Faceware are designed to be used on set, in voice-over or ADR booths and on motion-capture stages, without the need for markers. Faceware also provides a supported software pipeline for getting the data into and out of content creation tools.

Faceware’s main offering is the ProHD Headcam, a fiberglass helmet available in three sizes and fitted with different anodized aluminum boom arms so the user can cover any number of capture situations. The camera is a micro Full HD camera – which also includes an onboard mini light box – and is powered by a separate capture belt for wireless transmission. “The real-time full resolution and full frame-rate transmission allow for the highest quality real-time tracking for live character facial animation, from real-time events to previs,” says Faceware Vice President of Business Development Peter Busch. “The signal can also be recorded for further enhanced tracking and animation.”

Dynamixyz’s facial-capture pipeline involves markerless capture, with video recording and head-mounted camera (HMC) software part of the system. The company provides a custom HMC designed to capture faces (“mostly human,” says Dynamixyz CEO Gaspard Breton, “although we would really like to test it on a pet face one day for fun”). The HMC can be either single view, with one camera facing the actor, or multi-view, with two cameras, one for each side that leaves the field of view unobstructed.

“Our main software Performer enables you to analyze data and re-target them on any 3D model and any rig,” states Breton. “Once the system is trained, it can produce massive amounts of frames throughout batch processing, with minimal rework for amazing results. Our system is also compatible with any body motion-capture system.”


By NAOMI GOLDMAN

Chris Meledandri, Founder and CEO, Illumination Entertainment (Photo: Alex Berliner) (Images courtesy of Universal Pictures and Illumination Entertainment)

Sparked at a young age by cinematic marvels on the big screen, Chris Meledandri has taken that sense of wonder and brought animation into the lives of audiences worldwide, creating unforgettable characters that have ingrained themselves in the pop culture zeitgeist.

Thanks to Meledandri’s dynamic leadership and creative approach to storytelling, the Oscar®-nominated producer, founder and CEO of Illumination has built one of the pre-eminent brands in family entertainment by developing comedic films that strike the balance between subversive and emotionally engaging, and appeal to all ages on a global scale.

Chris Meledandri receives the VES Lifetime Achievement Award from longtime collaborator Steve Carell, the voice of super-villain Gru in the Despicable Me franchise.

For his enormous contributions to the advancement and ever-increasing success of mainstream animated entertainment over the last 20 years, Meledandri was recently honored with the VES Lifetime Achievement Award at the 17th Annual VES Awards. Upon receiving the award from longtime collaborator Steve Carell – whom Meledandri first tapped 15 years ago for Horton Hears a Who – Meledandri mused on the sobering experience, “I want to thank the VES for waking me up from my blissful dream-world, where I am forever 25 and have a full head of hair!”

Since founding Illumination in 2008, Chris Meledandri has become one of the most successful creators of original film franchises. In just a decade, Illumination produced two of the five highest-grossing animated films of all time, as well as the highest-grossing animated franchise in history: Despicable Me. Collectively, the company’s nine films – which include last year’s seventh highest-grossing film Dr. Seuss’ The Grinch, as well as Despicable Me, The Secret Life of Pets and Sing – have grossed over $6 billion globally, with 2016’s Minions and 2017’s Despicable Me 3 both crossing the $1 billion mark. In recognition of Meledandri’s ability to outperform titles such as Deadpool, Iron Man and Star Wars, Deadline wrote in 2017 that their “Most Valuable Blockbuster [rankings] might have to be coined the Meledandri Tournament.”

Today, Illumination’s characters can be found worldwide in theme parks, consumer goods, social-media memes and games. Illumination’s 2013 mobile game, “Minion Rush,” has since been downloaded more than 800 million times, making it the sixth most-installed mobile game in history.

ORIGIN STORY

Looking back at what sparked his initial interest in filmed entertainment, Meledandri fondly recalls growing up in New York City with parents who loved cinema and transfixing films that helped chart his future course. His parents, Roland Meledandri, a noted men’s clothing designer, and Risha Meledandri, an activist, gallerist and poet, cultivated a movie-going household that relished auteurs like Scorsese, Kubrick and Fellini. He notes that it was only later, through his own children, that he was exposed to the wide world of animation.

“My parents didn’t particularly believe in babysitters, and they took me to see Easy Rider when I was just nine years old! Then a few months later, I entered a cavernous dark space and joined with many others as we stared at a giant screen, watching flickering lights illuminate the imagery of 2001: A Space Odyssey. We were no longer in the Ziegfeld Theatre. Kubrick had transported us into the realm of his imagination and enabled us to suspend disbelief in a manner not quite previously possible. That afternoon, while flying through the star gate, my sense of wonder was ignited, and I have been chasing that feeling ever since.”

Despicable Me (2010)

“The single most important moment in my career was asking the exceptional Janet Healy to come work with me. She has been my producing partner and invaluable to the evolution of Illumination. She is a pioneer in our business – a founding member of the VES – and the finest producer I have known.”

—Chris Meledandri

To this day, he marvels at the significance of what the team behind 2001 created and the impact it had on modern cinema.

In high school, Meledandri became interested in theater. “My mother told me that a producer provided the stage on which creative people come together to tell a story. I took her literally and immediately started constructing sets for plays, both at school and in off-Broadway theaters.

“Now, 40 years later, the nature of the stage has changed, but I am still providing the creative space, the stories, the opportunity and support for extremely gifted people to come together to create movies that bring wonder into the lives of our audiences.”

Despicable Me 2 (2013)

Despicable Me 3 (2017)

“Here we are, 12 years, 1,000 people and nine films later, with a company where every film has been directed by somebody who started with us, never having previously directed an animated feature film.”

—Chris Meledandri

ROOTS AND WINGS

Meledandri points to some of the mentors who influenced him at critical junctures. At Dartmouth College, he studied with film historian David Thomson. “Movies had long been the window through which I learned about life, but David cemented my love of cinema and my desire to make it my life’s work.

“In the early ’80s, when movies were still being scheduled on stripboards and cut on flatbeds, I spent five years with Producer Daniel Melnick, who made films ranging from Altered States to Footloose. Soon after starting, I was dispatched to bring Dan a script for a lunch meeting at the famed Ma Maison restaurant in Los Angeles. As I approached my new boss, I spotted the larger-than-life figure of Orson Welles. In that moment, I realized that I was in a land where the line between real life and cinematic magic was often blurred.”

Meledandri describes the founding of Illumination as “a dream bouncing around my head,” and he gives tremendous recognition to the extended team of actors, musicians, writers, designers, artists and technical designers who have been integral to his journey.

“The single most important moment in my career was asking the exceptional Janet Healy to come work with me. She has been my producing partner and invaluable to the evolution of Illumination. She is a pioneer in our business – a founding member of the VES – and the finest producer I have known.”

RIDING THE ROLLERCOASTER

Meledandri went out on his own to produce movies when he was 25, and waxes, “Boy, did I make some stinkers!” In 1998, when he founded Fox’s animation division, he oversaw the costly film Titan A.E., which was deemed a massive failure, losing $100 million. But he cites the experience as one of the transformational learning experiences of his career and values every opportunity to integrate lessons learned into his overall creative and operational approach.

“A few years earlier, in 1993, after executive producing Cool Runnings at Dawn Steel Pictures at Disney, I was working at Fox when a young director showed me a few sequences in..


By IAN FAILES

Westworld co-creator Jonathan Nolan on location. (Photo courtesy of HBO)

Upon receiving the VES Visionary Award at the 17th Annual VES Awards earlier this year, writer/producer/director Jonathan Nolan spoke of growing up idolizing pioneering filmmakers who innovated in visual effects. The Westworld co-creator declared that “VFX is movies.”

That comment certainly endeared him to the VFX-strong audience at the Awards, but as Nolan continued, it was clear he revered the entire visual effects process, its history and the artists involved in it. “It’s an extraordinary privilege for me to be honored by this group – this is the club that I always wanted to join,” he said.

Nolan is certainly part of that club, having been involved with some of the most significant visual effects films in recent years, writing (with his brother Christopher Nolan) on The Dark Knight, The Dark Knight Rises and Interstellar, and as a creator of the television shows Person of Interest and Westworld. His credits also include Memento and The Prestige.

HBO’s Westworld, in particular, which Nolan created with his wife, Lisa Joy, via their Kilter Films production company, arrived at a time when the bar had been set incredibly high for visual effects in television. The show has been recognized with both Emmy and VES Award accolades, and the showrunner is deeply embedded in the visual effects process.

EARLY BEGINNINGS

For Nolan, his exposure to the world of visual effects began early, as he and his brother made their own films. “We were trying to figure out, how could we emulate the filmmaking that we were seeing at the movie theater?” Nolan recounts. “Star Wars was obviously the single biggest influence for us. We were fascinated by unmasking the magic trick of those films, and trying to figure out how they did what they did.”

It turned out, too, that the movies Nolan wanted to make would be the sorts of films where visual effects would have to play a large part. “The kind of films and the kind of stories that [Chris and I] wanted to tell really didn’t map onto the physical world as it exists,” he says. “As much respect as I have for the more naturalistic kind of filmmaking, and the amazing stories that can be told right here on Earth, for me, I was always more interested in storytelling that created new realities.

“The only way to do that,” adds Nolan, “is that you have to get your hands dirty, and figure out how you’re going to make that work in front of a camera, and after the fact. And so, from the very, very beginning, that VFX challenge goes hand-in-hand with the writing, and with direction, and working with actors and music and everything else.”

Nolan on the set of Westworld with actress Thandie Newton, who plays host Maeve Millay. (Photo courtesy of HBO)

Westworld is one of the few television series shot on film. (Photo courtesy of HBO)

Jonathan Nolan with Westworld Visual Effects Supervisor Jay Worth. (Image courtesy of HBO)

The original plate for an aerial shot of the Westworld control center called ‘The Mesa.’ (Photo courtesy of HBO)

The final shot took the plate photography and added in the sprawling control center. (Photo courtesy of HBO)

“The challenge for us was, ‘Okay, well, it better look pretty damn good. If all these people are paying all this money to experience a theme park, it better look pretty great.’”

—Jonathan Nolan

Live-action plate of a train arrival scene in Westworld. (Photo courtesy of HBO)

Greenscreen elements of the plate were replaced with train and environment extensions. (Photo courtesy of HBO)

“As much respect as I have for the more naturalistic kind of filmmaking, and the amazing stories that can be told right here on Earth, for me, I was always more interested in storytelling that created new realities.”

—Jonathan Nolan

WRITING AND VISUAL EFFECTS

Nolan is the first to acknowledge that, as a writer on the Dark Knight films and Interstellar, he was able to put almost anything on the page and then leave it to others to turn his words into imagery for the screen. “One great benefit as a writer on those projects is not really being responsible in any way for, or on the line for, what you’d write,” he says. “It was Chris and [Visual Effects Supervisor] Paul Franklin’s problem to figure out how to make it work.”

A key observation Nolan made from watching his brother’s films come together was the combination of different filmmaking and effects methods to produce the final result. “Those films are meticulously, beautifully made, and they use all the different available techniques,” says Nolan. “You have miniatures work, and you have full computer graphics environments, and you have projection, and you have a lot of techniques that we then adapted into Westworld where we could.”

Indeed, on-set projection was something Nolan saw being used on Interstellar that he then became particularly fond of as a filmmaking and visual effects tool. “I remember being on the set of Interstellar and climbing into a spaceship. Production designer Nathan Crowley’s team built this beautiful interior for the spaceship, all practical, you climbed inside it, you’re in there with the actors. Then when it was time for takes, through-the-window projection starts and space appears. The pièce de résistance was the special effects team shaking the entire platform to give it movement. The thing I loved about it is that, for the actors, it requires no imagination on their part. They’re in space, you’re all there together.

“I think that’s one of the most exciting things that’s happening right now – the clawing back of reality as much as possible from the greenscreen era that we’ve been in, and having those assets ready ahead of schedule so you can feed them directly to the camera. That allows everyone on set to know exactly the story that you’re telling.”

Projection was so influential, in fact, that Nolan specifically incorporated it as a story device in Westworld – for the central map – which was done as a live projection on set. “There were a lot of people watching, imagining that it’s computer graphics – and it is – but it’s beautifully prepared assets that are then live-projected onto the set,” explains Nolan. “That was actually quite an interesting challenge, because we wanted topography on that map, so that required working with a vendor to hand-carve and create the physical shape of the map and then working very closely with the vendors to supply them with the assets.”

“The map also lifted out of the ground and turned, and we had to marry that to the movement,” says Nolan. “It was an enormous technical challenge, but came off beautifully, really beautifully. So to be able to stand on a set, and instead of putting down a little green box for people to look at, they would look at a live rendering – the illusion is essentially complete on set.”

Audiences can expect to see that approach taken further in Westworld’s third season, according to Nolan, who is looking to “take that technology off of just the map and use it to extend sets in more sophisticated ways that had been impossible to do up to this point. That’s an area in which we’re very excited about the possibilities and, without breaking confidences, we’ve been very excited at some things that other people are doing right now, and we’re trying to see what we can do about them in Season 3.”

GOING OLD-SCHOOL

One reason Nolan was drawn to the live-projection technique, of course, is that it fit neatly into the type of world he imagined for Westworld’s characters to inhabit. “One of the ironies of Westworld,” comments Nolan, “was creating this artificial world, but one that you have to imagine that people would be willing to spend tens of thousands of dollars a day to experience.

“Our take was, it had to feel more real than real,” he continues. “There can’t be any sense that it’s an illusion, because that’s what the guests are paying a lot of money for. They want a tactile, lived-in, beautiful reality that they can experience that may feel more real than where they come from. So the challenge for us was, ‘Okay, well, it better look pretty damn good. If all these people are paying all this money to experience a theme..


By JIM McCULLAUGH

D.B. Weiss and David Benioff receive the VES Award for Creative Excellence at the 17th Annual VES Awards.

When D.B. Weiss and David Benioff accepted the VES Award for Creative Excellence at the 17th Annual VES Awards in February, they readily acknowledged the contributions of all the VFX practitioners who contributed to the show.

“When George R.R. Martin wrote the book on which Game of Thrones is based,” commented Benioff, “he did so to escape the restraints that film and TV imposed on the imagination. When we first met him, he told us point blank it was meant to be unfilmable. But because of Visual Effects Producer Steve Kullback and Visual Effects Supervisor Joe Bauer, we filmed it.”

He continued, “Steve and Joe have consistently and stubbornly refused to recognize limitations. They have never been about ‘if.’ They have only been about ‘how.’ They and their team made wights, dragons, and dire wolves feel as real as the actors. Working with these gentlemen has been straight joy from the first to the last. This award rightly belongs to them. We write about dragons flying around burning shit – and they do it all.”

It’s more than fair to say that Daniel Brett Weiss and David Benioff changed the nature of TV and visual effects when they became co-creators and showrunners of HBO’s Game of Thrones. Not only is Game of Thrones, which debuted in April 2011, a smash hit worldwide, but its final 2019 season has been one of the biggest events in TV history.

Consider: It has received 308 awards from various industry groups and cinematic institutions and has been nominated 506 times. It has become the gold standard of TV (and maybe even cinema) special effects.

D.B. Weiss hails from Chicago. He earned a degree from Wesleyan University and went on to Trinity College in Dublin, Ireland, where he earned a Master of Philosophy in Irish Literature; his thesis was on James Joyce’s Finnegans Wake. After that he earned a Master of Fine Arts in creative writing at the Iowa Writers’ Workshop. It was in Dublin in 1995 that Weiss first met David Benioff.

David Benioff hails from New York City. He attended Dartmouth College and later went on to Dublin’s Trinity College where he, too, studied Irish literature, writing his thesis on Samuel Beckett. Later, he attended the Creative Writing Program at the University of California, Irvine, where he earned a Master of Fine Arts in Creative Writing.

D.B. Weiss (left) and David Benioff on the set of Season 5. (Photo courtesy of HBO. Photo: Macall B. Polay)

David Benioff (left) and D.B. Weiss on the set. (Photo courtesy of HBO. Photo: Helen Sloan)

David Benioff (left) and D.B. Weiss discuss a shot during Season 5. (Photo courtesy of HBO. Photo by Helen Sloan)

Actress Emilia Clarke, aka Daenerys Targaryen, enjoys a light moment with D.B. Weiss (middle) and David Benioff on the set during Season 7.

After their academics, both Weiss and Benioff tried their hands at professional writing. Benioff’s first published novel was The 25th Hour, which caught the attention of Tobey Maguire; Benioff adapted it into the screenplay for a film directed by Spike Lee. He then published a short story collection called When the Nines Roll Over.

Next up for Benioff was the screenplay for the film Troy. He then wrote screenplays for Stay and The Kite Runner, and co-wrote X-Men Origins: Wolverine.

Meanwhile, D.B. Weiss worked as a personal assistant to the Eagles’ Glenn Frey before reuniting with Benioff in Santa Monica, California, in 1998, where the two co-wrote and produced several scripts. Weiss wrote a novel, Lucky Wander Boy, in 2003.

The duo really took off in 2011 when they teamed with author George R.R. Martin and HBO to adapt Martin’s book series A Song of Ice and Fire. In addition to writing most of the episodes of the series, each has directed an episode, and they co-directed the series finale.

The future bodes well for the pair. They have been linked to another HBO series, Confederate, and Disney has announced that they will write and produce a series of Star Wars films. Benioff and Weiss have become two of Hollywood’s most formidable writers and producers.

As Weiss and Benioff reflect on their careers to date, they appear to have heeded the words of George R.R. Martin from his book A Dance with Dragons: “If you want to conquer the world, you best have dragons.”


By NAOMI GOLDMAN

Corman directing Peter Fonda and Nancy Sinatra in The Wild Angels.

Legendary filmmaker Roger Corman is one of the most prolific producers and directors in the history of cinematic entertainment. A trailblazer in independent film, his creative work and output of drive-in classics has earned him monikers such as “The Pope of Pop Culture” and “The King of the B Movies.” At 92 years old and with a storied career spanning more than six decades, Corman has produced upwards of 350 feature films, with a stunning track record for turning a profit on all but a handful of his wildly inventive features.

Corman’s filmography boasts one inspired title after another, including The Beast with a Million Eyes, Slumber Party Massacre, The Devil on Horseback, Swamp Women, Caged Heat, Night Call Nurses, Frankenstein Unbound, and cult classics Death Race 2000 and The Little Shop of Horrors. His diverse slate also includes The House of Usher, part of the critically acclaimed adaptations of Edgar Allan Poe stories starring Vincent Price; The Intruder, a serious look at racial integration featuring William Shatner in his film debut; The Wild Angels, which kicked off the ‘biker movie’ cycle; and The Trip, which began the psychedelic film wave of the late 1960s.

In 1964, Corman became the youngest filmmaker to have a retrospective at the Cinémathèque Française. He has also been honored with retrospectives at the British Film Institute and the Museum of Modern Art. For his influence on modern American cinema and his ability to nurture aspiring filmmakers, he received an Honorary Academy Award in 2009.

Corman on The Wild Angels set with Peter Fonda.
ORIGIN STORY

Corman was born in 1926 in Detroit, Michigan. As a young boy, he and his friends were regulars at the local theaters on Saturday afternoons, primed to catch double features. The reissue of Frankenstein and an English science fiction film, Things to Come, were among the first films to spark his imagination.

Corman initially planned to follow in his father’s footsteps and pursue civil engineering, but while studying at Stanford University his interests shifted. “I was writing for the Stanford Daily and found out that the film critics got free passes to all the theaters in Palo Alto, and one was graduating. So I wrote a few reviews and was taken on as a critic. Films had been just entertainment, but now I began to analyze them. I was more interested in this, but I was graduating and earned my engineering degree. I was the failure of my Stanford class. After three days at U.S. Electrical Motors, I quit and got the worst job among my peers – as a messenger at 20th Century Fox, delivering the mail for $32.50 a week. But it was pure passion.

“At Stanford,” Corman continues, “I enlisted in the Navy College Training Program. When I left Fox, I still had time left on the G.I. Bill, which covered education costs for service veterans. I spent a term at Oxford University before coming back home with a drive to become a screenwriter and producer. I landed a job as an assistant to literary agent Dick Hyland and started writing under a pseudonym. When I sold a script with someone else’s name on it and it came back around, Dick laughed and said, ‘As long as we get the 10% commission, it’s okay!’”

Corman with protégé Ron Howard.

Jonathan Demme directs Corman’s cameo in The Silence of the Lambs.

In 1953, Corman sold his first script, The House in the Sea, which was filmed and released as Highway Dragnet. “From that sale and other outreach, I raised a grand total of $12,000 and used it to make a low-budget film that we shot in six days, called It Stalked the Ocean Floor, which was changed to Monster from the Ocean Floor. It did well, and then I produced The Fast and the Furious – a valuable title that served me well in later deals!”

He went on to broker a multi-picture deal with American Releasing Corp., later renamed American-International Pictures (AIP). With Corman as its lead filmmaker, albeit with no formal training, AIP became one of the most successful independent studios in cinema history. Corman first took the director’s chair with Five Guns West and, over the next 15 years, directed 53 films. He quickly earned a reputation for churning out low-budget films on lightning-fast turnarounds.

Upon leaving AIP, he decided to focus on production and distribution through his own company, New World Pictures. “I had always admired the great auteurs and had a new model for the profitable regional distribution of art cinema, in addition to my films. I reached out to Ingmar Bergman to take Cries and Whispers under my cost-and-profit-sharing approach, and that started the flow of foreign films.” New World became the American distributor of the films of Bergman, Akira Kurosawa, Federico Fellini, François Truffaut and others. In a 10-year period, New World Pictures won more Academy Awards for Best Foreign Film than all other studios combined.

Corman sold New World Pictures in the 1980s, but continued his work through various companies over the years – Concorde Pictures, New Horizons, Millennium Pictures and New Concorde.

Corman directing John Hurt in Frankenstein Unbound.
STANDOUT CINEMA

Corman describes what led to The Little Shop of Horrors. “I had just made A Bucket of Blood (1959), combining horror with some humor and it did very well. I had a chance to use an adjacent set about to be torn down, so I went full steam in making a comedy with a little bit of horror thrown in – that was The Little Shop of Horrors. We used a lot of the same cast and shot it in two days on a shoestring budget, finding that sweet spot between horror and humor.”

Corman shared that The Little Shop of Horrors (1960) was almost a joke. “I had made a horror film where I carefully planned a sequence to make the audience scream. The key to that type of shooting isn’t the moment of screaming – it’s the building up and the breaking of that tension. It worked perfectly, but after they screamed there was laughter. I wondered, what did I do wrong? I understood it was appreciative laughter and started developing a theory on the relationship between comedy and horror.”

Corman cites Battle Beyond the Stars (1980) as his first attempt at making a bigger-budget sci-fi film with extensive special effects. “When Star Wars came out, I had great admiration for the film. But I thought, wow, we are in trouble, because this is what we had been doing, only bigger and better. The only way to compete was to raise my budgets and run my own studio. So I pre-sold some producing rights to Warner Bros. and put up the other half of the money to buy a site [in Venice, California] and converted it into a sound stage. The Lumber Yard was born.”

Death Race 2000 (1975) remains one of Corman’s favorite films, based on a short story about a futuristic race where the drivers don’t only drive to win, but to knock other cars off the road. “I started thinking about violence and gladiator games and the bloodthirsty role of the audience. My idea was for drivers to get points for knocking off other cars and for how many pedestrians they could kill. It was a huge success and was voted somewhere as the greatest B picture of all time. To this day, when I hear ‘$20 for the little old lady in the crosswalk,’ I still chuckle.”

THE SCHOOL OF CORMAN

Legions of filmmakers and actors got their start with Corman, who has a remarkable penchant for spotting and cultivating talent. Among his protégés are Francis Ford Coppola, Ron Howard, James Cameron, Martin Scorsese, Jack Nicholson, Sandra Bullock, Robert De Niro, Peter Bogdanovich, Jonathan Demme, Dennis Hopper, Sylvester Stallone, William Shatner, John Landis, and even current VES Board of Directors Chair Mike Chambers. Many of them have paid their respects by giving him cameos in their films, including The Silence of the Lambs, The Godfather Part II, Apollo 13, The Manchurian Candidate and Philadelphia.

He speaks with a sense of parental pride about some of his alumni.

Francis Ford Coppola: “In the late 1950s, I saw some beautiful Russian science fiction films. I went to Moscow and bought the American distribution rights. But the films had a lot of anti-American propaganda in them. I called the UCLA Film School and asked for the most promising editor among their graduate students, and they sent over Francis Ford Coppola. So Francis’ first job was cutting the anti-American propaganda out of Russian science fiction films.

“I was shooting a Grand Prix Formula One picture in Europe called The Young Racers and Francis was the 2nd AD, and we had rebuilt a Volkswagen microbus into a traveling mini studio. Once the picture wrapped, since I had the nucleus of a crew and the costs were covered, I decided to make another film and gave Francis the opportunity to make his directorial debut with Dementia 13.”

Jack Nicholson: “When I started directing, my engineering background enabled me to learn the camera and editing fairly quickly, but I didn’t know enough about acting. So I enrolled in a method acting class. Jack was in it and was just 19 years old, but clearly the best actor in the class, so I gave him his first role in The Cry Baby Killer. And he was on his way.”

Corman with Bridget Fonda and John Hurt on the set of Frankenstein Unbound.

