Why The Walk’s VFX Team Used the Largest Amount of Cloud Computing in Film History
Director Robert Zemeckis’s long-gestating project The Walk, about Philippe Petit’s once-in-a-lifetime high-wire walk between the Twin Towers of the World Trade Center on August 7th, 1974, finally reached the screen this month, in glorious 3-D with breathtaking visual effects. We spoke by phone with VFX Supervisor Kevin Baillie and VFX Producer Camille Celluci about how the production team was able to re-create both the Twin Towers and the 1974 New York skyline in such immense detail on a tight budget and production schedule, with a little help from the Cloud…
What was your pre-production research like for a CGI project this size, with a meticulous director like Robert Zemeckis?
Kevin Baillie: There was certainly a huge amount of work to recreate the Towers, and New York also looks incredibly different now than it did in 1974, so we did an immense amount of research: gathering reference photography, Google images, microfiche research at the library – we even called the NY Port Authority for information – a huge amount of information gathering. That was a joint effort between the Visual Effects team and the Production Design team, as they had to build a practical section of the rooftop for the actors to interact with.
Camille Celluci: I just thought of something. Bob [Zemeckis] has been trying to make this film for nine-plus years, and sometimes something happens when the time is right – one of the benefits of working on something for that long, besides the evolving film technology, is all the resources we have on the internet now versus ten years ago. Now there is a huge database [of research material] – a lot of people’s personal photos we [found] online – so we got the experience of the Towers into the film not just from official news documentation; we got people’s personal feelings of it, and that helped a lot.
Kevin Baillie: Production built a 40-foot-by-60-foot L-shaped section of one corner of one of the Towers’ roofs – that’s all we had stage space and budget for, because we also had to run about 100 feet of cable for all the wire-walking action. The rest was a massive sea of green screen. Making sure that the stage build was accurate to the Towers was important, and then we just extended it 1,400 feet to the ground [via CGI].
One thing to note about that – we had the blueprints for the Towers, which was awesome and helpful, but even though it’s every architect’s dream to have their blueprints rendered perfectly to the micrometer, that’s not how buildings are actually built. They’re built by people, using materials that have variances in them, and even a slight difference in a bolt or panel here or there is going to create a little bit of imperfection. So we actually had to spend several weeks, after our [CGI] Towers were built, introducing imperfection into them so that they didn’t feel “CG fake.”
So your digital artists had to go in and manually add imperfections into the CGI render? Could they do them all at once, automatically?
Kevin Baillie: We tried to do a mass-based process that did it all at once [made all the imperfection changes at once], by applying mathematical noise to it [the CGI render of the Towers], to get it to be imperfect, and it didn’t look right – it actually made the Towers look almost small. So we hit the reset button on that and had an artist go in and spend two weeks tweaking individual panel gaps, a millimeter here and there, with unique variations, crafting a look that felt right. So panels might be individually altered to reflect whether they came from different foundries, or had been sitting in the weather longer than other panels [based on the research]. The panel variations had to be very subtle. We called it the “Tetris” pattern, because when you looked at it, it looked like a subtle game of Tetris. All those things, plus little drips of dirt and other things painted in by artists, contributed to a look that was ultimately believable.
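To illustrate the contrast Baillie describes – independent mathematical noise versus structured, batch-aware variation – here is a minimal Python sketch. It is entirely hypothetical: the function name, batch sizes, and offsets are invented for illustration and are not Atomic Fiction’s actual tooling. The idea is that panels sharing a “foundry batch” get a common base offset plus a tiny per-panel tweak, so the variation reads as structured (the “Tetris” look) rather than as uniform noise:

```python
import random

def batch_panel_offsets(rows, cols, batch_size=4, max_offset_mm=1.0, seed=7):
    """Toy illustration: group panels into hypothetical 'foundry batches'
    that share a base gap offset, then add a sub-millimeter per-panel
    tweak. Grouped variation looks structured; independent per-panel
    noise looks uniformly (and unconvincingly) random."""
    rng = random.Random(seed)
    offsets = [[0.0] * cols for _ in range(rows)]
    for r0 in range(0, rows, batch_size):
        for c0 in range(0, cols, batch_size):
            base = rng.uniform(-max_offset_mm, max_offset_mm)  # shared per batch
            for r in range(r0, min(r0 + batch_size, rows)):
                for c in range(c0, min(c0 + batch_size, cols)):
                    jitter = rng.uniform(-0.1, 0.1)  # tiny per-panel variation
                    offsets[r][c] = base + jitter
    return offsets

offsets = batch_panel_offsets(8, 8)
```

In this toy version, neighboring panels inside one batch differ by at most a fifth of a millimeter, while batch boundaries can jump by up to two millimeters – a crude stand-in for what the artists tuned by hand over two weeks.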
Camille Celluci: One of the things the reference photos revealed was that you could actually see through the windows and make out desks and chairs and office furniture. At first you think the Towers are so massive you couldn’t make out anything inside the building, but in the photos you could make out this furniture. We didn’t think we’d need to build any interiors except for some lighting to reflect the time of day or night. But then Atomic Fiction did a test and built some of that furniture into the interiors, and while you can’t actually distinguish the individual pieces, it completely makes a difference to the realism. In the test they added furniture to one floor and left the other floor blank, and when we looked at it we said, ‘we’ve got to add all the furniture.’ And all of that detail required a ridiculous amount of compute power.
Kevin Baillie: Between that and building out 1974 New York in great detail – every rooftop AC unit, rain gutter, and hot dog stand in the street was built out – we had to do a ton of render processing to get this movie onto the screen on time and on budget. And that level of processing is really expensive. Normally you’d have your computer power, electricity, floor space, maintenance costs – it would require enormous physical data centers, and we didn’t have the time to build that [infrastructure]. So we made use of a tool created at Atomic Fiction called Conductor, which lets artists do all their render processing in the cloud. This movie is the biggest use of cloud computing in the history of cinema: we did about 9.1 million processor-hours of computing to get it done in time – which would take over 1,000 years on a single processor.
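As a quick back-of-the-envelope check of the figure Baillie quotes, 9.1 million processor-hours really does work out to just over a thousand years of single-processor time:

```python
# Sanity check: 9.1 million processor-hours on one processor.
core_hours = 9_100_000
hours_per_year = 24 * 365            # 8,760 hours in a (non-leap) year
years_single_core = core_hours / hours_per_year
print(round(years_single_core))      # → 1039, i.e. over 1,000 years
```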
How does that work?
Kevin Baillie: Essentially, on the back end of Conductor is Google Cloud Platform, which is basically Google’s excess capacity that we’re using. It’s great because we had teams in both California and Montreal working on this material, and since Google is obviously a worldwide operation, it has a crazy huge fiber network. So the teams were able to collaborate through Conductor between the two offices in the two different countries, and also around the world with the other VFX vendors on the production.
Instead of having giant, hot, power-hungry, depreciating data centers sitting at one of our locations, we were able to scale up to sometimes 15,000 processors at once, on demand, and when we were done, it would go down to zero. And we’d only pay for what we used. It’s treating computing like electricity. By doing it that way, our artists not only got feedback much quicker; we actually saved about 50% of what it would have cost doing it the traditional way.
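The economics Baillie describes come down to utilization. A sketch with the numbers from the interview – plus one assumed figure, a one-year render window, which is hypothetical and only for illustration – shows why an owned farm sized for the 15,000-processor peak would mostly sit idle:

```python
# Illustration only: the 9.1M processor-hours and 15,000-processor peak
# come from the interview; the one-year render window is an assumption.
core_hours_needed = 9_100_000
peak_cores = 15_000
render_window_hours = 24 * 365       # assumed: one year of wall-clock time

# An owned farm sized for the peak is paid for 24/7, busy or idle.
owned_core_hours_paid = peak_cores * render_window_hours

# On-demand cloud capacity is paid for only while jobs actually run.
cloud_core_hours_paid = core_hours_needed

utilization = cloud_core_hours_paid / owned_core_hours_paid
print(f"owned-farm utilization: {utilization:.0%}")   # → 7%
```

Under these assumptions, roughly 93% of a peak-sized owned farm’s paid capacity would go unused – which is the “computing like electricity” point: paying only for consumed processor-hours is what makes bursting to 15,000 processors affordable.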
I would assume this technology has only really become financially viable in the last 3-5 years?
Camille Celluci: It’s actually even less than that.
Kevin Baillie: Yes – I’m VFX Supervisor on this film, but five years ago I co-founded Atomic Fiction, one of the vendors on the film, right when cloud computing was becoming a thing, and we were like, ‘this is our business. We need to leverage the cloud, because we don’t want to build a data center.’ If we had gone the traditional route of investing in a data center, we’d already be halfway through our second refresh cycle [on hardware]. At the time everybody told us, ‘you’re nuts, this is never going to work.’ And I’m glad we didn’t listen to those people!
This film is a testament to how technology can help a filmmaker get a vision onto the screen that is grander than what the budget is, and do it responsibly.
Given Zemeckis’s shooting style on this film – longer-held shots with greater depth of field, coupled with minimal editing, compared to a typical Hollywood CGI film – how did that influence your process?
Camille Celluci: Usually, when budgeting a visual effects film, you average anywhere from 3 to 5 seconds per visual effects shot – about as long as a shot can hold before audience disbelief creeps back in – but in a Bob Zemeckis film, we had shots that were two minutes long.
Kevin Baillie: The reason they were that way was that, at the very beginning of production, Bob knew this was going to be a 3-D movie. He really treated 3-D with respect and saw it as a tool to be leveraged to heighten the emotion of the film. We had meetings at the start of production, before a single frame was shot, exploring the rules of 3-D to get the entire production team on the same page, so everyone knew what we had to do to make the best 3-D film. One of the key characteristics of good 3-D is long shots: even though that gives the audience time to inspect our work in painfully long moments [for us], more importantly it allows the audience to really explore that visual world, lets it sink into the mind, and gives them a grasp of the three-dimensional world being depicted.
An average action movie these days has 2,400-2,500 shots. This movie had 826 shots in total – each roughly three times as long as a shot in an average movie – and that’s for the 3-D effect. That’s why I feel audiences are coming away thinking The Walk is one of the most spectacular 3-D movies out there.