
Interview: Neil Huxley (Art Director, Avatar)

Neil Huxley

After posting about the UI in Avatar, I was so happy that Neil Huxley agreed to an interview about his work on that film. Neil was art director and motion graphics supervisor for Avatar’s UI at Prime Focus VFX LA.

Q: Can you talk a little about your background and how you got started?

I was born in London and lived there for 26 years, then moved to Australia, and now I’m based in LA. My design background is pretty varied, but mostly in vfx design really. I started as a Flame op at Digital Pictures Iloura in Melbourne, and then moved more into vfx design after art directing and designing the Salem’s Lot title sequence for TNT.

Q: Who are your design inspirations?

Design inspirations for me are so many and range from the work that artists, graphic designers, photographers, sculptors, filmmakers, musicians etc. are doing. I’m inspired by artists like Peter Saville, Bill Henson, Auguste Rodin, Ashley Wood, Jan Tschichold, and Saul Bass.

Q: Have you worked a lot on interactive projects? How did you get involved in interface design for film?

Well, my first job upon leaving University was designing interactives back in ’98. These were mainly educational projects for museums and galleries. The Mark Neveldine and Brian Taylor-directed movie Gamer in 2008/09 was the first project where I really tackled interface design in a film context. That project then led me to Avatar.

 

Gamer

 

Q: So how did you get involved with Avatar?

Prime Focus President & Senior VFX Supervisor Chris Bond liked my work and asked me if I would like to get involved. Frantic Films (before they became Prime Focus) was primarily a VFX house; they had no art department really. Once the project was secured they hired me.

Q: What was your role? Were there a lot of others involved in the design and production?

I was art director and motion graphics supervisor for Prime Focus VFX LA. I supervised a team of about 6 After Effects animators (2 in LA and 4 in Winnipeg) and a few cg artists who created the Immersive and Holotable environments. The entire production crew for us was about 90, led by Chris Bond, who was vfx sup. The design and execution of the graphics was a fairly manageable part of the project; it was the rendering and compositing of the 200 or so shots we had that I knew was going to be the harder part. I think the compositing team had the most artists.

Q: What was the project’s brief? And how long did you have to get it done?

We were tasked with graphic insertion into clear practical plexes. The creative brief was to ground the images in believable reality, and have them display story and context-sensitive material while giving them an industrial/military feel. We inserted graphic animations into four types of screens: curve-plex, flat-plex, tri-plex and immersives. Plus the Holotable, which needed to display a 3D virtual lidar-type environment plus icons and data. I think we had about 7 months.

Avatar, Holotable

Q: How did you approach the solution? What were your inspirations?

We looked at a lot of modern military interfaces. Jim Cameron liked rooting this design in some sort of reality. This wasn’t alien technology; this was human and it should look like it. I started mocking designs up in Photoshop and Illustrator and painting on frames from our shots. These ended up becoming our styleframes that we would work towards. We presented those on a weekly basis to the client and Jim (depending on who was available), got notes, and revised the design until we had a look Jim was happy with. We looked at lidar scans for the Holotable and air traffic control screens for the immersives.

Everything we saw was very stripped back, minimal. Operators are not concerned with design ‘fluff’; they want hard data. Usability, navigability and consistency were very important. For Jim, it had to make sense. So it needed to be functional and concise while maintaining a coherent, cool visual language of a real and possible future.

Q: What software did you use?

A combination of Adobe After Effects, Photoshop and Illustrator; Autodesk Max and Maya; eyeon Fusion; and a few great tools written in-house.

Q: Was 3D part of the interfaces from the beginning? Were there any unique challenges because of the 3D?

3D was definitely conceived from the outset, before we even came onto the project. Screens needed stereo depth from screen space back into z. This consisted of four layers of graphics for each screen: a, b, c and d. Some screens had multiple ‘c’ layers, which were the context-sensitive layers that the user would typically interact with via touch screen. The screen interface architecture for all screens was: layer ‘a’ = dash element, layer ‘b’ = navigation element, layer ‘c’ = context-sensitive pop-up windows and layer ‘d’ = main background data. Problems we had with layer separation spring to mind: 2 inches between layers was sometimes too big, and things would be pushed too far back and become distracting.
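
As an aside for readers who want to picture the layer stack Neil describes, here is a minimal sketch of that ‘a’ through ‘d’ screen architecture as a data structure. The class and field names are my own illustration, not anything from the Prime Focus pipeline; the only details carried over from Neil’s answer are the four layer roles, the stereo depth offsets, and the fact that the context-sensitive ‘c’ layer can appear more than once per screen.

```python
from dataclasses import dataclass, field
from enum import Enum


class LayerRole(Enum):
    """The four layer roles Neil describes, ordered from screen space back into z."""
    A_DASH = "dash element"
    B_NAVIGATION = "navigation element"
    C_CONTEXT = "context-sensitive pop-up window"
    D_BACKGROUND = "main background data"


@dataclass
class ScreenLayer:
    role: LayerRole
    depth_inches: float  # stereo offset behind screen space; too big a gap read as distracting


@dataclass
class Screen:
    name: str  # e.g. "curve-plex", "flat-plex", "tri-plex", "immersive"
    layers: list[ScreenLayer] = field(default_factory=list)

    def add_layer(self, role: LayerRole, depth_inches: float) -> None:
        # Only the context-sensitive 'c' role may repeat on a single screen.
        if role is not LayerRole.C_CONTEXT and any(l.role is role for l in self.layers):
            raise ValueError(f"screen {self.name!r} already has a {role.name} layer")
        self.layers.append(ScreenLayer(role, depth_inches))


# Example: a flat-plex with two context-sensitive pop-up layers.
if __name__ == "__main__":
    plex = Screen("flat-plex")
    plex.add_layer(LayerRole.A_DASH, 0.0)
    plex.add_layer(LayerRole.B_NAVIGATION, 0.5)
    plex.add_layer(LayerRole.C_CONTEXT, 1.0)
    plex.add_layer(LayerRole.C_CONTEXT, 1.25)
    plex.add_layer(LayerRole.D_BACKGROUND, 1.5)
    for layer in plex.layers:
        print(f"{layer.role.name}: {layer.depth_inches} in. behind screen space")
```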

Also at the end of the day, we couldn’t steal the scene. The animation had to tread a fine line between being dynamic and interesting but subtle at the same time. Our job was to provide graphic animations to help the scene work, support the story points and add to the visual texture of the movie.

The Ops Center and Bio Lab scenes in Avatar included interactive holographic displays for dozens of screens and a ‘holotable,’ each comprising up to eight layers, rendered in different passes and composited. To enable easy replacement of revised graphics across the massive screen replacement task, we developed a custom screen art graphic script, SAGI. This enabled us to limit the need for additional personnel to manage data, deliver the most current edit consistently, reduce error by limiting manual data entry and minimize the need for artists to assemble shots.

Our pipeline department built a back-end database to associate screen art layers with shot, screen and edit information, and a front-end interface to enable users to interact with it. The UI artists could update textures in layers, adjust the timing of a layer, select shots that required rendering, manage depth layers by adding and deleting as necessary and view shot continuity — while checking the timing of screen art animation across multiple shots.
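
To make that pipeline description a bit more concrete, here is a rough sketch of the kind of shot-to-screen-to-layer association a back-end like the one Neil describes might maintain. This is purely illustrative on my part: the table layout, the function names, the example shot names and the SQLite choice are assumptions, not details of Prime Focus’s SAGI tool; the operations simply mirror the capabilities he lists (swapping a layer’s texture, retiming a layer, and flagging shots that need rendering).

```python
import sqlite3

# Illustrative schema only: one row per screen-art layer instance in a shot.
SCHEMA = """
CREATE TABLE IF NOT EXISTS screen_art_layers (
    shot         TEXT NOT NULL,   -- hypothetical shot code, e.g. 'OPS_0470'
    screen       TEXT NOT NULL,   -- e.g. 'holotable', 'flat-plex-03'
    layer        TEXT NOT NULL,   -- 'a', 'b', 'c', 'd' (extra 'c' layers allowed)
    texture      TEXT NOT NULL,   -- path to the current rendered graphic
    start_frame  INTEGER NOT NULL,
    end_frame    INTEGER NOT NULL,
    needs_render INTEGER NOT NULL DEFAULT 0
);
"""


def update_texture(db: sqlite3.Connection, shot: str, screen: str,
                   layer: str, texture: str) -> None:
    """Swap in a revised graphic and flag the shot for re-rendering."""
    db.execute(
        "UPDATE screen_art_layers SET texture = ?, needs_render = 1 "
        "WHERE shot = ? AND screen = ? AND layer = ?",
        (texture, shot, screen, layer),
    )


def retime_layer(db: sqlite3.Connection, shot: str, screen: str, layer: str,
                 start_frame: int, end_frame: int) -> None:
    """Adjust the timing of one layer within a shot."""
    db.execute(
        "UPDATE screen_art_layers SET start_frame = ?, end_frame = ? "
        "WHERE shot = ? AND screen = ? AND layer = ?",
        (start_frame, end_frame, shot, screen, layer),
    )


def shots_needing_render(db: sqlite3.Connection) -> list[str]:
    """List the shots that should pick up the most current edit on the next render pass."""
    rows = db.execute(
        "SELECT DISTINCT shot FROM screen_art_layers WHERE needs_render = 1 ORDER BY shot"
    )
    return [shot for (shot,) in rows]


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.execute(
        "INSERT INTO screen_art_layers "
        "VALUES ('OPS_0470', 'holotable', 'd', 'v001/bg.exr', 1001, 1120, 0)"
    )
    update_texture(db, "OPS_0470", "holotable", "d", "v002/bg.exr")
    retime_layer(db, "OPS_0470", "holotable", "d", 1001, 1150)
    print(shots_needing_render(db))  # ['OPS_0470']
```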

Avatar

Q: Looking at the film now, would you have done anything differently?

I don’t know if I would change anything because I think it works. I spoke with Jim at the cast and crew screening after party and he was very happy with the work we did. To have a visionary and perfectionist like James Cameron say he’s very happy and that the screens/holotable and immersive looked “f#$king cool” is good enough for me.

Q: What would your ideal next project be?

Avatar 2! I would work with James Cameron again in a heartbeat.

Thank you, Neil, for your time answering these questions.