Game Engines and MoCap Suits – The Rise of Instant Digital Visual Effects

Animation director, VFX supervisor and director/producer Mondo Ghulam talks about creating digital visual effects using game engines such as Unreal and motion capture suits. Ghulam has worked on several high-end games as well as film and television productions. With over 20 years of experience, he is currently focusing on his first passion: filmmaking.

The Use of Game Engines and Motion Capture Suits – A Common Practice?

In this day and age, on-screen storytelling can take advantage of a wide range of technologies used to create visual effects in film and video games. Two of the most striking methods adopted by filmmakers are the use of game engines and motion capture. Ghulam highlights the economic and creative progress these technologies have brought to VFX in films at nearly every budget level. As early as A.I. Artificial Intelligence (Steven Spielberg, 2001), a game engine was used to combine live-action elements with a virtual set.

MG: ‘In order to give everyone an idea of what they were looking at [on the virtual set], they were tracking the camera in real time and then displayed a low detail version of the background from a game engine that was synched to the camera. It meant that they could look through a monitor and see a composite of the live action against the fully 3D CG background in its early state. So they can set composition, they can work out exactly where to place the camera and how to move it.’
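The loop Ghulam describes – track the physical camera, mirror its pose onto a virtual camera, then composite the live action over the low-detail CG background – can be sketched in a few lines. This is an illustrative sketch only; the function names and pose format are invented here, not a real engine API.

```python
# Minimal sketch of the real-time previs loop: a tracked physical camera
# drives the virtual camera, and the live frame is composited over a
# low-detail CG background rendered by the game engine.

def sync_virtual_camera(tracked_pose):
    """Mirror the tracked physical camera pose onto the virtual camera."""
    return {
        "position": tracked_pose["position"],
        "rotation": tracked_pose["rotation"],
        "fov": tracked_pose["fov"],
    }

def composite(live_rgba, cg_rgb):
    """Alpha-over: keep live-action pixels where the matte is opaque,
    show the CG background where it is transparent (per-pixel)."""
    out = []
    for (r, g, b, a), (br, bg, bb) in zip(live_rgba, cg_rgb):
        out.append((
            r * a + br * (1 - a),
            g * a + bg * (1 - a),
            b * a + bb * (1 - a),
        ))
    return out

# One iteration of the loop: track -> sync -> render stand-in -> composite.
pose = {"position": (0.0, 1.6, -3.0), "rotation": (0.0, 90.0, 0.0), "fov": 35.0}
vcam = sync_virtual_camera(pose)
frame = composite(
    live_rgba=[(1.0, 0.5, 0.2, 1.0), (0.0, 0.0, 0.0, 0.0)],  # actor, keyed-out hole
    cg_rgb=[(0.1, 0.1, 0.1), (0.2, 0.4, 0.8)],                # low-detail background
)
```

Running this once per frame, in sync with the tracked camera, is what lets the crew "look through a monitor and see a composite" while setting composition on set.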

Ghulam explains that game engines have become so advanced that the rendering quality is undeniably high while production costs are comparatively low.

MG: ‘A project that I’m interested in doing will use a game engine because it will be long format and animated, and I don’t want to have to look for the money to render everything out. So with a game engine, you don’t need to do that, it will just play in real time. You just record it straight out of the computer, and all you might need to do is to buy some graphic cards that will power that to a high degree.’

Another key technological element in visual effects is motion capture. As one of the Kickstarter backers of a new inertial mocap suit, Ghulam talks about how quickly body motion can be recorded with this compact piece of technology.

MG: ‘Ultimately if you can take the heat, you can wear this under a set of clothes, and it works wirelessly. So I can be sitting in this room, doing this interview whilst being recorded for my body motion in regular clothing.’
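Conceptually, an inertial suit works without any cameras: each sensor reports the orientation of a body segment, and chaining those orientations down the bone hierarchy (forward kinematics) recovers the pose. The sketch below is a deliberately simplified, planar two-bone version of that idea; real suits use full 3D quaternions plus drift correction, and none of these names come from any particular product.

```python
import math

# Forward kinematics for a planar bone chain: each sensor contributes a
# joint angle, orientations accumulate down the chain, and bone lengths
# turn them into world positions -- no optical tracking required.

def forward_kinematics(bone_lengths, joint_angles_deg):
    """Return the world-space end position of each bone in the chain."""
    x, y, heading = 0.0, 0.0, 0.0
    positions = []
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)   # child orientation is relative to parent
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Upper arm (0.30 m) and forearm (0.25 m), elbow bent 90 degrees:
# the wrist ends up directly above the elbow.
pose = forward_kinematics([0.30, 0.25], [0.0, 90.0])
```

This is why the suit works "wirelessly" in any room: the pose is reconstructed entirely from the sensors' own readings rather than from an external camera volume.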

The animator points out that while a motion capture studio, with its dedicated rooms, expensive equipment and a large team of VFX artists and technicians calibrating the environment before shooting, will produce a higher-quality end product, a single person in a suit demonstrates the level of compactness and mobility this technology has already achieved.

MG: ‘Eventually there will be no suits, it will be any room that you like, and things like Xbox Kinect are kinda already doing that. You can sit on your sofa and you can track your movement, it can tell where your hands are, it can see your face, it can even recognize your voice and distinguish you from someone else sitting there with you.’

Accessibility and Limitations – Possibilities vs. Perception

Talking about the shift from large, powerful computers running expensive software to far more accessible game engines, and about the internet as a key element for sharing and accessing information, Ghulam addresses the technological curve of the past three decades. Game engines have been around for quite a while, yet the ability for individuals to use them easily and economically has not.

MG: ‘In 1994 there was one magazine, which we could get hold of in Glasgow once every quarter, that had articles dedicated to little 3D projects, but ultimately that was it. Now any laptop will run, for example, Unreal, which has thousands of excellent videos all very well aggregated within sections. So if you’ve never even touched a game engine before, there is a whole series of very easy-to-watch videos that will walk you through it and get you doing things that, two hours before, you had no concept of, or any idea that you might even be able to do.’

Despite the progress, there are limitations. One barrier, the animator believes, is perceptual. He argues that although high-impact visuals can still take a huge bite out of a filming budget due to their cost or complexity, making minor corrections and replacements, for example, is possible for anyone at a much more economical level.

MG: ‘I’m gonna make a bold claim here, but you could subscribe to Adobe Creative Cloud, download Adobe After Effects, spend a couple of days looking at tutorials, learn how to track, put an element in and get your shot. These are pretty common tools now that were once the preserve of a very few people.’
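The workflow in that quote – track a feature in the plate, then pin an inserted element to it – reduces to locating a small patch in each frame and offsetting the element by the patch's motion. Below is a brute-force sum-of-squared-differences tracker on tiny synthetic grayscale frames; After Effects' actual tracker is far more sophisticated, so treat this purely as a sketch of the principle.

```python
# Brute-force patch tracker: slide the patch over the frame and keep the
# position with the lowest sum of squared differences (SSD).

def track_patch(frame, patch):
    """Return the (row, col) where patch best matches frame."""
    fh, fw = len(frame), len(frame[0])
    ph, pw = len(patch), len(patch[0])
    best, best_pos = None, (0, 0)
    for r in range(fh - ph + 1):
        for c in range(fw - pw + 1):
            ssd = sum(
                (frame[r + i][c + j] - patch[i][j]) ** 2
                for i in range(ph) for j in range(pw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Frame 2 is frame 1 shifted right by one pixel; the tracked offset is
# exactly how far the inserted element must move to stay pinned.
patch = [[9, 9], [9, 9]]
frame1 = [[0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
frame2 = [[0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0]]
p1, p2 = track_patch(frame1, patch), track_patch(frame2, patch)
motion = (p2[0] - p1[0], p2[1] - p1[1])
```

Applying `motion` to the inserted element each frame is, in miniature, the "learn how to track, put an element in and get your shot" workflow.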

Another set of limitations can be found in facial capture. Whereas motion capture suits are far advanced in their ability to instantly transfer an actor’s movements to a highly accurate degree, capturing facial movements and expressions is a different story. Ghulam addresses the fact that although your face can be recorded, there are still overarching problems with deadness and an inability to capture not only the physicality but also the psychology of human facial expressions. At the moment, he adds, animating emotions to a believable extent is still the prerogative of high-end productions.

MG: ‘When we are talking about Dawn of the Planet of the Apes (Matt Reeves, 2014) and Avatar (James Cameron, 2009), there is a percentile at the end of the quality scale that just pushed them beyond anything that anyone else was doing at that particular time. And that takes lots of talented people; there is no automated way to get to where any of those films ended up. It takes a lot of really good artistry […] You look into the eyes of those characters and they are real.’

Creating a Virtual Skin – Futurism or Future?

Looking at the processes that have shaped virtual realities and storytelling over the last decades, many of the things we associate with futurism are already possible in the here and now. Ghulam uses the Xbox 360 and Xbox Kinect as prime examples of the level of accessibility motion capture has reached, given that those devices can capture 3D performances and data in the comfort of a living room at very low cost. The animator believes that although, at that low-budget level, the quality of the characters is somewhat inaccurate and the resolution is low, in the near future actors will be able to scan their bodies and save their digital selves. Twenty years later, when a role requires a younger version of them, they will rehearse the part, put on their motion capture suit and virtually perform in their younger skin.

MG: ‘There is no reason why this couldn’t be done now, in fact. What we lack is the library of all these great actors who are alive today. We don’t have their 360-degree images from 30 or 40 years ago.’

Ghulam explains that technology which can record the facial performance of an actor and translate it into a virtual character without prior calibration already exists, citing a program developed by Industrial Light & Magic (ILM) that studies the face of the actor who presents himself to the camera.

This form of capture has also manifested itself in the idea of face-exchange.

MG: ‘There was a research paper earlier this year, it’s a different kind of technology, but they can record your face, then they can record my face, and then I can puppeteer your face with my face.’
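One common way to frame the puppeteering Ghulam describes is through blendshapes: if both faces are modeled as a neutral mesh plus weighted expression deltas, the expression weights estimated from the driving actor can simply be re-applied to a different identity. The sketch below assumes the weights are already known; the research systems he alludes to solve for them from video, and all names and numbers here are illustrative.

```python
# Linear blendshape model: vertex = neutral + sum(weight_k * delta_k).
# Transferring the *weights* (not the vertices) is what lets one face
# drive another.

def apply_blendshapes(neutral, deltas, weights):
    """Deform a neutral mesh by a weighted sum of expression deltas."""
    out = list(neutral)
    for w, delta in zip(weights, deltas):
        out = [v + w * d for v, d in zip(out, delta)]
    return out

# Two identities share the same expression basis ("smile", "jaw open")
# but have different neutral shapes (toy 3-coordinate "meshes").
target_neutral = [1.0, 2.0, 3.0]
deltas = [[0.5, 0.0, 0.0],    # smile delta
          [0.0, 0.0, -0.4]]   # jaw-open delta

# Weights captured from the driving actor's face, re-applied to the target.
captured_weights = [1.0, 0.5]
puppeteered = apply_blendshapes(target_neutral, deltas, captured_weights)
```

Because only the weights cross over, the target keeps its own identity (its neutral shape) while performing the source actor's expression – which is precisely what makes "I can puppeteer your face with my face" possible.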

The animator highlights the many facets of such highly developed technology and states that futuristic elements conveyed in early sci-fi films, such as identity theft and stealing a person’s face, have potentially become very real. Whilst recognizing rapidly improving technology as vitally important to filmmakers, Ghulam is very clear about the fact that without dedicated people there would be no progress. He stresses the importance of having skilled and motivated artists, whether on the creative or the technical side, on a VFX or VR project.

MG: ‘A very common question I would get asked came from actors: “This stuff is gonna replace all of us, isn’t it?” and I would say: “You’re in the middle of the floor, there are a hundred cameras around you and 40-odd people behind the scenes. All that technology and all that effort is about capturing something that only you can do. Which part of this tells you that you’re being replaced?”’

Ghulam strongly emphasizes the convergence of storytellers, technicians and actors on visual effects productions and the importance of individual input. He stresses that with the popularity of video tutorials, digital artists can gain and improve their skills at a rapid speed; valuable knowledge is shared instantly, and evolving communities of artists are trying new technologies and pushing their development to new levels. He uses the real-time cinematography demo for Hellblade (Ninja Theory, 2016), in which a team of digital artists captured an actress’s performance live into the Unreal Engine by leveraging her unique qualities.

MG: ‘I’m not trying to paint a utopia here. Obviously there are competitive aspects to this, but there have always been a lot of people (in the CG community) willing to share, and it’s these people who are making it possible. It’s also someone who will step on the set and say: “I can do that”, although their mind might be screaming that it is impossible […] but there is nothing quite like seeing it all work in the end. You’re creating things, or helping to create things, that could not be created anywhere else in that way.’

Making the Magic Happen – What’s Next?

Mondo Ghulam is currently working on several short film projects as writer, director and VFX artist, and is planning on moving into feature film production very soon.