Posts

Introduction to Motion Control: An IMIS event

Most of us remember the scene of Harry’s arrival at Hogwarts across the Great Lake with Hagrid, the castle looming out of the darkness. Or we have that legendary image in our minds of the Imperial Star Destroyer gliding ominously through deep space in Star Wars: Episode IV – A New Hope. But most of us have also likely seen one of those commercials in which leaves of lettuce, slices of cheese and tomato and pieces of chicken fall exactly into place on top of a loaf of bread in slow motion. Funnily enough, all of these have something in common: they were filmed using motion control.

What Is Motion Control?

The guys from Mark Roberts Motion Control, Peter Rush and Dorian Culmer, were there to tell us all about it. Motion control is a means of creating difficult or “impossible” camera movements and special effects by accurately controlling the trajectory of the camera. Cameras are mounted onto robotic rigs controlled by software, and they are able to move at very high speed with incredible precision. As a result, the same movement can be repeated again and again, for example to generate special and visual effects.
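
To make the concept concrete, here is a minimal sketch of a move as data: a list of timed axis targets that the rig replays identically on every take. This is purely illustrative – the rig object, its goto call and the axis names are hypothetical stand-ins, not MRMC’s actual control software.

    import time

    # A "move" is just data: timed targets for each motorised axis.
    # Format: (seconds, track position in metres, pan in degrees, tilt in degrees)
    move = [
        (0.0, 0.0,  0.0,  0.0),
        (1.0, 1.5, 10.0, -2.0),
        (2.0, 3.0, 25.0, -5.0),
    ]

    def play_move(rig, move):
        """Drive the rig through the same trajectory on every take."""
        start = time.monotonic()
        for t, track, pan, tilt in move:
            # Wait until this keyframe's timestamp, then command the axes.
            while time.monotonic() - start < t:
                time.sleep(0.001)
            rig.goto(track_m=track, pan_deg=pan, tilt_deg=tilt)  # hypothetical rig API

Because the move is stored rather than performed by hand, every pass of the camera is identical, which is what makes the layering techniques described below possible.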

Although it seems like a fairly modern development, motion control actually predates the digital era. Around the 80s there was a very busy scene, in London in particular, creating everything that wasn’t digital. As machines and skills improved, crews started filming models – which is how the previously mentioned Harry Potter and Star Wars scenes were made. Models were the main driver behind motion control: first the model would be filmed, and then it would be integrated with a background and other elements to create a scene.

London became the centre for commercials in the 80s and the 90s, with many of today’s big-time directors eventually moving on from commercials to film. Demand surged for fast machines – machines that could shoot a commercial in 1 or 2 days, or film 3 to 4 movements per day. This requirement was different from that in Hollywood, and it was Mark Roberts who started meeting it by building these machines. The first of the notably mobile machines was called “Cyclops”, which is still a company staple today, capable of moving at 3 metres per second with great accuracy while carrying high-end cameras such as the RED Dragon, flawlessly shooting in 6K.

Uses Of Motion Control

Motion control has countless uses, the main ones focusing on VFX creation and live action. Since the camera can follow exactly the same, very precise path repeatedly, it is possible to get different layers (actors, background, foreground) that can be overlaid and matched together at compositing time. This can also be used to “clone” people, change foreground and background objects, for morphing – when one person transforms into another person or thing, a very popular use – or to put together things that could not possibly have been filmed together.
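
To see why identical passes matter, here is a minimal sketch of the classic “over” operation used at compositing time, assuming each pass has already been rendered to aligned image arrays. Since both passes used the same motion control path, no re-alignment is needed before combining them.

    import numpy as np

    def over(fg_rgb, fg_alpha, bg_rgb):
        """Porter-Duff 'over': lay a foreground on a background using its matte.

        fg_rgb, bg_rgb: float arrays of shape (H, W, 3), values in [0, 1]
        fg_alpha:       float array of shape (H, W, 1), values in [0, 1]
        """
        return fg_rgb * fg_alpha + bg_rgb * (1.0 - fg_alpha)

    # Frame N of the actor pass goes over frame N of the background pass:
    # matching camera motion means the layers already line up pixel for pixel.
    h, w = 1080, 1920
    fg, alpha, bg = np.zeros((h, w, 3)), np.zeros((h, w, 1)), np.ones((h, w, 3))
    comp = over(fg, alpha, bg)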

Other uses within VFX include shooting a scene so accurately that only one pass might be necessary in post – for example when the camera passes through a glass or an eyeball. A rig can also shoot forwards or backwards, and change the scale (the size of the movement) and the timing of the movement. The latter is another very popular use, combined with compositing to create scaling shots – the most recent example being 2015’s Ant-Man. To create the film’s main effect, the camera movement for the man and for the background must be exactly the same so that they can later be put together; otherwise they wouldn’t match. Along the same lines, it is also possible to do scaling with footage that was filmed without motion control: first the movement is tracked to recover the initial camera path, and then the foreground or background is filmed with the same path so the scene can be assembled afterwards. Alongside these, motion control is also widely used for VFX previsualisations.
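
The “change the scale and the timing of the movement” idea is simple enough to sketch as well. Assuming a camera path stored as timed position samples (the format here is illustrative, not any vendor’s actual move file), rescaling it for a miniature or a scaling shot might look like this:

    # A camera path as timed position samples: (seconds, x, y, z) in metres.
    path = [
        (0.0, 0.0, 1.5, 0.0),
        (2.0, 3.0, 1.5, 0.5),
    ]

    def scale_move(path, space_scale, time_scale):
        """Return a copy of a camera path with its size and timing rescaled."""
        return [(t * time_scale,
                 x * space_scale, y * space_scale, z * space_scale)
                for (t, x, y, z) in path]

    # e.g. shoot the full-size plate at 1:1, then film the miniature pass with
    # the same path shrunk to 1/10th scale, so both passes line up in frame.
    miniature_pass = scale_move(path, space_scale=0.1, time_scale=1.0)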

Additional uses of motion control include high-speed shots, with rigs that can film at 4 metres per second (3 metres per second on tracks). These are popular in food commercials, since the rig can trigger other movements in sync – this is how the ingredients fall on top of the bread. Motion control is also utilised in animation – making it possible to create stop motion or go motion with complex camera movements – in sports, such as the Olympic Games or Formula 1, and in space research.

Is Motion Control Necessary?

Sometimes it may seem that motion control is unnecessary. Why not fix it in post? Because the quality required for cinema features demands an expensive and slow work path, post-production on high-resolution sequences is also very expensive. Fixing incorrectly filmed VFX shots can be very difficult, too. It is therefore normally more efficient to shoot correctly the first time using motion control than to fix it in post.

The one thing motion control does require, however, is plenty of planning to be done properly. The director therefore usually gets together with the VFX Director and the DOP or Operator, decides whether it is necessary and, if so, works out how best to achieve the shots they need. The disadvantage is that most people aren’t aware of how long motion control takes to use, or how much money is needed to get it right. For this reason, if you decide to use motion control, it is best to get someone on board who is properly trained and knows the equipment required, how to use it and how long it will take. This way the shoot will be properly planned, and the production will end up saving money by getting it done properly the first time instead of wasting valuable time and budget on a wrong kit decision or last-minute changes.

The Basics of VFX

The other day, while waiting for a friend at the cinema, I stood in front of a screen on which a bunch of trailers were playing. New York is getting blown up again, some ghost sailors hope to catch a quirky pirate dead or alive, and humans are trying to save the world from robot juggernauts.

As people were coming out of different screenings, I couldn’t help overhearing some typical comments: “Man, those special effects were so sick!” It seems like films are over-saturated with visual effects nowadays (there’s actually a difference between the terms “visual effects” and “special effects” – more on that later).

But hang on a second. There are old films with visual effects, so these aren’t anything new. Surely the technology back in the day wasn’t as advanced as it is today, so how did they do it? And how do they do it nowadays anyway? And where? By whom? Could I maybe do it too? What, for a living? For real?

Let’s start by explaining the difference between “visual effects” (VFX) and “special effects”, commonly referred to as “practical effects”. Generally, practical effects are those which can be achieved while the scene is being captured. VFX are normally done in post-production and are the effects that would be impossible to achieve in the real world – which, beyond creating astonishing viewing experiences, is the first of the three main reasons VFX are needed. The second arises when there is a practical way of filming the required scene, but doing so might put someone at risk. The third has to do with cost-effectiveness: sometimes it is more practical to use VFX than to film a scene, due to issues of scale, location or both – for example, when recreating period settings such as World War II or Victorian London.

While we are used to seeing flashy, fancy VFX in action and fiction films, they are actually utilised in almost every film, precisely for the reasons mentioned above. Some argue that these are the most impressive kind of VFX: integrated so seamlessly that the audience never notices them. A breaking glass, a gunshot, or even a car or a crowd are examples of this.

So, how are they done?

The software primarily used for VFX is called Nuke. As an outsider who knew nothing about VFX, I was one of those who believed it was all done in Adobe After Effects; however, Nuke is the industry-standard software for film. While After Effects is also an industry standard, it is predominantly used to create motion graphics, another term not to be confused with VFX. The main difference is that After Effects is layer-based, while Nuke is node-based, which makes it quicker to work with. Nuke is also a bit more complicated to use and learn, but it offers more functionality than AE.
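
To give a flavour of node-based working, here is a minimal sketch using Nuke’s bundled Python API. It only runs inside Nuke itself, and the file paths are placeholders, though Read, Merge2 and Write are Nuke’s stock nodes.

    import nuke  # only available inside Nuke's own Python interpreter

    # Each processing step is a node; nodes are wired together into a graph.
    plate = nuke.nodes.Read(file="plate.####.exr")       # background plate
    cg    = nuke.nodes.Read(file="cg_element.####.exr")  # CG element

    merge = nuke.nodes.Merge2(operation="over")
    merge.setInput(0, plate)  # B input: the background
    merge.setInput(1, cg)     # A input: layered over the top

    out = nuke.nodes.Write(file="comp.####.exr")
    out.setInput(0, merge)

    # Render frames 1-100 through the node graph.
    nuke.execute(out, 1, 100)

Because every operation is an explicit node in a graph, a change to any upstream step flows automatically into the final render – the property that makes node-based compositing quick to iterate on.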

Another big part of VFX is CGI (computer-generated imagery), rendering and 3D animation, for which software such as Maya, RenderMan or Houdini is used. Fun fact: the industry uses neither Windows nor macOS – computers mostly run on Linux.

Who creates the VFX then?

On a global scale, most top VFX houses have their headquarters or branches in the US, the UK (mainly London), Canada (mainly Vancouver and Montréal) and India. In London specifically, most companies are based in Soho. There are also different branches within VFX, with some companies specialising in film and others in TV, commercials or animation. Some of the best-known companies are MPC, Double Negative, Framestore, Milk and The Mill, to name a few.

Going into further detail, a VFX workflow consists of different steps: match moving, rotoscoping, compositing, matte painting, lighting and rendering, and texturing are just a few, each normally tied to a role of the same name. There are additional roles, such as VFX Producer, VFX Supervisor or Technical Director. And of course, runner. It is worth noting that in bigger companies some of these roles specialise in a very particular task – hair and fur on beasts, animals or people, or fire and smoke, to mention a couple – whereas in smaller companies artists do more versatile work.

Now, what do I need to get my foot in the door and how?

Learning to use the software is obviously a major requirement; less obviously, the ideal core skills include a good understanding of mathematics, design, computer science and physics. As long as you have this core knowledge, it can easily be applied to any piece of software, no matter how software and technology evolve or change. For anyone passionate about VFX looking to strengthen their skills, I would recommend reading (and owning) the book “The Art and Science of Digital Compositing” by Ron Brinkmann, often referred to as “the Bible of VFX”.

Another ability as important as any technical skill is teamwork. A single VFX project can last a year, 18 months or even two years, so it is very important to be a good team player who is friendly and easy-going. Since projects can get very stressful, it is essential to be pleasant to deal with and able to produce quality work under pressure.

As for how to get your foot in the door, there are several alternatives. If you don’t have a lot of experience (you should still have a presentable showreel), you can try applying for a runner position at a company. This is one of the lowest positions in film, and you have to be prepared to make lots of tea and coffee for a long time. The bright side is that you will likely find yourself amongst talented artists and, sooner or later, if you work hard and demonstrate interest and initiative, you’ll end up being given tasks actually related to your field of work.

Another alternative to consider is the training schemes that different companies run, such as MPC Academy or Envy Academy. Double Negative has its own Graduate Trainee Scheme, and Framestore has several courses and training programmes, to name a few.

If you have a bit more experience and feel confident about your skills, you can try applying for junior positions, which open up regularly, as companies are always on the lookout for new talent.

Last but not least, freelancing is another alternative, as companies often hire people for specific tasks on a project-to-project basis.

The VFX industry is a very exciting one, with lots of new potential coming its way thanks to ever-developing advances in VR, 360 cameras, virtual cameras and tools such as Lytro. It is also diversifying into areas outside of film, adapting the previsualisations and simulations normally run in pre-production for design purposes in industries as diverse as automotive, jewellery, construction and medical.

However, the industry might not be suitable for everyone. Stressful circumstances, long hours and frequent relocation can be constants, which can translate into a poor social or family life. But if you have the drive and passion to pursue a career in this industry, finding yourself working on a top-budget film or the next Hollywood blockbuster is a very real and stimulating possibility.

Game Engines and MoCap Suits – The Rise of Instant Digital Visual Effects

Animation director, VFX supervisor and director/producer Mondo Ghulam talks about creating digital visual effects using game engines such as Unreal and motion capture suits. Ghulam has worked on several high-end games as well as film and television productions. He has gained over 20 years of experience and is currently focusing on his first passion: filmmaking.

The Use of Game Engines and Motion Capture Suits – A Common Practice?

In this day and age, on-screen storytelling can take advantage of a large range of technologies used to create visual effects in film and video games. Two of the most striking methods adopted by filmmakers are the use of game engines and motion capture. Ghulam highlights the economic and creative progress these technologies have brought to VFX in films at nearly every budget level. As early as A.I. Artificial Intelligence (Steven Spielberg, 2001), a game engine was used to combine live action elements with a virtual set.

MG: ‘In order to give everyone an idea of what they were looking at [on the virtual set], they were tracking the camera in real time and then displayed a low detail version of the background from a game engine that was synched to the camera. It meant that they could look through a monitor and see a composite of the live action against the fully 3D CG background in its early state. So they can set composition, they can work out exactly where to place the camera and how to move it.’
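
As a hedged sketch of what that real-time sync loop amounts to – the tracker and engine_camera objects below are hypothetical stand-ins, not any real tracking system’s API:

    def sync_virtual_camera(tracker, engine_camera):
        """Copy the tracked physical camera's pose onto the engine's virtual camera."""
        pose = tracker.read_pose()  # position + rotation from the tracking system
        engine_camera.set_position(pose.position)
        engine_camera.set_rotation(pose.rotation)
        engine_camera.set_focal_length(pose.focal_length)  # lens data, if tracked

    # Called once per rendered frame, this keeps the low-detail game-engine
    # background locked to the live-action camera, so an on-set monitor can
    # show a rough composite for judging composition and camera placement.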

Ghulam explains that game engines have become so advanced that the render quality is undeniably high while production costs are comparatively low.

MG: ‘A project that I’m interested in doing will use a game engine because it will be long format and animated, and I don’t want to have to look for the money to render everything out. So with a game engine, you don’t need to do that, it will just play in real time. You just record it straight out of the computer, and all you might need to do is to buy some graphic cards that will power that to a high degree.’

Another key technological element in visual effects is motion capture. As one of the Kickstarter backers of a new inertial mocap suit, Ghulam talks about how quickly body motion can be recorded with this compact piece of technology.

MG: ‘Ultimately if you can take the heat, you can wear this under a set of clothes, and it works wirelessly. So I can be sitting in this room, doing this interview whilst being recorded for my body motion in regular clothing.’

The animator points out that while a full motion capture studio – with dedicated rooms, expensive equipment and a large team of VFX artists and technicians working hard to calibrate the environment before shooting – will create a higher quality end product, a single person in a suit shows the level of compactness and mobility this technology has already achieved.

MG: ‘Eventually there will be no suits, it will be any room that you like, and things like Xbox Kinect are kinda already doing that. You can sit on your sofa and you can track your movement, it can tell where your hands are, it can see your face, it can even recognize your voice and distinguish you from someone else sitting there with you.’

Accessibility and Limitations – Possibilities vs. Perception

Talking about the rapid shift from large, powerful computers running expensive software to substantially more compact game engines, as well as the internet as a key element for sharing and accessing information, Ghulam addresses the technical curve of the past three decades. Game engines have been around for quite a while, yet the possibility for individuals to use them easily and economically has not.

MG: ‘In 1994 there was one magazine, that we could get hold of in Glasgow once every quarter, which had articles dedicated to little 3D projects but ultimately that was it. Now any laptop will run, for example, Unreal who have thousands of excellent videos all very well aggregated within sections, so if you’ve never even touched a game engine before, there is a whole series of very easy to watch videos that will walk you through it and get you doing things, that two hours before, you had no concept of, or any idea that you might even be able to do.’

Despite the progress, there are limitations. One barrier, the animator believes, is perceptual. He argues that although high-impact visuals can still take a huge bite out of a filming budget due to their cost or complexity, making minor corrections and replacements, for example, is possible for anyone at a much more economical level.

MG: ‘I’m gonna make a bold claim here, but you could subscribe to Adobe cloud, download Adobe After Effects, spend a couple of days looking at tutorials, learn how to track, put an element in and get your shot. Now these are pretty common tools that had once been the preserve of a very few people.’

Another set of limitations can be found in facial capture. Whereas motion capture suits are far advanced in their ability to instantly transfer an actor’s movements to a highly accurate level, capturing facial movements and expressions is a different story. Ghulam addresses the fact that although your face can be recorded, there are still overarching problems with deadness and an inability to capture not only the physicality but also the psychology of human facial expressions. At the moment, he adds, animating emotions to a believable extent is still the preserve of high-end productions.

MG: ‘When we are talking about Dawn of the Planet of the Apes (Matt Reeves, 2014) and Avatar (James Cameron, 2009), there is a percentile at the end of the quality scale that just pushed it beyond anything that anyone else was doing at that particular time. And that takes lots of talented people; there is no automated way to get to where any of those films ended up. It takes a lot of really good artistry […] You look into the eyes of those characters and they are real.’

Creating a Virtual Skin – Futurism or Future?

Looking at the processes that have shaped virtual realities and storytelling over the last decades, many of the things we associate with futurism are already possible in the here and now. Ghulam uses the Xbox 360 and Xbox Kinect as prime examples of the level of accessibility motion capture has reached, given that those devices can capture 3D performances and data in the comfort of a living room at very low cost. The animator believes that although, at that low-budget level, the quality of the characters is somewhat inaccurate and the resolution is low, in the near future actors will be able to scan their bodies and save their digital selves. Then, 20 years later, when a role requires a younger version of them, they can rehearse the part, put on their motion capture suit and virtually perform in their younger skin.

MG: ‘There is no reason why this couldn’t be done now, in fact. What we lack is the library of all these great actors that are alive today. We don’t have their 360 degree images from 30 or 40 years ago.’

Ghulam explains that technology which can record the facial performance of an actor and translate it into a virtual character without prior calibration already exists. Industrial Light & Magic (ILM) developed a program that studies the face of the actor who presents himself to the camera.

This form of capture has also manifested itself in the idea of face-exchange.

MG: ‘There was a research paper earlier this year, it’s a different kind of technology, but they can record your face, then they can record my face, and then I can puppeteer your face with my face.’

The animator highlights the many facets of such highly developed technology and states that futuristic elements conveyed in early sci-fi films, such as identity theft and stealing a person’s face, have potentially become very real. Whilst recognising rapidly improving technology as vitally important to filmmakers, Ghulam is very clear about the fact that without dedicated people there would be no progress. He highlights the importance of having skilled and motivated artists, whether on the creative or the technical side, on a VFX or VR project.

MG: ‘A very common question I would get asked has come from actors: “This stuff is gonna replace all of us, isn’t it?” and I would say: “You’re in the middle of the floor, there are a hundred cameras around you and 40-odd people behind the scenes. All that technology and all that effort is about capturing something that only you can do. Which part of this tells you that you’re being replaced?”’

Ghulam strongly emphasises the convergence of storytellers, technicians and actors on visual effects productions and the importance of individual input. He stresses that with the popularity of video tutorials, digital artists have the opportunity to gain and improve their skills at a rapid pace; valuable knowledge is shared instantly, and evolving communities of artists are trying new technologies and pushing their development to new levels. He uses the example of the real-time cinematography demo for Hellblade (Ninja Theory, 2016), in which a team of digital artists captured a character performance live into the Unreal Engine, leveraging the unique qualities of the actress.

MG: ‘I’m not trying to paint a utopia here. Obviously there are competitive aspects to this, but there have always been a lot of people (in the CG community) willing to share, and it’s these people who are making it possible. It’s also someone that will step on the set and say: “I can do that”, although their minds might be screaming that it is impossible […] but there is nothing quite like seeing it all work in the end. You’re creating things, or helping to create things, that could not be created anywhere else in that way.’

Making the Magic Happen – What’s Next?

Mondo Ghulam is currently working on several short film projects as writer, director and VFX artist, and is planning on moving into feature film production very soon.

www.mondoghulam.com

Events

Designing the Future: Winning A-List Work & Designing For VFX with Blockbuster Directors

EVENT DETAILS

Come join us as David Sheldon-Hicks, Founder and Executive Creative Director of Territory Studio, presents: Designing The Future – Winning A-list Work & Designing For VFX With Blockbuster Directors.

Sheldon-Hicks will demonstrate the lifecycle of a project through the lens of Blade Runner 2049 (d. Denis Villeneuve) and Ready Player One (d. Steven Spielberg), while delving into:

  • Pitching
  • Working with Art Departments
  • Production
  • Post Production
  • Working with Directors Denis Villeneuve and Steven Spielberg

Sheldon-Hicks will also discuss the impact of immersive and experiential technology on moving image exhibition.

ABOUT DAVID SHELDON-HICKS:

With a background in graphic design, David’s career began in digital media before moving into the fast-moving world of music videos, where his passion for the craft and creativity of motion graphics led him to film, games and commercial campaigns.

As founder and Executive Creative Director of Territory Studio, David’s love of storytelling and technology, and his eye for emotive detail, have established a reputation for beautifully crafted, design-led graphic narratives across genres and media.

Today, David’s multidisciplinary team thrives on future vision challenges, attracting diverse briefs across entertainment, brand, installations and emerging technology.

In recognition of the studio’s creative approach and achievements, David was named one of 2018’s Creative Leaders 50, an annual scheme from Creative Review that recognises outstanding talent across the UK and Europe.

In addition to winning Motion Awards and D&AD awards, the studio’s work for Blade Runner 2049 has been nominated for the annual Beazley Designs of the Year 2018.

Studio credits include motion graphics and visual effects for feature films, including Avengers: Infinity War, Ready Player One, Pacific Rim Uprising, Blade Runner 2049, Ghost In The Shell, The Martian, Mission: Impossible – Rogue Nation, Avengers: Age of Ultron, Ex_Machina, Guardians of the Galaxy, Jupiter Ascending, Zero Dark Thirty, Prometheus, etc.

Games work includes Sony VR Worlds, Horizon: Zero Dawn, Forza Motorsport 5 & 6, Need for Speed, Killzone: Mercenary, Killzone 3, Medal of Honor, Little Big Planet, etc.

Attracted by Territory’s multidisciplinary approach, clients including 007 Elements, Amazon UK, Avatar XPRIZE, Barbican Centre, Facebook, Faraday Future, HSBC, Formula 1, Investec, Jaguar, Land Rover, Microsoft, Santander, Sony, Spyscape Museum, Virgin Atlantic, Virgin Cruises, Volvo, and many more have commissioned technology and product visualisations, branded content and installations from the studio.

An inspiring speaker, David shares his thoughts and experiences at international creative and industry events including Motion Plus Design Paris; Dept Festival Amsterdam; D&AD Festival, London; Clerkenwell Design Week, London; FITC Amsterdam; Playground Festival, Eindhoven; OFFF London; OFFF Barcelona; Digital Shoreditch, and many others.

About Territory Studio:

Territory Studio is a creative specialist with a unique approach to motion graphics. Drawing on deep expertise in narrative design for film, we blend creativity with technology to realise compelling, future-facing designs from concept through to delivery.

Working across diverse projects, from feature films to episodic formats, the team’s passion for story and a designer’s eye for problem solving inform the studio’s approach to art department and visual effects briefs.

With growing studios in London and San Francisco, Territory’s film credits include Avengers: Endgame & Infinity War, Ready Player One, Blade Runner 2049, Ghost in the Shell, The Martian, Guardians of the Galaxy, Ex_Machina, Zero Dark Thirty, Prometheus, and more.