Towards the end of E3 this year, I enthusiastically tweeted that I “just saw the most amazing thing I think I've ever seen at an E3, and it wasn't a game. It's a tech that will be in a game.” Lots of you were interested to hear about it, so why has it taken me eight days to get around to actually telling you? Well, it’s because I wanted to be able to show you at least a small glimpse of what it does. I can breathlessly tell you how impressed I was until I’m blue in the face, but I wanted some kind of illustration to go along with it.

The technology is called MotionScan, and it’s from an Australian company called Depth Analysis. Their meeting room at E3 was tucked away in a darkened corner of the West Hall of the LA Convention Center, with just a little sign tacked to the door. When I was beckoned in, I ran into a gobsmacked-looking Jade Raymond on her way out, effusing about what she’d just seen as she said goodbye to the folks inside. Earlier that day I’d heard that representatives from studios across the games industry had seen the demo and left looking similarly flabbergasted.

So what is it? Well, the bottom line is that it’s a groundbreaking 3D motion-capture system. No...wait...stay awake, come back. It’s not as dull as it sounds. Seriously. Unlike every other motion-capture thing you’ve ever seen, this is a full performance-capture system. It doesn’t just track movement or grab animation data from actors’ faces as they speak their lines; it captures everything about an actor’s performance and generates a fully textured 3D model based on what it sees and hears.

Depth Analysis' Oliver Bao (left) and Team Bondi's Brendan McNamara (right).

Unlike the systems we’ve all seen in countless boring magazine stories over the past 10 years, with little white balls glued to spandex body suits, MotionScan is far more sophisticated. It uses 32 high-definition cameras (divided into 16 stereoscopic pairs) to capture every angle of an actor’s performance at 30 frames per second. From this data it generates a fully textured 3D model (at the moment just the head; full-body capture will come later) that incorporates every nuance, mannerism, and emotional detail of the performance.
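If you’re wondering why the cameras come in pairs, the underlying idea is old-fashioned stereo vision, and it’s simple enough to sketch. To be clear, this is not Depth Analysis’s actual pipeline (that’s their secret sauce); it’s just the textbook triangulation principle that lets a calibrated pair of cameras recover depth, and the function name and numbers below are illustrative assumptions:

```python
# Illustrative only: textbook stereo triangulation, NOT Depth Analysis's
# proprietary MotionScan pipeline. A point seen by both cameras of a
# calibrated pair shifts horizontally between the two images ("disparity"),
# and that shift encodes its distance.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Estimate the depth of a point seen by both cameras of a stereo pair.

    focal_length_px -- camera focal length, in pixels
    baseline_m      -- distance between the two camera centers, in meters
    disparity_px    -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must be visible in both cameras")
    # Closer points shift more between the two views, so depth is
    # inversely proportional to disparity.
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: a point that shifts 40 px between two cameras
# mounted 20 cm apart, with a 1400 px focal length, sits about 7 m away.
print(depth_from_disparity(1400, 0.20, 40))  # 7.0
```

Do that for every visible point, from 16 pairs surrounding the actor, 30 times a second, and you can see how a full textured head model falls out the other end.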

To demonstrate the system, Depth Analysis head of research Oliver Bao was joined by Team Bondi founder and director Brendan McNamara, who is overseeing the first game to use the technology, Rockstar's L.A. Noire. They showed a series of performances from actors in the game, with video of each actor’s actual performance alongside the data MotionScan captured from it.

The first demo was simple: an actor spoke some lines and smiled, and it was eerily realistic. As Bao and McNamara advanced through subsequent demos, the performances became more and more emotionally engaging, until they eventually showed me a scene in which a character, distraught over the murder of his wife, sobbed through his lines. Every line in his face broadcast his character’s anguish; his eyes welled up, and tears streamed down his face. As the scene played out, Bao demonstrated that it was a real-time 3D model by moving the actor’s disembodied head around the screen and applying different lighting effects.