Interview with the Inventors of FaceRig

Community Post: This article was submitted by a member of our community. Find out how you can publish your own writing here!

When I lived in the Bay Area I became friends with a computer engineer who had a state-of-the-art computer system. He had a webcam and a program that would let you become anything you wanted. But as I look back at that program and others like it, I notice that all of them are primitive. The folks over at FaceRig have noticed this as well and have created a groundbreaking tool meant for everyone. I caught up with them and asked a few questions about this new tool:

1) Please tell us about yourselves.


We are five game developers: two programmers, one technical artist (me), one 3D artist, and one 2D artist. We are also getting a lot of help from friends.


The 3D artist (Mihai), the 2D artist (Cristian), and I have been making games for almost 12 years. We all have tons of experience, and each of us has led teams of developers on various projects. We’ve worn many hats in these 12 years.

The programmers are not as seasoned as us; they’ve only been in game development for about 3-4 years, but they are really good and awesome to work with :).


I’ll talk with the others and make sure they are also okay with making their full identities public before doing so on their behalf (some may value their privacy more than I do :D).

We do not want to drum up attention about exactly which games we’ve worked on in the past, or for which publishers, because we do not want them to claim that we have used their fame to attract attention to ourselves.


2) What is FaceRig?


FaceRig is a digital alter ego engine. It is a virtual actor framework. It is a virtual puppeteering tool. It is a playground for creative people. It is a proxy presence software. It is a project made with our hearts for like-minded people who are equally gamers, dreamers, artists, and misfits, just like us.


3) What inspired you guys to create such a groundbreaking tool?

We don’t really see it as a groundbreaking tool; we just wanted to make something that we would love using… something approachable, that would speak to people at a very direct level… you know, instill in them the idea that they could be anything… get them to raise an eyebrow… to be creative and have fun.


The first time the idea struck us was one evening at the pub, after doing expression mocap for a game that we were working on back then.


4) How does FaceRig work, and where can you use this tool?


Briefly put, it takes as input a video stream featuring a human head with a visible face, filmed more or less from the front (it can also be a real-time webcam stream). It deduces on the fly the head position and expression of said human using the Visage SDK, synthesizes a digital avatar counterpart sporting the same head position and expression, and feeds that forward as a video stream (as if coming from a real webcam), or just saves it to disk.


There are many modules in FaceRig, and each of them deserves its own explanation.

There’s the tracking module.

There’s the animation re-targeting module.

There’s the scene setup and render module.

There’s the audio module.

There’s the virtual webcam module.

And there are a few modules that aren’t even named in English just yet.
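The flow described above is essentially a per-frame pipeline: track, retarget, render, output. Here is a minimal sketch of that loop; every name below is a hypothetical stand-in of ours (in the real product the tracking stage is the Visage SDK and the output stage is a virtual camera device, neither of which is shown here):

```python
from dataclasses import dataclass, field

@dataclass
class FacePose:
    """Sensor-agnostic tracking data: head rotation plus expression weights."""
    yaw: float
    pitch: float
    roll: float
    blendshapes: dict = field(default_factory=dict)  # e.g. {"smile": 0.0-1.0}

def track(frame):
    """Tracking module stub. A real implementation would run a face
    tracker (FaceRig uses the Visage SDK) on the camera frame."""
    return FacePose(yaw=0.1, pitch=0.0, roll=0.0, blendshapes={"smile": 0.8})

def retarget(pose, rig):
    """Animation re-targeting: map tracked expression values onto the
    avatar's rig, defaulting to 0.0 for untracked channels."""
    return {name: pose.blendshapes.get(name, 0.0) for name in rig}

def render(avatar_params):
    """Scene setup and render stub: here the 'frame' is just a dict."""
    return {"rendered_frame": avatar_params}

def virtual_webcam_write(frame):
    """Virtual webcam stub. A real one feeds a virtual camera device
    that apps like Skype can select as their video source."""
    return frame

# One iteration of the loop, driven by a dummy camera frame.
camera_frame = object()
pose = track(camera_frame)
out = virtual_webcam_write(render(retarget(pose, rig=["smile", "brow_raise"])))
```

The key design point mirrored here is the sensor-agnostic `FacePose` in the middle: downstream modules never see raw sensor data, so the tracker can be swapped without touching the rest of the chain.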


The tracking module will encompass all sensor input, and provide standardized, sensor-agnostic data to the other modules.


Right now the bulk of its data is provided by the webcam mocap module based on the Visage SDK by Visage Technologies, who are a group of super smart people from Sweden with super impressive resumes. We really couldn’t have gone public with FaceRig if it weren’t for their excellent Visage SDK. Their algorithms involve lots and lots of “really scary” math 🙂. Without this SDK, FaceRig would have been in development for an extra year or two, easy, and even then the results maybe wouldn’t have been as good.

So literally the heart of FaceRig’s tracking module right now is the Visage SDK from Visage Technologies.


You can use it wherever you would normally use a webcam.

You can also just create movies with it.

We plan to figure out ways for it to interface with existing games (but that will also take some support from each game’s developers).


5) When this is released, how many characters, backgrounds, and props will be available?


Depends on the funding we get. A minimum of ten of each, but we are aware that’s very little. We can’t realistically ramp up production too much with just two artists, so we’ll either have to outsource or bring more artists aboard, and that takes funding. What we hope will happen though is to have tons of high quality community created models, because FaceRig is an OPEN creation platform for 3d artists and animators.


6) How easy will it be to create your own props, characters, and backgrounds?


They can be created with existing off-the-shelf tools, and they’ll have to go through an import process. There will be a learning curve at the start: the artists will learn our process, and we will learn their needs… but nothing unmanageable.


The question that I suspect many will want answered is: how easy will it be to create good-looking, almost Pixar-like quality props, characters, and backgrounds?


That’s a bit like asking, “How easy is it to be an awesome 3D artist?”


Depends on how good and unique you want them to look, really, and your level of experience with the tools of your choice. Really good unique models can take several weeks to be put together for veteran artists, but in the end the result can be truly impressive.


7) FaceRig boasts real-time voice processing. Can you tell us a bit more about how that works?


FaceRig *will* boast… right now it is just a placeholder that serves to communicate the vision :).


The computer intercepts the audio stream from the real-life microphone, applies the effects of your choice, and feeds the processed sound onward as if it were coming from a virtual microphone device. In Skype, you select the virtual microphone device as your sound input. (Conceptually, we will handle the image the same way, with a virtual webcam device.)


Right now we are barely scratching the surface with sound, so there’s nothing impressive to show just yet, just a run-of-the-mill pitch changer. We do want to build a world-class real-time voice processing tool eventually, but we really haven’t truly started working on it yet.
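A run-of-the-mill pitch changer of the kind mentioned can be sketched as a simple resampling pass over the audio buffer. This is a toy illustration under our own assumptions, not FaceRig’s code; note that naive resampling also changes the clip’s duration, which is why serious real-time voice tools use techniques like phase vocoders or PSOLA instead:

```python
def shift_pitch(samples, factor):
    """Naively raise (factor > 1) or lower (factor < 1) the pitch of a
    list of audio samples by resampling with linear interpolation.
    Playing the buffer back 'factor' times faster raises every
    frequency by that factor -- and shortens the clip accordingly."""
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor              # fractional read position
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + b * frac)  # linear interpolation
    return out

# A 4-sample ramp resampled at 2x plays an octave higher (and half as long).
octave_up = shift_pitch([0.0, 1.0, 2.0, 3.0], 2.0)  # -> [0.0, 2.0]
```

In a real voice-changer chain this function would sit between the microphone capture and the virtual microphone device described above, processing one small buffer at a time.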


8) How much will this cost and what features are available at each level?


It starts at 5 USD for people who get in during the crowdfunding, for home, non-commercial use. The retail equivalent of this license will be around 15 USD when it launches.

It goes all the way up to professional-software prices (about 500 or 600 USD for the retail Studio version, which will also export numeric mocap values).


9) This will be available on which platforms and what’s the difference between the two?

At first Windows PC, with iOS the most likely next one… but mainly it will be steered by the community (our Indiegogo backers will have a heavy say in it). Mobile avatars will be less visually impressive than desktop ones.

Community Post: This article was submitted by a member of our community. The views expressed are the opinions of the designated author, and do not reflect the opinions of the Overmental as a whole or any other individual.
