Hello there!
This is a quick test with an iPhone X using the TrueDepth technology, with facial data wired to a rig based on Advanced Skeleton. The animation curves are still noisy and need polishing (also, my beard and the lighting conditions weren't the best). However, with a little bit of tuning and clean-up the results are actually promising, at least as a good base to start with.
The goal of having a home facial capture system is within reach. Getting a production-ready look could be based on two ideas:
1. Tweaking how the data is interpreted by the rig: adding comprehensive limits to the target rig, filtering the noise depending on its nature, etc. In other words, a little bit of manual labour.
2. Using deep learning to clean up animation curves that don't read as human motion, reinforcing good patterns with hours of data.
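As a rough illustration of the first idea, here is a minimal Python sketch of clamping a captured blendshape curve to rig limits and smoothing out per-frame jitter with a simple box filter. The curve values, the `jawOpen` name, and the 0.9 limit are all made-up examples, not data from the actual TrueDepth/Advanced Skeleton setup:

```python
def clamp(value, lo, hi):
    """Limit a blendshape weight to the target rig's allowed range."""
    return max(lo, min(hi, value))

def moving_average(curve, window=3):
    """Simple box filter to reduce per-frame jitter in an animation curve."""
    half = window // 2
    smoothed = []
    for i in range(len(curve)):
        start = max(0, i - half)
        end = min(len(curve), i + half + 1)
        smoothed.append(sum(curve[start:end]) / (end - start))
    return smoothed

# Hypothetical jittery "jawOpen" curve sampled per frame (weights 0..1).
raw = [0.10, 0.45, 0.20, 0.55, 0.50, 0.95, 0.48, 0.52]
clamped = [clamp(v, 0.0, 0.9) for v in raw]  # assumed rig limit at 0.9
clean = moving_average(clamped, window=3)
print([round(v, 3) for v in clean])
```

In production you would likely pick the filter per curve depending on the kind of noise (a wider window for slow drift, a median filter for single-frame spikes), which is the "filtering the noise depending on its nature" part.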
Good news: I will implement a little bit of this in The Seed of Juna for some key facial animation shots.