Quickstart

Create a Deep IO Configuration File

Click the Golaem Deep shelf icon to open the user interface, or type glmDeepSettings as a MEL command.
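If you prefer scripting, the UI can also be opened from Python; a minimal sketch using the glmDeepSettings command mentioned above (everything else is standard Maya API):

    import maya.mel as mel

    # Opens the Golaem Deep settings UI, same as the shelf icon
    mel.eval("glmDeepSettings")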

The UI presents the first of the model configuration steps.

You can already save the configuration in a placeholder node in Maya by clicking the save button at the bottom right.

The configuration file is also saved when sampling or training is called, to ensure that it is up to date when loaded by the corresponding command.

Configure the meshes and names

Reference Guide for this step: MESHES AND NAMES CONFIGURATION

The first step is to configure the name you want to give to the asset, where you want to save the configuration file (.dio), and which meshes you want to deform.

You can add several meshes to a configuration. This avoids having too many models, but in some cases the training could achieve better results with split models, if spurious correlations exist between the rig inputs driving each mesh.

Once the meshes are added to the configuration, you need to create a single-skinCluster version of them, which will be used to compute the non-linear deformation, and will also serve as the base of the deformer once the model is trained.
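For reference, here is what a single-skinCluster version looks like when built by hand with standard Maya commands; this is only a conceptual sketch (the mesh and joint names are hypothetical, and the UI normally creates this mesh for you):

    import maya.cmds as cmds

    # Hypothetical names: the mesh added to the configuration and its bind joints
    src_mesh = "body_GEO"
    joints = ["hips_JNT", "spine_JNT", "head_JNT"]

    # Duplicate the mesh and bind it with a single skinCluster: this linear
    # version computes the linear part of the deformation and becomes the
    # base mesh of the deformer once the model is trained
    linear_mesh = cmds.duplicate(src_mesh, name=src_mesh + "_linear")[0]
    cmds.skinCluster(joints, linear_mesh, toSelectedBones=True, maximumInfluences=4)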

You will find SIM buttons, which stands for Show In Maya, in many places of the UI: clicking one selects the corresponding node, so you can quickly locate it in the Outliner with a right-click "show selected".

[insert screen cap of SIM and outliner show selected]

Configure RIG Inputs

Reference Guide for this step: RIG INPUTS

At this step, the goal is to define a minimal, yet sufficient, set of rig inputs that can drive the deformations to be handled. Typical inputs are the macro rig inputs (as micro controls are usually driven by a combination of macro ones).
You can make use of:

  • RIG joint matrices. Each matrix is converted to 4x3 = 12 float inputs for the model (see the sketch after this list). This must only be used for skinCluster input joints, as a single normalization is shared by all the joint node translations.
  • manipulator matrices in mesh local spaces. Each matrix is converted to 4x3 = 12 float inputs for the model. This is suitable for any (local) matrix input. Normalization is done over all samples on a per-matrix basis.
  • weights or any single float values. Each of them is normalized individually.
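To illustrate the 4x3 = 12 float conversion: a Maya matrix holds 16 values, but its fourth column is always (0, 0, 0, 1), so only 12 floats carry information. A sketch with a hypothetical joint name:

    import maya.cmds as cmds

    # Query the 16 values of a matrix (hypothetical joint name);
    # use worldSpace=False for a local-space manipulator matrix
    m = cmds.xform("hips_JNT", query=True, matrix=True, worldSpace=True)

    # Maya matrices are row-major and their 4th column is always (0, 0, 0, 1),
    # so dropping every 4th value keeps the 4x3 = 12 meaningful floats
    model_inputs = [v for i, v in enumerate(m) if i % 4 != 3]
    assert len(model_inputs) == 12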

Extraction / Sampling

Reference Guide for this step: EXTRACTION

This is an easy step, yet it can take a while. Sampling uses the current scene frame range as the default if you click the matching button. Note that samples are incremental: a configuration can be used on different scenes holding the same rig to gather more samples for training. Samples are only cleared at the user's request.
Sampling can remove identical samples (same rig input values) if the user requests it. We usually skip identical frames, which are probably rest poses, to avoid making them artificially more important to learn than others. Note that a deep learning model learns best what it sees most often in the samples.
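Duplicate removal can be pictured as keying each sample by its rig input values; a conceptual sketch, not the actual implementation:

    # Conceptual sketch: keep one sample per unique set of rig input values
    def deduplicate(samples):
        seen = set()
        unique = []
        for rig_inputs in samples:
            key = tuple(round(v, 6) for v in rig_inputs)  # tolerate float noise
            if key not in seen:
                seen.add(key)
                unique.append(rig_inputs)
        return unique

    # Three frames stuck on the same pose collapse into one sample
    print(deduplicate([[0.0, 1.0], [0.0, 1.0], [0.5, 1.0]]))
    # -> [[0.0, 1.0], [0.5, 1.0]]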

Training the Model

Reference Guide for this step: CREATING THE MODEL

Once the sampling is done, the model can use the samples to train itself.

In the advanced configuration, the default parameters are often suitable.

The two most important parameters are:

  • the learning rate, which drives the convergence speed of the model. A value of 0.0001 or 0.00001 is probably what will be needed to achieve good results.
  • the epoch count, which is the number of training iterations of the model. Some models may need as many as a few thousand iterations to be reliable. This is related to the learning rate: a lower learning rate will need more epochs to converge, as the toy example below illustrates.
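To get an intuition of the learning rate / epoch relation, here is a toy gradient descent on f(x) = x^2, unrelated to the actual network, showing why a 10x smaller learning rate needs roughly 10x more epochs:

    # Toy example: minimize f(x) = x^2 by gradient descent
    def epochs_to_converge(lr, x=1.0, tol=1e-3):
        epochs = 0
        while abs(x) > tol:
            x -= lr * 2.0 * x  # gradient of x^2 is 2x
            epochs += 1
        return epochs

    print(epochs_to_converge(0.0001))   # ~34,500 epochs
    print(epochs_to_converge(0.00001))  # ~345,000 epochs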

The model can be trained incrementally, without starting from scratch, if more samples have been added. This holds only if the meshes and RIG inputs do not change; otherwise you will have to delete the extraction and the model, then sample and train from scratch.

For all advanced parameters, see the reference guide about training [ToDo Link].

At the end of the training, the TensorRT version of the model is exported, which can take a few seconds.

Adding and using the Deformer

Reference Guide for this step: USING THE DEEP DEFORMER

The deformer can be added to the created linear mesh directly after training. If the deformer must be added in other scenes (though it is usually best to add it to the referenced character scene), one can add it back by:

  • calling the UI
  • loading the model DIO file 
  • asking for the creation of the linear mesh
  • adding the deformer

Validating the model training

Reference Guide for this step: ADDITIONAL TOOLS

Several tools are available to check whether the model training went well:

  • The training curves displayed during training (and at its end) should decrease asymptotically. If the curves go up, you probably need to adjust the learning rate (or train on fewer epochs). You may also detect overfitting, which means there are not enough samples available to the model.
  • You can display the error between the target mesh and the deformed linear one by calling the mesh compare deformer tool. It is a deformer placed on the linear mesh that asks for the target mesh name, and it displays the error in green-to-red shades with a configurable error scale. You may need to enable display colors on the linear mesh to see the result.
  • You can also display the epistemic error directly on the deformer. This information is obtained by running the model several times while disabling random inputs: if the results differ too much, the model does not really know how to deform based on those inputs. You may need to enable display colors on the linear mesh to see the result (a snippet showing how follows this list).
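Both tools rely on per-vertex colors being visible; if nothing shows up, display colors can be enabled on the linear mesh shape with a standard Maya attribute (the shape name below is hypothetical):

    import maya.cmds as cmds

    # Hypothetical shape name: use the linear mesh created by the tool
    cmds.setAttr("body_GEO_linearShape.displayColors", 1)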