An idea for micro facial models
Hello, I'd like to know if it's possible to build a complex facial model in a safetensors file. The idea would be to create a complementary node in ReActor that takes an image folder as input and runs through each image to build up a full facial model (different camera angles, different lighting, etc.), incrementally refining the facial model at each iteration.
I'm not familiar with the inner workings of ReActor or even .safetensors files, but the aim would be to train a mini facial model from a dozen or more images rather than just one.
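As a rough illustration of the kind of incremental build-up I have in mind (assuming a ReActor-style face model boils down to an embedding vector per face, which is an assumption on my part, not something I've verified), a running mean over per-image embeddings could be updated one image at a time without keeping earlier images around:

```python
import numpy as np

def fold_in(mean, count, embedding):
    """Running-mean update: incorporate one new face embedding
    into the accumulated model without storing previous images."""
    count += 1
    mean = mean + (embedding - mean) / count
    return mean, count

# Stand-in data: 12 hypothetical 512-d embeddings, one per source image.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(12, 512)).astype(np.float32)

mean, count = np.zeros(512, dtype=np.float32), 0
for e in embeddings:
    mean, count = fold_in(mean, count, e)

# The incremental result matches the batch average over all images.
assert np.allclose(mean, embeddings.mean(axis=0), atol=1e-5)
```

If the blended embedding really is all a face model contains, it could then be written out with, e.g., `safetensors.numpy.save_file({"embedding": mean}, "face.safetensors")` — again, assuming that layout.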
Objectives
The aim is that once the model has been built, we can use it as a basis for regular rendering (eye orientation, etc.).
Is this workflow already possible?
In this workflow, a first image is generated with a model, which is then passed to a second swap with the detection reversed; this should add information, but the file size of the output model doesn't change, which gives me the impression that only the last swap saves its model.
Another approach
Is it possible to generate 100 different safetensors files and then merge them with a mergeCheckpoint node?
Will this give good results? Or is this not how it works?
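For the merge route, if each safetensors file holds identically-named, identically-shaped tensors, a plain element-wise average is one naive way to combine them. This is only a sketch of that idea — I don't know whether a checkpoint-merge node does anything smarter, and averaging face models may not behave like averaging checkpoints:

```python
import numpy as np

def average_models(state_dicts):
    """Element-wise average of N tensor dicts sharing keys and shapes.
    A naive stand-in for what a checkpoint-merge node might do."""
    keys = state_dicts[0].keys()
    assert all(d.keys() == keys for d in state_dicts), "mismatched keys"
    return {k: np.mean([d[k] for d in state_dicts], axis=0) for k in keys}

# Two toy "models" with one tensor each; the merge is their midpoint.
a = {"w": np.array([0.0, 2.0], dtype=np.float32)}
b = {"w": np.array([2.0, 4.0], dtype=np.float32)}
merged = average_models([a, b])
assert np.allclose(merged["w"], [1.0, 3.0])
```

In practice each dict would come from `safetensors.numpy.load_file(path)` for one of the 100 files, with the result written back out via `save_file` — assuming all the files share a layout.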