Hello all, and congratulations on the great repository!
I have successfully finetuned the Stable Audio Open model on a couple of different tasks already. But so far I've had to decide when to stop training and which checkpoints to keep by listening to and evaluating samples manually.
It would be nice to have objective validation metrics in the PL training pipeline, logged to WandB.
It is hard to assess overfitting, tune hyperparameters, and decide when to stop training by listening to samples and analyzing only the train/loss curve.
Currently I am implementing a custom validation pipeline using the metrics described in the paper (and equivalents): CLAP score, FD_openl3, KL_passt.
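For reference, here is a minimal sketch of two of those metrics operating on precomputed embeddings. It assumes the CLAP score is a mean cosine similarity between paired audio/text embeddings and that FD_openl3 is a Fréchet distance between Gaussian fits of OpenL3 embeddings; the embedding extraction itself (CLAP, OpenL3) is left out, and the function names are my own:

```python
import numpy as np
from scipy import linalg


def clap_style_score(audio_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Mean cosine similarity between paired audio/text embeddings,
    shape (n, d) each. Higher is better (closer text-audio alignment)."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * t, axis=1)))


def frechet_distance(emb_real: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two embedding sets,
    FD = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)).
    Lower is better (generated distribution closer to the reference)."""
    mu1, mu2 = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    s1 = np.cov(emb_real, rowvar=False)
    s2 = np.cov(emb_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

In a PL setup these could be computed in `validation_step`/`on_validation_epoch_end` over a fixed prompt set and logged with `self.log(...)` so WandB picks them up.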