Is your feature request related to a problem? Please describe.
Sometimes the loss jumps around after a large number of epochs during training. It would be nice to use a learning rate scheduler.
This is where the canned CLI tool will limit you if you want to extend the basic training loop; there is no realistic way to support everything one might want to do from the CLI. Instead, I would suggest using the code in lddmm.py as a starting point if you'd like to add functionality, since that is much quicker than wiring up new argparse arguments each time we try something new.

Since atlas building is not a predictive modeling task, we do not perform any validation monitoring. One could simply monitor the training loss and decrease the LR whenever it jumps, or do a line search at each step, and so on. Also note that different LRs are needed for the momenta and the template image; do we just scale them both with the scheduler?
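As a rough illustration of the "monitor the training loss and decrease the LR whenever it jumps" idea, here is a minimal sketch of a scheduler that tracks the best loss seen so far and scales both learning rates (momenta and template) by the same factor on a jump. The class and parameter names (`JumpScheduler`, `momenta_lr`, `template_lr`, `factor`, `tol`) are hypothetical, not part of the existing code; in a PyTorch loop the same effect could be had by putting the momenta and template in separate optimizer param groups and letting a plateau-style scheduler scale both.

```python
class JumpScheduler:
    """Hypothetical sketch: halve both LRs whenever the training loss
    rises more than `tol` above the best loss seen so far. Whether a
    single shared decay factor is appropriate for both the momenta and
    the template image is exactly the open question above."""

    def __init__(self, momenta_lr, template_lr, factor=0.5, tol=0.0):
        self.lrs = {"momenta": momenta_lr, "template": template_lr}
        self.factor = factor
        self.tol = tol
        self.best = float("inf")

    def step(self, loss):
        if loss > self.best + self.tol:
            # Loss jumped: scale both learning rates by the same factor.
            for key in self.lrs:
                self.lrs[key] *= self.factor
        else:
            # Loss improved (or held): record the new best.
            self.best = min(self.best, loss)
        return self.lrs


# Example: a jump from 5.0 to 7.0 halves both learning rates.
sched = JumpScheduler(momenta_lr=1e-2, template_lr=1e-3, factor=0.5)
sched.step(10.0)
sched.step(5.0)
sched.step(7.0)
```

After the jump at 7.0, `sched.lrs` is `{"momenta": 0.005, "template": 0.0005}`; the ratio between the two LRs is preserved, which is one (debatable) answer to the question of how to scale them jointly.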