Learning Rate Scheduler #22

Open
rusheniii opened this issue Jun 23, 2021 · 1 comment
Comments

@rusheniii

Is your feature request related to a problem? Please describe.
Sometimes the loss jumps around after a large number of epochs during training. It would be nice to use a learning rate scheduler.
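A minimal sketch of what this could look like, assuming the training loop uses a standard PyTorch optimizer; the parameters and loss below are placeholders, not the actual objects in lddmm.py:

```python
# Hedged sketch: attach a ReduceLROnPlateau scheduler to an existing loop.
import torch

params = [torch.randn(10, requires_grad=True)]  # placeholder parameters
optimizer = torch.optim.SGD(params, lr=1e-2)

# Cut the LR by 10x whenever the monitored loss stops improving.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10)

for epoch in range(100):
    optimizer.zero_grad()
    loss = (params[0] ** 2).sum()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # step on the training loss (no validation set)
```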

@jacobhinkle (Owner)

This is where the canned CLI tool is going to limit you if you want to extend the basic training loop. There's no realistic way to support everything one might want to do from the CLI. Instead, I would suggest using the code in lddmm.py as a starting point if you'd like to add things; that will be much quicker than adding new argparse arguments each time we try something new.

Since atlas building is not a predictive modeling task, we do not perform any sort of validation monitoring. One could just monitor the training loss and decrease the LR whenever it jumps, or do a line search at each step, or something along those lines. Also note that different LRs are needed for the momenta and the template image; do we just scale them both with the scheduler?
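As a rough illustration of that last point, a single multiplicative scheduler could scale both learning rates while keeping their ratio fixed; `momenta` and `template` here are placeholder tensors and values, not names taken from lddmm.py:

```python
# Hedged sketch: one parameter group per LR, scaled together by one scheduler.
import torch

momenta = torch.zeros(5, 2, 32, 32, requires_grad=True)
template = torch.zeros(1, 1, 32, 32, requires_grad=True)

optimizer = torch.optim.SGD([
    {"params": [momenta], "lr": 1e-3},   # placeholder momenta LR
    {"params": [template], "lr": 1e-1},  # placeholder template LR
])

# Multiply every group's LR by 0.95 each epoch; the momenta/template ratio
# stays fixed and only the overall scale decays.
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer, lr_lambda=lambda epoch: 0.95)

for epoch in range(50):
    optimizer.zero_grad()
    loss = (momenta ** 2).sum() + (template ** 2).sum()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # decays both group LRs by the same factor
```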
