Implement Cyclic Learning Rate and Step-wise Learning Rate Scheduler #213
Conversation
The code has been cleaned. It is now the same as the upstream.
The test example has been updated.
Could this file be made a bit smaller?
done
@@ -30,6 +30,7 @@ def __init__(
        self.model = model.to(self.device)
        self.optimizer = get_optimizer(model_param=self.model.parameters(), **train_options["optimizer"])
        self.lr_scheduler = get_lr_scheduler(optimizer=self.optimizer, **train_options["lr_scheduler"])  # add optimizer
        self.update_lr_per_step_flag = train_options["update_lr_per_step_flag"]
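For context, a hedged sketch of what the corresponding `train_options` entries might look like. The key names beyond those visible in the diff (e.g. `"type"`, `"base_lr"`, `"max_lr"`) are hypothetical and depend on what `get_lr_scheduler` actually accepts:

```python
# Hypothetical train_options fragment; the exact schema is an assumption,
# not the project's documented configuration.
train_options = {
    "optimizer": {"type": "Adam", "lr": 1e-3},
    "lr_scheduler": {              # forwarded as get_lr_scheduler(optimizer=..., **this)
        "type": "CyclicLR",
        "base_lr": 1e-4,
        "max_lr": 1e-3,
        "step_size_up": 2000,
    },
    # When True, lr_scheduler.step() is called once per training iteration
    # (needed for cyclic / step-wise schedules); when False, once per epoch.
    "update_lr_per_step_flag": True,
}
```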
If this flag is false, is the learning rate not updated at all?
@@ -129,6 +130,11 @@ def iteration(self, batch, ref_batch=None):
        loss.backward()
        # TODO: add clipping for large gradients
        self.optimizer.step()
        if self.update_lr_per_step_flag:
I don't quite understand what this switch is for.
Updating the learning rate requires an explicit call to self.lr_scheduler.step(). This switch makes it possible to call it inside every iteration; otherwise it is called once per epoch.
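A minimal, self-contained sketch of the behaviour described above, using PyTorch's built-in CyclicLR. The model, data, and loop here are placeholders rather than the project's trainer code; only the flag-controlled placement of `scheduler.step()` mirrors this PR:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import CyclicLR

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# Cyclic / step-wise schedules are meant to be stepped after every batch,
# not once per epoch.
scheduler = CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=50)

update_lr_per_step_flag = True  # mirrors the new trainer option

for epoch in range(3):
    for _ in range(100):
        optimizer.zero_grad()
        loss = model(torch.randn(8, 4)).pow(2).mean()  # dummy loss
        loss.backward()
        optimizer.step()
        if update_lr_per_step_flag:
            scheduler.step()      # per-iteration LR update (cyclic / step-wise)
    if not update_lr_per_step_flag:
        scheduler.step()          # per-epoch LR update (epoch-based schedules)
```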
As mentioned in [issue 211], this PR aims to implement a cyclic learning rate and a step-wise learning rate scheduler.
Major changes include:
Minor change: