Use accelerate's API to handle gradient accumulation #2

Open
wants to merge 4 commits into master

Conversation

LiyouZhou

With the current code, training with fp16 raises the following error during gradient accumulation:

Traceback (most recent call last):
  File "/data/liyouzhou/study/GenieRedux/main.py", line 24, in <module>
    main()
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/main.py", line 94, in decorated_main
    _run_hydra(
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "/data/liyouzhou/study/GenieRedux/main.py", line 14, in main
    train.run(config)
  File "/data/liyouzhou/study/GenieRedux/train.py", line 117, in run
    trainer.train()
  File "/data/liyouzhou/study/GenieRedux/training/trainer.py", line 682, in train
    logs = self.train_step(*args, **kwargs)
  File "/data/liyouzhou/study/GenieRedux/training/trainer.py", line 482, in train_step
    self.accelerator.clip_grad_norm_(
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/accelerate/accelerator.py", line 2157, in clip_grad_norm_
    self.unscale_gradients()
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/accelerate/accelerator.py", line 2107, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/home/liyouzhou/anaconda3/envs/genie_redux/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 296, in unscale_
    raise RuntimeError(
RuntimeError: unscale_() has already been called on this optimizer since the last update().
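
For context, the error appears to come from clipping on micro-batches where no optimizer step follows: under fp16, `clip_grad_norm_` unscales the gradients via the `GradScaler`, so calling it on every accumulation step runs `unscale_()` twice between updates. A rough sketch of that pattern (the model, dataloader, and hyperparameter names are placeholders, not the actual trainer code):

```python
# Hypothetical sketch of the failing pattern: manual gradient accumulation
# under fp16, clipping on every micro-batch instead of only when stepping.
for step, batch in enumerate(dataloader):
    loss = model(batch) / grad_accum_steps
    accelerator.backward(loss)
    # clip_grad_norm_ calls scaler.unscale_(optimizer) internally; doing this
    # on every micro-batch means unscale_() runs again before update(),
    # which raises the RuntimeError shown above.
    accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
    if (step + 1) % grad_accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```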

This patch switches to Accelerate's gradient accumulation API. Accelerate now takes care of gradient accumulation, and all mixed-precision settings work as expected.
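
The resulting training step would look roughly like the sketch below (a minimal example assuming an `Accelerator` configured with `gradient_accumulation_steps`; the model, optimizer, and dataloader names are placeholders rather than this repository's actual trainer code):

```python
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4, mixed_precision="fp16")
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    # accumulate() tracks micro-batches and only syncs gradients on the last
    # one, so loss scaling/unscaling stays consistent under fp16.
    with accelerator.accumulate(model):
        loss = model(batch)
        accelerator.backward(loss)
        if accelerator.sync_gradients:
            # Clip only when an optimizer step will actually happen, so
            # unscale_() runs at most once per update().
            accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        optimizer.zero_grad()
```

Inside `accumulate()`, `optimizer.step()` and `zero_grad()` can be called on every iteration; Accelerate skips the actual parameter update until the final micro-batch of each accumulation window.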

@LiyouZhou
Author

@NSavov @naser-kazemi Would you mind having a look at this PR please?
