Hi,
First of all, thank you for the great work.
I've run into an issue with fine-tuning. I tried fine-tuning EfficientFormerV2-L with the --resume argument, but when I launch training I get the following error: Failed to find state_dict_ema, starting from loaded model weights.
Can you please help me resolve this error? Thank you in advance.
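In case it's useful, here is a quick way to check what the checkpoint actually contains; the file name is a placeholder, and the 'state_dict_ema' / 'model' key names are my guesses inferred from the warning text, not verified against the repo's resume code:

```python
import torch

# Load the checkpoint on CPU and list its top-level keys
# ("checkpoint.pth" is a placeholder for the real path).
ckpt = torch.load("checkpoint.pth", map_location="cpu")
print(list(ckpt.keys()))

# Possible workaround (an assumption, not the authors' procedure): if the
# resume logic expects a 'state_dict_ema' entry that the released checkpoint
# lacks, seed it from the plain model weights so --resume can restore the
# EMA state instead of warning and initializing it from scratch.
if "state_dict_ema" not in ckpt:
    ckpt["state_dict_ema"] = ckpt.get("model", ckpt)
    torch.save(ckpt, "checkpoint_with_ema.pth")
```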
I've tried using the --finetune parameter instead:

python -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model $MODEL --data-path /data/path --output_dir efficientformerv2_l_out --batch-size 32 --finetune $CKPT --distillation-type none

Now I am getting the following error:
Traceback (most recent call last):
  File "/data/ben/EfficientFormer/main.py", line 423, in <module>
    main(args)
  File "/data/ben/EfficientFormer/main.py", line 372, in main
    train_stats = train_one_epoch(
  File "/data/ben/EfficientFormer/util/engine.py", line 42, in train_one_epoch
    outputs = model(samples)
  File "/home/ben/anaconda3/envs/efficient_former/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ben/anaconda3/envs/efficient_former/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 994, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
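For reference, here is a minimal, self-contained sketch (single process, CPU, gloo backend) of the failure mode this error describes and the fix it suggests; the TwoHeads module and all names in it are made up for illustration, not taken from EfficientFormer:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group so DDP can be constructed standalone.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

class TwoHeads(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Linear(4, 4)
        self.extra_head = torch.nn.Linear(4, 4)  # parameters never used below

    def forward(self, x):
        return self.backbone(x)  # extra_head never contributes to the loss

# Without find_unused_parameters=True, the second iteration raises the same
# "Expected to have finished reduction" RuntimeError, because extra_head's
# parameters never receive gradients.
model = DDP(TwoHeads(), find_unused_parameters=True)

for _ in range(2):
    model.zero_grad()
    loss = model(torch.randn(2, 4)).sum()
    loss.backward()

dist.destroy_process_group()
```

In the actual script the equivalent change would be passing find_unused_parameters=True where main.py wraps the model in DistributedDataParallel. The flag adds some overhead per iteration, so finding the branch that never feeds the loss (possibly a distillation head, given --distillation-type none) may be the cleaner fix, but I haven't confirmed which parameters are unused here.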