Training fails with multiple GPUs after the 1st epoch #43
Comments
So my guess as to why this is happening is that the last dataloader batch is not being properly dropped or padded. Do you have your full command so I can reproduce?
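To make "dropped or padded" concrete, here is a minimal PyTorch sketch (not this repository's code) of dropping the incomplete final batch with the DataLoader's drop_last flag, which keeps every rank's iteration count identical:

```python
# Minimal sketch (not sentiment-discovery's actual code): dropping the
# incomplete last batch so every GPU/rank runs the same number of steps.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset; its size deliberately does not divide evenly by the batch size.
data = TensorDataset(torch.randn(1003, 16), torch.randint(0, 2, (1003,)))

# drop_last=True discards the final partial batch (here, 1003 % 64 = 43 samples),
# which keeps per-rank iteration counts identical in data-parallel training.
loader = DataLoader(data, batch_size=64, shuffle=True, drop_last=True)

print(len(loader))  # 15 full batches; the trailing 43 samples are skipped
```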
The issue happens regularly, almost every time, on a system with 3 1080Ti GPUs. On another system with 2 1080Tis and an otherwise identical setup, the issue happens only 10-20% of the time, across different dataset sizes. It happens at the end of the 1st epoch. I would agree that it seems to be data-size dependent and could be related to the size of the last batch. I am using the latest pre-built PyTorch for conda: Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
Here is the full command line I use: python3 -m multiproc main.py --load imdb_clf.pt --batch_size 64 --epochs 20 --lr 2e-4 --data ./data/twits/train_json.json &> twitlog.txt
I have CUDA 9.2 and cuDNN 7.
I changed the batch size from 64 to 100 and the issue disappeared on the 3-GPU system, so you are correct - it is definitely related to the size of the last batch. You can reproduce it with any small dataset by varying the batch size.
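For anyone else hitting this, a quick way to see how the last batch depends on both the dataset size and the global batch (batch size × number of GPUs). The numbers below are made up for illustration, not taken from this dataset:

```python
# Illustrative arithmetic with hypothetical numbers (not the reporter's actual
# dataset): a non-zero remainder means the final step cannot give every GPU a
# full batch, which is where per-rank work can get out of sync.
dataset_len = 1_000_000  # hypothetical
for batch_size, num_gpus in [(64, 3), (100, 3)]:
    global_batch = batch_size * num_gpus
    remainder = dataset_len % global_batch
    print(f"batch {batch_size} x {num_gpus} GPUs -> {remainder} leftover samples")
# batch 64 x 3 GPUs -> 64 leftover samples
# batch 100 x 3 GPUs -> 100 leftover samples
```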
I wasn't able to reproduce this on our smallest datasets. Could you print out the number of entries in your dataset for me, so I can try to create some synthetic data of the same length?
I changed my dataset size a bit and now I can't reproduce it either. Hopefully I will be able to get it to reproduce again if my data changes.
I got it to reproduce on a 2-GPU system with these data sizes:
DATASET length: 42243230 422951 422408
Full command line: python3 -u -m multiproc main.py --load lang_model.pt --batch_size 110 --epochs 2 --lr 6e-4 --data ./data/twitter/train_json.json &> med_log.txt
Please let me know if you need the data file; I can upload it somewhere.
On another note, the train_json.json that I use is 14.4 GB in size, yet the program requires 64 GB of RAM plus another 64 GB of swap space to run. Is there a way to improve its memory footprint so it can run with larger datasets?
Try the --lazy option. It pulls data from disk and is meant to be used with large data files (such as the Amazon reviews dataset). If you could upload the file somewhere that would be very helpful, even if you just replace all the entries in the data file with garbage text.
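For reference, the general idea behind a lazy loader is to keep only a byte-offset index in memory and read records on demand. The sketch below illustrates that idea for a JSON-lines file; the class name and text_key field are hypothetical, and this is not the repository's actual --lazy implementation:

```python
# Generic sketch of lazy, index-based access to a JSON-lines file so the
# whole multi-gigabyte file never has to live in RAM. Not sentiment-discovery's
# actual --lazy implementation; names here are hypothetical.
import json
from torch.utils.data import Dataset

class LazyJsonLines(Dataset):
    def __init__(self, path, text_key='text'):
        self.path = path
        self.text_key = text_key
        self.offsets = []
        # One pass to record the byte offset of each line; only the offsets
        # (a few bytes per record) are kept in memory.
        with open(path, 'rb') as f:
            offset = 0
            for line in f:
                self.offsets.append(offset)
                offset += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, idx):
        # Seek directly to the record and parse just that one line.
        with open(self.path, 'rb') as f:
            f.seek(self.offsets[idx])
            record = json.loads(f.readline())
        return record[self.text_key]
```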
Original report:
If I run with multiple GPUs (3 in my case) via python3 -m multiproc main.py ...., I get the following error after successfully completing the 1st epoch:
Traceback (most recent call last):
File "main.py", line 392, in
val_loss, skipped_iters = train(total_iters, skipped_iters, elapsed_time)
File "main.py", line 305, in train
model.allreduce_params()
File "sentiment-discovery/model/distributed.py", line 41, in allreduce_params
dist.all_reduce(coalesced)
File "/home/tester/anaconda3/lib/python3.6/site-packages/torch/distributed/init.py", line 324, in all_reduce
return torch._C._dist_all_reduce(tensor, op, group)
RuntimeError: [/opt/conda/conda-bld/pytorch_1532579245307/work/third_party/gloo/gloo/transport/tcp/buffer.cc:76] Read timeout [127.0.0.1]:32444
terminate called after throwing an instance of 'gloo::EnforceNotMet'
what(): [enforce fail at /opt/conda/conda-bld/pytorch_1532579245307/work/third_party/gloo/gloo/cuda_private.h:40] error == cudaSuccess. 29 vs 0. Error at: /opt/conda/conda-bld/pytorch_1532579245307/work/third_party/gloo/gloo/cuda_private.h:40: driver shutting down
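For context on where this fails: the traceback points at a coalesced gradient all-reduce after each training step. Below is a generic sketch of that pattern (flatten gradients, all-reduce once, average, copy back), assuming an already-initialized torch.distributed process group; it is not the repository's exact allreduce_params. If one rank runs out of batches before the others (for example because of an uneven last batch), the remaining ranks block in dist.all_reduce until the transport gives up, which matches the gloo "Read timeout" above.

```python
# Generic sketch of a coalesced gradient all-reduce (the pattern named in the
# traceback above), assuming torch.distributed is already initialized.
# Not the repository's exact allreduce_params implementation.
import torch
import torch.distributed as dist
from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors

def allreduce_gradients(model):
    grads = [p.grad.data for p in model.parameters() if p.grad is not None]
    if not grads:
        return
    # Flatten all gradients into one buffer so a single collective call is made.
    coalesced = _flatten_dense_tensors(grads)
    # Every rank must reach this call the same number of times per epoch;
    # if one rank exits its training loop early, the others block here until
    # the transport times out (the gloo "Read timeout" above).
    dist.all_reduce(coalesced)
    coalesced /= dist.get_world_size()
    for grad, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)):
        grad.copy_(synced)
```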