Dataloader tensor size error #22
I think the resolution of your images is not consistent, please double-check. Note that you should check the undistorted images, not the raw inputs from before you run COLMAP.
Do you mean my dataset size? I have checked that their shapes are indeed consistent. Do I need to make some changes in the code? Because the scripts are all written to process the DTU and TNT datasets.
That doesn't make sense to me. If your images are all of the same resolution, then why would the tensor shapes differ?
Yes, I used COLMAP. Setting the batch size to 1 makes it run normally, which suggests there is no problem with my image sizes.
Please check the resolution of the undistorted images (after running COLMAP). If you use COLMAP to calibrate cameras with multiple camera models, the resulting distortion coefficients can differ, leading to undistorted images with different resolutions.
Okay, I'll go check them again. Thank you for your patient reply!
I hit the same problem; the shape 512 is the numdepth parameter (datasets/data_eval_transform.py:128).
Is your dataset your own or a public one? I set batchsize=1 to work around the problem.
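Since the mismatched dimension is the numdepth parameter rather than an image dimension, one plausible cause (an assumption, since I have not inspected `data_eval_transform.py`) is that the number of depth planes is derived per view by rounding `(depth_max - depth_min) / depth_interval`: views whose COLMAP depth ranges differ slightly can then land on 512 vs 513 planes. A sketch of how that happens and one way to avoid it:

```python
import math

def num_depth_planes(depth_min, depth_max, interval):
    # Hypothetical per-view derivation: ceil of the range/interval ratio.
    # Any non-integral ratio pushes the count up by one.
    return int(math.ceil((depth_max - depth_min) / interval))

# Two views with slightly different COLMAP depth ranges:
print(num_depth_planes(425.0, 937.0, 1.0))  # 512
print(num_depth_planes(425.0, 937.5, 1.0))  # 513

# Fix: hold the plane count fixed and adjust the interval per view instead,
# so every sample in a batch has the same depth dimension.
NUMDEPTH = 512
interval_b = (937.5 - 425.0) / (NUMDEPTH - 1)
depths_b = [425.0 + i * interval_b for i in range(NUMDEPTH)]
assert len(depths_b) == NUMDEPTH
```

The depth values (425.0, 937.0, 937.5) are made up for illustration; the point is only that per-view rounding can disagree by one plane while a fixed global count cannot.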
Hello, thank you for your paper and code contributions! I encountered an error while testing a dataset created with colmap2mvsnet.py. Setting the batch size to 1 works around it.
```
Traceback (most recent call last):
  File "eval.py", line 125, in <module>
    save_depth()
  File "eval.py", line 87, in save_depth
    for batch_idx, sample in enumerate(TestImgLoader):
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
    data = self._next_data()
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1347, in _next_data
    return self._process_data(data)
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1373, in _process_data
    data.reraise()
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in default_collate
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in <dictcomp>
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 149, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 141, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [512] at entry 0 and [513] at entry 1
```
But the batch size can then only be set to 1, which is a bit inconvenient. Do you know why this happens? My dataset's image dimensions are consistent, so I don't see where the error comes from.
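For context, the `RuntimeError` comes from `default_collate`, which ultimately calls `torch.stack` on the per-sample tensors; any size mismatch between samples in a batch triggers it, and a batch of 1 trivially avoids it. A minimal sketch that reproduces the failure and sidesteps it with a hypothetical `crop_collate` (not part of this repo):

```python
import torch

def crop_collate(batch):
    """Hypothetical workaround: crop each 1-D tensor in the batch to the
    shortest length so they become stackable."""
    n = min(t.shape[0] for t in batch)
    return torch.stack([t[:n] for t in batch], 0)

a, b = torch.zeros(512), torch.zeros(513)  # the sizes from the traceback

try:
    torch.stack([a, b], 0)  # what default_collate effectively does
except RuntimeError as e:
    print("stack failed:", e)

print(crop_collate([a, b]).shape)  # torch.Size([2, 512])
```

In the real loader each sample is a dict of tensors, so a crop-based `collate_fn` passed to `DataLoader` would have to handle every key; the cleaner fix is to make the per-view tensor sizes (here, the number of depth planes) identical before batching.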