
Dataloader tensor size error #22

Open
lemonsstyle opened this issue Mar 30, 2023 · 8 comments

Comments

@lemonsstyle commented Mar 30, 2023

Hello, thank you for your paper and code contributions! I encountered an error while testing a dataset created with colmap2mvsnet.py. Setting the batch size to 1 avoids the error, but then not much data can be processed at once.

```
Traceback (most recent call last):
  File "eval.py", line 125, in <module>
    save_depth()
  File "eval.py", line 87, in save_depth
    for batch_idx, sample in enumerate(TestImgLoader):
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
    data = self._next_data()
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1347, in _next_data
    return self._process_data(data)
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1373, in _process_data
    data.reraise()
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in default_collate
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in <dictcomp>
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 149, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "/home/west/.conda/envs/d2hc/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 141, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [512] at entry 0 and [513] at entry 1
```

But only being able to set the batch size to 1 is a bit inconvenient. Do you know why this happens? My dataset's image dimensions are all consistent, so I don't understand why the error occurs.
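For context on the error above: the DataLoader's default collate function batches the per-sample tensors with torch.stack (visible at the bottom of the traceback), which requires every tensor in the batch to have exactly the same shape; with batch size 1 there is nothing to stack against, which is why the error disappears. A minimal sketch of the same failure (the sizes 512/513 mirror the traceback; the tensors here are placeholders):

```python
import torch

# torch.stack, used by the DataLoader's default collate, requires that
# all tensors in the batch have identical shapes.
batch_ok = [torch.zeros(512), torch.zeros(512)]
stacked = torch.stack(batch_ok, 0)
print(stacked.shape)  # torch.Size([2, 512])

# One sample with 513 depth values instead of 512, as in the traceback:
batch_bad = [torch.zeros(512), torch.zeros(513)]
try:
    torch.stack(batch_bad, 0)
except RuntimeError as err:
    print(err)  # "stack expects each tensor to be equal size ..."
```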

@QT-Zhu (Owner) commented Mar 30, 2023

I think the resolution of your images is not consistent; please double-check. Note that you should check the undistorted images, not the raw inputs from before you ran COLMAP.

@lemonsstyle (Author)

Do you mean my dataset's image size? I have checked, and their shapes are indeed consistent. Do I need to make some changes in the code? Because the scripts are all written to process the DTU and Tanks and Temples datasets.

@QT-Zhu (Owner) commented Apr 2, 2023

That doesn't make sense to me. If your images are all of the same resolution, why would the shapes of the tensors differ?
Did you run COLMAP to calibrate the images?

@lemonsstyle (Author)

Yes, I used COLMAP. With the batch size set to 1 it runs normally, which suggests there is no problem with my image sizes.

@QT-Zhu (Owner) commented Apr 4, 2023

Please check the resolution of the undistorted images (after running COLMAP). If COLMAP calibrates the cameras with multiple camera models, the resulting distortion coefficients can differ, leading to undistorted images with different resolutions.
That is all I can imagine given the limited information you have provided.
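One quick way to perform that check, sketched here with Pillow (the `dense/images` path in the usage comment is a placeholder for wherever COLMAP wrote the undistorted images):

```python
from pathlib import Path

from PIL import Image


def image_sizes(folder, pattern="*.jpg"):
    """Return the set of (width, height) sizes of images under `folder`."""
    return {Image.open(p).size for p in Path(folder).glob(pattern)}


# Usage (placeholder path for the COLMAP dense workspace):
#   sizes = image_sizes("dense/images")
#   print(sizes)  # more than one entry means mixed resolutions
```

If the returned set contains more than one size, the undistorted images have inconsistent resolutions, which would explain the mismatched tensor shapes.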

@lemonsstyle (Author)

Okay, I'll go check them again. Thank you for your patient reply!

@FFFOCUS commented Apr 12, 2023

I hit this problem too. The size 512 is the numdepth parameter. In datasets/data_eval_transform.py:128, change

```python
depth_values = np.arange(depth_min, depth_interval * self.ndepths + depth_min, depth_interval, dtype=np.float32)
```

to

```python
depth_values = np.linspace(depth_min, depth_interval * self.ndepths + depth_min, self.ndepths, endpoint=False, dtype=np.float32)
```
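For context on why this change can help: with a floating-point step, the number of elements np.arange produces depends on rounding of the stop value, so for some (depth_min, depth_interval) pairs it can return one element too many (513 instead of 512), whereas np.linspace with endpoint=False always returns exactly the requested number of samples. A small sketch, with placeholder depth values:

```python
import numpy as np

ndepths = 512
depth_min, depth_interval = 425.0, 2.5  # placeholder values

# np.arange: the element count follows from floating-point rounding of
# (stop - start) / step, so it is not guaranteed to equal ndepths.
arange_vals = np.arange(depth_min, depth_interval * ndepths + depth_min,
                        depth_interval, dtype=np.float32)

# np.linspace with endpoint=False returns exactly ndepths samples,
# regardless of rounding.
linspace_vals = np.linspace(depth_min, depth_interval * ndepths + depth_min,
                            ndepths, endpoint=False, dtype=np.float32)

print(len(arange_vals), len(linspace_vals))
```

With a fixed-length depth array per view, every sample's tensor has the same shape and the default collate can batch them.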

@lemonsstyle (Author)

Is your dataset your own or a public one? I set batchsize=1 to work around the problem.
