Hi @Lydorn !
I've been finding this repo really useful in my research, but I've run into an issue with one of the experiments I'm running with your method. I am trying to use your frame field learning approach on a custom version of the INRIA dataset where each image tile has been split into a hundred 500x500 patches. When I try to run the polygonize_mask.py script on this dataset using one of your pretrained runs, I get the following error:
RuntimeError: The size of tensor a (1024) must match the size of tensor b (500) at non-singleton dimension 3
I am using the 'inria_dataset_osm_mask_only.unet16' run for this. Do you have any suggestions on how I could resolve this? Or would I need to train a different model from scratch to handle my modified patch size? Looking forward to your reply. Thank you.
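For reference, one workaround I was considering is padding each 500x500 patch up to the spatial size the pretrained run seems to expect (1024, going by the error) and cropping the prediction back afterwards. This is only a rough sketch under that assumption; `model` and `patch` are placeholders, not names from the repo:

```python
import torch
import torch.nn.functional as F

TARGET = 1024  # spatial size the pretrained run appears to expect (assumption)

def infer_padded(model, patch):
    """patch: float tensor of shape (N, C, 500, 500)."""
    h, w = patch.shape[-2:]
    pad_h, pad_w = TARGET - h, TARGET - w
    # Zero-pad on the right/bottom so the spatial dims match the expected size.
    padded = F.pad(patch, (0, pad_w, 0, pad_h))
    with torch.no_grad():
        pred = model(padded)
    # Crop the output back to the original 500x500 extent.
    return pred[..., :h, :w]
```

I'm not sure whether this interacts badly with the polygonization step, which is partly why I'm asking.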