Invalid shape for image data #8
-
Hello!
-
I'll let @JJGO or @VictorButoi comment, but I think you could try any image size that's a multiple of 32 (or 16?). Training was done at 128x128, but since your domain is probably new, it's not obvious to me whether something bigger would be substantially affected. Give it a shot!
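If your images aren't already a multiple of that size, a common workaround is to zero-pad up to the nearest multiple before inference and crop the prediction back afterwards. A minimal sketch with numpy (the helper name and the multiple of 32 are assumptions, not part of the repo):

```python
import numpy as np

def pad_to_multiple(img, multiple=32):
    """Zero-pad a 2D (or channel-last) image so both spatial dims
    are divisible by `multiple`.

    Hypothetical helper: UNet-style models typically need input sizes
    divisible by 2**num_downsampling_steps, so we round each dim up.
    """
    h, w = img.shape[:2]
    new_h = -(-h // multiple) * multiple  # ceil division
    new_w = -(-w // multiple) * multiple
    pad = [(0, new_h - h), (0, new_w - w)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad)

img = np.ones((250, 220), dtype=np.float32)
padded = pad_to_multiple(img)
print(padded.shape)  # (256, 224)
```

After prediction, slice the output back to the original height and width so the padded border doesn't leak into your mask.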
-
While the model can handle inputs of any size due to its fully convolutional architecture, I don't anticipate the performance to extrapolate well. Specifically, since the model only goes through three downsampling steps (8x reduction), the bottleneck resolution for your images would be 62x55, which might not capture enough visual context. Although there's no harm in attempting it, I expect the model to perform better with inputs sized at 128x128.

Another option to address the downsampling issue is to employ patch-based segmentation: divide the original images into 128x128 patches and then combine the predictions. Additionally, this approach would allow you to increase the overall support size even more.
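The patch-based approach above can be sketched roughly as follows. This is a minimal non-overlapping tiling with numpy; `predict` stands in for the actual model call, and the image is assumed to be pre-padded so its dimensions divide evenly by 128:

```python
import numpy as np

PATCH = 128  # assumed patch size, matching the model's training resolution

def segment_in_patches(img, predict, patch=PATCH):
    """Split `img` into non-overlapping patch x patch tiles, run `predict`
    on each tile, and stitch the per-tile masks back into one mask.

    `predict` is a placeholder for the real segmentation model; `img` is
    assumed to have both dims divisible by `patch` (pad beforehand if not).
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = img[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = predict(tile)
    return out

# Toy predictor (simple threshold) standing in for the model:
img = np.random.rand(256, 384)
mask = segment_in_patches(img, lambda t: (t > 0.5).astype(img.dtype))
print(mask.shape)  # (256, 384)
```

In practice, overlapping tiles with blended predictions at the seams tend to give smoother results than this naive non-overlapping version, but the stitching logic is the same idea.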
-
Thanks a lot for the detailed explanation! I'll try this.