For segmentation, if you zoom in on the model output layer closely enough, you can see a one-pixel gap between the segmentation masks of different classes; see the screenshot:
It happens because, after the model outputs are stitched together, the contours are found with OpenCV (findContours), which returns the top-left corner of each contour pixel. The result is then converted to a polygon layer that passes through all the contour points, so on the opposite (bottom/right) edges we lose one pixel.
It should be fixed, but I do not have a good idea yet for how to do it.
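A minimal snippet (not plugin code, just an illustration) that reproduces the off-by-one behaviour: the contour vertices returned by findContours are the indices of the boundary pixels themselves, so a polygon built directly from them is one pixel short on the bottom/right edges.

```python
import cv2
import numpy as np

# 10x10 mask with a 4x4 block of "class" pixels at rows/cols 2..5 (inclusive)
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:6, 2:6] = 1

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(contours[0].reshape(-1, 2))
# prints something like:
# [[2 2]
#  [2 5]
#  [5 5]
#  [5 2]]
# The vertices are the pixel indices of the boundary pixels, so a polygon
# through them spans only 3 units per side even though the mask block is
# 4 pixels wide -- one pixel is lost on the bottom/right edges when
# vectorizing.
```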
I'm implementing a raw-result raster output for MapProcessorSegmentation so I can use thresholding, argmax, and gdal_polygonize on the raw result instead of the default argmax + cv2.findContours. This is something I would have done anyway, as some models use probability masks as input.
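Roughly what I have in mind (a sketch, not final code; the `probs` array shape, the GPKG output, and treating class 0 as background are my assumptions): threshold + argmax the raw probabilities, then polygonize with GDAL, whose polygon boundaries follow pixel edges, so adjacent classes share borders with no gap.

```python
import numpy as np
from osgeo import gdal, ogr

def polygonize_argmax(probs: np.ndarray, threshold: float, out_path: str):
    """probs: (n_classes, H, W) raw model probabilities."""
    class_map = np.argmax(probs, axis=0).astype(np.int32)
    class_map[probs.max(axis=0) < threshold] = 0  # low confidence -> background

    h, w = class_map.shape
    mem = gdal.GetDriverByName("MEM").Create("", w, h, 1, gdal.GDT_Int32)
    band = mem.GetRasterBand(1)
    band.WriteArray(class_map)

    ds = ogr.GetDriverByName("GPKG").CreateDataSource(out_path)
    layer = ds.CreateLayer("segmentation", geom_type=ogr.wkbPolygon)
    layer.CreateField(ogr.FieldDefn("class", ogr.OFTInteger))

    # gdal.Polygonize traces polygon boundaries along pixel edges, so
    # neighbouring classes share borders and there is no one-pixel gap.
    gdal.Polygonize(band, None, layer, 0)
    ds = None  # close/flush the datasource
```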
EDIT: It seems MapProcessorRegression does pretty much what I want (i.e. it outputs the raw probability masks, which I could convert to segmentation masks by thresholding), except that there are two layers instead of one. However, my post-processing routine could easily be adapted to that. Still, I'm implementing this, as outputting the raw segmentation mask is valuable.
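The adaptation would be something like this (a sketch; the two file paths and the 0.5 default threshold are assumptions): stack the two per-class probability rasters, argmax, and threshold into a single segmentation mask.

```python
import numpy as np
from osgeo import gdal

def regression_layers_to_mask(path_class0: str, path_class1: str,
                              threshold: float = 0.5) -> np.ndarray:
    # Read the two single-band probability rasters (one per class)
    p0 = gdal.Open(path_class0).ReadAsArray()
    p1 = gdal.Open(path_class1).ReadAsArray()
    probs = np.stack([p0, p1])               # (2, H, W)
    mask = np.argmax(probs, axis=0)          # winning class per pixel
    mask[probs.max(axis=0) < threshold] = 0  # low confidence -> background
    return mask.astype(np.uint8)
```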
But the edge effects are present in the argmaxed output (just the full result image as a raster in QGIS, which I have already implemented in a quick-and-dirty way). Now I'm moving a couple of raster functions from MapProcessorSuperresolution to processing_utils and using those to save the rasters in MapProcessorSegmentation.
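The raster-saving part is essentially this (sketched with assumed names; in practice the geotransform and CRS come from the processed extent, not hard-coded arguments):

```python
import numpy as np
from osgeo import gdal, osr

def save_mask_as_geotiff(mask: np.ndarray, out_path: str,
                         geotransform: tuple, epsg: int):
    h, w = mask.shape
    ds = gdal.GetDriverByName("GTiff").Create(out_path, w, h, 1, gdal.GDT_Byte)
    ds.SetGeoTransform(geotransform)  # (origin_x, px_w, 0, origin_y, 0, -px_h)
    srs = osr.SpatialReference()
    srs.ImportFromEPSG(epsg)
    ds.SetProjection(srs.ExportToWkt())
    ds.GetRasterBand(1).WriteArray(mask)
    ds.FlushCache()
```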