Hello @gabrieltseng and CropHarvest team,

I'm having difficulty visualizing the eo_data in QGIS. Despite setting the RGB channels to (B4, B3, B2), I'm seeing patches that don't align with the Google satellite imagery. In addition, some images show padding around their borders, and I'm not sure why this happens. The contrast enhancement setting in QGIS is Stretch to MinMax. Examples are shown in the pictures below.
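In case it helps, this is roughly how I am inspecting the bands outside of QGIS. The band order (B4 → red, B3 → green, B2 → blue) and the file path are assumptions on my part, so they may not match the export exactly:

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Hypothetical path to one of the exported eo_data patches
with rasterio.open("data/eo_data/example_patch.tif") as src:
    red = src.read(4).astype(float)    # B4 (assumed band index)
    green = src.read(3).astype(float)  # B3
    blue = src.read(2).astype(float)   # B2

def stretch(band):
    # Linear min-max stretch to [0, 1], mirroring QGIS's "Stretch to MinMax"
    lo, hi = np.nanmin(band), np.nanmax(band)
    return (band - lo) / (hi - lo + 1e-9)

rgb = np.dstack([stretch(red), stretch(green), stretch(blue)])
plt.imshow(rgb)
plt.title("RGB (B4, B3, B2), min-max stretched")
plt.show()
```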
I'm also curious about why the images are cropped to 17 x 17 (18 x 17 in some cases, and 25 x 17 for the France data). This makes them relatively small, and it is more challenging to recognize what they contain. What is the strategy or benefit of cropping the train/val/test data in this way?
The other minor issue is with the example code in this issue. While running

>>> my_dataset = CropHarvest("data", Task(bounding_box=get_country_bbox("France")[1], normalize=True))

I got an error:

ValueError: Assigning CRS to a GeoDataFrame without a geometry column is not supported. Supply geometry using the 'geometry=' keyword argument, or by providing a DataFrame with column name 'geometry'.

This seems to be an issue in geopandas.read_file().
If anyone has suggestions or insights on addressing these issues, I would greatly appreciate it! Thank you in advance for your help.
> Despite setting the RGB (B4, B3, B2) channels, I'm encountering patches that don't align with the Google satellite image
Could you send the labels for which this is an issue? Are they the ones in the screenshots?
> some images show padding around the borders of the images. I wonder why this happens in the images.
I suspect this happens because of our cloud masking algorithm. However, we only take one of the central pixels when constructing the pixel time-series (this is one of the reasons we export a patch even though we ultimately only use a pixel time-series), so the padded borders should not affect the resulting samples.
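To illustrate what "taking one of the central pixels" means, here is a rough sketch; the array layout (timesteps, height, width, bands) and the sizes are illustrative, not the exporter's actual code:

```python
import numpy as np

# Hypothetical exported patch: (timesteps, height, width, bands),
# e.g. 12 timesteps of a 17 x 17 patch with 18 bands.
patch = np.random.rand(12, 17, 17, 18)

# Keep the full time-series of a central pixel; the neighbouring
# pixels in the patch are discarded when building training samples.
h_mid, w_mid = patch.shape[1] // 2, patch.shape[2] // 2
pixel_timeseries = patch[:, h_mid, w_mid, :]  # shape: (12, 18)
print(pixel_timeseries.shape)
```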
> I'm also curious about why the images are cropped to 17 x 17 (some cases are 18 x 17, and France data are 25 x 17).
The purpose of the patches is to extract pixel time-series that can be used to train models (all the neighbouring pixels are discarded). We outline the general flow here:
> The other minor issue is the example code
I have just reconfirmed that this runs for me. Which version of geopandas are you running? (And could you share more information about your environment in general?)
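Something like the following would capture the relevant version information (fiona is assumed here to be the read_file backend; skip that line if it isn't installed):

```python
import sys
import geopandas
import pandas
import fiona  # assumed geopandas I/O backend; omit if not installed

print("python    :", sys.version.split()[0])
print("geopandas :", geopandas.__version__)
print("pandas    :", pandas.__version__)
print("fiona     :", fiona.__version__)
```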