@burui11087 I am also interested in this question. I am training on sparse LiDAR point cloud data and am curious about the impact of "block_size" and "max_point_num" in the H5 files. What is the effect of tuning these H5 parameters?
Well, I guess in the case of airborne LiDAR you need a large block size. Otherwise misclassification (building roofs labeled as ground) will occur on large buildings, as you can see in my result. I changed the block size to 10 meters, and those errors are 10x10 meters. In that case I would increase the block size.
max_point_num is, I guess, the sample size in the H5 file. A point cloud is irregular data, in contrast to an image, so we need a fixed sample size per block to train the network.
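To make that interaction concrete, here is a minimal sketch of how a block size and a fixed max_point_num are typically combined when building training samples. This is not the actual `prepare_semantic3d_data.py` logic; the function name `make_blocks`, the default values, and the pad-by-repetition strategy are just assumptions for illustration.

```python
# Sketch (assumptions, not the PointCNN preparation code): points are first
# grouped into block_size x block_size tiles in XY, then each tile is
# down-sampled (or padded by repetition) to a fixed max_point_num so the
# network always receives arrays of the same shape.
import numpy as np

def make_blocks(points, labels, block_size=10.0, max_point_num=8192):
    """points: (N, 3) xyz array, labels: (N,) int array."""
    xy_min = points[:, :2].min(axis=0)
    # Assign every point to a 2D tile index based on block_size.
    tile_idx = np.floor((points[:, :2] - xy_min) / block_size).astype(int)
    tile_keys = tile_idx[:, 0] * (tile_idx[:, 1].max() + 1) + tile_idx[:, 1]

    blocks = []
    for key in np.unique(tile_keys):
        idx = np.flatnonzero(tile_keys == key)
        # Sample without replacement if the tile has enough points,
        # otherwise pad by sampling with replacement.
        replace = idx.size < max_point_num
        choice = np.random.choice(idx, max_point_num, replace=replace)
        blocks.append((points[choice], labels[choice]))
    return blocks
```

With a smaller block size you get more tiles, each covering less context; with a larger one, each sample sees more of the scene but the fixed max_point_num spreads over a larger area, so the effective point density per sample drops.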
@aldinorizaldy thank you! I am testing the classification of curb ramps, which cover a smaller area. Would it make sense to reduce my block size to 5 meters?
Yes, that makes sense. Actually, the default block size in the semantic3d code is 5 m. I think you could start from the semantic3d code but use your own data, since Semantic3D is an outdoor LiDAR scene dataset.
Can anyone explain the definition of block size and grid size in prepare_semantic3d_data.py, lines 24-25?
What is the effect of a bigger or smaller size on accuracy?
I'd really appreciate any help. Thank you.