
When will data capture by iPad/iPhone be released? #6

Open
YiChenCityU opened this issue May 26, 2023 · 3 comments

Comments

@YiChenCityU

Thanks very much.

@HengyiWang
Owner

Thank you for your interest. I will try to release the tutorial and the related code for capturing data using an iPad/iPhone by the end of this week.

@wing-kit

wing-kit commented Jun 9, 2023

Thank you for sharing the way to prepare a dataset using the iPhone/iPad LiDAR.
I was able to run the steps and obtain a reconstructed mesh using an iPad Pro.

[Image: reconstructed mesh from the iPad Pro capture]

By the way, do you have any ideas on how to get a more photorealistic result?

I also tried the apartment dataset from NICE-SLAM (captured with an Azure Kinect DK). The appearance of that reconstructed mesh is not promising either, even though the Azure Kinect has a much higher depth-map resolution.

@HengyiWang
Owner

Hi @wing-kit, it is great to see that you managed to capture data using an iPhone/iPad and obtained reconstruction results with Co-SLAM.

Regarding your question about obtaining more photorealistic results, I have some suggestions for you to try:

  1. In the latest commit of Co-SLAM, I added an option for rendering surface color. You can give it a try and see if it improves the photorealism of the reconstructed mesh. However, please keep in mind that the viewing direction is currently set to the surface normal, so if the actual viewing direction deviates significantly from the surface normal, the color might appear worse.

  2. Co-SLAM performs joint optimization of both geometry and color. To enhance the color quality, you can experiment with increasing the weight of the RGB loss or increasing the number of mapping iterations (iters: 10). Note that the current hyper-parameter setting prioritizes computational efficiency and uses only 5% of the pixels (n_pixels: 0.05) from each keyframe; you can consider increasing this percentage to obtain more pixel observations (see the config sketch after this list).

  3. By default, the voxel size for mesh extraction in Co-SLAM is around 3 cm, which might be slightly too large for small objects. Reducing the voxel size may help recover small objects in the reconstructed mesh (see the extraction sketch below).
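
For suggestion 2, these knobs live in the run's YAML config. Here is a minimal sketch reusing the two key names quoted above (`iters`, `n_pixels`); the section names and `rgb_weight` are hypothetical placeholders, so check your own config file for the actual keys:

```yaml
# Illustrative config fragment; only `iters` and `n_pixels` are key names
# quoted in the discussion above, the rest is assumed.
mapping:
  iters: 20        # more mapping iterations per keyframe (default 10)
  n_pixels: 0.10   # sample 10% of keyframe pixels (default 0.05)
training:
  rgb_weight: 10.0 # hypothetical key: up-weight the RGB loss term
```

For suggestion 3, here is a minimal, self-contained sketch (not Co-SLAM's actual extraction code) of how the voxel size enters mesh extraction: the SDF is sampled on a regular grid whose spacing is the voxel size, so halving it roughly doubles the resolution along each axis. `sdf_fn` stands in for whatever function queries your trained scene representation:

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def extract_mesh(sdf_fn, bounds, voxel_size=0.01):
    """Sample an SDF on a regular grid and run marching cubes.

    sdf_fn:     callable mapping an (N, 3) array of points to (N,) SDF values.
    bounds:     ((xmin, ymin, zmin), (xmax, ymax, zmax)) scene bounding box.
    voxel_size: grid spacing in metres; 0.01 here vs. the ~0.03 default.
    """
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    # Grid resolution needed to achieve the requested voxel size.
    res = np.ceil((hi - lo) / voxel_size).astype(int) + 1
    axes = [np.linspace(lo[i], hi[i], res[i]) for i in range(3)]
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    sdf = sdf_fn(pts).reshape(tuple(res))
    # Extract the zero level set; `spacing` converts indices back to metres.
    verts, faces, normals, _ = measure.marching_cubes(
        sdf, level=0.0, spacing=(voxel_size,) * 3)
    return verts + lo, faces, normals
```

A finer grid grows cubically in memory and SDF queries, so for a room-scale scene you may want to evaluate `sdf_fn` in chunks rather than in one call as above.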

I hope these suggestions help you in achieving more photorealistic results with your reconstructed meshes. Let me know if you have any further questions or need assistance with anything else:)
