I hope you don't mind me asking, but I'm curious about a few aspects of the code framework. Is the LOD framework designed to train on data from multi-sensor LIVO outputs? I'm also considering adapting it to R3LIV and would appreciate any insights on that approach, along with any specifics you can share about the process. Lastly, how are the depth maps obtained in this context? Thank you in advance for your time and expertise.
I don't know much about SLAM; are LIVO and R3LIV different LiDAR sensors? In general, any point cloud in .ply format can be adapted to this work. However, floating LiDAR points may cause artifacts in the depth maps and in the final results.
For depth map generation, please refer to #9
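On the floating-point issue mentioned above: a common preprocessing step before feeding a .ply into the pipeline is statistical outlier removal. The sketch below is not part of this repo; it is a minimal NumPy illustration that drops points whose mean distance to their k nearest neighbors is unusually large. For real LiDAR scans you would use a KD-tree (e.g. `scipy.spatial.cKDTree`) or Open3D's `remove_statistical_outlier` instead of this brute-force version.

```python
import numpy as np

def remove_floating_points(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-NN distance exceeds mean + std_ratio * std.

    points: (N, 3) array. Brute-force O(N^2) distances; only suitable
    for small clouds, shown here to illustrate the idea.
    """
    # Pairwise Euclidean distances, sorted so column 0 is the self-distance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)  # skip the zero self-distance
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

# Dense cluster plus one isolated "floating" point far from the surface.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, (200, 3)), [[10.0, 10.0, 10.0]]])
filtered = remove_floating_points(cloud)
```

Here the isolated point at (10, 10, 10) has a far larger mean neighbor distance than the clustered points, so it falls above the threshold and is removed while the cluster is kept.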