This repository has been archived by the owner on Apr 19, 2021. It is now read-only.
Mesh rendering using depth estimation #13
Labels: enhancement (New feature or request), Machine Learning 🦾 (Involves the architecture of computer vision models)
With depth estimation applied to a panoramic image, we can render 3D geometry that the user can walk around. With panoramic video, we can texture that geometry as well. This would make the experience feel more like an interactive video game than a series of static panoramas. Here is Ventura's code to do this for cylindrical panoramas, and a SLAM algorithm for tracking camera position.
Look into the techniques and build a toy example on a new branch.
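For the toy example, the first step would be back-projecting the panorama's per-pixel depth into 3D points that a mesh can be built over. Below is a minimal sketch of that step, assuming an equirectangular panorama with a radial (distance-along-ray) depth map; the function name `panorama_depth_to_points` and the exact longitude/latitude conventions are assumptions, not anything specified in this issue:

```python
import numpy as np

def panorama_depth_to_points(depth):
    """Back-project an equirectangular depth map to 3D points.

    depth: (H, W) array of radial distances. Each pixel is mapped to a
    direction on the unit sphere via its longitude/latitude, then scaled
    by its depth. Returns an (H, W, 3) array of XYZ positions.
    """
    h, w = depth.shape
    # Pixel centers: longitude spans [-pi, pi), latitude spans
    # [pi/2, -pi/2] from the top row to the bottom row.
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit direction vectors on the sphere (y is up here; axis choice
    # is an assumption).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return depth[..., None] * np.stack([x, y, z], axis=-1)
```

Connecting the resulting grid of points into triangles (each 2×2 pixel block giving two faces) would then yield the walkable mesh; a constant depth map should reproduce a sphere, which makes a quick sanity check.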