
MANUS: Markerless Grasp Capture using Articulated 3D Gaussians (CVPR 2024)

Chandradeep Pokhariya1, Ishaan N Shah1*, Angela Xing2*, Zekun Li2, Kefan Chen2, Avinash Sharma1, Srinath Sridhar2

1CVIT, IIIT Hyderabad    2Brown University    *Equal Contribution

Teaser Animation


Abstract

Understanding how we grasp objects with our hands has important applications in areas like robotics and mixed reality. However, this challenging problem requires accurate modeling of the contact between hands and objects. To capture grasps, existing methods use skeletons, meshes, or parametric models that do not represent hand shape accurately, resulting in inaccurate contacts. We present MANUS, a method for Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians. We build a novel articulated 3D Gaussian representation that extends 3D Gaussian splatting for high-fidelity representation of articulating hands. Since our representation uses Gaussian primitives optimized from multi-view pixel-aligned losses, it enables us to efficiently and accurately estimate contacts between the hand and the object. For the most accurate results, our method requires tens of camera views that current datasets do not provide. We therefore build MANUS-Grasps, a new dataset that contains hand-object grasps viewed from 50+ cameras across 30+ scenes and 3 subjects, comprising over 7M frames. In addition to extensive qualitative results, we also show that our method outperforms others on a quantitative contact evaluation method that uses paint transfer from the object to the hand.


How to start

MANUS is built on top of the original Gaussian Splatting codebase and heavily reuses its functions. Clone the repository with git clone --recursive https://github.com/brown-ivl/manus.git.

Set up the Python environment with conda by running:

bash setup_env.sh

In addition to the conda environment, we use Blender to render novel views at test time. Download Blender (3.3) and provide its path in the bash script.
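
A minimal sketch of pointing the scripts at your Blender install (the variable name below is an assumption for illustration; check the bash scripts for the one they actually read):

# Hypothetical variable name and install path; the scripts may expect a different one.
export BLENDER_PATH=/opt/blender-3.3/blender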

Codebase Info

  • The config folder contains the config files for the trainer, dataset, and other modules. These config parameters can be overridden in the bash scripts (see the sketch after this list). We use hydra-core to manage the configs.
  • src contains the main code, and data contains the essential data required.
  • main.py file contains driver code from which everything kickstarts.
  • The submodules folder contains the differentiable rasterizer and KNN modules provided by the original Gaussian Splatting repo.
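
Hydra accepts key=value overrides on the command line, which is what the bash scripts rely on. A minimal sketch (the keys below are hypothetical; the real ones live in the config folder):

# Hypothetical config keys for illustration; substitute keys from the config folder.
python main.py trainer.max_steps=30000 dataset.subject=subject1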

MANUS-Grasps Dataset

Please check Dataset.md for dataset information.

Download a small batch of the dataset from here:

import gdown

# Replace with the Google Drive folder URL linked above.
url = "https://drive.google.com/drive/folders/<folder-id>"
gdown.download_folder(url=url)

Optimization

To optimize the object module on our dataset:

bash scripts/train/train_object.sh {SUBJECT}

To optimize the hand module on our dataset:

bash scripts/train/train_hands.sh {SUBJECT} {EXP_NAME}

To composite the scene, either for grasp results or for evaluation (note that you must define the object sequence in composite.sh):

bash scripts/train/composite.sh {SUBJECT} {HAND_EXP_NAME} {results/eval}
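
Putting it together, a hypothetical end-to-end run might look like this (the subject and experiment names are placeholders for illustration, not values shipped with the repo):

# Hypothetical placeholder names for illustration only.
bash scripts/train/train_object.sh subject1
bash scripts/train/train_hands.sh subject1 exp01
bash scripts/train/composite.sh subject1 exp01 results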

License

This dataset is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.

CC BY-NC 4.0

To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.

Citation

@misc{pokhariya2023manus,
      title={MANUS: Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians}, 
      author={Chandradeep Pokhariya and Ishaan N Shah and Angela Xing and Zekun Li and Kefan Chen and Avinash Sharma and Srinath Sridhar},
      year={2023},
      eprint={2312.02137},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

This work was supported by NSF CAREER grant #2143576, ONR DURIP grant N00014-23-1-2804, ONR grant N00014-22-1-259, a gift from Meta Reality Labs, and an AWS Cloud Credits award. We would like to thank George Konidaris, Stefanie Tellex and Dingxi Zhang. Additionally, we thank Bank of Baroda for partially funding Chandradeep’s travel expenses.
