
Personal code changes: the OpenLRM training script was adapted to train InstantMesh.


News

Setup

Installation

git clone https://github.com/Mrguanglei/Instantmesh_scriptData.git
cd OpenLRM

Environment

  • Install requirements for OpenLRM first.
    conda create --name openlrm python=3.9 -y
    conda activate openlrm
    pip install -r requirements.txt
    
  • Please then follow the xFormers installation guide to enable memory efficient attention inside DINOv2 encoder.
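After installation, a short sanity check can confirm the environment is ready. The snippet below is a hypothetical helper, not part of the OpenLRM repo: it reports the installed PyTorch version and whether xFormers is importable, so memory-efficient attention in the DINOv2 encoder can actually be used.

```python
# Hypothetical environment check (not part of OpenLRM).
import importlib.util

def check_env():
    """Report the PyTorch version (or None) and xFormers availability."""
    report = {}
    if importlib.util.find_spec("torch") is not None:
        import torch
        report["torch"] = torch.__version__
    else:
        report["torch"] = None
    report["xformers"] = importlib.util.find_spec("xformers") is not None
    return report

if __name__ == "__main__":
    print(check_env())
```

If `xformers` reports `False`, the DINOv2 encoder will still run, just without memory-efficient attention.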

Quick Start

Dataset format

├── rendering_random_32views/
│   ├── object1/
│   │   ├── 000.png
│   │   ├── 000_normal.png
│   │   ├── 000_depth.png
│   │   ├── 001.png
│   │   ├── 001_normal.png
│   │   ├── 001_depth.png
│   │   ├── ......
│   │   └── camera.npz
│   ├── object2/
│   └── ......
└── valid_paths.json

valid_paths.json format:

{
  "good_objs": [
    { "pose_path": "object1" },
    { "pose_path": "object2" },
    { "pose_path": "object3" },
    ......
  ]
}
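Given a rendered dataset laid out as above, `valid_paths.json` can be generated automatically. The helper below is a sketch, not part of the repo; it assumes every subdirectory of the rendering root is a valid object.

```python
# Hypothetical helper (not part of OpenLRM): builds valid_paths.json by
# listing the object directories under the rendering root.
import json
from pathlib import Path

def build_valid_paths(render_root, out_path):
    """Write valid_paths.json listing each object directory as a pose_path."""
    objs = sorted(p.name for p in Path(render_root).iterdir() if p.is_dir())
    data = {"good_objs": [{"pose_path": name} for name in objs]}
    Path(out_path).write_text(json.dumps(data, indent=2))
    return data

# Example:
# build_valid_paths("rendering_random_32views", "valid_paths.json")
```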

Downloading the dataset

Over 200 .glb files were taken from the Objaverse dataset and rendered into the dataset format described above.

1. Download the dataset: Dataset address.

2. Place the .glb files in the data folder.

Modifying the script

  • Find the script that needs modifying, scripts/data/objaverse/blender.sh:

    DIRECTORY="/your/path/OpenLRM/data"    # Set this to the path of the dataset you downloaded

Blender:

Run blender.sh.

This will automatically render the dataset we need above.
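After rendering, it is worth verifying that each object directory is complete before training. The check below is a hypothetical sketch, not part of the repo: it assumes the per-view naming pattern shown in the dataset format above (`000.png`, `000_normal.png`, `000_depth.png`, ...) and defaults to 32 views per object, matching the `rendering_random_32views` directory name.

```python
# Hypothetical completeness check (not part of OpenLRM): verifies that a
# rendered object directory contains camera.npz plus an image / normal /
# depth triple for every view index.
from pathlib import Path

def missing_files(obj_dir, num_views=32):
    """Return the list of expected files absent from obj_dir."""
    obj_dir = Path(obj_dir)
    missing = []
    if not (obj_dir / "camera.npz").exists():
        missing.append("camera.npz")
    for i in range(num_views):
        for suffix in (".png", "_normal.png", "_depth.png"):
            name = f"{i:03d}{suffix}"
            if not (obj_dir / name).exists():
                missing.append(name)
    return missing
```

An empty return value means the object directory matches the expected layout; otherwise the list names exactly what the render run failed to produce.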

Tips

  • The recommended PyTorch version is >=2.1. Code is developed and tested under PyTorch 2.1.2.
  • If you encounter CUDA OOM issues, please try to reduce the frame_size in the inference configs.
  • You should be able to see UserWarning: xFormers is available if xFormers is actually working.
  • If the bpy or mathutils modules cannot be imported, note that they are part of Blender's Python API; consult the Blender documentation for making them available in your environment.

Citation

If you find this work useful for your research, please consider citing:

@article{hong2023lrm,
  title={Lrm: Large reconstruction model for single image to 3d},
  author={Hong, Yicong and Zhang, Kai and Gu, Jiuxiang and Bi, Sai and Zhou, Yang and Liu, Difan and Liu, Feng and Sunkavalli, Kalyan and Bui, Trung and Tan, Hao},
  journal={arXiv preprint arXiv:2311.04400},
  year={2023}
}
@misc{openlrm,
  title = {OpenLRM: Open-Source Large Reconstruction Models},
  author = {Zexin He and Tengfei Wang},
  year = {2023},
  howpublished = {\url{https://github.com/3DTopia/OpenLRM}},
}

License