chore(BOP): Delete deprecated parts of BOP challenge README
MartinSmeyer authored Mar 22, 2024
1 parent 6337ed6 commit eaf6d86
Showing 1 changed file with 0 additions and 68 deletions: `examples/datasets/bop_challenge/README.md`

@@ -40,71 +40,3 @@ blenderproc run examples/datasets/bop_challenge/main_<bop_dataset_name>_<random/u
* `--num_scenes`: How many scenes with 25 images each to generate

Tip: If you have access to multiple GPUs, you can speed up the process by dividing the 2000 scenes into multiples of 40 scenes (40 scenes * 25 images make up one chunk of 1000 images). To do so, run the script in parallel with different output folders, then rename and merge the scenes into a joint folder at the end. For example, if you have 10 GPUs, set `--num_scenes=200` and run the script 10 times with different output folders.
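
One way to do this is sketched below. It is not taken from this README: `<main_script.py> <args...>` stands for the command shown at the top of this example, and selecting a GPU per job via `CUDA_VISIBLE_DEVICES` is an assumption about your setup.

```bash
# Hypothetical sketch: spread the 2000 scenes over 10 GPUs, 200 scenes per job.
# <main_script.py> <args...> stand for the command shown at the top of this example;
# inside <args...>, point each job to its own output folder (e.g. output_gpu_$GPU)
# so the runs do not collide.
for GPU in $(seq 0 9); do
  CUDA_VISIBLE_DEVICES=$GPU blenderproc run \
    examples/datasets/bop_challenge/<main_script.py> <args...> --num_scenes=200 &
done
wait
# Afterwards, rename the scene folders and merge them into one joint output folder.
```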

### Complete the BlenderProc4BOP datasets

To save time and avoid duplicating functionality, we use the bop_toolkit to generate the [masks](https://github.com/thodan/bop_toolkit/blob/master/scripts/calc_gt_masks.py), [scene_gt_info](https://github.com/thodan/bop_toolkit/blob/master/scripts/calc_gt_info.py) and [scene_gt_coco](https://github.com/thodan/bop_toolkit/blob/master/scripts/calc_gt_coco.py) annotations.

To install the `bop_toolkit`, run:

```bash
git clone https://github.com/thodan/bop_toolkit
cd bop_toolkit
pip install -r requirements.txt -e .
```
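
To quickly check the installation, the following one-liner can help. It assumes the toolkit is importable as the Python package `bop_toolkit_lib`; adjust if your version differs.

```bash
# Prints the install location if the editable install succeeded.
python -c "import bop_toolkit_lib; print(bop_toolkit_lib.__file__)"
```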

Then, at the top of each of the scripts mentioned above, set the following parameters (keep the other parameters unchanged):
```python
p = {
  # See dataset_params.py for options.
  'dataset': '<bop_dataset_name>',

  # Dataset split. Options: 'train', 'val', 'test'.
  'dataset_split': 'train',

  # Dataset split type. None = default. See dataset_params.py for options.
  'dataset_split_type': 'pbr',

  # Folder containing the BOP datasets.
  'datasets_path': '<path/to/your/bop/datasets>',
}
```

Finally, to complete your BOP datasets, run:

```bash
python scripts/calc_gt_masks.py
python scripts/calc_gt_info.py
python scripts/calc_gt_coco.py
```
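
After these scripts finish, each scene chunk should contain the additional annotations. Below is a rough sketch of the expected layout, assuming the standard BOP `train_pbr` folder structure (not quoted from this README):

```bash
ls <path/to/your/bop/datasets>/<bop_dataset_name>/train_pbr/000000/
# Roughly expected: depth/  mask/  mask_visib/  rgb/  scene_camera.json
#                   scene_gt.json  scene_gt_info.json  scene_gt_coco.json
```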

## Original config file usage

Instead of running the Python script once, we ran every config file 2000 times with 25 random cameras per scene. This has the disadvantage that the objects need to be reloaded at each run.

Download the necessary [BOP datasets](https://bop.felk.cvut.cz/datasets/) and the [bop-toolkit](https://github.com/thodan/bop_toolkit).

Execute in the BlenderProc main directory:

```bash
blenderproc download cc_textures
```

```bash
blenderproc run examples/datasets/bop_challenge/<main_dataset.py>
<path_to_bop_data>
<bop_dataset_name>
<path_to_bop_toolkit>
resources/cctextures
examples/datasets/bop_challenge/output
```

* `examples/datasets/bop_challenge/<main_dataset.py>`: path to the Python script file.
* `<path_to_bop_data>`: path to a folder containing BOP datasets.
* `<bop_dataset_name>`: name of the BOP dataset.
* `<path_to_bop_toolkit>`: path to a bop_toolkit folder.
* `resources/cctextures`: path to the CCTextures folder.
* `examples/datasets/bop_challenge/output`: path to an output folder where the bop_data will be saved.

This creates 25 images of a single scene. To create a whole dataset, simply run the command multiple times.
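
For example, a plain loop can repeat the command; this is just a sketch that mirrors the invocation above and assumes that each run adds new scenes to the same output folder, as described:

```bash
# Sketch: repeat the run 2000 times (25 images each) to build up the whole dataset.
for i in $(seq 1 2000); do
  blenderproc run examples/datasets/bop_challenge/<main_dataset.py> \
    <path_to_bop_data> <bop_dataset_name> <path_to_bop_toolkit> \
    resources/cctextures examples/datasets/bop_challenge/output
done
```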
