
# GREW Tutorial

This tutorial is for the GREW-Benchmark, on which we report a result of 48% with the baseline model. To help participants take the first step, we describe how to use OpenGait for GREW.

## Preprocess the dataset

Download the raw dataset from the official link. You will get three compressed files: `train.zip`, `test.zip`, and `distractor.zip`.

Step 1: Unzip the train and test archives:

```
unzip -P <password> train.zip   # replace <password> with the password you obtained
tar -xzvf train.tgz
cd train
ls *.tgz | xargs -n1 tar xzvf
cd ..
unzip -P <password> test.zip
tar -xzvf test.tgz
cd test/gallery
ls *.tgz | xargs -n1 tar xzvf
cd ../probe
ls *.tgz | xargs -n1 tar xzvf
cd ../..
```
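Equivalently, Step 1 can be scripted in one pass. A sketch in Python, assuming the archives use standard zip encryption (if `extractall` rejects the password, fall back to `unzip` as above); `obtained_password` is a placeholder:

```python
# One-shot version of the Step 1 commands; replace the password placeholder.
import tarfile
import zipfile
from pathlib import Path

PASSWORD = b"obtained_password"  # the password you obtained from the organizers

for name in ("train", "test"):
    with zipfile.ZipFile(f"{name}.zip") as zf:
        zf.extractall(pwd=PASSWORD)      # works for standard zip encryption only
    with tarfile.open(f"{name}.tgz") as tf:
        tf.extractall()

# unpack the per-sequence .tgz archives inside each split
for folder in (Path("train"), Path("test/gallery"), Path("test/probe")):
    for tgz in folder.glob("*.tgz"):
        with tarfile.open(tgz) as tf:
            tf.extractall(path=folder)
```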

Step 2: After unpacking these archives, rearrange the GREW directory into the id-type-view structure by running:

```
python datasets/GREW/rearrange_GREW.py --input_path Path_of_GREW-raw --output_path Path_of_GREW-rearranged
```

Step 3: Transform the images into pickle files by running:

```
python datasets/pretreatment.py --input_path Path_of_GREW-rearranged --output_path Path_of_GREW-pkl --dataset GREW
```

Then you will see a structure like this:

```
GREW-pkl
├── 00001train (subject in training set)
│   └── 00
│       ├── 4XPn5Z28
│       │   └── 4XPn5Z28.pkl
│       ├── 5TXe8svE
│       │   └── 5TXe8svE.pkl
│       └── ...
├── 00001 (subject in testing set)
│   ├── 01
│   │   └── 79XJefi8
│   │       └── 79XJefi8.pkl
│   └── 02
│       └── t16VLaQf
│           └── t16VLaQf.pkl
├── probe
│   ├── etaGVnWf
│   │   └── etaGVnWf.pkl
│   ├── eT1EXpgZ
│   │   └── eT1EXpgZ.pkl
│   └── ...
└── ...
```
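To verify the output, you can open one of the generated pickle files. A minimal sketch, assuming the pretreatment script stores each sequence as a pickled array of silhouette frames (the path below is an example taken from the tree above):

```python
# Quick check of one preprocessed sequence; the path is an example.
import pickle

import numpy as np

with open("GREW-pkl/00001train/00/4XPn5Z28/4XPn5Z28.pkl", "rb") as f:
    seq = pickle.load(f)

seq = np.asarray(seq)
print(seq.shape, seq.dtype)  # expected: (num_frames, height, width)
```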

## Train the model

Modify `dataset_root` in `./config/baseline/baseline_GREW.yaml` to point at your `GREW-pkl` directory, then run the training command below.
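For reference, a sketch of the relevant part of the config; the key names follow OpenGait's usual `data_cfg` layout and the path is an example, so check them against your copy of the file:

```yaml
data_cfg:
  dataset_name: GREW
  dataset_root: /path/to/GREW-pkl   # your --output_path from Step 3
```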

```
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./config/baseline/baseline_GREW.yaml --phase train
```

## Get the submission file

```
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./config/baseline/baseline_GREW.yaml --phase test
```

The result file will be generated in your working directory; you must rename and compress it according to the competition requirements before submitting.
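As an illustration, a hypothetical packaging step in Python; `GREW_result.csv` is a placeholder for whatever file your test run actually produced, and the required names come from the competition page:

```python
# Hypothetical packaging of the submission file; names are placeholders.
import shutil
import zipfile

shutil.copy("GREW_result.csv", "submission.csv")  # rename as required
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.csv")                    # compress for upload
```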

## Evaluate locally

The original GREW protocol treats both seq_01 and seq_02 as gallery, and no ground truth is released for the probe set, so results must normally be obtained by uploading the submission file to the GREW competition site. For local evaluation, we instead split the test set so that seq_01 serves as gallery and seq_02 as probe. Set `eval_func` in `./config/baseline/baseline_GREW.yaml` to `identification_real_scene`, and you can then obtain results locally, as in the OUMVLP setting.
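For reference, a sketch of that one-key change; the `evaluator_cfg` layout assumes OpenGait's usual config structure, so verify it against your copy of the file:

```yaml
evaluator_cfg:
  eval_func: identification_real_scene   # seq_01 as gallery, seq_02 as probe
```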