Doc fix
dreamerlin committed Jul 7, 2020
1 parent 8dd15c8 commit 22d051b
Showing 5 changed files with 7 additions and 132 deletions.
2 changes: 1 addition & 1 deletion docs/data_preparation.md
@@ -21,7 +21,7 @@ To ease usage, we provide tutorials of data deployment for each dataset.
- [Something-Something V2](https://20bn.com/datasets/something-something): See [preparing_sthv2.md](/tools/data/sthv2/)
- [Moments in Time](http://moments.csail.mit.edu/): See [preparing_mit.md](/tools/data/mit/)
- [Multi-Moments in Time](http://moments.csail.mit.edu/challenge_iccv_2019.html): See [preparing_mmit.md](/tools/data/mmit/)
- [ActivityNet_feature](): See [praparing_activitynet.md](/tools/data/activitynet/)
- ActivityNet_feature: See [praparing_activitynet.md](/tools/data/activitynet/)

Now, you can switch to [getting_started.md](getting_started.md) to train and test the model.

8 changes: 4 additions & 4 deletions docs/getting_started.md
@@ -258,14 +258,14 @@ If you want to specify the working directory in the command, you can add an argument
Optional arguments are:
- `--validate` (**strongly recommended**): Perform evaluation every k epochs during training. The default k is 5 and can be modified by changing the `interval` value in the `evaluation` dict of each config file.
- `--work-dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume-from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
- `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
- `--gpus ${GPU_NUM}`: Number of gpus to use, which is only applicable to non-distributed training.
- `--gpu_ids ${GPU_IDS}`: IDs of gpus to use, which is only applicable to non-distributed training.
- `--seed ${SEED}`: Seed id for random state in python, numpy and pytorch to generate random numbers.
- `--deterministic`: If specified, it will set deterministic options for CUDNN backend.
- `JOB_LAUNCHER`: Launcher for distributed job initialization. Allowed choices are `none`, `pytorch`, `slurm`, `mpi`. In particular, if set to `none`, the job runs in non-distributed mode.
- `LOCAL_RANK`: ID for local rank. If not specified, it will be set to 0.
- `--autoscale-lr`: If specified, it will automatically scale lr with the number of gpus by [Linear Scaling Rule](https://arxiv.org/abs/1706.02677).
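For illustration, a minimal sketch of how the optional flags above combine on the command line; `${CONFIG_FILE}` is a placeholder, and the flag spellings should match whichever variants your checkout of `tools/train.py` registers:

```shell
# Illustrative only: train with periodic evaluation, a fixed seed and deterministic CUDNN,
# using 4 GPUs in non-distributed mode.
python tools/train.py ${CONFIG_FILE} \
    --validate \
    --gpus 4 \
    --seed 0 \
    --deterministic
```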
Difference between `resume-from` and `load-from`:
`resume-from` loads both the model weights and optimizer status, and the epoch is also inherited from the specified checkpoint. It is usually used for resuming the training process that is interrupted accidentally.
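A hedged sketch of the resume workflow described above; the checkpoint path is hypothetical, and the flag may be spelled `--resume-from` or `--resume_from` depending on the version:

```shell
# Illustrative only: continue an interrupted run. The checkpoint restores the model weights,
# the optimizer state, and the epoch counter, so training resumes where it stopped.
python tools/train.py ${CONFIG_FILE} --resume_from work_dirs/my_experiment/latest.pth
```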
@@ -294,7 +294,7 @@ GPUS=16 ./tools/slurm_train.sh dev tsn_r50_k400 configs/recognition/tsn/tsn_r50_
You can check [slurm_train.sh](/tools/slurm_train.sh) for full arguments and environment variables.
If you just have multiple machines connected with Ethernet, you can refer to the
pytorch [launch utility](https://pytorch.org/docs/stable/distributed_deprecated.html#launch-utility).
pytorch [launch utility](https://pytorch.org/docs/stable/distributed.html#launch-utility).
Usually it is slow if you do not have high-speed networking like InfiniBand.
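For orientation, a hypothetical two-node launch with the PyTorch launch utility; the node address, port, and the `--launcher pytorch` flag passed to `tools/train.py` are assumptions to adapt to your own setup:

```shell
# Illustrative only: run on the first of two nodes with 8 GPUs each.
# The master address/port are placeholders; repeat on the second node with --node_rank=1.
python -m torch.distributed.launch \
    --nnodes=2 --node_rank=0 --nproc_per_node=8 \
    --master_addr="10.0.0.1" --master_port=29500 \
    tools/train.py ${CONFIG_FILE} --launcher pytorch
```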
### Launch multiple jobs on a single machine
2 changes: 1 addition & 1 deletion docs/merge_docs.sh
@@ -34,7 +34,7 @@ sed -i 's=](/=](http://gitlab.sz.sensetime.com/open-mmlab/mmaction-lite/tree/mas
sed -i 's=](/=](http://gitlab.sz.sensetime.com/open-mmlab/mmaction-lite/tree/master/=g' install.md
sed -i 's=](/=](http://gitlab.sz.sensetime.com/open-mmlab/mmaction-lite/tree/master/=g' tutorials.md
sed -i 's/..\/imgs/_images/g' tutorials.md
sed -i 's/](new_dataset.md)/](#tutorial-2-adding-new-dataset)]/g' tutorials.md
sed -i 's/](new_dataset.md)/](#tutorial-2-adding-new-dataset)/g' tutorials.md

cat localization_models.md recognition_models.md > modelzoo.md
sed -i '1i\# Modelzoo' modelzoo.md
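For context, a small sketch of what the corrected `new_dataset.md` substitution does; the input line here is made up:

```shell
# Illustrative only: the fixed pattern rewrites the link target to the in-page anchor
# without appending a stray ']' after it.
echo 'See [adding a new dataset](new_dataset.md) for details.' \
    | sed 's/](new_dataset.md)/](#tutorial-2-adding-new-dataset)/g'
# -> See [adding a new dataset](#tutorial-2-adding-new-dataset) for details.
```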
125 changes: 0 additions & 125 deletions docs/parrots_start.md

This file was deleted.

2 changes: 1 addition & 1 deletion tools/train.py
@@ -33,7 +33,7 @@ def parse_args():
help='number of gpus to use '
'(only applicable to non-distributed training)')
group_gpus.add_argument(
'--gpu-ids',
'--gpu_ids',
type=int,
nargs='+',
help='ids of gpus to use '
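A usage note on the renamed option: because the argument is declared with `nargs='+'`, several ids can be passed at once. A hypothetical call, using whichever spelling of the flag your version registers:

```shell
# Illustrative only: pick specific GPUs by id for non-distributed training.
python tools/train.py ${CONFIG_FILE} --gpu_ids 0 1 2 3
```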
