See here for installation.
Collect demonstration data by teleoperation.
Generate an npy format dataset for learning from the teleoperation data:
$ python ../utils/make_dataset.py \
--in_dir ../teleop/teleop_data/<demo_name> --out_dir ./data/<demo_name> \
--train_ratio 0.8 --nproc `nproc` --skip 6 --cropped_img_size 280 --resized_img_size 64
The --cropped_img_size option should be specified appropriately for each task.
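To sanity-check the output, the generated arrays can be loaded directly with NumPy. A minimal sketch, assuming file names such as images.npy and joints.npy under the train split (the actual file names and array shapes are determined by make_dataset.py):

import numpy as np

# Hypothetical file names; list ./data/<demo_name>/train to see the real ones
images = np.load("./data/<demo_name>/train/images.npy")
joints = np.load("./data/<demo_name>/train/joints.npy")

print(images.shape, images.dtype)  # e.g. (episodes, steps, H, W, C)
print(joints.shape, joints.dtype)  # e.g. (episodes, steps, dof)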
Visualize the generated data (optional):
$ python ../utils/check_data.py --in_dir ./data/<demo_name> --idx 0
Train a model:
$ python ./bin/TrainSarnn.py \
--data_dir ./data/<demo_name> --log_dir ./log/<demo_name> \
--no_side_image --no_wrench --with_mask
The checkpoint file SARNN.pth is saved in the directory specified by the --log_dir option.
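To confirm what was saved, the checkpoint can be inspected with PyTorch. A minimal sketch; whether the file holds a bare state_dict or a wrapping dictionary depends on TrainSarnn.py, so the key name "model_state_dict" below is an assumption:

import torch

# Load on CPU so no GPU is required just for inspection
ckpt = torch.load("./log/<demo_name>/SARNN.pth", map_location="cpu")

# The "model_state_dict" key is an assumption; fall back to the object itself
state_dict = ckpt.get("model_state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))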
Visualize an animation of the model's predictions (optional):
$ python ./bin/test.py --data_dir ./data/<demo_name> --filename ./log/<demo_name>/SARNN.pth --no_side_image --no_wrench
Visualize the internal representation of the RNN during prediction (optional):
$ python ./bin/test_pca.py --data_dir ./data/<demo_name> --filename ./log/<demo_name>/SARNN.pth --no_side_image --no_wrench
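The idea behind test_pca.py is to project the RNN's high-dimensional hidden states into a low-dimensional space so that the trajectory of the internal dynamics becomes visible. A minimal sketch of that technique with scikit-learn; the hidden-state array here is a random stand-in, not the script's actual data loading:

import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Stand-in for hidden states collected during prediction: (steps, hidden_dim)
hidden_states = np.random.randn(200, 50)

pca = PCA(n_components=3)
trajectory = pca.fit_transform(hidden_states)  # (steps, 3)

# Plot the trajectory in the plane of the first two principal components
plt.plot(trajectory[:, 0], trajectory[:, 1])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()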
Run a trained policy:
$ python ./bin/rollout/RolloutSarnnMujocoUR5eCable.py \
--checkpoint ./log/<demo_name>/SARNN.pth \
--cropped_img_size 280 --skip 6 --world_idx 0
The --cropped_img_size option must be the same as the one used for dataset generation.
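The values must match because the rollout has to reproduce the same image preprocessing the model was trained on. A minimal sketch of a crop-then-resize step with OpenCV; a center crop is assumed here, while the actual crop location is defined by the repository's preprocessing code:

import cv2

def preprocess(image, cropped_size=280, resized_size=64):
    # Center-crop to cropped_size x cropped_size (assumed crop location),
    # then resize to the resolution the model was trained on.
    h, w = image.shape[:2]
    top = (h - cropped_size) // 2
    left = (w - cropped_size) // 2
    cropped = image[top:top + cropped_size, left:left + cropped_size]
    return cv2.resize(cropped, (resized_size, resized_size))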
For technical details, please see the following paper:
@INPROCEEDINGS{SARNN_ICRA2022,
  author    = {Ichiwara, Hideyuki and Ito, Hiroshi and Yamamoto, Kenjiro and Mori, Hiroki and Ogata, Tetsuya},
  title     = {Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility},
  booktitle = {International Conference on Robotics and Automation},
  year      = {2022},
  pages     = {5375--5381},
  doi       = {10.1109/ICRA46639.2022.9811940}
}