- LeRobot project from https://github.com/huggingface/lerobot/
On the Docker host (Jetson native), first launch rerun.io (check the original instructions in the lerobot repo):

```shell
pip install rerun-sdk
rerun
```
Then start the Docker container to run the visualization script:

```shell
jetson-containers run --shm-size=4g -w /opt/lerobot $(autotag lerobot) \
  python3 lerobot/scripts/visualize_dataset.py \
    --repo-id lerobot/pusht \
    --episode-index 0
```
To evaluate a pretrained policy (see the original instructions in the lerobot repo):

```shell
jetson-containers run --shm-size=4g -w /opt/lerobot $(autotag lerobot) \
  python3 lerobot/scripts/eval.py \
    -p lerobot/diffusion_pusht \
    eval.n_episodes=10 \
    eval.batch_size=10
```
To train a policy (see the original instructions in the lerobot repo):

```shell
jetson-containers run --shm-size=4g -w /opt/lerobot $(autotag lerobot) \
  python3 lerobot/scripts/train.py \
    policy=act \
    env=aloha \
    env.task=AlohaInsertion-v0 \
    dataset_repo_id=lerobot/aloha_sim_insertion_human
```
On the Jetson host side, we set a udev rule so that the arms always get assigned the same device names, as follows:

- `/dev/ttyACM_kochleader`: leader arm
- `/dev/ttyACM_kochfollower`: follower arm
First, connect only the leader arm to the Jetson and record its serial ID by running:

```shell
ll /dev/serial/by-id/
```

The output should look like this:

```
lrwxrwxrwx 1 root root 13 Sep 24 13:07 usb-ROBOTIS_OpenRB-150_BA98C8C350304A46462E3120FF121B06-if00 -> ../../ttyACM1
```
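The serial is the long hex string embedded in that by-id name. As a sketch, it can be isolated with bash parameter expansion (the name below is copied from the example output; your device's will differ):

```shell
# Example by-id name taken from the output above; substitute your own
byid="usb-ROBOTIS_OpenRB-150_BA98C8C350304A46462E3120FF121B06-if00"

# Drop the trailing "-if00" interface suffix, then keep everything
# after the last underscore to isolate the serial string
serial="${byid%-if00}"
serial="${serial##*_}"
echo "$serial"   # prints BA98C8C350304A46462E3120FF121B06
```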
Then edit the first line of `./99-usb-serial.rules` like the following:

```
SUBSYSTEM=="tty", ATTRS{idVendor}=="2f5d", ATTRS{idProduct}=="2202", ATTRS{serial}=="BA98C8C350304A46462E3120FF121B06", SYMLINK+="ttyACM_kochleader"
SUBSYSTEM=="tty", ATTRS{idVendor}=="2f5d", ATTRS{idProduct}=="2202", ATTRS{serial}=="00000000000000000000000000000000", SYMLINK+="ttyACM_kochfollower"
```
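As an illustrative sketch, a rule line like the ones above can be generated from a serial string and a symlink name, which avoids copy-paste mistakes in the quoting. The `make_rule` helper is hypothetical (not part of lerobot or jetson-containers); the vendor/product IDs are the OpenRB-150 values from the rules above:

```shell
# Hypothetical helper: print one udev rule line for the OpenRB-150
# (vendor 2f5d / product 2202) given a serial and a symlink name.
make_rule() {
  printf 'SUBSYSTEM=="tty", ATTRS{idVendor}=="2f5d", ATTRS{idProduct}=="2202", ATTRS{serial}=="%s", SYMLINK+="%s"\n' "$1" "$2"
}

make_rule "BA98C8C350304A46462E3120FF121B06" "ttyACM_kochleader"
```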
Now disconnect the leader arm, and connect only the follower arm to the Jetson. Repeat the same steps to record its serial ID and edit the second line of the `99-usb-serial.rules` file:

```shell
$ ll /dev/serial/by-id/
lrwxrwxrwx 1 root root 13 Sep 24 13:07 usb-ROBOTIS_OpenRB-150_483F88DC50304A46462E3120FF0C081A-if00 -> ../../ttyACM0
$ vi ./data/lerobot/99-usb-serial.rules
```

Your `./99-usb-serial.rules` file should now look like this:

```
SUBSYSTEM=="tty", ATTRS{idVendor}=="2f5d", ATTRS{idProduct}=="2202", ATTRS{serial}=="BA98C8C350304A46462E3120FF121B06", SYMLINK+="ttyACM_kochleader"
SUBSYSTEM=="tty", ATTRS{idVendor}=="2f5d", ATTRS{idProduct}=="2202", ATTRS{serial}=="483F88DC50304A46462E3120FF0C081A", SYMLINK+="ttyACM_kochfollower"
```
Finally, copy this file under `/etc/udev/rules.d/` on the host, and restart the Jetson:

```shell
sudo cp ./99-usb-serial.rules /etc/udev/rules.d/
sudo reboot
```
After the reboot, check that the arms now get the desired fixed symlink names:

```shell
ls -l /dev/ttyACM*
```

You should get something like this:

```
crw-rw---- 1 root dialout 166, 0 Sep 24 17:20 /dev/ttyACM0
crw-rw---- 1 root dialout 166, 1 Sep 24 16:13 /dev/ttyACM1
lrwxrwxrwx 1 root root 7 Sep 24 17:20 /dev/ttyACM_kochfollower -> ttyACM0
lrwxrwxrwx 1 root root 7 Sep 24 16:13 /dev/ttyACM_kochleader -> ttyACM1
```
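If you want to script this check, a small helper (hypothetical, shown here only as a sketch) can report each expected symlink:

```shell
# check_link: print where a symlink points, or note that it is missing.
check_link() {
  if [ -L "$1" ]; then
    echo "$1 -> $(readlink "$1")"
  else
    echo "missing: $1"
  fi
}

check_link /dev/ttyACM_kochleader
check_link /dev/ttyACM_kochfollower
```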
```shell
cd jetson-containers
./packages/robots/lerobot/clone_lerobot_dir_under_data.sh
./packages/robots/lerobot/copy_overlay_files_in_data_lerobot.sh
./run.sh \
  --csi2webcam --csi-capture-res='1640x1232@30' --csi-output-res='640x480@30' \
  -v ${PWD}/data/lerobot/:/opt/lerobot/ \
  $(./autotag lerobot)
```
You will now use your local PC to access the JupyterLab server running on the Jetson over the same network. Once the container starts, you should see lines like this printed:

```
JupyterLab URL: http://10.110.51.21:8888 (password "nvidia")
JupyterLab logs: /data/logs/jupyter.log
```

Copy and paste the address into your web browser to access the JupyterLab server. Navigate to `./notebooks/`, open the first notebook, and follow its contents.
CONTAINERS

| lerobot | |
|---|---|
| Requires | L4T `['>=36']` |
| Dependencies | build-essential pip_cache:cu122 cuda:12.2 cudnn python numpy cmake onnx pytorch:2.2 torchvision huggingface_hub rust transformers opencv:4.10.0 pyav h5py jupyterlab:main jupyterlab:myst |
| Dockerfile | Dockerfile |
| Images | dustynv/lerobot:r36.3.0 (2024-10-15, 7.6GB), dustynv/lerobot:r36.4.0 (2024-10-15, 6.3GB) |
CONTAINER IMAGES

| Repository/Tag | Date | Arch | Size |
|---|---|---|---|
| dustynv/lerobot:r36.3.0 | 2024-10-15 | arm64 | 7.6GB |
| dustynv/lerobot:r36.4.0 | 2024-10-15 | arm64 | 6.3GB |
Container images are compatible with other minor versions of JetPack/L4T:
• L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
• L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
RUN CONTAINER

To start the container, you can use `jetson-containers run` and `autotag`, or manually put together a `docker run` command:

```shell
# automatically pull or build a compatible container image
jetson-containers run $(autotag lerobot)

# or explicitly specify one of the container images above
jetson-containers run dustynv/lerobot:r36.4.0

# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/lerobot:r36.4.0
```
`jetson-containers run` forwards arguments to `docker run` with some defaults added (like `--runtime nvidia`, mounting a `/data` cache, and detecting devices).

`autotag` finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
To mount your own directories into the container, use the `-v` or `--volume` flags:

```shell
jetson-containers run -v /path/on/host:/path/in/container $(autotag lerobot)
```

To launch the container running a command, as opposed to an interactive shell:

```shell
jetson-containers run $(autotag lerobot) my_app --abc xyz
```

You can pass any options to it that you would to `docker run`, and it'll print out the full command that it constructs before executing it.
BUILD CONTAINER

If you use `autotag` as shown above, it'll ask to build the container for you if needed. To manually build it, first do the system setup, then run:

```shell
jetson-containers build lerobot
```

The dependencies listed above will be built into the container, and it'll be tested during the build. Run it with `--help` for build options.