From 9e28274b95fc62ce72de72d20354fc92be903e2b Mon Sep 17 00:00:00 2001
From: Mingyo Seo
Date: Mon, 23 Sep 2024 23:39:03 -0500
Subject: [PATCH] updated spacing

---
 index.markdown | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/index.markdown b/index.markdown
index 398c145..d94b5ff 100644
--- a/index.markdown
+++ b/index.markdown
@@ -216,8 +216,9 @@ We tackle the problem of perceptive locomotion in dynamic environments. In this
 We introduce a control hierarchy where the high-level controller, trained with imitation learning, sets navigation commands and the low-level gait controller, trained with reinforcement learning, realizes the target commands through joint-space actuation. This combination enables us to effectively deploy the entire hierarchy on quadrupedal robots in real-world environments.

-
-


Hierarchical Perceptive Locomotion Model

The high-level navigation policy generates the target velocity command at 10 Hz from the onboard RGB-D camera observation and the robot heading. The target velocity command, comprising linear and angular velocities, is fed to the low-level gait controller together with a buffer of recent robot states. The low-level gait policy predicts joint-space actions as desired joint positions at 38 Hz and sends them to the quadruped robot for actuation. More implementation details can be found on this page.
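The two-rate structure above can be summarized in a short sketch. The code below is illustrative only: `NavigationPolicy`, `GaitPolicy`, and the `robot` interface are hypothetical stand-ins for the learned policies and hardware API, which are not specified here; it shows how a 10 Hz navigation update can be interleaved with a 38 Hz gait update.

```python
# Minimal sketch of the two-rate control hierarchy (assumed interfaces).
import collections
import time

NAV_HZ = 10    # high-level navigation policy rate
GAIT_HZ = 38   # low-level gait policy rate


class NavigationPolicy:
    """Hypothetical high-level policy (trained with imitation learning)."""

    def command(self, rgbd_frame, heading):
        # Returns (v_x, v_y, yaw_rate): target linear and angular velocities.
        raise NotImplementedError


class GaitPolicy:
    """Hypothetical low-level policy (trained with reinforcement learning)."""

    def joint_targets(self, velocity_cmd, state_buffer):
        # Returns desired joint positions for the quadruped's actuators.
        raise NotImplementedError


def control_loop(nav, gait, robot, buffer_len=20):
    state_buffer = collections.deque(maxlen=buffer_len)
    velocity_cmd = (0.0, 0.0, 0.0)
    steps_per_nav = round(GAIT_HZ / NAV_HZ)  # ~4 gait steps per nav update
    step = 0
    while True:
        state_buffer.append(robot.read_state())
        if step % steps_per_nav == 0:
            # High-level update at ~10 Hz from onboard RGB-D and heading.
            velocity_cmd = nav.command(robot.read_rgbd(), robot.read_heading())
        # Low-level update at 38 Hz: joint-space position targets.
        robot.send_joint_positions(
            gait.joint_targets(velocity_cmd, list(state_buffer))
        )
        step += 1
        time.sleep(1.0 / GAIT_HZ)
```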

-

@@ -262,7 +262,6 @@ src="./src/figure/pipeline.png" style="width:100%;">
-

Deploying in Unseen Environments

@@ -324,8 +323,8 @@ src="./src/figure/pipeline.png" style="width:100%;">
-
+

Citation