field | value
---|---
title | Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation
abstract | A general-purpose intelligent robot must be able to learn autonomously and accomplish multiple tasks in order to be deployed in the real world. However, standard reinforcement learning approaches learn separate task-specific policies and assume the reward function for each task is known a priori. We propose a framework that learns event cues from off-policy data and can flexibly combine these event cues at test time to accomplish different tasks. These event cue labels are not assumed to be known a priori, but are instead labeled using learned models, such as computer vision detectors, and then “backed up” in time using an action-conditioned predictive model. We show that a simulated robotic car and a real-world RC car can gather data and train fully autonomously, without any human-provided labels beyond those needed to train the detectors, and then accomplish a variety of tasks at test time. Videos of the experiments and code can be found at github.com/gkahn13/CAPs
keywords | multi-task learning, rewards, prediction, robot navigation
layout | inproceedings
series | Proceedings of Machine Learning Research
id | kahn18a
month | 0
tex_title | Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation
firstpage | 806
lastpage | 816
page | 806-816
order | 806
cycles | false
bibtex_author | Kahn, Gregory and Villaflor, Adam and Abbeel, Pieter and Levine, Sergey
author |
date | 2018-10-23
address |
publisher | PMLR
container-title | Proceedings of The 2nd Conference on Robot Learning
volume | 87
genre | inproceedings
issued |
extras |