| Field | Value |
|---|---|
| title | Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions |
| abstract | We consider artificial agents that learn to jointly control their gripper and camera in order to learn manipulation policies via reinforcement learning in the presence of occlusions from distractor objects. Distractors often occlude the object of interest and cause it to disappear from the field of view. We propose hand/eye controllers that learn to move the camera to keep the object within the field of view and visible, in coordination with manipulating it to achieve the desired goal, e.g., pushing it to a target location. We incorporate structural biases of object-centric attention within our actor-critic architectures, which our experiments suggest is key to good performance. Our results further highlight the importance of a curriculum over environment difficulty. The resulting active vision/manipulation policies outperform static-camera setups across a variety of cluttered environments. |
| keywords | Manipulation, Reinforcement learning, Control |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| id | cheng18a |
| month | 0 |
| tex_title | Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions |
| firstpage | 422 |
| lastpage | 431 |
| page | 422-431 |
| order | 422 |
| cycles | false |
| bibtex_author | Cheng, Ricson and Agarwal, Arpit and Fragkiadaki, Katerina |
| author | Ricson Cheng, Arpit Agarwal, Katerina Fragkiadaki |
| date | 2018-10-23 |
| address | |
| publisher | PMLR |
| container-title | Proceedings of The 2nd Conference on Robot Learning |
| volume | 87 |
| genre | inproceedings |
| issued | |
| extras | |