---
title: Interpretable Latent Spaces for Learning from Demonstration
abstract: "Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world. Models such as deep neural networks, with high capacity through their large parameter spaces, can be used to compress the high-dimensional sensory data to lower dimensional representations. These low-dimensional representations facilitate symbol grounding, but may not guarantee that the representation would be human-interpretable. We propose a method which utilises the grouping of user-defined symbols and their corresponding sensory observations in order to align the learnt compressed latent representation with the semantic notions contained in the abstract labels. We demonstrate this through experiments with both simulated and real-world object data, showing that such alignment can be achieved in a process of physical symbol grounding."
keywords: disentanglement learning, model interpretability, symbol grounding
layout: inproceedings
series: Proceedings of Machine Learning Research
id: hristov18a
month: 0
tex_title: Interpretable Latent Spaces for Learning from Demonstration
firstpage: 957
lastpage: 968
page: 957-968
order: 957
cycles: false
bibtex_author: Hristov, Yordan and Lascarides, Alex and Ramamoorthy, Subramanian
author:
- given: Yordan
  family: Hristov
- given: Alex
  family: Lascarides
- given: Subramanian
  family: Ramamoorthy
date: 2018-10-23
address:
publisher: PMLR
container-title: Proceedings of The 2nd Conference on Robot Learning
volume: 87
genre: inproceedings
issued:
  date-parts:
  - 2018
  - 10
  - 23
pdf:
extras:
---