---
abstract: "Humans have a remarkable ability to disentangle complex sensory inputs (e.g., image, text) into simple factors of variation (e.g., shape, color) without much supervision. This ability has inspired many works that attempt to answer the following question: how do we invert the data generation process to extract those factors with minimal or no supervision? Several works in the literature on non-linear independent component analysis have established the following negative result: without some knowledge of the data generation process or appropriate inductive biases, it is impossible to perform this inversion. In recent years, much progress has been made on disentanglement under structural assumptions, e.g., when we have access to auxiliary information that makes the factors of variation conditionally independent. However, existing work requires substantial auxiliary information; e.g., in supervised classification, it prescribes that the number of label classes should be at least equal to the total dimension of all factors of variation. In this work, we depart from these assumptions and ask: a) How can we get disentanglement when the auxiliary information does not provide conditional independence over the factors of variation? b) Can we reduce the amount of auxiliary information required for disentanglement? For a class of models where auxiliary information does not ensure conditional independence, we show theoretically and experimentally that disentanglement (to a large extent) is possible even when the auxiliary information dimension is much less than the dimension of the true latent representation."
booktitle: First Conference on Causal Learning and Reasoning
title: Towards efficient representation identification in supervised learning
year: '2022'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: ahuja22a
month: 0
tex_title: Towards efficient representation identification in supervised learning
firstpage: 19
lastpage: 43
page: 19-43
order: 19
cycles: false
bibtex_author: Ahuja, Kartik and Mahajan, Divyat and Syrgkanis, Vasilis and Mitliagkas, Ioannis
author:
- given: Kartik
  family: Ahuja
- given: Divyat
  family: Mahajan
- given: Vasilis
  family: Syrgkanis
- given: Ioannis
  family: Mitliagkas
date: 2022-06-28
address:
container-title: Proceedings of the First Conference on Causal Learning and Reasoning
volume: '177'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras:
---