Code for the paper. If you use it, please cite:
@INPROCEEDINGS{yhe12NIPS,
  title     = {Learning the Dependency Structure of Latent Factors},
  author    = {Y. He and Y. Qi and K. Kavukcuoglu and H. Park},
  booktitle = {Proceedings of Advances in Neural Information Processing Systems (NIPS)},
  year      = {2012},
}
Abstract: In this paper, we study latent factor models with dependency structure in the latent space. We propose a general learning framework which induces sparsity on the undirected graphical model imposed on the vector of latent factors. A novel latent factor model, SLFA, is then proposed as a matrix factorization problem with a special regularization term that encourages collaborative reconstruction. The main novelty of the model is that it simultaneously learns the lower-dimensional representation of the data and explicitly models the pairwise relationships between latent factors. An on-line learning algorithm is devised to make the model feasible for large-scale learning problems. Experimental results on two synthetic and two real-world data sets demonstrate that the pairwise relationships and latent factors learned by our model provide a more structured way of exploring high-dimensional data, and the learned representations achieve state-of-the-art classification performance.
Paper PDF @ http://www.cs.cmu.edu/%7Eqyj/papersA08/12-slfa.pdf
Paper supplementary doc @ http://www.cs.cmu.edu/%7Eqyj/papersA08/12-nips-poster.pdf
Related NIPS poster slide @ http://www.cs.virginia.edu/yanjun/paperA14/2012_SLFA_NIPS.pdf
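For readers who want a concrete picture of the kind of model described in the abstract, below is a minimal, hypothetical Python sketch of a matrix factorization with a sparse pairwise-relationship penalty on the latent factors. It is an illustrative assumption only: the objective, the update rules, and all names (slfa_sketch, lam, gamma) are made up for this sketch and do not correspond to the released code or the exact formulation in the paper or supplement.

import numpy as np

def slfa_sketch(X, k, lam=0.1, gamma=0.1, n_iter=50, seed=0):
    """Illustrative alternating-update sketch of a structured latent factor model.

    Assumed (hypothetical) objective:
        min_{B, S, Theta}  ||X - B S||_F^2 + lam * tr(S^T Theta S) + sparsity(Theta)
    where Theta encodes pairwise relationships between the k latent factors.

    X : (d, n) data matrix; k : number of latent factors.
    Returns basis B (d, k), factors S (k, n), pairwise matrix Theta (k, k).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    B = rng.standard_normal((d, k))
    S = rng.standard_normal((k, n))
    Theta = np.eye(k)
    eps = 1e-8  # small ridge for numerical stability
    for _ in range(n_iter):
        # Update factors S given B and Theta (ridge-like closed form).
        S = np.linalg.solve(B.T @ B + lam * Theta + eps * np.eye(k), B.T @ X)
        # Update basis B given S (ordinary least squares).
        B = np.linalg.solve(S @ S.T + eps * np.eye(k), S @ X.T).T
        # Update Theta by soft-thresholding the empirical factor covariance:
        # a crude stand-in for the graphical-model sparsity described in the abstract.
        C = S @ S.T / n
        Theta = np.sign(C) * np.maximum(np.abs(C) - gamma, 0.0)
        np.fill_diagonal(Theta, np.diag(C))
    return B, S, Theta

if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((100, 500))
    B, S, Theta = slfa_sketch(X, k=10)
    off_diag = Theta - np.diag(np.diag(Theta))
    print("nonzero pairwise entries:", np.count_nonzero(off_diag))

The sketch only conveys the general shape of the approach (learn a basis, factor activations, and a sparse matrix of pairwise factor relationships in alternation); for the actual model, regularizer, and on-line learning algorithm, see the paper and supplementary document linked above.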