---
title: 'ExpFamilyPCA.jl: A Julia Package for Exponential Family Principal Component Analysis'
date: 9 September 2024
bibliography: paper.bib
---
Principal component analysis (PCA) [@PCA1; @PCA2; @PCA3] is popular for compressing, denoising, and interpreting high-dimensional data, but it underperforms on binary, count, and compositional data because the objective assumes data is normally distributed. Exponential family PCA (EPCA) [@EPCA] generalizes PCA to accommodate data from any exponential family distribution, making it more suitable for fields where these data types are common, such as geochemistry, marketing, genomics, political science, and machine learning [@composition; @elements].
`ExpFamilyPCA.jl` is a library for EPCA written in Julia, a dynamic language for scientific computing [@Julia]. It is the first EPCA package in Julia and the first in any language to support EPCA for multiple distributions.
EPCA is used in reinforcement learning [@Roy], sample debiasing [@debiasing], and compositional analysis [@gans]. Wider adoption, however, remains limited due to the lack of implementations. The only other EPCA package is written in MATLAB and supports just one distribution [@epca-MATLAB]. This is surprising, as other Bregman-based optimization techniques have been successful in areas like mass spectrometry [@spectrum], ultrasound denoising [@ultrasound], topological data analysis [@topological], and robust clustering [@clustering]. These successes suggest that EPCA holds untapped potential in signal processing and machine learning.
The absence of a general EPCA library likely stems from the limited interoperability between fast symbolic differentiation and optimization libraries in popular languages like Python and C. Julia, by contrast, uses multiple dispatch, which promotes high levels of generic code reuse [@dispatch]. Multiple dispatch allows `ExpFamilyPCA.jl` to integrate fast symbolic differentiation [@symbolics], optimization [@optim], and numerically stable computation [@stable_exp] without requiring costly API conversions.[^1] As a result, `ExpFamilyPCA.jl` delivers speed, stability, and flexibility, with built-in support for most common distributions (§ Supported Distributions) and flexible constructors for custom distributions (§ Custom Distributions).
Given a data matrix $X \in \mathbb{R}^{n \times d}$, PCA seeks the best rank-$k$ approximation $\Theta \in \mathbb{R}^{n \times d}$ by solving

$$
\begin{aligned}
\underset{\Theta}{\text{minimize}} \quad & \frac{1}{2}\lVert X - \Theta \rVert_F^2 \\
\text{subject to} \quad & \mathrm{rank}\left(\Theta\right) = k,
\end{aligned}
$$

where $\lVert \cdot \rVert_F$ denotes the Frobenius norm. This suggests that each observation $x_i$ (the $i$th row of $X$) is well-approximated by its low-rank counterpart $\theta_i$ for $i = 1, \dots, n$.

The PCA objective is equivalent to maximum likelihood estimation for a Gaussian model. Under this lens, each observation $x_i$ is a noisy realization of a latent low-rank structure, $x_i \sim \mathcal{N}(\theta_i, I)$. To recover the latent structure $\Theta$, we minimize the negative log-likelihood

$$
-\sum_{i=1}^{n} \log p(x_i \mid \theta_i) = \frac{1}{2}\lVert X - \Theta \rVert_F^2 + \text{constant},
$$

where the constraint $\mathrm{rank}(\Theta) = k$ encodes the low-dimensional structure.
Following @forster, we define the exponential family as the set of distributions with densities of the form

$$
p_\theta(x) = h(x) \exp\left(\langle x, \theta \rangle - G(\theta)\right),
$$

where $\theta$ is the natural parameter, $h$ is the base measure, and $G$ is the log-partition function that normalizes the density.

The link function $g(\theta) = \nabla_\theta G(\theta)$ maps natural parameters to the expectation parameters of the distribution, $g(\theta) = \mathbb{E}_\theta[x] = \mu$.
The link function serves a role analogous to that in generalized linear models (GLMs) [@GLM]. In GLMs, the link function connects the linear predictor to the mean of the distribution, enabling flexibility in modeling various data types. Similarly, in EPCA, the link function maps the low-dimensional latent variables to the expectation parameters of the exponential family, thereby generalizing the linear assumptions of traditional PCA to accommodate diverse distributions (see appendix).
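For example, the Poisson distribution has log-partition $G(\theta) = e^\theta$, so its link function is

$$
g(\theta) = \nabla_\theta G(\theta) = e^\theta = \mu,
$$

which inverts to $\theta = \log \mu$, the canonical log link familiar from Poisson regression.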
EPCA extends the probabilistic interpretation of PCA using a measure of statistical difference called the Bregman divergence [@Bregman; @Brad]. The Bregman divergence induced by a strictly convex, continuously differentiable function $F$ is

$$
B_F(p \parallel q) = F(p) - F(q) - \langle f(q), p - q \rangle,
$$

where $f = \nabla F$. This can be interpreted as the difference between $F(p)$ and its first-order Taylor expansion about $q$.
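For example, choosing $F(\mu) = \frac{1}{2}\lVert \mu \rVert^2$ gives $f(\mu) = \mu$ and

$$
B_F(p \parallel q) = \tfrac{1}{2}\lVert p \rVert^2 - \tfrac{1}{2}\lVert q \rVert^2 - \langle q, p - q \rangle = \tfrac{1}{2}\lVert p - q \rVert^2,
$$

so the squared Euclidean distance that underlies the PCA objective is itself a Bregman divergence.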
EPCA generalizes the PCA objective as a Bregman divergence between the data $X$ and the expectation parameters $g(\Theta)$:

$$
\begin{aligned}
\underset{\Theta}{\text{minimize}} \quad & B_F\left(X \parallel g(\Theta)\right) \\
\text{subject to} \quad & \mathrm{rank}\left(\Theta\right) = k,
\end{aligned}
$$

where

- $g(\theta)$ is the link function and the gradient of $G$,
- $G(\theta)$ is a strictly convex, continuously differentiable function (usually the log-partition of an exponential family distribution), and
- $F(\mu)$ is the convex conjugate of $G$, defined by

$$
F(\mu) = \sup_{\theta} \left\{ \langle \mu, \theta \rangle - G(\theta) \right\}.
$$

This suggests that data from the exponential family is well-approximated by the expectation parameters $g(\Theta)$ of a low-rank matrix of natural parameters $\Theta$.
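In particular, for the Gaussian distribution with unit variance, $G(\theta) = \frac{1}{2}\theta^2$, $g(\theta) = \theta$, and $F(\mu) = \frac{1}{2}\mu^2$, so the EPCA objective reduces to minimizing $\frac{1}{2}\lVert X - \Theta \rVert_F^2$ and we recover standard PCA.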
Following @EPCA, we introduce a regularization term to ensure the optimum converges:

$$
\begin{aligned}
\underset{\Theta}{\text{minimize}} \quad & B_F\left(X \parallel g(\Theta)\right) + \epsilon B_F\left(\mu_0 \parallel g(\Theta)\right) \\
\text{subject to} \quad & \mathrm{rank}\left(\Theta\right) = k,
\end{aligned}
$$

where $\epsilon > 0$ is the regularization weight and $\mu_0$ is a value in the domain of the expectation parameters.[^2]
The Poisson EPCA objective is the generalized Kullback-Leibler (KL) divergence (see appendix), making Poisson EPCA ideal for compressing discrete distribution data.
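Concretely, the Poisson log-partition $G(\theta) = e^\theta$ has convex conjugate $F(\mu) = \mu \log \mu - \mu$, so the induced divergence (applied elementwise and summed) is

$$
B_F(p \parallel q) = p \log \frac{p}{q} - p + q,
$$

the generalized KL divergence between nonnegative vectors.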
This is useful in applications like belief compression in reinforcement learning [@Roy], where high-dimensional belief states can be effectively reduced with minimal information loss. Below we recreate figures similar to those of @shortRoy and @Roy[^3] and observe that Poisson EPCA almost perfectly reconstructs the original belief distributions from a low-dimensional representation.
# Supported Distributions

`ExpFamilyPCA.jl` includes efficient EPCA implementations for several exponential family distributions.
| Julia | Description |
|---|---|
| `BernoulliEPCA` | For binary data |
| `BinomialEPCA` | For count data with a fixed number of trials |
| `ContinuousBernoulliEPCA` | For modeling probabilities between $0$ and $1$ |
| `GammaEPCA` | For positive continuous data |
| `GaussianEPCA` | Standard PCA for real-valued data |
| `NegativeBinomialEPCA` | For over-dispersed count data |
| `ParetoEPCA` | For modeling heavy-tailed distributions |
| `PoissonEPCA` | For count and discrete distribution data |
| `WeibullEPCA` | For modeling life data and survival analysis |
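Each built-in model is constructed from the input and output dimensions. For example, the following minimal sketch (with hypothetical dimensions, assuming the two-argument constructors described in the documentation) creates a model for binary data:

```julia
using ExpFamilyPCA

# Hypothetical dimensions: 20 binary features compressed to 3 latent components.
indim, outdim = 20, 3
bernoulli_epca = BernoulliEPCA(indim, outdim)
```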
# Custom Distributions

When working with custom distributions, certain specifications are often more convenient and computationally efficient than others. For example, inducing the gamma EPCA objective from the log-partition $G(\theta) = -\log(-\theta)$ and its derivative, the link function $g(\theta) = -1/\theta$, is often easier than working with other parameterizations.
In `ExpFamilyPCA.jl`, we would write:

```julia
G(θ) = -log(-θ)  # log-partition function
g(θ) = -1 / θ    # link function (derivative of G)
gamma_epca = EPCA(indim, outdim, G, g, Val((:G, :g)); options = NegativeDomain())
```
A lengthier discussion of the `EPCA` constructors and the underlying math is provided in the documentation.
Each `EPCA` object supports a three-method interface: `fit!`, `compress`, and `decompress`. `fit!` trains the model and returns the compressed training data; `compress` returns the compressed input; and `decompress` reconstructs the original data from the compressed representation.
```julia
X = sample_from_gamma(n1, indim)  # matrix of gamma-distributed training data
Y = sample_from_gamma(n2, indim)  # matrix of gamma-distributed test data

X_compressed = fit!(gamma_epca, X)
Y_compressed = compress(gamma_epca, Y)
Y_reconstructed = decompress(gamma_epca, Y_compressed)
```
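As an end-to-end sketch of the belief-compression use case discussed earlier, the snippet below uses hypothetical synthetic data and assumes the `PoissonEPCA` constructor described in the documentation:

```julia
using ExpFamilyPCA

# Hypothetical data: each row of `beliefs` is a discrete probability
# distribution over `indim` states.
indim, outdim = 50, 5
beliefs = rand(200, indim)
beliefs ./= sum(beliefs; dims=2)  # normalize rows to sum to one

poisson_epca = PoissonEPCA(indim, outdim)         # assumed constructor
beliefs_compressed = fit!(poisson_epca, beliefs)  # train and compress
beliefs_reconstructed = decompress(poisson_epca, beliefs_compressed)
```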
We thank Ryan Tibshirani, Arec Jamgochian, Robert Moss, and Dylan Asmar for their help and guidance.
[^1]: Symbolic differentiation is essential for flexibly specifying the EPCA objective (see documentation). While numeric differentiation is faster, symbolic differentiation is performed only once to generate a closed form for the optimizer (e.g., `Optim.jl` [@optim]), making it more efficient in practice. @logexp (which implements ideas from @stable_exp) mitigates overflow and underflow in exponential and logarithmic operations.

[^2]: In practice, we allow $\epsilon \geq 0$, because special cases of EPCA like traditional PCA are well-known to converge without regularization. Similarly, we pick $\mu_0$ to simplify terms in the objective.

[^3]: See Figure 3(a) in @shortRoy and Figure 12(c) in @Roy.