Identifiability of latent-variable and structural-equation models: from linear to nonlinear
An old problem in multivariate statistics is that linear Gaussian models are often
unidentifiable. In factor analysis, an orthogonal rotation of the factors is unidentifiable, while …
Generative AI and process systems engineering: The next frontier
This review article explores how emerging generative artificial intelligence (GenAI) models,
such as large language models (LLMs), can enhance solution methodologies within process …
Additive decoders for latent variables identification and cartesian-product extrapolation
We tackle the problems of latent variables identification and "out-of-support" image
generation in representation learning. We show that both are possible for a class of …
Independent mechanism analysis, a new concept?
Independent component analysis provides a principled framework for unsupervised
representation learning, with solid theory on the identifiability of the latent code that …
Function classes for identifiable nonlinear independent component analysis
S Buchholz, M Besserve… - Advances in Neural …, 2022 - proceedings.neurips.cc
Unsupervised learning of latent variable models (LVMs) is widely used to represent data in
machine learning. When such a model reflects the ground truth factors and the mechanisms …
Disentanglement via latent quantization
In disentangled representation learning, a model is asked to tease apart a dataset's
underlying sources of variation and represent them independently of one another. Since the …
Identifiable deep generative models via sparse decoding
We develop the sparse VAE for unsupervised representation learning on high-dimensional
data. The sparse VAE learns a set of latent factors (representations) which summarize the …
Embrace the gap: VAEs perform independent mechanism analysis
Variational autoencoders (VAEs) are a popular framework for modeling complex data
distributions; they can be efficiently trained via variational inference by maximizing the …
Disentanglement with biological constraints: A theory of functional cell types
Neurons in the brain are often finely tuned for specific task variables. Moreover, such
disentangled representations are highly sought after in machine learning. Here we …
Provable Compositional Generalization for Object-Centric Learning
Learning representations that generalize to novel compositions of known concepts is crucial
for bridging the gap between human and machine perception. One prominent effort is …