Identifiability of latent-variable and structural-equation models: from linear to nonlinear

A Hyvärinen, I Khemakhem, R Monti - Annals of the Institute of Statistical …, 2024 - Springer
An old problem in multivariate statistics is that linear Gaussian models are often
unidentifiable. In factor analysis, an orthogonal rotation of the factors is unidentifiable, while …

Generative AI and process systems engineering: The next frontier

B Decardi-Nelson, AS Alshehri, A Ajagekar… - Computers & Chemical …, 2024 - Elsevier
This review article explores how emerging generative artificial intelligence (GenAI) models,
such as large language models (LLMs), can enhance solution methodologies within process …

Additive decoders for latent variables identification and cartesian-product extrapolation

S Lachapelle, D Mahajan, I Mitliagkas… - Advances in …, 2024 - proceedings.neurips.cc
We tackle the problems of latent variables identification and "out-of-support" image
generation in representation learning. We show that both are possible for a class of …

Independent mechanism analysis, a new concept?

L Gresele, J Von Kügelgen, V Stimper… - Advances in neural …, 2021 - proceedings.neurips.cc
Independent component analysis provides a principled framework for unsupervised
representation learning, with solid theory on the identifiability of the latent code that …

Function classes for identifiable nonlinear independent component analysis

S Buchholz, M Besserve… - Advances in Neural …, 2022 - proceedings.neurips.cc
Unsupervised learning of latent variable models (LVMs) is widely used to represent data in
machine learning. When such a model reflects the ground truth factors and the mechanisms …

Disentanglement via latent quantization

K Hsu, W Dorrell, J Whittington… - Advances in Neural …, 2024 - proceedings.neurips.cc
In disentangled representation learning, a model is asked to tease apart a dataset's
underlying sources of variation and represent them independently of one another. Since the …

Identifiable deep generative models via sparse decoding

GE Moran, D Sridhar, Y Wang, DM Blei - arXiv preprint arXiv:2110.10804, 2021 - arxiv.org
We develop the sparse VAE for unsupervised representation learning on high-dimensional
data. The sparse VAE learns a set of latent factors (representations) which summarize the …

Embrace the gap: VAEs perform independent mechanism analysis

P Reizinger, L Gresele, J Brady… - Advances in …, 2022 - proceedings.neurips.cc
Variational autoencoders (VAEs) are a popular framework for modeling complex data
distributions; they can be efficiently trained via variational inference by maximizing the …

Disentanglement with biological constraints: A theory of functional cell types

JCR Whittington, W Dorrell, S Ganguli… - The Eleventh …, 2023 - openreview.net
Neurons in the brain are often finely tuned for specific task variables. Moreover, such
disentangled representations are highly sought after in machine learning. Here we …

Provable Compositional Generalization for Object-Centric Learning

T Wiedemer, J Brady, A Panfilov, A Juhos… - arXiv preprint arXiv …, 2023 - arxiv.org
Learning representations that generalize to novel compositions of known concepts is crucial
for bridging the gap between human and machine perception. One prominent effort is …