When is unsupervised disentanglement possible?

D Horan, E Richardson… - Advances in Neural …, 2021 - proceedings.neurips.cc
A common assumption in many domains is that high-dimensional data are a smooth
nonlinear function of a small number of independent factors. When is it possible to recover …

Dual swap disentangling

Z Feng, X Wang, C Ke, AX Zeng… - Advances in neural …, 2018 - proceedings.neurips.cc
Learning interpretable disentangled representations is a crucial yet challenging task. In this
paper, we propose a weakly semi-supervised method, termed Dual Swap Disentangling …

Disentanglement via latent quantization

K Hsu, W Dorrell, J Whittington… - Advances in Neural …, 2024 - proceedings.neurips.cc
In disentangled representation learning, a model is asked to tease apart a dataset's
underlying sources of variation and represent them independently of one another. Since the …

Challenging common assumptions in the unsupervised learning of disentangled representations

F Locatello, S Bauer, M Lucic… - international …, 2019 - proceedings.mlr.press
The key idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …

An identifiable double VAE for disentangled representations

G Mita, M Filippone, P Michiardi - … Conference on Machine …, 2021 - proceedings.mlr.press
A large part of the literature on learning disentangled representations focuses on variational
autoencoders (VAEs). Recent developments demonstrate that disentanglement cannot be …

Where and what? Examining interpretable disentangled representations

X Zhu, C Xu, D Tao - … of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
Capturing interpretable variations has long been one of the goals in disentanglement
learning. However, unlike the independence assumption, interpretability has rarely been …

A sober look at the unsupervised learning of disentangled representations and their evaluation

F Locatello, S Bauer, M Lucic, G Rätsch, S Gelly… - Journal of Machine …, 2020 - jmlr.org
The idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …

The hessian penalty: A weak prior for unsupervised disentanglement

W Peebles, J Peebles, JY Zhu, A Efros… - Computer Vision–ECCV …, 2020 - Springer
Existing disentanglement methods for deep generative models rely on hand-picked priors
and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a …

Disentangling by factorising

H Kim, A Mnih - International conference on machine …, 2018 - proceedings.mlr.press
We define and address the problem of unsupervised learning of disentangled
representations on data generated from independent factors of variation. We propose …

NashAE: Disentangling representations through adversarial covariance minimization

E Yeats, F Liu, D Womble, H Li - European Conference on Computer …, 2022 - Springer
We present a self-supervised method to disentangle factors of variation in high-dimensional
data that does not rely on prior knowledge of the underlying variation profile (e.g., no …