When is unsupervised disentanglement possible?
D Horan, E Richardson… - Advances in Neural …, 2021 - proceedings.neurips.cc
A common assumption in many domains is that high dimensional data are a smooth
nonlinear function of a small number of independent factors. When is it possible to recover …
Dual swap disentangling
Learning interpretable disentangled representations is a crucial yet challenging task. In this
paper, we propose a weakly semi-supervised method, termed Dual Swap Disentangling …
Disentanglement via latent quantization
In disentangled representation learning, a model is asked to tease apart a dataset's
underlying sources of variation and represent them independently of one another. Since the …
Challenging common assumptions in the unsupervised learning of disentangled representations
The key idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …
An identifiable double VAE for disentangled representations
A large part of the literature on learning disentangled representations focuses on variational
autoencoders (VAEs). Recent developments demonstrate that disentanglement cannot be …
Where and what? Examining interpretable disentangled representations
Capturing interpretable variations has long been one of the goals in disentanglement
learning. However, unlike the independence assumption, interpretability has rarely been …
A sober look at the unsupervised learning of disentangled representations and their evaluation
The idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …
The hessian penalty: A weak prior for unsupervised disentanglement
Existing disentanglement methods for deep generative models rely on hand-picked priors
and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a …
Disentangling by factorising
We define and address the problem of unsupervised learning of disentangled
representations on data generated from independent factors of variation. We propose …
NashAE: Disentangling representations through adversarial covariance minimization
We present a self-supervised method to disentangle factors of variation in high-dimensional
data that does not rely on prior knowledge of the underlying variation profile (e.g., no …