Dynamical variational autoencoders: A comprehensive review
Variational autoencoders (VAEs) are powerful deep generative models widely used to
represent high-dimensional complex data through a low-dimensional latent space learned …
Learning transferable visual models from natural language supervision
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined
object categories. This restricted form of supervision limits their generality and usability since …
Toward transparent AI: A survey on interpreting the inner structures of deep neural networks
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …
On disentangled representations learned from correlated data
The focus of disentanglement approaches has been on identifying independent factors of
variation in data. However, the causal variables underlying real-world observations are often …
Is a caption worth a thousand images? A controlled study for representation learning
The development of CLIP [Radford et al., 2021] has sparked a debate on whether language
supervision can result in vision models with more transferable representations than …
When is unsupervised disentanglement possible?
D Horan, E Richardson… - Advances in Neural …, 2021 - proceedings.neurips.cc
A common assumption in many domains is that high dimensional data are a smooth
nonlinear function of a small number of independent factors. When is it possible to recover …
Unsupervised learning of disentangled representation via auto-encoding: A survey
In recent years, the rapid development of deep learning approaches has paved the way to
explore the underlying factors that explain the data. In particular, several methods have …
Soft-IntroVAE: Analyzing and improving the introspective variational autoencoder
The recently introduced introspective variational autoencoder (IntroVAE) exhibits
outstanding image generations, and allows for amortized inference using an image encoder …
Speaker adaptation using spectro-temporal deep features for dysarthric and elderly speech recognition
Despite the rapid progress of automatic speech recognition (ASR) technologies targeting
normal speech in recent decades, accurate recognition of dysarthric and elderly speech …
Disentanglement of correlated factors via Hausdorff factorized support
A grand goal in deep learning research is to learn representations capable of generalizing
across distribution shifts. Disentanglement is one promising direction aimed at aligning a …