Improving disentangled representation learning with the Beta Bernoulli process

Prashnna Gyawali, Zhiyuan Li, Cameron Knight, Sandesh Ghimire, B. Milan Horacek, John Sapp, Linwei Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

To improve the ability of variational auto-encoders (VAEs) to disentangle in the latent space, existing works mostly focus on enforcing independence among the learned latent factors. However, the ability of these models to disentangle often decreases as the complexity of the generative factors increases. In this paper, we investigate the little-explored effect of the modeling capacity of the posterior density on the disentangling ability of the VAE. We note that the independence within and the complexity of the latent density are two distinct properties we constrain when regularizing the posterior density: while the former promotes the disentangling ability of the VAE, the latter, if overly limited, creates an unnecessary competition with the data reconstruction objective of the VAE. Therefore, if we preserve the independence but allow richer modeling capacity in the posterior density, we lift this competition and thereby allow improved independence and data reconstruction at the same time. We investigate this theoretical intuition with a VAE that uses a non-parametric latent factor model, the Indian Buffet Process (IBP), as a latent density that is able to grow with the complexity of the data. Across two widely used benchmark data sets (MNIST and dSprites) and two clinical data sets little explored for disentangled representation learning, we qualitatively and quantitatively demonstrate the improved disentangling performance of IBP-VAE over the state of the art. On the two clinical data sets, which are riddled with complex factors of variation, we further demonstrate that unsupervised disentangling of nuisance factors via IBP-VAE, when combined with a supervised objective, can not only improve task accuracy over relevant supervised deep architectures but also facilitate knowledge discovery related to task decision-making.
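To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a truncated Beta-Bernoulli (IBP) latent layer, assuming the common Kumaraswamy approximation to the Beta stick-breaking weights and a Concrete (relaxed Bernoulli) mask for reparameterized sampling. The class name, interface, and hyper-parameters are illustrative, not the authors' implementation, and the KL terms of the ELBO are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IBPLatentLayer(nn.Module):
    """Sketch of a truncated Beta-Bernoulli (IBP) latent layer for a VAE.

    The latent code is a spike-and-slab product z * A: a binary mask z,
    drawn from stick-breaking inclusion probabilities, gates a continuous
    Gaussian factor A, so the number of active latent dimensions can grow
    with data complexity, up to the truncation level K.
    """

    def __init__(self, hidden_dim, K, temperature=0.5):
        super().__init__()
        self.temperature = temperature
        self.kuma_a = nn.Linear(hidden_dim, K)      # Kumaraswamy shape a
        self.kuma_b = nn.Linear(hidden_dim, K)      # Kumaraswamy shape b
        self.bern_logit = nn.Linear(hidden_dim, K)  # posterior mask logits
        self.mu = nn.Linear(hidden_dim, K)          # Gaussian mean
        self.logvar = nn.Linear(hidden_dim, K)      # Gaussian log-variance

    def forward(self, h):
        eps = 1e-6
        # Kumaraswamy samples approximate the Beta stick fractions v_k
        # (inverse-CDF sampling: v = (1 - u^(1/b))^(1/a), u ~ Uniform(0,1)).
        a = F.softplus(self.kuma_a(h)) + eps
        b = F.softplus(self.kuma_b(h)) + eps
        u = torch.rand_like(a).clamp(eps, 1.0 - eps)
        v = (1.0 - u.pow(1.0 / b)).pow(1.0 / a)
        pi = torch.cumprod(v, dim=-1)  # stick-breaking: pi_k = v_1 * ... * v_k
        # Binary mask via a relaxed (Concrete) Bernoulli for differentiability.
        g = torch.rand_like(pi).clamp(eps, 1.0 - eps)
        logistic = torch.log(g) - torch.log1p(-g)
        z = torch.sigmoid((self.bern_logit(h) + logistic) / self.temperature)
        # Continuous factors with the usual Gaussian reparameterization.
        mu, logvar = self.mu(h), self.logvar(h)
        A = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # pi, a, b, mu, logvar would feed the (omitted) KL terms of the ELBO.
        return z * A, pi

# Usage: an encoder feature h of width 256 yields a sparse 32-dim code.
layer = IBPLatentLayer(hidden_dim=256, K=32)
code, pi = layer(torch.randn(8, 256))
```

The design point argued in the abstract is visible in the return value: the spike-and-slab code z * A keeps factors independent while letting the number of active dimensions adapt to data complexity, so richer posterior capacity need not compete with reconstruction.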

Original language: English
Host publication title: Proceedings - 19th IEEE International Conference on Data Mining, ICDM 2019
Editors: Jianyong Wang, Kyuseok Shim, Xindong Wu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1078-1083
Number of pages: 6
ISBN (electronic): 9781728146034
DOI: 10.1109/ICDM.2019.00127
Status: Published - Nov 2019
Event: 19th IEEE International Conference on Data Mining, ICDM 2019 - Beijing, China
Duration: Nov 8, 2019 - Nov 11, 2019

Publication series

Name: Proceedings - IEEE International Conference on Data Mining, ICDM
Volume: 2019-November
ISSN (print): 1550-4786

Conference

Conference: 19th IEEE International Conference on Data Mining, ICDM 2019
Country/Territory: China
City: Beijing
Period: 11/8/19 - 11/11/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

ASJC Scopus Subject Areas

  • General Engineering

Cite this

Gyawali, P., Li, Z., Knight, C., Ghimire, S., Horacek, B. M., Sapp, J., & Wang, L. (2019). Improving disentangled representation learning with the beta Bernoulli process. In J. Wang, K. Shim, & X. Wu (Eds.), Proceedings - 19th IEEE International Conference on Data Mining, ICDM 2019 (pp. 1078-1083). Article 8970693 (Proceedings - IEEE International Conference on Data Mining, ICDM; Vol. 2019-November). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICDM.2019.00127