Academic Journal Article

Multi-Modal Latent Diffusion.
Document Type
Academic Journal
Author
Bounoua M (Ampere Software Technology, 06560 Valbonne, France; Department of Data Science, EURECOM, 06410 Biot, France); Franzese G (Department of Data Science, EURECOM, 06410 Biot, France); Michiardi P (Department of Data Science, EURECOM, 06410 Biot, France)
Source
Publisher: MDPI Country of Publication: Switzerland NLM ID: 101243874 Publication Model: Electronic Cited Medium: Internet ISSN: 1099-4300 (Electronic) Linking ISSN: 10994300 NLM ISO Abbreviation: Entropy (Basel) Subsets: PubMed not MEDLINE
Language
English
Abstract
Multimodal datasets are ubiquitous in modern applications, and multimodal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities. However, existing approaches suffer from a coherence-quality tradeoff, in which models with good generation quality lack generative coherence across modalities, and vice versa. In this paper, we discuss the limitations underlying the unsatisfactory performance of existing methods in order to motivate the need for a different approach. We propose a novel method that uses a set of independently trained, unimodal, deterministic autoencoders. The individual latent variables are concatenated into a common latent space, which is then fed to a masked diffusion model to enable generative modeling. We also introduce a new multi-time training method to learn the conditional score network for multimodal diffusion. Our methodology substantially outperforms competitors in both generation quality and coherence, as shown through an extensive experimental campaign.
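To make the architecture described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the overall idea: each modality is encoded by an independently trained, frozen deterministic autoencoder, the latents are concatenated, and a score network receives one diffusion time per modality ("multi-time") plus a mask indicating which modalities are conditioned on versus generated. All class and function names (UnimodalAE, MultiTimeScoreNet, multi_time_training_step), the network sizes, and the simple variance-exploding noising schedule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of latent concatenation + masked, multi-time diffusion
# over the joint latent space. Names, dimensions, and the noising schedule
# are assumptions for illustration only.
import torch
import torch.nn as nn


class UnimodalAE(nn.Module):
    """Deterministic autoencoder for one modality (trained independently, then frozen)."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


class MultiTimeScoreNet(nn.Module):
    """Score network over the concatenated latent space.

    Each modality gets its own diffusion time; a binary mask marks modalities
    that are kept clean (conditioning) versus noised (to be generated).
    """

    def __init__(self, latent_dims):
        super().__init__()
        total, n_mod = sum(latent_dims), len(latent_dims)
        # Input: concatenated latents + one time value and one mask bit per modality.
        self.net = nn.Sequential(
            nn.Linear(total + 2 * n_mod, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, total),
        )

    def forward(self, z_cat, times, mask):
        # z_cat: (B, sum(latent_dims)); times, mask: (B, n_modalities)
        return self.net(torch.cat([z_cat, times, mask], dim=-1))


def multi_time_training_step(score_net, z_list, sigma=1.0):
    """One denoising-score-matching step with per-modality times and random masking
    (an illustrative variant, not the paper's exact objective)."""
    batch, n_mod = z_list[0].shape[0], len(z_list)
    times = torch.rand(batch, n_mod)                    # independent time per modality
    mask = (torch.rand(batch, n_mod) < 0.5).float()     # 1 = conditioned (kept clean)
    noised, targets = [], []
    for m, z in enumerate(z_list):
        t = times[:, m:m + 1] * (1.0 - mask[:, m:m + 1])  # conditioned modalities stay at t=0
        eps = torch.randn_like(z)
        noised.append(z + sigma * t.sqrt() * eps)         # simple variance-exploding noising
        targets.append(eps * (1.0 - mask[:, m:m + 1]))    # predict noise only on masked modalities
    z_cat = torch.cat(noised, dim=-1)
    pred = score_net(z_cat, times * (1.0 - mask), mask)
    return ((pred - torch.cat(targets, dim=-1)) ** 2).mean()


# Usage: pretrain one UnimodalAE per modality on reconstruction, freeze them,
# encode the data to latents z_list, then optimize MultiTimeScoreNet with
# multi_time_training_step; sampling would run the reverse diffusion only on
# the masked (unobserved) modalities.
```

Freezing the unimodal autoencoders and modeling only their concatenated latents with a masked diffusion model reflects the abstract's premise that decoupling per-modality reconstruction from joint generative modeling is what relieves the coherence-quality tradeoff; the exact score-matching objective and sampler should be taken from the paper itself.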