Any-to-Any Generation via Composable Diffusion

By Zineng Tang, Ziyi Yang, Chenguang Zhu, Michael Zeng, and Mohit Bansal
Published on arXiv

Summary: The paper presents Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel, and its inputs are not limited to a subset of modalities such as text or images. Because training datasets do not exist for many combinations of modalities, the authors propose aligning modalities in both the input and output spaces, which lets CoDi condition on arbitrary input combinations and generate groups of modalities that never appear together in the training data.
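
To make the any-to-any pattern described above concrete, here is a minimal, hypothetical sketch: each input modality that happens to be present is projected into a shared conditioning space, the embeddings are combined, and a toy per-modality diffusion-style decoder generates each requested output from that shared condition. All module names, dimensions, the averaging of conditions, and the single-step "denoiser" are illustrative assumptions for this sketch, not CoDi's actual architecture or code.

```python
# Toy illustration of conditioning any set of output modalities on any set of
# input modalities via a shared embedding space. Hypothetical names and sizes.
import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed size of the shared conditioning space


class ModalityEncoder(nn.Module):
    """Maps a raw modality feature vector into the shared conditioning space."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, 128), nn.GELU(),
                                  nn.Linear(128, LATENT_DIM))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class ToyDiffusionDecoder(nn.Module):
    """Stand-in for a per-modality diffusion model: applies one denoising step
    to a noisy latent, conditioned on the shared embedding."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.denoise = nn.Sequential(nn.Linear(out_dim + LATENT_DIM, 256), nn.GELU(),
                                     nn.Linear(256, out_dim))

    def forward(self, noisy: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.denoise(torch.cat([noisy, cond], dim=-1))


def any_to_any(inputs: dict, encoders: dict, decoders: dict, out_dims: dict):
    """Condition every requested output modality on the averaged embeddings
    of whichever input modalities are present (simple combination scheme)."""
    embeddings = [encoders[name](feat) for name, feat in inputs.items()]
    cond = torch.stack(embeddings, dim=0).mean(dim=0)
    outputs = {}
    for name, decoder in decoders.items():
        noisy = torch.randn(cond.shape[0], out_dims[name])  # start from noise
        outputs[name] = decoder(noisy, cond)
    return outputs


if __name__ == "__main__":
    encoders = {"text": ModalityEncoder(32), "image": ModalityEncoder(48)}
    decoders = {"audio": ToyDiffusionDecoder(24), "video": ToyDiffusionDecoder(40)}
    out_dims = {"audio": 24, "video": 40}
    batch = {"text": torch.randn(2, 32), "image": torch.randn(2, 48)}
    outs = any_to_any(batch, encoders, decoders, out_dims)
    print({k: tuple(v.shape) for k, v in outs.items()})  # {'audio': (2, 24), 'video': (2, 40)}
```

The point of the sketch is the interface, not the models: because every input modality is mapped into one shared conditioning space, any subset of inputs can drive any subset of output decoders without a dedicated model per input-output pairing.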

Read the paper.

©2023 arXiv:2305.11846v1

Citation: Tang, Z., Yang, Z., Zhu, C., Zeng, M., & Bansal, M. (2023). Any-to-Any Generation via Composable Diffusion. arXiv preprint arXiv:2305.11846.

CC BY-SA: Creative Commons Attribution-ShareAlike
This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.