Awesome Unified Multimodal Models

This is a repository for organizing papers, code, and other resources related to unified multimodal models.

[Figure: taxonomy of unified multimodal models]

🤔 What are unified multimodal models?

Traditional multimodal models fall broadly into two types: multimodal understanding and multimodal generation. Unified multimodal models aim to integrate both tasks within a single framework; in the community they are also referred to as any-to-any generation models. They accept multimodal input and produce multimodal output, enabling them to process and generate content across various modalities seamlessly.
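For intuition, here is a minimal sketch of what such an any-to-any interface might look like. Every name in it (`Segment`, `UnifiedModel`, `generate`) is hypothetical, not drawn from any specific model in this list:

```python
# Conceptual sketch of an any-to-any interface: interleaved multimodal
# segments in, interleaved multimodal segments out. All names hypothetical.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Segment:
    modality: str  # e.g. "text", "image", "audio", "video"
    data: Any      # token string, pixel array, waveform, ...

class UnifiedModel:
    """Hypothetical unified model: one backbone serves both
    understanding (multimodal -> text) and generation (text -> multimodal)."""

    def generate(self, inputs: List[Segment],
                 target_modalities: List[str]) -> List[Segment]:
        # A real model would encode every input segment into a shared token
        # space, run a single backbone, and decode each requested output
        # modality. This stub only illustrates the interface shape.
        return [Segment(m, f"<generated {m}>") for m in target_modalities]

# Understanding and generation go through the same entry point:
model = UnifiedModel()
caption = model.generate([Segment("image", "<pixels>")], ["text"])       # image -> text
picture = model.generate([Segment("text", "a red bicycle")], ["image"])  # text -> image
```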

🔆 This project is still ongoing, and pull requests are welcome!

If you have any suggestions (missing papers, new papers, or typos), please feel free to open a pull request. Even just letting us know the titles of relevant papers is a great contribution. You can do this by opening an issue or contacting us directly via email.

⭐ If you find this repo useful, please star it!

Unified Multimodal Understanding and Generation

Acknowledgements

This template is provided by Awesome-Video-Diffusion and Awesome-MLLM-Hallucination.
