👀 Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion (ACL '24 Oral)
🌐 Homepage | 📖 arXiv | 🤗 Models
This repo includes code and examples for the paper Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion.
- [2024-12-20] Pre-training scripts released.
- [2024-12-17] Condition context generation scripts updated.
- [2024-12-14] Release the training instructions. The full training scripts will be available soon.
- [2024-12-08] Pretraining data released. (The data were updated on 20 Dec 2024; please use the newest version.)
- [2024-05-16] This paper is accepted by ACL 2024 (main conference, oral). Information for our training data is updated.
- [2024-04-18] Code and cases for data generation released. The generated data are used for pretraining.
- [2024-03-18] Brote-IM-XXL model released; please download it from this link.
- [2024-02-26] Project released.
We propose Browse and Concentrate (Brote), a paradigm that incorporates multimodal context before feeding features into the LLM, together with two approaches that implement it, Brote-EX and Brote-IM. The model structures are shown in the following figure.
Please refer to the data format described in MIC.
We create a dataset of 56k few-shot data samples (each sample contains one or multiple images), resulting in 191k training instances (one image per instance). These instances are designed to contain question-aware and cross-image information. The data construction pipeline is illustrated in the following figure.
Please download our pretraining dataset from the ModelScope link or the HuggingFace link.
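After downloading, you can inspect the parquet files with pandas. This is a minimal sketch, assuming the file name used in the training instructions below; 'input_text', 'input_image', and 'gpt_caption' are the fields referenced in this README, and other columns may also be present:

```python
# Minimal sketch for inspecting the downloaded pretraining data.
# Field names follow the ones mentioned in this README; other columns may exist.
import pandas as pd

df = pd.read_parquet("./pretrain_data/stage1_gpt_v0.parquet.gzip")
print(df.columns.tolist())
print(df.iloc[0]["input_text"])    # prompt text (may contain image placeholders)
print(df.iloc[0]["gpt_caption"])   # caption used as the stage-1 training target
```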
We sampled about 500k instances from MIC for model finetuning.
pip install -r requirements.txt
The full training scripts will be available soon.
- Prepare the training data
- Download the pretraining data from the ModelScope link or the HuggingFace link.
- Generate and save condition contexts using the original InstructBLIP or MMICL models.
- We used encoder_last_hidden_state[eos_token_index] in our paper. You can also explore representations from other layers or positions (see the sketch after the script example below).
- The input for this generation process comes from the 'input_text' and 'input_image' fields of the pretraining dataset.
- Please modify the required fields in run_script/gen_condition/get_conditions_gpt.sh. Here is an example of using the script:
bash run_script/gen_condition/get_conditions_gpt.sh 0 stage1_gpt_v0.parquet.gzip ./pretrain_data stage1_gpt_v0_condion.parquet.gzip
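For intuition, here is a minimal sketch of the extraction step. It uses a plain Flan-T5 encoder as a lightweight stand-in; the actual script above uses the InstructBLIP / MMICL checkpoints and their own multimodal preprocessing:

```python
# Minimal sketch: extract a condition-context vector at the EOS position of a
# text encoder. Stand-in model only; not the repo's actual InstructBLIP/MMICL
# pipeline.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
encoder = T5EncoderModel.from_pretrained("google/flan-t5-base")

text = "image 0 is <image0>. Question: what is shown in image 0? Answer:"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state     # (1, seq_len, d_model)

# the tokenizer appends </s>; take the hidden state at that EOS position
eos_index = inputs["attention_mask"].sum(dim=1) - 1  # (1,)
condition = hidden[0, eos_index]                     # (1, d_model)
torch.save(condition, "condition_vector_example.pt")
```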
- Unfreeze the parameters for the query tokens and the Q-Former (all other parameters remain frozen), and train with the 'gpt_caption' field in the pretraining dataset as the target (a sketch of this selective unfreezing follows the command below).
- Command to run:
bash run_script/pretrain/train_stage1.sh
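A minimal sketch of the selective unfreezing referenced above, assuming BLIP-2-style parameter names ("query_tokens", "qformer"); the exact names in this repo's checkpoints may differ:

```python
# Minimal sketch of stage-1 selective unfreezing: only the query tokens and the
# Q-Former are trained; the vision encoder and the LLM stay frozen.
# The name patterns below are assumptions, not verified against this repo.
def configure_stage1_trainable(model):
    for name, param in model.named_parameters():
        param.requires_grad = "query_tokens" in name or "qformer" in name.lower()
    n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {n_trainable / 1e6:.1f}M")
```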
- Brote-EX
- Download the MIC dataset.
- Generate and save condition contexts using the original InstructBLIP or MMICL models. Note that these are the condition contexts of the MIC dataset, produced following our data dropping strategies (discussed in Section 3.4 of our paper), and they differ from the pretraining data.
- Unfreeze the parameters for the query tokens, the Q-Former, and the query & value projections of the LLM's attention layers (see the sketch below).
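A minimal sketch of this wider unfreezing, assuming a T5-style LLM (as in the Flan-T5-based MMICL/InstructBLIP checkpoints) whose attention query/value projections end in ".q.weight" / ".v.weight"; adjust the patterns for other backbones:

```python
# Minimal sketch for the Brote-EX setting: additionally unfreeze the query and
# value projections of the LLM's attention layers. The suffixes assume T5-style
# attention module names and are not verified against this repo.
def configure_brote_ex_trainable(model):
    for name, param in model.named_parameters():
        param.requires_grad = (
            "query_tokens" in name
            or "qformer" in name.lower()
            or name.endswith((".q.weight", ".v.weight"))
        )
```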
- Brote-IM
- Download the MIC dataset.
- No need to generate condition contexts. You can finetune directly from the pretrained model following the instructions above, or continue finetuning from Brote-EX (the latter works better).
- Unfreeze the parameters for the query tokens, the Q-Former, and the query & value projections of the LLM's attention layers.
Please refer to test.py; the files under the model directory are for testing only and will be updated soon to support training.
To run the test script (ensure the required libraries are properly installed):
export CUDAID='please set your cuda id here'
export TASKID='please set the case id (from 1 to 5), or use the string "all" (lowercase)'
CUDA_VISIBLE_DEVICES=$CUDAID python test.py $TASKID
(The 🐱 in this figure is a 6-year-old cat named Alan.)
Please download our model from 🤗 Models.
📑 If you find our project helpful to your research, please consider citing:
@inproceedings{wang2024browse,
title={Browse and Concentrate: Comprehending Multimodal Content via Prior-{LLM} Context Fusion},
author={Wang, Ziyue and Chen, Chi and Zhu, Yiqi and Luo, Fuwen and Li, Peng and Yan, Ming and Zhang, Ji and Huang, Fei and Sun, Maosong and Liu, Yang},
booktitle={The 62nd Annual Meeting of the Association for Computational Linguistics},
year={2024},
}
Our models are built upon MMICL and InstructBLIP.