# MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), Xiang Li, and Mohamed Elhoseiny. *Equal Contribution

**King Abdullah University of Science and Technology**

<a href='https://minigpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='MiniGPT_4.pdf'><img src='https://img.shields.io/badge/Paper-PDF-red'></a>
## Online Demo
Click the link below to chat with MiniGPT-4 about your images.

[MiniGPT-4 Online Demo](https://minigpt-4.github.io)
## Examples
More examples can be found in the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- The training of MiniGPT-4 consists of a first pretraining stage using roughly 5 million aligned image-text pairs for 10 hours on 4 A100s and a second finetuning stage using an additional 3,500 carefully curated high-quality pairs for 7 minutes on a single A100.
- MiniGPT-4 possesses many emerging vision-language capabilities similar to those exhibited by GPT-4.

## Getting Started
### Installation
**1. Prepare the code and the environment**

Git clone our repository, create a Python environment, and activate it via the following commands:
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
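As a quick, optional sanity check (a suggestion, not part of the official setup; it assumes `environment.yml` installs PyTorch with CUDA support), you can confirm that the environment sees PyTorch and a GPU:

```bash
# Optional sanity check: print the installed PyTorch version and whether a
# CUDA-capable GPU is visible. Assumes environment.yml installed PyTorch.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```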
**2. Prepare the pretrained Vicuna weights**

The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to their instructions [here](https://huggingface.co/lmsys/vicuna-13b-delta-v0) to obtain the weights.
The final weights should be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
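If you are starting from the original LLaMA weights, such a folder is typically produced by applying the released Vicuna delta with FastChat. The sketch below is only a rough guide: the paths are placeholders and the exact flag names differ across FastChat versions, so check `python -m fastchat.model.apply_delta --help` before running it.

```bash
# Rough sketch (not an official MiniGPT-4 command): apply the Vicuna v0 delta
# to LLaMA-13B weights with FastChat. Paths are placeholders; flag names vary
# by FastChat version (older releases use --base/--target/--delta instead).
python -m fastchat.model.apply_delta \
    --base-model-path /path/to/llama-13b-hf \
    --target-model-path /path/to/vicuna_weights \
    --delta-path lmsys/vicuna-13b-delta-v0
```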
Then, set the path to the Vicuna weights in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
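For example, assuming the field at that line is named `llama_model` (an assumption; open the file to confirm), you could point it at your local folder like this:

```bash
# Hypothetical helper: set the Vicuna weight path in the model config.
# The `llama_model` key name is an assumption about minigpt4.yaml; check
# Line 16 and edit the file by hand if this pattern does not match.
# (GNU sed syntax; on macOS use `sed -i ''`.)
sed -i 's#^\([[:space:]]*llama_model:\).*#\1 "/path/to/vicuna_weights"#' minigpt4/configs/models/minigpt4.yaml
```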
**3. Prepare the pretrained MiniGPT-4 checkpoint**

To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 10.
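If you prefer downloading from the command line, one option is the `gdown` utility (an assumption: it is not part of this repository's requirements and needs to be installed separately; the output filename below is arbitrary):

```bash
# Optional: fetch the checkpoint from the Google Drive link above with gdown.
# The output filename is arbitrary; use the same path in the eval config.
pip install gdown
gdown "https://drive.google.com/uc?id=1a4zLvaiDBr-36pasffmgpvH5P7CKmpze" -O pretrained_minigpt4.pth
```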
### Launching Demo Locally
Try out our demo [demo.py](demo.py) on your local machine by running
```bash
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
```
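If your machine has more than one GPU, you can pin the demo to a specific device with the standard CUDA environment variable (a generic PyTorch/CUDA convention, not a MiniGPT-4-specific option):

```bash
# Run the demo on GPU 0 only. CUDA_VISIBLE_DEVICES is a standard CUDA/PyTorch
# environment variable, not a flag defined by demo.py.
CUDA_VISIBLE_DEVICES=0 python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
```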
### Training
The training of MiniGPT-4 consists of two alignment stages.

**1. First pretraining stage**

In the first pretraining stage, the model is trained using image-text pairs from the LAION and CC datasets
to align the vision and language model. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped so that they can be understood by the language model.
To launch the first-stage training, run the following command. In our experiments, we use 4 A100s.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml).
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
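For example, to match the 4×A100 setup described above, set `NUM_GPU` to 4:

```bash
# Stage-1 pretraining on 4 GPUs, matching the setup described above.
torchrun --nproc-per-node 4 train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```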
**2. Second finetuning stage**

In the second stage, we use a small, high-quality image-text pair dataset created by ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
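For example, on a single GPU as in the setup described above:

```bash
# Stage-2 finetuning on a single GPU, matching the setup described above.
torchrun --nproc-per-node 1 train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```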
After the second stage alignment, MiniGPT-4 is able to talk about images coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2)
+ [Vicuna](https://github.com/lm-sys/FastChat)

If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@misc{zhu2023minigpt4,
  title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
  author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
  year={2023},
}
```
## License
This repository is under the [BSD 3-Clause License](LICENSE.md).
Much of the code is based on [Lavis](https://github.com/salesforce/LAVIS), which is licensed under the BSD 3-Clause License [here](LICENSE_Lavis.md).