# MiniGPT-4 and MiniGPT-v2
**King Abdullah University of Science and Technology**

<a href='https://minigpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2304.10592'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> <a href='https://huggingface.co/spaces/Vision-CAIR/minigpt4'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a> <a href='https://huggingface.co/Vision-CAIR/MiniGPT-4'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> [Colab Demo](https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing) [YouTube Video](https://www.youtube.com/watch?v=__tftoxpBAw&feature=youtu.be)
## 💡 Get help - [Q&A](https://github.com/Vision-CAIR/MiniGPT-4/discussions/categories/q-a) or [Discord 💬](https://discord.gg/5WdJkjbAeE)
## News
Breaking! We release the first major update with our MiniGPT-v2.

We now provide a Llama 2 version of MiniGPT-4.
## Online Demo

Click the link below to chat with MiniGPT-v2 about your images:

[MiniGPT-v2 Online Demo](https://minigpt-v2.github.io/)

Click the link below to chat with MiniGPT-4 about your images:

[MiniGPT-4 Online Demo](https://minigpt-4.github.io)
## Examples
Many examples can be found on the [project page](https://minigpt-4.github.io).
<!-- ## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- We train MiniGPT-4 with two stages. The first traditional pretraining stage is trained using roughly 5 million aligned image-text pairs in 10 hours using 4 A100s. After the first stage, Vicuna is able to understand the image. But the generation ability of Vicuna is heavily impacted.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs by the model itself and ChatGPT together. Based on this, we then create a small (3500 pairs in total) yet high-quality dataset.
- The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes with a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4. -->
## Getting Started
### Installation

**1. Prepare the code and the environment**

Clone our repository, create a Python environment, and activate it with the following commands:

```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
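Optionally, you can run a quick sanity check on the new environment; this assumes PyTorch and CUDA support are provided by `environment.yml` (a sketch, not part of the official setup):

```bash
# Optional sanity check: PyTorch should import cleanly and report an available CUDA device.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```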
**2. Prepare the pretrained LLM weights**

Currently, we provide both Vicuna V0 and Llama 2 versions of MiniGPT-4.
Download the corresponding LLM weights from one of the following Hugging Face repositories by cloning it with git-lfs, as sketched below the table.

| Vicuna V0 13B | Vicuna V0 7B | Llama 2 Chat 7B |
|:-------------:|:------------:|:---------------:|
| [Download](https://huggingface.co/Vision-CAIR/vicuna/tree/main) | [Download](https://huggingface.co/Vision-CAIR/vicuna-7b/tree/main) | [Download](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/tree/main) |
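For example, the Vicuna V0 7B weights can be fetched roughly as follows (a sketch; pick whichever repository from the table you need, and note that the Llama 2 repository is gated and requires accepting Meta's license on Hugging Face first):

```bash
# Make sure git-lfs is set up so the large weight files are downloaded, not just pointers.
git lfs install

# Example: clone the Vicuna V0 7B weights listed in the table above.
git clone https://huggingface.co/Vision-CAIR/vicuna-7b
```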
Then, set the path to the Vicuna weights in the model config file
[here](minigpt4/configs/models/minigpt4_vicuna0.yaml#L18) at Line 18
and/or the path to the Llama 2 weights in the model config file
[here](minigpt4/configs/models/minigpt4_llama2.yaml#L15) at Line 15.
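For illustration, assuming the key for the LLM path in these configs is `llama_model` (verify against the linked lines; the path below is a placeholder), the edit for the Vicuna config could look like this:

```bash
# Hypothetical example: point the Vicuna config at your local weight folder.
# The key name (llama_model) and the path are assumptions; check Line 18 of the file.
sed -i 's#llama_model:.*#llama_model: "/path/to/vicuna-7b"#' minigpt4/configs/models/minigpt4_vicuna0.yaml
```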
**3. Prepare the pretrained MiniGPT-4 checkpoint**

Download the pretrained checkpoint that matches the LLM you prepared.

| Checkpoint with Vicuna 13B | Checkpoint with Vicuna 7B | Checkpoint with LLaMA-2 Chat 7B | MiniGPT-v2 with LLaMA-2 Chat |
|----------------------------|---------------------------|---------------------------------|------------------------------|
| [Download](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link) | [Download](https://drive.google.com/file/d/1RY9jV0dyqLX-o38LrumkKRh6Jtaop58R/view?usp=sharing) | [Download](https://drive.google.com/file/d/11nAPjEok8eAGGEG1N2vXo3kBLCg0WgUk/view?usp=sharing) | [Download](https://drive.google.com/file/d/1aVbfW7nkCSYx99_vCRyP1sOlQiWVSnAl/view?usp=sharing) |
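If you prefer the command line over the browser, the Google Drive files above can also be fetched with the third-party `gdown` tool (not required by this repo; the file ID below is taken from the Vicuna 7B link in the table, and the output name is arbitrary):

```bash
pip install gdown

# Example: download the "Checkpoint with Vicuna 7B" file from the table above.
gdown "https://drive.google.com/uc?id=1RY9jV0dyqLX-o38LrumkKRh6Jtaop58R" -O minigpt4_vicuna7b_checkpoint.pth
```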
Then, set the path to the pretrained checkpoint in the evaluation config file:
[eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 8 for the Vicuna version, or [eval_configs/minigpt4_llama2_eval.yaml](eval_configs/minigpt4_llama2_eval.yaml#L10) for the Llama 2 version.
### Launching Demo Locally

Try out our demo [demo.py](demo.py) for the Vicuna version on your local machine by running

```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```

or for the Llama 2 version by

```
python demo.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0
```

or for the MiniGPT-v2 version by

```
python demo_v2.py --cfg-path eval_configs/minigpt4v2_eval.yaml --gpu-id 0
```
To save GPU memory, the LLM loads in 8-bit by default, with a beam search width of 1.
This configuration requires about 23 GB of GPU memory for the 13B LLM and 11.5 GB for the 7B LLM.
For more powerful GPUs, you can run the model
in 16-bit by setting `low_resource` to `False` in the relevant config file
(line 6 of either [minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#6) if using Vicuna or [minigpt4_llama2_eval.yaml](eval_configs/minigpt4_llama2_eval.yaml#6) if using Llama 2) and use a larger beam search width.
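For example, for the Vicuna eval config, the switch could be made like this (a sketch; adjust the file name for the Llama 2 config and double-check line 6 afterwards):

```bash
# Switch from 8-bit (low_resource: True) to 16-bit loading in the Vicuna eval config.
sed -i 's#low_resource:.*#low_resource: False#' eval_configs/minigpt4_eval.yaml
```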
Thanks to [@WangRongsheng](https://github.com/WangRongsheng), you can also run our code on [Colab](https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing).
### Training
The training of MiniGPT-4 contains two alignment stages.

**1. First pretraining stage**

In the first pretraining stage, the model is trained using image-text pairs from the Laion and CC datasets
to align the vision and language models. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped so that they can be understood by the language
model.
To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml).

```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
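For example, for the 4-GPU setup used in our experiments, `NUM_GPU` is simply replaced with 4:

```bash
# Stage-1 pretraining on a single node with 4 GPUs.
torchrun --nproc-per-node 4 train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```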
A MiniGPT-4 checkpoint with only stage-one training can be downloaded
[here (13B)](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link) or [here (7B)](https://drive.google.com/file/d/1HihQtCEXUyBM1i9DQbaK934wW3TZi-h5/view?usp=share_link).
Compared to the model after stage two, this checkpoint frequently generates incomplete and repeated sentences.

**2. Second finetuning stage**

In the second stage, we use a small, high-quality image-text pair dataset created by ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.

```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
After the second stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you don't know it already!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
+ [LLaMA](https://github.com/facebookresearch/llama) The strong open-sourced LLaMA 2 language model.

If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@article{Chen2023minigpt,
      title={MiniGPT-v2: Large Language Model as a Unified Interface for Vision-Language Multi-task Learning},
      author={Chen, Jun and Zhu, Deyao and Shen, Xiaoqian and Li, Xiang and Liu, Zechun and Zhang, Pengchuan and Krishnamoorthi, Raghuraman and Chandra, Vikas and Xiong, Yunyang and Elhoseiny, Mohamed},
      journal={github},
      year={2023}
}

@article{zhu2023minigpt,
      title={MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models},
      author={Zhu, Deyao and Chen, Jun and Shen, Xiaoqian and Li, Xiang and Elhoseiny, Mohamed},
      journal={arXiv preprint arXiv:2304.10592},
      year={2023}
}
```
## License
This repository is under the [BSD 3-Clause License](LICENSE.md).
Much of the code is based on [Lavis](https://github.com/salesforce/LAVIS), which is licensed under the
BSD 3-Clause License [here](LICENSE_Lavis.md).