Mirror of https://github.com/Vision-CAIR/MiniGPT-4.git (synced 2025-04-05 18:40:46 +00:00)

Merge pull request #7 from junchen14/main: update finetune and eval readme

Commit 5e8c105fd3
.gitignore (vendored): 3 lines changed
```
@@ -174,7 +174,8 @@ prompts/
output/
ckpt/
divide_vqa.py
jobs/

*.slurm
slurm*
sbatch_generate*
```
dataset/Evaluation.md (new file, 26 lines)
@@ -0,0 +1,26 @@

### OKVQA

### GQA
Images and question-answer pairs will be loaded during the evaluation.

```
python run_eval.py xxxx
```

### VSR
Images and question-answer pairs will be loaded during the evaluation.

```
python run_eval.py xxxx
```

### IconVQA

### VizWiz
1. Download [`test.json`](https://vizwiz.cs.colorado.edu/VizWiz_final/vqa_data/Annotations.zip) and extract [`test.zip`](https://vizwiz.cs.colorado.edu/VizWiz_final/images/test.zip) to `test`. Put them under `your_path/vizwiz`.
2. Single-GPU inference:

```
python run_eval.py xxxx
```
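Step 1 can also be scripted. A rough sketch with Python's standard library (the URLs are the ones linked above; `your_path/vizwiz` is the placeholder from step 1, and the zips' internal folder structure may differ, so double-check the extracted paths):

```python
import urllib.request
import zipfile
from pathlib import Path

root = Path("your_path/vizwiz")   # placeholder from step 1; adjust to your setup
root.mkdir(parents=True, exist_ok=True)

archives = {
    "Annotations.zip": "https://vizwiz.cs.colorado.edu/VizWiz_final/vqa_data/Annotations.zip",  # contains test.json
    "test.zip": "https://vizwiz.cs.colorado.edu/VizWiz_final/images/test.zip",                  # test images
}
for name, url in archives.items():
    archive = root / name
    urllib.request.urlretrieve(url, archive)
    # extract images into vizwiz/test, annotations directly under vizwiz
    destination = root / "test" if name == "test.zip" else root
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(destination)
```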
### HM
@@ -1,133 +1,253 @@

## Download the COCO captions, RefCOCO, RefCOCO+, RefCOCOg, Visual Genome, TextCaps, LLaVA, GQA, AOK-VQA, OK-VQA, OCR-VQA, filtered Flickr-30k, multi-task conversation, and Unnatural Instruction datasets

Download the datasets:

Image source | Download path
--- | :---:
COCO 2014 images | <a href="http://images.cocodataset.org/zips/train2014.zip">images</a> <a href="https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json">captions</a>
COCO VQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/vqav2/vqa_train.json">vqa train</a> <a href="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/vqav2/vqa_val.json">vqa val</a>
Visual Genome | <a href="https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip">images part1</a> <a href="https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip">images part2</a>
TextCaps | <a href="https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip">images</a> <a href="https://dl.fbaipublicfiles.com/textvqa/data/textcaps/TextCaps_0.1_train.json">annotations</a>
RefCOCO | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip">annotations</a>
RefCOCO+ | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip">annotations</a>
RefCOCOg | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip">annotations</a>
LLaVA | <a href="https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/complex_reasoning_77k.json">Complex reasoning</a> <a href="https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/detail_23k.json">Detailed description</a> <a href="https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/conversation_58k.json">Conversation</a>
OKVQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json">annotations</a>
AOK-VQA | <a href="https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz">annotations</a>
OCR-VQA | <a href="https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing">annotations</a>
Filtered Flickr-30k | <a href="https://drive.google.com/drive/folders/19c_ggBI77AvdtYlPbuI0ZpnPz73T5teX?usp=sharing">annotations</a>
Multi-task conversation | <a href="https://drive.google.com/file/d/11HHqB2c29hbSk-WLxdta-nG8UCUrcCN1/view?usp=sharing">annotations</a>
Filtered unnatural instruction | <a href="https://drive.google.com/file/d/1lXNnBcb5WU-sc8Fe2T2N8J0NRw4sBLev/view?usp=sharing">annotations</a>
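The direct links above can be fetched from a shell or a short script, while the Google Drive entries (OCR-VQA, filtered Flickr-30k, multi-task conversation, filtered unnatural instruction) need to be downloaded through the browser. A minimal sketch for two of the direct links (URLs copied from the table; the target layout is only an example, not mandated by the repo):

```python
import urllib.request
from pathlib import Path

root = Path("/path/to/MINIGPTv2_DATASET")  # example location

# two of the direct links from the table above
downloads = {
    "refcoco_annotations/refcoco.zip": "https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip",
    "OKVQA/okvqa_train.json": "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json",
}

for relative_path, url in downloads.items():
    target = root / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)
    print(f"downloading {url} -> {target}")
    urllib.request.urlretrieve(url, target)
```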
### COCO captions

Download the COCO 2014 images and captions.

```
├── ${MINIGPTv2_DATASET}
│   ├── coco_captions
│       ├── coco_images
│       ├── annotations
│           ├── coco_karpathy_train.json
```

Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the coco_karpathy_train.json path.

- [minigpt4/configs/datasets/coco/caption.yaml](../minigpt4/configs/datasets/coco/caption.yaml)
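Before editing the config, it can help to confirm that the two paths resolve. A minimal sketch (the folder names mirror the tree above; everything else, including reading the environment variable, is an assumption about your setup):

```python
import json
import os
from pathlib import Path

dataset_root = Path(os.environ["MINIGPTv2_DATASET"])  # the ${MINIGPTv2_DATASET} used in the tree above
image_path = dataset_root / "coco_captions" / "coco_images"
ann_path = dataset_root / "coco_captions" / "annotations" / "coco_karpathy_train.json"

assert image_path.is_dir(), f"missing image folder: {image_path}"
with open(ann_path) as f:
    annotations = json.load(f)
print(f"{len(annotations)} caption records loaded from {ann_path}")
```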
### COCO VQA

Download the VQA v2 train and validation json files.

```
├── ${MINIGPTv2_DATASET}
│   ├── vqav2
│       ├── vqa_train.json
│       ├── vqa_val.json
```

Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the vqa_train.json and vqa_val.json paths.

- [minigpt4/configs/datasets/coco/defaults_vqa.yaml](../minigpt4/configs/datasets/coco/defaults_vqa.yaml)
### Visual Genome

Download the Visual Genome images and annotation files.

```
├── ${MINIGPTv2_DATASET}
│   ├── visual_genome
│       ├── VG_100K
│       ├── VG_100K_2
│       ├── region_descriptions.json
```

Set **image_path** to the visual_genome folder.
Similarly, set **ann_path** to the visual_genome folder.

- [minigpt4/configs/datasets/vg/ref.yaml](../minigpt4/configs/datasets/vg/ref.yaml)
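Because the Visual Genome images are split across VG_100K and VG_100K_2, a small helper can resolve an image id to its file. This is an illustrative sketch, not repo code; it assumes the `<image_id>.jpg` naming used by the Visual Genome release:

```python
from pathlib import Path

VG_ROOT = Path("/path/to/MINIGPTv2_DATASET/visual_genome")  # adjust to your layout

def vg_image_file(image_id: int) -> Path:
    """Return the image file for an id, checking both VG_100K and VG_100K_2."""
    for part in ("VG_100K", "VG_100K_2"):
        candidate = VG_ROOT / part / f"{image_id}.jpg"
        if candidate.exists():
            return candidate
    raise FileNotFoundError(f"image {image_id} not found under {VG_ROOT}")
```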
### TextCaps

Download the TextCaps images and annotation files.

```
├── ${MINIGPTv2_DATASET}
│   ├── TextCaps
│       ├── train_images
│       ├── TextCaps_0.1_train.json
```

Set **image_path** to the TextCaps train_images folder.
Similarly, set **ann_path** to the TextCaps_0.1_train.json path.

- [minigpt4/configs/datasets/textcaps/caption.yaml](../minigpt4/configs/datasets/textcaps/caption.yaml)
### RefCOCO, RefCOCO+, RefCOCOg

Download the RefCOCO, RefCOCO+, and RefCOCOg annotation files.

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── refcoco_annotations
│       ├── refcoco
│       │   ├── instances.json
│       │   ├── refs(google).p
│       │   ├── refs(unc).p
│       ├── refcoco+
│       │   ├── instances.json
│       │   ├── refs(unc).p
│       ├── refcocog
│           ├── instances.json
│           ├── refs(google).p
│           ├── refs(umd).p
```

Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** in all of the following configs to the folder (Location_you_like above) that contains refcoco, refcoco+, and refcocog.

- [minigpt4/configs/datasets/coco_bbox/refcoco.yaml](../minigpt4/configs/datasets/coco_bbox/refcoco.yaml)
- [minigpt4/configs/datasets/coco_bbox/refcocog.yaml](../minigpt4/configs/datasets/coco_bbox/refcocog.yaml)
- [minigpt4/configs/datasets/coco_bbox/refcocop.yaml](../minigpt4/configs/datasets/coco_bbox/refcocop.yaml)
- [minigpt4/configs/datasets/coco_bbox/invrefcoco.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcoco.yaml)
- [minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml)
- [minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml)
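The `refs(*).p` files are Python pickles and `instances.json` is a COCO-style annotation file; a quick integrity check after unzipping might look like this sketch (paths are assumptions following the tree above):

```python
import json
import pickle
from pathlib import Path

refcoco_dir = Path("/path/to/MINIGPTv2_DATASET/refcoco_annotations/refcoco")  # adjust

with open(refcoco_dir / "refs(unc).p", "rb") as f:
    refs = pickle.load(f)            # list of referring-expression records
with open(refcoco_dir / "instances.json") as f:
    instances = json.load(f)         # COCO-style dict with an "annotations" list

print(len(refs), "referring expressions,", len(instances["annotations"]), "box annotations")
```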
### LLaVA

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── llava
│       ├── conversation_58k.json
│       ├── detail_23k.json
│       ├── complex_reasoning_77k.json
```

Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the previously downloaded conversation_58k.json, detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yaml, and reason.yaml, respectively.

- [minigpt4/configs/datasets/llava/conversation.yaml](../minigpt4/configs/datasets/llava/conversation.yaml)
- [minigpt4/configs/datasets/llava/detail.yaml](../minigpt4/configs/datasets/llava/detail.yaml)
- [minigpt4/configs/datasets/llava/reason.yaml](../minigpt4/configs/datasets/llava/reason.yaml)

### GQA
### OKVQA

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── OKVQA
│       ├── okvqa_train.json
```

Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the location of the OKVQA dataset.

- [minigpt4/configs/datasets/okvqa/defaults.yaml](../minigpt4/configs/datasets/okvqa/defaults.yaml)
### COCO-VQA

- [OK-VQA Input Questions](https://okvqa.allenai.org/static/data/OpenEnded_mscoco_train2014_questions.json.zip)
- [OK-VQA Annotations](https://okvqa.allenai.org/static/data/mscoco_train2014_annotations.json.zip)
### AOK-VQA

Download the AOK-VQA annotation dataset:

```
export AOKVQA_DIR=YOUR_DATASET_PATH
mkdir -p ${AOKVQA_DIR}
curl -fsSL https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz | tar xvz -C ${AOKVQA_DIR}
```

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── AOKVQA
│       ├── aokvqa_v1p0_train.json
```

Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the location of the AOKVQA dataset.

- [minigpt4/configs/datasets/aokvqa/defaults.yaml](../minigpt4/configs/datasets/aokvqa/defaults.yaml)
### OCR-VQA

Download the OCR-VQA annotation files.

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── OCR-VQA
│       ├── images
│       ├── dataset.json
```

Set **image_path** to the OCR-VQA image folder.
Similarly, set **ann_path** to the OCR-VQA dataset.json path.

- [minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml](../minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml)
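Note that the Google Drive link above provides only the annotations; the images are usually fetched from the URLs stored in dataset.json. The sketch below is illustrative and assumes the standard OCR-VQA release layout (an id-keyed dict with an `imageURL` field); adjust it to whatever your copy of dataset.json actually contains:

```python
import json
import os
import urllib.request
from pathlib import Path

ocrvqa_dir = Path("/path/to/MINIGPTv2_DATASET/OCR-VQA")   # adjust
image_dir = ocrvqa_dir / "images"
image_dir.mkdir(parents=True, exist_ok=True)

with open(ocrvqa_dir / "dataset.json") as f:
    dataset = json.load(f)                 # assumed: dict keyed by example id

for example_id, record in dataset.items():
    url = record["imageURL"]               # assumed field name
    ext = os.path.splitext(url)[1] or ".jpg"
    target = image_dir / f"{example_id}{ext}"
    if not target.exists():
        urllib.request.urlretrieve(url, target)
```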
### Filtered Flickr-30k

Download the filtered Flickr-30k images and annotation files.

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── filtered_flickr
│       ├── images
│       ├── captiontobbox.json
│       ├── groundedcaption.json
│       ├── phrasetobbox.json
```

Set **image_path** to the Flickr-30k images folder.
Similarly, set **ann_path** to groundedcaption.json, captiontobbox.json, and phrasetobbox.json for the grounded image caption, caption-to-bbox, and phrase-to-bbox datasets, respectively.

- [minigpt4/configs/datasets/flickr/default.yaml](../minigpt4/configs/datasets/flickr/default.yaml)
- [minigpt4/configs/datasets/flickr/caption_to_phrase.yaml](../minigpt4/configs/datasets/flickr/caption_to_phrase.yaml)
- [minigpt4/configs/datasets/flickr/object_to_phrase.yaml](../minigpt4/configs/datasets/flickr/object_to_phrase.yaml)
### Multi-task conversation

Download the multi-task conversation dataset.

```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── multitask_conversation
│       ├── multitask_conversation.json
```

Set **image_path** to the COCO 2014 images folder.
Similarly, set **ann_path** to the multitask_conversation.json file path.

- [minigpt4/configs/datasets/multitask_conversation/default.yaml](../minigpt4/configs/datasets/multitask_conversation/default.yaml)
### Unnatural instruction

Download the filtered unnatural instruction annotation file (we remove the very long sentences from the original unnatural instruction dataset) and place filtered_unnatural_instruction.json under `${MINIGPTv2_DATASET}/unnatural-instructions/`.

There is no image path for this dataset.
Set **ann_path** to the filtered_unnatural_instruction.json file path.

- [minigpt4/configs/datasets/nlp/unnatural_instruction.yaml](../minigpt4/configs/datasets/nlp/unnatural_instruction.yaml)
### Pre-training datasets download

We use the filtered synthetic captions prepared by BLIP. For more details about the dataset, please refer to [BLIP](https://github.com/salesforce/BLIP).

It requires ~2.3 TB to store the LAION and CC3M+CC12M+SBU datasets.

Image source | Filtered synthetic caption by ViT-L
--- | :---:
CC3M+CC12M+SBU | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_synthetic_filtered_large.json">Download</a>
LAION115M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/laion_synthetic_filtered_large.json">Download</a>

This will download two json files:

```
ccs_synthetic_filtered_large.json
laion_synthetic_filtered_large.json
```
## Prepare the data step-by-step

### Set up the dataset folder and move the annotation files to the data storage folder

```
export MINIGPT4_DATASET=/YOUR/PATH/FOR/LARGE/DATASET/
mkdir -p ${MINIGPT4_DATASET}/cc_sbu
mkdir -p ${MINIGPT4_DATASET}/laion
mv ccs_synthetic_filtered_large.json ${MINIGPT4_DATASET}/cc_sbu
mv laion_synthetic_filtered_large.json ${MINIGPT4_DATASET}/laion
```
### Copy the conversion scripts to the data storage folder

```
cp convert_cc_sbu.py ${MINIGPT4_DATASET}/cc_sbu
cp download_cc_sbu.sh ${MINIGPT4_DATASET}/cc_sbu
cp convert_laion.py ${MINIGPT4_DATASET}/laion
cp download_laion.sh ${MINIGPT4_DATASET}/laion
```
### Convert the laion and cc_sbu annotation file format to the img2dataset format

```
cd ${MINIGPT4_DATASET}/cc_sbu
python convert_cc_sbu.py

cd ${MINIGPT4_DATASET}/laion
python convert_laion.py
```
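For orientation, this conversion amounts to rewriting each BLIP json into the tsv format that img2dataset reads. The sketch below is illustrative only, not the repo's convert_cc_sbu.py or convert_laion.py, and it assumes the json is a list of records with `url` and `caption` keys:

```python
# Illustrative sketch of the json -> tsv conversion (not the repo's script).
import csv
import json

with open("ccs_synthetic_filtered_large.json") as f:
    records = json.load(f)   # assumed: list of {"url": ..., "caption": ...} records

with open("ccs_synthetic_filtered_large.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["caption", "url"])
    for record in records:
        writer.writerow([record["caption"], record["url"]])
```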
### Download the datasets with img2dataset

```
cd ${MINIGPT4_DATASET}/cc_sbu
sh download_cc_sbu.sh

cd ${MINIGPT4_DATASET}/laion
sh download_laion.sh
```
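The two shell scripts wrap the img2dataset tool. For orientation only, an equivalent call through img2dataset's Python entry point might look roughly like the following; the parameter values here are placeholders, not the settings in download_cc_sbu.sh / download_laion.sh, so check those scripts and the img2dataset documentation for the real ones:

```python
# Rough sketch only; the repo's download_*.sh scripts define the actual settings.
from img2dataset import download

download(
    url_list="ccs_synthetic_filtered_large.tsv",  # the tsv produced in the previous step
    input_format="tsv",
    url_col="url",
    caption_col="caption",
    output_format="webdataset",                   # produces the 00000.tar shards shown below
    output_folder="cc_sbu_dataset",
    processes_count=16,
    thread_count=64,
    image_size=256,
)
```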
The final dataset structure:

```
.
├── ${MINIGPT4_DATASET}
│   ├── cc_sbu
│       ├── convert_cc_sbu.py
│       ├── download_cc_sbu.sh
│       ├── ccs_synthetic_filtered_large.json
│       ├── ccs_synthetic_filtered_large.tsv
│       └── cc_sbu_dataset
│           ├── 00000.tar
│           ├── 00000.parquet
│           ...
│   ├── laion
│       ├── convert_laion.py
│       ├── download_laion.sh
│       ├── laion_synthetic_filtered_large.json
│       ├── laion_synthetic_filtered_large.tsv
│       └── laion_dataset
│           ├── 00000.tar
│           ├── 00000.parquet
│           ...
...
```
## Set up the dataset configuration files

Then, set up the LAION dataset loading path [here](../minigpt4/configs/datasets/laion/defaults.yaml#L5) at Line 5 as
`${MINIGPT4_DATASET}/laion/laion_dataset/{00000..10488}.tar`

and the Conceptual Caption and SBU datasets loading path [here](../minigpt4/configs/datasets/cc_sbu/defaults.yaml#L5) at Line 5 as
`${MINIGPT4_DATASET}/cc_sbu/cc_sbu_dataset/{00000..01255}.tar`.
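The shard ranges ({00000..10488} and {00000..01255}) depend on how many shards img2dataset actually produced on your machine. A small helper can print the exact pattern to paste into each defaults.yaml (an illustrative sketch, assuming MINIGPT4_DATASET is exported as above):

```python
import os
from pathlib import Path

dataset_root = Path(os.environ["MINIGPT4_DATASET"])

for shard_dir in (dataset_root / "laion" / "laion_dataset",
                  dataset_root / "cc_sbu" / "cc_sbu_dataset"):
    shards = sorted(shard_dir.glob("*.tar"))
    first, last = shards[0].stem, shards[-1].stem
    # e.g. .../laion/laion_dataset/{00000..10488}.tar
    print(f"{shard_dir}/{{{first}..{last}}}.tar")
```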
In the repo's LLaMA wrapper, the `pretraining_tp` branch is now guarded with `hasattr`, so configurations without that attribute no longer raise an `AttributeError`:

```
@@ -75,7 +75,7 @@ class LlamaForCausalLM(LlamaForCausalLMOrig):
         )

         hidden_states = outputs[0]
-        if self.config.pretraining_tp > 1:
+        if hasattr(self.config, 'pretraining_tp') and self.config.pretraining_tp > 1:
             lm_head_slices = self.lm_head.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0)
             logits = [F.linear(hidden_states, lm_head_slices[i]) for i in range(self.config.pretraining_tp)]
             logits = torch.cat(logits, dim=-1)
```
And in one of the model configs, a commented-out option is removed:

```
@@ -11,7 +11,6 @@ model:
   ckpt: "/ibex/project/c2090/minigpt4_ckpt/448_perforamnce_correct_v10_vg/20230925064/checkpoint_32.pth"
   use_grad_checkpoint: True
   chat_template: True
-  # wanda_log: False
   lora_r: 64
   lora_alpha: 16
```