From ad220a186ec5654d3be4d4500db653b08ce60694 Mon Sep 17 00:00:00 2001 From: junchen14 Date: Mon, 23 Oct 2023 20:41:59 +0300 Subject: [PATCH 01/22] update code --- .gitignore | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.gitignore b/.gitignore index 50d0c3e..610bccf 100755 --- a/.gitignore +++ b/.gitignore @@ -175,6 +175,6 @@ output/ ckpt/ divide_vqa.py - +*.slurm slurm* sbatch_generate* \ No newline at end of file From 83cfdbfeecf5ee6955a7ce24c5b1ce215deb8937 Mon Sep 17 00:00:00 2001 From: junchen14 Date: Mon, 23 Oct 2023 20:48:26 +0300 Subject: [PATCH 02/22] remove jobs --- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index 610bccf..7120f43 100755 --- a/.gitignore +++ b/.gitignore @@ -174,6 +174,7 @@ prompts/ output/ ckpt/ divide_vqa.py +jobs/ *.slurm slurm* From f0b6a9e7d77747ad1d73c02c35f04b41a6c23be4 Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:11:35 +0300 Subject: [PATCH 03/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 29 ++++++++++++++++++++++++---- 1 file changed, 25 insertions(+), 4 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 2d5c825..2bae2d7 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -1,23 +1,44 @@ ## Download the COCO captions, RefCOCO, RefCOCO+. 
RefCOCOg, visual genome, textcaps, LLaVA, gqa, AOK-VQA, OK-VQA, OCR-VQA, filtered Flickr-30k, multi-task conversation, and Unnatural instruction datasets -### COCO captions +After downloading all of them, organize the data as follows in `./playground/data`, +``` +├── coco +│ └── train2017 +├── gqa +│ └── images +├── ocr_vqa +│ └── images +├── textvqa +│ └── train_images +└── vg + ├── VG_100K + └── VG_100K_2 + + +### COCO captions +- [train2017](http://images.cocodataset.org/zips/train2017.zip) ### RefCOCO, RefCOCO+, RefCOCOg ### Visual genome - -### textcaps +- [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip) +### TextCaps ### LLaVA -### gqa +### TextVQA +- [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) +### GQA +- [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip) +- [Annotations](https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/gqa/testdev_balanced_questions.json) ### OKVQA ### AOK-VQA ### OCR-VQA +- [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing), **we save all files as `.jpg`** ### filtered Flickr-30k From 68df270f14dfd30213f95349c458bd1eced6c602 Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:15:14 +0300 Subject: [PATCH 04/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 2bae2d7..438240f 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -14,7 +14,7 @@ After downloading all of them, organize the data as follows in `./playground/dat └── vg ├── VG_100K └── VG_100K_2 - +``` ### COCO captions - [train2017](http://images.cocodataset.org/zips/train2017.zip) From 
c2de397a8674b1546fad0f6f01088a42efde98cf Mon Sep 17 00:00:00 2001 From: Deyao Zhu Date: Mon, 23 Oct 2023 21:31:13 +0300 Subject: [PATCH 05/22] update refcoco preparation --- dataset/README_MINIGPTv2_FINETUNE.md | 46 ++++++++++++++++++++++++++-- 1 file changed, 43 insertions(+), 3 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 2d5c825..be1e09b 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -2,16 +2,56 @@ ### COCO captions - ### RefCOCO, RefCOCO+, RefCOCOg -### Visual genome +Makesure you have the COCO 2014 images first. + +Then, +download RefCOCO, RefCOCO+, and RefCOCOg annotation files in the following links. + +- https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip +- https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip +- https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip + +Unzip these files to the location you like. It should have the structure like the following + +``` +Location_you_like +├── refcoco +│ ├── instances.json +│ ├── refs(google).p +│ └── refs(unc).p +├── refcoco+ +│ ├── instances.json +│ └── refs(unc).p +└── refcocog + ├── instances.json + ├── refs(google).p + └── refs(umd).p +``` + +Set **image_path** in all the following dataset configuration files to the COCO 2014 image folder. +Similarly, set **ann_path** in all the following configs to the above folder (Location_you_like) that contains refcoco, refcoco+, and refcocog. 
+ +- [minigpt4/configs/refcoco.yaml](../minigpt4/configs/refcoco.yaml) +- [minigpt4/configs/refcocog.yaml](../minigpt4/configs/refcocog.yaml) +- [minigpt4/configs/refcocop.yaml](../minigpt4/configs/refcocop.yaml) +- [minigpt4/configs/invrefcoco.yaml](../minigpt4/configs/invrefcoco.yaml) +- [minigpt4/configs/invrefcocog.yaml](../minigpt4/configs/invrefcocog.yaml) +- [minigpt4/configs/invrefcocop.yaml](../minigpt4/configs/invrefcocop.yaml) + + + +### Visual Genome ### textcaps ### LLaVA -### gqa + + + +### GQA ### OKVQA From 114852b529ffb2b8fd5dc68e5653fb7f6b9bdacd Mon Sep 17 00:00:00 2001 From: Deyao Zhu Date: Mon, 23 Oct 2023 21:34:47 +0300 Subject: [PATCH 06/22] update refococo --- dataset/README_MINIGPTv2_FINETUNE.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index c582acc..3d09ffe 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -19,11 +19,13 @@ After downloading all of them, organize the data as follows in `./playground/dat ### COCO captions - [train2017](http://images.cocodataset.org/zips/train2017.zip) -### RefCOCO, RefCOCO+, RefCOCOg + ### Visual genome - [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip) ### TextCaps + +### RefCOCO, RefCOCO+, RefCOCOg Makesure you have the COCO 2014 images first. 
Then, From ab520c89fc76d14b623aeea0a8e3bd5134ed36fd Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:41:39 +0300 Subject: [PATCH 07/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 3d09ffe..f658adc 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -26,7 +26,7 @@ After downloading all of them, organize the data as follows in `./playground/dat ### TextCaps ### RefCOCO, RefCOCO+, RefCOCOg -Makesure you have the COCO 2014 images first. +Make sure you have the COCO 2014 images first. Then, download RefCOCO, RefCOCO+, and RefCOCOg annotation files in the following links. @@ -71,14 +71,10 @@ Similarly, set **ann_path** in all the following configs to the above folder (Lo ### LLaVA ### TextVQA -- [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) -### GQA -- [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip) -- [Annotations](https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/gqa/testdev_balanced_questions.json) - - +Images, and question-answer pairs will be loaded during evaluation. ### GQA +Images, and question-answer pairs will be loaded during evaluation. 
### OKVQA From 50df66e81e437091b7a55e68c793dee2b3b4f5f0 Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:42:58 +0300 Subject: [PATCH 08/22] Create Evaluation.md --- dataset/Evaluation.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 dataset/Evaluation.md diff --git a/dataset/Evaluation.md b/dataset/Evaluation.md new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/dataset/Evaluation.md @@ -0,0 +1 @@ + From fa19bc09f21c220c564148c9d7ed86d3c1ced523 Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:45:22 +0300 Subject: [PATCH 09/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 22 ---------------------- 1 file changed, 22 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index f658adc..280622e 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -1,26 +1,10 @@ ## Download the COCO captions, RefCOCO, RefCOCO+. RefCOCOg, visual genome, textcaps, LLaVA, gqa, AOK-VQA, OK-VQA, OCR-VQA, filtered Flickr-30k, multi-task conversation, and Unnatural instruction datasets -After downloading all of them, organize the data as follows in `./playground/data`, - -``` -├── coco -│ └── train2017 -├── gqa -│ └── images -├── ocr_vqa -│ └── images -├── textvqa -│ └── train_images -└── vg - ├── VG_100K - └── VG_100K_2 -``` ### COCO captions - [train2017](http://images.cocodataset.org/zips/train2017.zip) - ### Visual genome - [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip) ### TextCaps @@ -70,12 +54,6 @@ Similarly, set **ann_path** in all the following configs to the above folder (Lo ### LLaVA -### TextVQA -Images, and question-answer pairs will be loaded during evaluation. 
- -### GQA -Images, and question-answer pairs will be loaded during evaluation. - ### OKVQA ### AOK-VQA From b15fec91a5f6c164aa8148b577f8b19a08217057 Mon Sep 17 00:00:00 2001 From: Deyao Zhu Date: Mon, 23 Oct 2023 21:47:29 +0300 Subject: [PATCH 10/22] update llava --- dataset/README_MINIGPTv2_FINETUNE.md | 30 ++++++++++++++++++++++------ 1 file changed, 24 insertions(+), 6 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 3d09ffe..512b1ff 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -23,6 +23,7 @@ After downloading all of them, organize the data as follows in `./playground/dat ### Visual genome - [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip) + ### TextCaps ### RefCOCO, RefCOCO+, RefCOCOg @@ -55,12 +56,12 @@ Location_you_like Set **image_path** in all the following dataset configuration files to the COCO 2014 image folder. Similarly, set **ann_path** in all the following configs to the above folder (Location_you_like) that contains refcoco, refcoco+, and refcocog. 
-- [minigpt4/configs/refcoco.yaml](../minigpt4/configs/refcoco.yaml) -- [minigpt4/configs/refcocog.yaml](../minigpt4/configs/refcocog.yaml) -- [minigpt4/configs/refcocop.yaml](../minigpt4/configs/refcocop.yaml) -- [minigpt4/configs/invrefcoco.yaml](../minigpt4/configs/invrefcoco.yaml) -- [minigpt4/configs/invrefcocog.yaml](../minigpt4/configs/invrefcocog.yaml) -- [minigpt4/configs/invrefcocop.yaml](../minigpt4/configs/invrefcocop.yaml) +- [minigpt4/configs/datasets/coco_bbox/refcoco.yaml](../minigpt4/configs/datasets/coco_bbox/refcoco.yaml) +- [minigpt4/configs/datasets/coco_bbox/refcocog.yaml](../minigpt4/configs/datasets/coco_bbox/refcocog.yaml) +- [minigpt4/configs/datasets/coco_bbox/refcocop.yaml](../minigpt4/configs/datasets/coco_bbox/refcocop.yaml) +- [minigpt4/configs/datasets/coco_bbox/invrefcoco.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcoco.yaml) +- [minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml) +- [minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml) @@ -69,6 +70,23 @@ Similarly, set **ann_path** in all the following configs to the above folder (Lo ### textcaps ### LLaVA +Makesure you have the COCO 2014 images first. + +Download Llava annotation files in the following link to the place you like. + +- https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/conversation_58k.json +- https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/detail_23k.json +- https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/complex_reasoning_77k.json + +Set **image_path** in all the following dataset configuration files to the COCO 2014 image folder. +Similarly, set **ann_path** to the location of the previous downloaded conversation_58k.json, +detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yaml, and reason.yaml, respectively. 
+ + +- [minigpt4/configs/datasets/llava/conversation.yaml](../minigpt4/configs/datasets/llava/conversation.yaml) +- [minigpt4/configs/datasets/llava/detail.yaml](../minigpt4/configs/datasets/llava/detail.yaml) +- [minigpt4/configs/datasets/llava/reason.yaml](../minigpt4/configs/datasets/llava/reason.yaml) + ### TextVQA - [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) From f13ad74b267b53f9abab23a4c48fb46796cf61d2 Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:48:58 +0300 Subject: [PATCH 11/22] Update Evaluation.md --- dataset/Evaluation.md | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/dataset/Evaluation.md b/dataset/Evaluation.md index 8b13789..9e3ff86 100644 --- a/dataset/Evaluation.md +++ b/dataset/Evaluation.md @@ -1 +1,21 @@ +### OKVQA + +### GQA +Images and question-answer pairs will be loaded during the evaluation. +''' +python run_eval.py xxxx +''' +### VSR +Images and question-answer pairs will be loaded during the evaluation. + +### IconVQA + +### VizWiz + +### HM + + + + + From 75f87692971a1a4b5107c1f2b8e1926b275bde2d Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:49:30 +0300 Subject: [PATCH 12/22] Update Evaluation.md --- dataset/Evaluation.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/dataset/Evaluation.md b/dataset/Evaluation.md index 9e3ff86..e2b366c 100644 --- a/dataset/Evaluation.md +++ b/dataset/Evaluation.md @@ -3,11 +3,15 @@ ### GQA Images and question-answer pairs will be loaded during the evaluation. -''' +``` python run_eval.py xxxx -''' +``` + ### VSR Images and question-answer pairs will be loaded during the evaluation. 
+``` +python run_eval.py xxxx +``` ### IconVQA From 45a97de8cc02e1b85bb92bf5a194a2569a0a5ada Mon Sep 17 00:00:00 2001 From: Deyao Zhu Date: Mon, 23 Oct 2023 21:49:33 +0300 Subject: [PATCH 13/22] remove unused pretrain set in the finetune readme --- dataset/README_MINIGPTv2_FINETUNE.md | 107 --------------------------- 1 file changed, 107 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 070fbf5..924ecca 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -84,110 +84,3 @@ detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yam ### Multi-task conversation ### Unnatural instruction - - - - - - - - - - - - - -### Pre-training datasets download: -We use the filtered synthetic captions prepared by BLIP. For more details about the dataset, please refer to [BLIP](https://github.com/salesforce/BLIP). - -It requires ~2.3T to store LAION and CC3M+CC12M+SBU datasets - -Image source | Filtered synthetic caption by ViT-L ---- | :---: -CC3M+CC12M+SBU | Download -LAION115M | Download - -This will download two json files -``` -ccs_synthetic_filtered_large.json -laion_synthetic_filtered_large.json -``` - -## prepare the data step-by-step - - -### setup the dataset folder and move the annotation file to the data storage folder -``` -export MINIGPT4_DATASET=/YOUR/PATH/FOR/LARGE/DATASET/ -mkdir ${MINIGPT4_DATASET}/cc_sbu -mkdir ${MINIGPT4_DATASET}/laion -mv ccs_synthetic_filtered_large.json ${MINIGPT4_DATASET}/cc_sbu -mv laion_synthetic_filtered_large.json ${MINIGPT4_DATASET}/laion -``` - -### Convert the scripts to data storate folder -``` -cp convert_cc_sbu.py ${MINIGPT4_DATASET}/cc_sbu -cp download_cc_sbu.sh ${MINIGPT4_DATASET}/cc_sbu -cp convert_laion.py ${MINIGPT4_DATASET}/laion -cp download_laion.sh ${MINIGPT4_DATASET}/laion -``` - - -### Convert the laion and cc_sbu annotation file format to be img2dataset format -``` -cd ${MINIGPT4_DATASET}/cc_sbu 
-python convert_cc_sbu.py - -cd ${MINIGPT4_DATASET}/laion -python convert_laion.py -``` - -### Download the datasets with img2dataset -``` -cd ${MINIGPT4_DATASET}/cc_sbu -sh download_cc_sbu.sh -cd ${MINIGPT4_DATASET}/laion -sh download_laion.sh -``` - - -The final dataset structure - -``` -. -├── ${MINIGPT4_DATASET} -│ ├── cc_sbu -│ ├── convert_cc_sbu.py -│ ├── download_cc_sbu.sh -│ ├── ccs_synthetic_filtered_large.json -│ ├── ccs_synthetic_filtered_large.tsv -│ └── cc_sbu_dataset -│ ├── 00000.tar -│ ├── 00000.parquet -│ ... -│ ├── laion -│ ├── convert_laion.py -│ ├── download_laion.sh -│ ├── laion_synthetic_filtered_large.json -│ ├── laion_synthetic_filtered_large.tsv -│ └── laion_dataset -│ ├── 00000.tar -│ ├── 00000.parquet -│ ... -... -``` - - -## Set up the dataset configuration files - -Then, set up the LAION dataset loading path in -[here](../minigpt4/configs/datasets/laion/defaults.yaml#L5) at Line 5 as -${MINIGPT4_DATASET}/laion/laion_dataset/{00000..10488}.tar - -and the Conceptual Captoin and SBU datasets loading path in -[here](../minigpt4/configs/datasets/cc_sbu/defaults.yaml#L5) at Line 5 as -${MINIGPT4_DATASET}/cc_sbu/cc_sbu_dataset/{00000..01255}.tar - - - From 0c27a75fd693df2eef6af871fcfe61ff6d637e87 Mon Sep 17 00:00:00 2001 From: Xiang Li <44761952+lx709@users.noreply.github.com> Date: Mon, 23 Oct 2023 21:51:36 +0300 Subject: [PATCH 14/22] Update Evaluation.md --- dataset/Evaluation.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/dataset/Evaluation.md b/dataset/Evaluation.md index e2b366c..34118f3 100644 --- a/dataset/Evaluation.md +++ b/dataset/Evaluation.md @@ -3,19 +3,20 @@ ### GQA Images and question-answer pairs will be loaded during the evaluation. -``` -python run_eval.py xxxx -``` + +``` python run_eval.py xxxx ``` ### VSR Images and question-answer pairs will be loaded during the evaluation. -``` -python run_eval.py xxxx -``` + +``` python run_eval.py xxxx ``` ### IconVQA ### VizWiz +1. 
Download [`test.json`](https://vizwiz.cs.colorado.edu/VizWiz_final/vqa_data/Annotations.zip) and extract [`test.zip`](https://vizwiz.cs.colorado.edu/VizWiz_final/images/test.zip) to `test`. Put them under `your_path/vizwiz`. +2. Single-GPU inference. +``` python run_eval.py xxxx ``` ### HM From 41c050de7661763df95aca8d7e6601481e33d058 Mon Sep 17 00:00:00 2001 From: ZhuDeyao Date: Mon, 23 Oct 2023 21:57:25 +0300 Subject: [PATCH 15/22] Update modeling_llama.py for transformers package compatibility --- minigpt4/models/modeling_llama.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/minigpt4/models/modeling_llama.py b/minigpt4/models/modeling_llama.py index 6d28020..5d59a53 100644 --- a/minigpt4/models/modeling_llama.py +++ b/minigpt4/models/modeling_llama.py @@ -75,7 +75,7 @@ class LlamaForCausalLM(LlamaForCausalLMOrig): ) hidden_states = outputs[0] - if self.config.pretraining_tp > 1: + if hasattr(self.config, 'pretraining_tp') and self.config.pretraining_tp > 1: lm_head_slices = self.lm_head.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0) logits = [F.linear(hidden_states, lm_head_slices[i]) for i in range(self.config.pretraining_tp)] logits = torch.cat(logits, dim=-1) From f955e62227b140135f2daa4c89acec3517c4931a Mon Sep 17 00:00:00 2001 From: XiaoqianShen <64844805+xiaoqian-shen@users.noreply.github.com> Date: Mon, 23 Oct 2023 19:22:16 +0000 Subject: [PATCH 16/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 924ecca..9eb3dcf 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -74,8 +74,18 @@ detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yam ### OKVQA +- [OK-VQA Input Questions](https://okvqa.allenai.org/static/data/OpenEnded_mscoco_train2014_questions.json.zip) +- [OK-VQA 
Annotations](https://okvqa.allenai.org/static/data/mscoco_train2014_annotations.json.zip) +- Images are from COCO + ### AOK-VQA +``` +export AOKVQA_DIR=YOUR_DATASET_PATH +mkdir -p ${AOKVQA_DIR} +curl -fsSL https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz | tar xvz -C ${AOKVQA_DIR} +``` + ### OCR-VQA - [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing), **we save all files as `.jpg`** From ac626b3d9a4eaf883c00919d4fe654083a844af7 Mon Sep 17 00:00:00 2001 From: XiaoqianShen <64844805+xiaoqian-shen@users.noreply.github.com> Date: Mon, 23 Oct 2023 19:27:01 +0000 Subject: [PATCH 17/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 1 + 1 file changed, 1 insertion(+) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 9eb3dcf..edbdf2f 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -76,6 +76,7 @@ detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yam - [OK-VQA Input Questions](https://okvqa.allenai.org/static/data/OpenEnded_mscoco_train2014_questions.json.zip) - [OK-VQA Annotations](https://okvqa.allenai.org/static/data/mscoco_train2014_annotations.json.zip) +- [okvqa_train](https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json) - Images are from COCO ### AOK-VQA From dc84c5e5c7c95f9e084f6fed12c3b2db56ee1fcd Mon Sep 17 00:00:00 2001 From: XiaoqianShen <64844805+xiaoqian-shen@users.noreply.github.com> Date: Mon, 23 Oct 2023 19:29:37 +0000 Subject: [PATCH 18/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index edbdf2f..0181f7d 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ 
-10,6 +10,9 @@ ### TextCaps +-[TextCaps_0.1_train](https://dl.fbaipublicfiles.com/textvqa/data/textcaps/TextCaps_0.1_train.json) +-[Images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) + ### RefCOCO, RefCOCO+, RefCOCOg Make sure you have the COCO 2014 images first. @@ -47,12 +50,6 @@ Similarly, set **ann_path** in all the following configs to the above folder (Lo - [minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml) - [minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml) - - -### Visual Genome - -### textcaps - ### LLaVA Makesure you have the COCO 2014 images first. From b93d40f23cfc69ba1c68d8fc78014099869e47f2 Mon Sep 17 00:00:00 2001 From: XiaoqianShen <64844805+xiaoqian-shen@users.noreply.github.com> Date: Mon, 23 Oct 2023 19:31:05 +0000 Subject: [PATCH 19/22] Update README_MINIGPTv2_FINETUNE.md --- dataset/README_MINIGPTv2_FINETUNE.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 0181f7d..f7af60d 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -10,8 +10,8 @@ ### TextCaps --[TextCaps_0.1_train](https://dl.fbaipublicfiles.com/textvqa/data/textcaps/TextCaps_0.1_train.json) --[Images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) +- [TextCaps_0.1_train](https://dl.fbaipublicfiles.com/textvqa/data/textcaps/TextCaps_0.1_train.json) +- [Images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) ### RefCOCO, RefCOCO+, RefCOCOg Make sure you have the COCO 2014 images first. 
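[Editor's note between patches: the RefCOCO/RefCOCO+/RefCOCOg annotation layout that the patches above describe is easy to get subtly wrong (for example, a missing `refs(umd).p`). Below is a minimal sketch of a layout checker, assuming only the directory tree shown in PATCH 05; the `missing_files` helper and the example root path are illustrative and not part of the repository.]

```python
import os

# Expected annotation files per subfolder, as listed in the RefCOCO
# section of README_MINIGPTv2_FINETUNE.md (PATCH 05 above).
EXPECTED = {
    "refcoco": ["instances.json", "refs(google).p", "refs(unc).p"],
    "refcoco+": ["instances.json", "refs(unc).p"],
    "refcocog": ["instances.json", "refs(google).p", "refs(umd).p"],
}

def missing_files(ann_root):
    """Return the expected annotation files that are absent under ann_root."""
    missing = []
    for subdir, names in EXPECTED.items():
        for name in names:
            path = os.path.join(ann_root, subdir, name)
            if not os.path.isfile(path):
                missing.append(path)
    return missing
```

Running `missing_files` on the unzipped annotation folder (the `Location_you_like` directory from PATCH 05) before filling in the **ann_path** entries of the config files should return an empty list; any paths it reports are files to re-download.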
From d8c301006df54d234d7b7e4abeb3f7e4dea19db8 Mon Sep 17 00:00:00 2001 From: junchen14 Date: Tue, 24 Oct 2023 06:31:52 +0300 Subject: [PATCH 20/22] add data paths --- dataset/README_MINIGPTv2_FINETUNE.md | 41 +++++++++++++++++++++++++- train_configs/minigpt_v2_finetune.yaml | 1 - 2 files changed, 40 insertions(+), 2 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index f7af60d..5bc978e 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -1,11 +1,45 @@ ## Download the COCO captions, RefCOCO, RefCOCO+. RefCOCOg, visual genome, textcaps, LLaVA, gqa, AOK-VQA, OK-VQA, OCR-VQA, filtered Flickr-30k, multi-task conversation, and Unnatural instruction datasets +Download the dataset + +Image source | Download path +--- | :---: +COCO 2014 images | images captions +Visual Genome | images part1 images part2 +TextCaps | images annotations +RefCOCO | annotations +RefCOCO+ | annotations +RefCOCOg | annotations +LLaVA | Compelex reasoning Detailed description Conversation +OKVQA | annotations +AOK-VQA | annotations +OCR-VQA | annotations +Filtered Flickr-30k | images: annotations: annotations +Multi-task conversation | annotations +Filtered unnatural instruction | annotations + + +. 
+├── ${MINIGPTv2_DATASET} +│ ├── coco_captions +│ ├── coco_images +| ├── annotations +| ├── coco_karpathy_train.json + + ### COCO captions -- [train2017](http://images.cocodataset.org/zips/train2017.zip) + + + +Download the COCO 2014 images +- [train2014](http://images.cocodataset.org/zips/train2014.zip) + + ### Visual genome + - [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip) ### TextCaps @@ -69,10 +103,14 @@ detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yam - [minigpt4/configs/datasets/llava/reason.yaml](../minigpt4/configs/datasets/llava/reason.yaml) + + + ### OKVQA - [OK-VQA Input Questions](https://okvqa.allenai.org/static/data/OpenEnded_mscoco_train2014_questions.json.zip) - [OK-VQA Annotations](https://okvqa.allenai.org/static/data/mscoco_train2014_annotations.json.zip) + - [okvqa_train](https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json) - Images are from COCO @@ -89,6 +127,7 @@ curl -fsSL https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0. 
### filtered Flickr-30k + ### Multi-task conversation ### Unnatural instruction diff --git a/train_configs/minigpt_v2_finetune.yaml b/train_configs/minigpt_v2_finetune.yaml index 4039ea6..89d595a 100644 --- a/train_configs/minigpt_v2_finetune.yaml +++ b/train_configs/minigpt_v2_finetune.yaml @@ -11,7 +11,6 @@ model: ckpt: "/ibex/project/c2090/minigpt4_ckpt/448_perforamnce_correct_v10_vg/20230925064/checkpoint_32.pth" use_grad_checkpoint: True chat_template: True - # wanda_log: False lora_r: 64 lora_alpha: 16 From 86908b631461c75ac3fdcd0f9d601e2e483c0af6 Mon Sep 17 00:00:00 2001 From: junchen14 Date: Tue, 24 Oct 2023 06:35:19 +0300 Subject: [PATCH 21/22] update readme --- dataset/README_MINIGPTv2_FINETUNE.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index 5bc978e..a526d28 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -5,28 +5,30 @@ Download the dataset Image source | Download path --- | :---: -COCO 2014 images | images captions +COCO 2014 images | images    captions Visual Genome | images part1 images part2 TextCaps | images annotations RefCOCO | annotations RefCOCO+ | annotations RefCOCOg | annotations -LLaVA | Compelex reasoning Detailed description Conversation +LLaVA | Compelex reasoning    Detailed description    Conversation OKVQA | annotations AOK-VQA | annotations OCR-VQA | annotations -Filtered Flickr-30k | images: annotations: annotations +Filtered Flickr-30k | annotations Multi-task conversation | annotations -Filtered unnatural instruction | annotations +Filtered unnatural instruction | annotations - -. 
+``` ├── ${MINIGPTv2_DATASET} │ ├── coco_captions │ ├── coco_images | ├── annotations | ├── coco_karpathy_train.json +``` + + ### COCO captions From 1d0c37d924e8a5127e13f09bca488e6495cef0f8 Mon Sep 17 00:00:00 2001 From: junchen14 Date: Tue, 24 Oct 2023 09:04:24 +0300 Subject: [PATCH 22/22] add datasets --- dataset/README_MINIGPTv2_FINETUNE.md | 212 +++++++++++++++++++++------ 1 file changed, 165 insertions(+), 47 deletions(-) diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md index a526d28..5da190b 100644 --- a/dataset/README_MINIGPTv2_FINETUNE.md +++ b/dataset/README_MINIGPTv2_FINETUNE.md @@ -6,6 +6,7 @@ Download the dataset Image source | Download path --- | :---: COCO 2014 images | images    captions +COCO VQA | vqa train    vqa val Visual Genome | images part1 images part2 TextCaps | images annotations RefCOCO | annotations @@ -16,8 +17,14 @@ OKVQA | annotations OCR-VQA | annotations Filtered Flickr-30k | annotations -Multi-task conversation | annotations -Filtered unnatural instruction | annotations +Multi-task conversation | annotations +Filtered unnatural instruction | annotations + + + +### COCO captions +Download the COCO 2014 images and captions + ``` ├── ${MINIGPTv2_DATASET} @@ -28,55 +35,79 @@ Filtered unnatural instruction |