diff --git a/.gitignore b/.gitignore
index 50d0c3e..7120f43 100755
--- a/.gitignore
+++ b/.gitignore
@@ -174,7 +174,8 @@ prompts/
output/
ckpt/
divide_vqa.py
+jobs/
-
+*.slurm
slurm*
sbatch_generate*
\ No newline at end of file
diff --git a/dataset/Evaluation.md b/dataset/Evaluation.md
new file mode 100644
index 0000000..34118f3
--- /dev/null
+++ b/dataset/Evaluation.md
@@ -0,0 +1,26 @@
+
+### OKVQA
+
+### GQA
+Images and question-answer pairs will be loaded during the evaluation.
+
+```
+python run_eval.py xxxx
+```
+
+### VSR
+Images and question-answer pairs will be loaded during the evaluation.
+
+```
+python run_eval.py xxxx
+```
+
+### IconVQA
+
+### VizWiz
+1. Download [`test.json`](https://vizwiz.cs.colorado.edu/VizWiz_final/vqa_data/Annotations.zip) and extract [`test.zip`](https://vizwiz.cs.colorado.edu/VizWiz_final/images/test.zip) into a `test` folder, then put both under `your_path/vizwiz`.
+2. Single-GPU inference.
+```
+python run_eval.py xxxx
+```
+
+### HM
+
+
+
+
+
diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md
index 2d5c825..5da190b 100644
--- a/dataset/README_MINIGPTv2_FINETUNE.md
+++ b/dataset/README_MINIGPTv2_FINETUNE.md
@@ -1,133 +1,253 @@
## Download the COCO captions, RefCOCO, RefCOCO+. RefCOCOg, visual genome, textcaps, LLaVA, gqa, AOK-VQA, OK-VQA, OCR-VQA, filtered Flickr-30k, multi-task conversation, and Unnatural instruction datasets
+
+Download the datasets listed in the table below
+
+Image source | Download path
+--- | :---:
+COCO 2014 images | images captions
+COCO VQA | vqa train vqa val
+Visual Genome | images part1 images part2
+TextCaps | images annotations
+RefCOCO | annotations
+RefCOCO+ | annotations
+RefCOCOg | annotations
+LLaVA | Complex reasoning Detailed description Conversation
+OKVQA | annotations
+AOK-VQA | annotations
+OCR-VQA | annotations
+Filtered Flickr-30k | annotations
+Multi-task conversation | annotations
+Filtered unnatural instruction | annotations
+
+
+
### COCO captions
+Download the COCO 2014 images and captions
-### RefCOCO, RefCOCO+, RefCOCOg
+```
+├── ${MINIGPTv2_DATASET}
+│   ├── coco_captions
+│       ├── coco_images
+│       ├── annotations
+│           ├── coco_karpathy_train.json
+
+```
+
+Set **image_path** to the COCO 2014 image folder.
+Similarly, set **ann_path** to the coco_karpathy_train.json path
+- [minigpt4/configs/datasets/coco/caption.yaml](../minigpt4/configs/datasets/coco/caption.yaml)
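+
+For orientation, the relevant entries in that config end up looking roughly like the sketch below. Only **image_path** and **ann_path** are documented here; the surrounding keys and the example paths are assumptions, so keep the structure of the linked caption.yaml and only edit the two path values.
+
+```
+# Hypothetical excerpt of caption.yaml -- only image_path and ann_path are
+# taken from the instructions above; the key nesting is an assumption.
+datasets:
+  coco_caption:
+    data_type: images
+    build_info:
+      image_path: /path/to/MINIGPTv2_DATASET/coco_captions/coco_images
+      ann_path: /path/to/MINIGPTv2_DATASET/coco_captions/annotations/coco_karpathy_train.json
+```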
+
+### COCO VQA
+Download the VQA v2 train and validation json files
+
+```
+├── ${MINIGPTv2_DATASET}
+│   ├── vqav2
+│       ├── vqa_train.json
+│       ├── vqa_val.json
+```
+
+Set **image_path** to the COCO 2014 image folder.
+Similarly, set **ann_path** to the vqa_train.json and vqa_val.json path
+- [minigpt4/configs/datasets/coco/defaults_vqa.yaml](../minigpt4/configs/datasets/coco/defaults_vqa.yaml)
+
### Visual genome
+Download the Visual Genome images and annotation files
+
+```
+├── ${MINIGPTv2_DATASET}
+│   ├── visual_genome
+│       ├── VG_100K
+│       ├── VG_100K_2
+│       ├── region_descriptions.json
+```
+
+Set **image_path** to the visual_genome folder.
+Similarly, set **ann_path** to the visual_genome folder, which contains region_descriptions.json.
+
+- [minigpt4/configs/datasets/vg/ref.yaml](../minigpt4/configs/datasets/vg/ref.yaml)
+
+
+### TextCaps
+Download the TextCaps images and annotation files
+
+```
+├── ${MINIGPTv2_DATASET}
+│   ├── TextCaps
+│       ├── train_images
+│       ├── TextCaps_0.1_train.json
+```
+
+Set **image_path** to the TextCaps train_images folder.
+Similarly, set **ann_path** to the TextCaps_0.1_train.json path
+
+- [minigpt4/configs/datasets/textcaps/caption.yaml](../minigpt4/configs/datasets/textcaps/caption.yaml)
+
+### RefCOCO, RefCOCO+, RefCOCOg
+Download the RefCOCO, RefCOCO+, RefCOCOg annotation files
+
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── refcoco_annotations
+│       ├── refcoco
+│           ├── instances.json
+│           ├── refs(google).p
+│           ├── refs(unc).p
+│       ├── refcoco+
+│           ├── instances.json
+│           ├── refs(unc).p
+│       ├── refcocog
+│           ├── instances.json
+│           ├── refs(google).p
+│           ├── refs(umd).p
+```
+
+
+Set **image_path** to the COCO 2014 image folder.
+Similarly, set **ann_path** in all of the following configs to the folder that contains refcoco, refcoco+, and refcocog (refcoco_annotations in the tree above).
+
+- [minigpt4/configs/datasets/coco_bbox/refcoco.yaml](../minigpt4/configs/datasets/coco_bbox/refcoco.yaml)
+- [minigpt4/configs/datasets/coco_bbox/refcocog.yaml](../minigpt4/configs/datasets/coco_bbox/refcocog.yaml)
+- [minigpt4/configs/datasets/coco_bbox/refcocop.yaml](../minigpt4/configs/datasets/coco_bbox/refcocop.yaml)
+- [minigpt4/configs/datasets/coco_bbox/invrefcoco.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcoco.yaml)
+- [minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml)
+- [minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml)
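+
+As a concrete illustration, each of these configs then points at the shared annotation folder rather than at a single json file. The sketch below assumes the key layout (only **image_path** and **ann_path** come from the instructions above); mirror the linked yaml files rather than this snippet.
+
+```
+# Hypothetical excerpt of refcoco.yaml -- ann_path is the folder holding
+# refcoco, refcoco+, and refcocog; the other keys are assumptions.
+datasets:
+  refcoco:
+    data_type: images
+    build_info:
+      image_path: /path/to/coco_2014_images
+      ann_path: /path/to/MINIGPTv2_DATASET/refcoco_annotations
+```
+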
-### textcaps
### LLaVA
-### gqa
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── llava
+│       ├── conversation_58k.json
+│       ├── detail_23k.json
+│       ├── complex_reasoning_77k.json
+```
+
+Set **image_path** to the COCO 2014 image folder.
+Similarly, set **ann_path** to the location of the previously downloaded conversation_58k.json,
+detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yaml, and reason.yaml, respectively.
+
+
+- [minigpt4/configs/datasets/llava/conversation.yaml](../minigpt4/configs/datasets/llava/conversation.yaml)
+- [minigpt4/configs/datasets/llava/detail.yaml](../minigpt4/configs/datasets/llava/detail.yaml)
+- [minigpt4/configs/datasets/llava/reason.yaml](../minigpt4/configs/datasets/llava/reason.yaml)
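+
+The mapping is one annotation file per config. As a hedged sketch (key layout and dataset names are assumptions), conversation.yaml would carry conversation_58k.json, while detail.yaml and reason.yaml point at detail_23k.json and complex_reasoning_77k.json in the same way:
+
+```
+# Hypothetical excerpt of conversation.yaml; detail.yaml and reason.yaml
+# follow the same pattern with detail_23k.json and complex_reasoning_77k.json.
+datasets:
+  llava_conversation:
+    data_type: images
+    build_info:
+      image_path: /path/to/coco_2014_images
+      ann_path: /path/to/MINIGPTv2_DATASET/llava/conversation_58k.json
+```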
+
### OKVQA
+
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── OKVQA
+│       ├── okvqa_train.json
+```
+
+Set **image_path** to the COCO 2014 image folder.
+Similarly, set **ann_path** to the location of the OKVQA dataset
+- [minigpt4/configs/datasets/okvqa/defaults.yaml](../minigpt4/configs/datasets/okvqa/defaults.yaml)
+
+
+The original OK-VQA questions and annotations are available here:
+
+- [OK-VQA Input Questions](https://okvqa.allenai.org/static/data/OpenEnded_mscoco_train2014_questions.json.zip)
+- [OK-VQA Annotations](https://okvqa.allenai.org/static/data/mscoco_train2014_annotations.json.zip)
+
+
### AOK-VQA
+Download the AOK-VQA annotation dataset
+
+```
+export AOKVQA_DIR=YOUR_DATASET_PATH
+mkdir -p ${AOKVQA_DIR}
+curl -fsSL https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz | tar xvz -C ${AOKVQA_DIR}
+```
+
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── AOKVQA
+│       ├── aokvqa_v1p0_train.json
+```
+
+
+Set **image_path** to the COCO 2014 image folder.
+Similarly, set **ann_path** to the location of the AOKVQA dataset
+- [minigpt4/configs/datasets/aokvqa/defaults.yaml](../minigpt4/configs/datasets/aokvqa/defaults.yaml)
+
+
### OCR-VQA
+Download the OCR-VQA annotation files
+
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── OCR-VQA
+│       ├── images
+│       ├── dataset.json
+```
+
+Set **image_path** as the OCR-VQA image folder.
+Similarly, set **ann_path** to the OCR-VQA dataset.json
+- [minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml](../minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml)
+
+
### filtered Flickr-30k
+Download filtered Flickr-30k images and annotation files
+
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── filtered_flickr
+│       ├── images
+│       ├── captiontobbox.json
+│       ├── groundedcaption.json
+│       ├── phrasetobbox.json
+```
+
+Set **image_path** as the Flickr-30k images folder.
+Similarly, set **ann_path** to groundedcaption.json, captiontobbox.json, and phrasetobbox.json for the
+grounded image caption, caption-to-bbox, and phrase-to-bbox datasets, respectively.
+
+- [minigpt4/configs/datasets/flickr/default.yaml](../minigpt4/configs/datasets/flickr/default.yaml)
+- [minigpt4/configs/datasets/flickr/caption_to_phrase.yaml](../minigpt4/configs/datasets/flickr/caption_to_phrase.yaml)
+- [minigpt4/configs/datasets/flickr/object_to_phrase.yaml](../minigpt4/configs/datasets/flickr/object_to_phrase.yaml)
+
### Multi-task conversation
+Download the multi-task conversation dataset
+
+```
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│   ├── multitask_conversation
+│       ├── multitask_conversation.json
+```
+
+Set **image_path** as the COCO 2014 images folder.
+Similarly, set **ann_path** to the multitask_conversation.json file path
+
+- [minigpt4/configs/datasets/multitask_conversation/default.yaml](../minigpt4/configs/datasets/multitask_conversation/default.yaml)
### Unnatural instruction
-
-
-
-
-
-
-
-
-
-
-
-
-
-### Pre-training datasets download:
-We use the filtered synthetic captions prepared by BLIP. For more details about the dataset, please refer to [BLIP](https://github.com/salesforce/BLIP).
-
-It requires ~2.3T to store LAION and CC3M+CC12M+SBU datasets
-
-Image source | Filtered synthetic caption by ViT-L
---- | :---:
-CC3M+CC12M+SBU | Download
-LAION115M | Download
-
-This will download two json files
-```
-ccs_synthetic_filtered_large.json
-laion_synthetic_filtered_large.json
-```
-
-## prepare the data step-by-step
-
-
-### setup the dataset folder and move the annotation file to the data storage folder
-```
-export MINIGPT4_DATASET=/YOUR/PATH/FOR/LARGE/DATASET/
-mkdir ${MINIGPT4_DATASET}/cc_sbu
-mkdir ${MINIGPT4_DATASET}/laion
-mv ccs_synthetic_filtered_large.json ${MINIGPT4_DATASET}/cc_sbu
-mv laion_synthetic_filtered_large.json ${MINIGPT4_DATASET}/laion
-```
-
-### Convert the scripts to data storate folder
-```
-cp convert_cc_sbu.py ${MINIGPT4_DATASET}/cc_sbu
-cp download_cc_sbu.sh ${MINIGPT4_DATASET}/cc_sbu
-cp convert_laion.py ${MINIGPT4_DATASET}/laion
-cp download_laion.sh ${MINIGPT4_DATASET}/laion
-```
-
-
-### Convert the laion and cc_sbu annotation file format to be img2dataset format
-```
-cd ${MINIGPT4_DATASET}/cc_sbu
-python convert_cc_sbu.py
-
-cd ${MINIGPT4_DATASET}/laion
-python convert_laion.py
-```
-
-### Download the datasets with img2dataset
-```
-cd ${MINIGPT4_DATASET}/cc_sbu
-sh download_cc_sbu.sh
-cd ${MINIGPT4_DATASET}/laion
-sh download_laion.sh
-```
-
-
-The final dataset structure
+Download the filtered unnatural instruction annotation file (we removed the very long sentences from the original Unnatural Instructions dataset)
```
-.
-├── ${MINIGPT4_DATASET}
-│ ├── cc_sbu
-│ ├── convert_cc_sbu.py
-│ ├── download_cc_sbu.sh
-│ ├── ccs_synthetic_filtered_large.json
-│ ├── ccs_synthetic_filtered_large.tsv
-│ └── cc_sbu_dataset
-│ ├── 00000.tar
-│ ├── 00000.parquet
-│ ...
-│ ├── laion
-│ ├── convert_laion.py
-│ ├── download_laion.sh
-│ ├── laion_synthetic_filtered_large.json
-│ ├── laion_synthetic_filtered_large.tsv
-│ └── laion_dataset
-│ ├── 00000.tar
-│ ├── 00000.parquet
-│ ...
-...
+Location_you_like
+├── ${MINIGPTv2_DATASET}
+│ ├── unnatural-instructions
+│ ├── filtered_unnatural_instruction.json
```
+There is no image path.
+Set **ann_path** to the filtered_unnatural_instruction.json file path
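+
+Since this is a text-only dataset, a minimal sketch of the config carries just the annotation path (the key layout and data_type are assumptions; follow the unnatural_instruction.yaml linked below):
+
+```
+# Hypothetical excerpt of unnatural_instruction.yaml -- there is no image_path;
+# key nesting and data_type are assumptions.
+datasets:
+  unnatural_instruction:
+    data_type: text
+    build_info:
+      ann_path: /path/to/filtered_unnatural_instruction.json
+```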
-## Set up the dataset configuration files
-
-Then, set up the LAION dataset loading path in
-[here](../minigpt4/configs/datasets/laion/defaults.yaml#L5) at Line 5 as
-${MINIGPT4_DATASET}/laion/laion_dataset/{00000..10488}.tar
-
-and the Conceptual Captoin and SBU datasets loading path in
-[here](../minigpt4/configs/datasets/cc_sbu/defaults.yaml#L5) at Line 5 as
-${MINIGPT4_DATASET}/cc_sbu/cc_sbu_dataset/{00000..01255}.tar
-
-
-
+- [minigpt4/configs/datasets/nlp/unnatural_instruction.yaml](../minigpt4/configs/datasets/nlp/unnatural_instruction.yaml)
\ No newline at end of file
diff --git a/minigpt4/models/modeling_llama.py b/minigpt4/models/modeling_llama.py
index 6d28020..5d59a53 100644
--- a/minigpt4/models/modeling_llama.py
+++ b/minigpt4/models/modeling_llama.py
@@ -75,7 +75,7 @@ class LlamaForCausalLM(LlamaForCausalLMOrig):
)
hidden_states = outputs[0]
- if self.config.pretraining_tp > 1:
+ if hasattr(self.config, 'pretraining_tp') and self.config.pretraining_tp > 1:
lm_head_slices = self.lm_head.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0)
logits = [F.linear(hidden_states, lm_head_slices[i]) for i in range(self.config.pretraining_tp)]
logits = torch.cat(logits, dim=-1)
diff --git a/train_configs/minigpt_v2_finetune.yaml b/train_configs/minigpt_v2_finetune.yaml
index 4039ea6..89d595a 100644
--- a/train_configs/minigpt_v2_finetune.yaml
+++ b/train_configs/minigpt_v2_finetune.yaml
@@ -11,7 +11,6 @@ model:
ckpt: "/ibex/project/c2090/minigpt4_ckpt/448_perforamnce_correct_v10_vg/20230925064/checkpoint_32.pth"
use_grad_checkpoint: True
chat_template: True
- # wanda_log: False
lora_r: 64
lora_alpha: 16