add datasets

junchen14 2023-10-24 09:04:24 +03:00
parent 86908b6314
commit 1d0c37d924


Download the dataset
Image source | Download path
--- | :---:
COCO 2014 images | <a href="http://images.cocodataset.org/zips/train2014.zip">images</a> &nbsp;&nbsp; <a href="https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json">captions</a>
COCO VQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/vqav2/vqa_train.json">vqa train</a> &nbsp;&nbsp; <a href="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/vqav2/vqa_val.json">vqa val</a>
Visual Genome | <a href="https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip">images part1</a> <a href="https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip">images part2</a>
TextCaps | <a href="https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip">images</a> <a href="https://dl.fbaipublicfiles.com/textvqa/data/textcaps/TextCaps_0.1_train.json">annotations</a>
RefCOCO | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip">annotations</a>
OKVQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json">annotations</a>
AOK-VQA | <a href="https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz">annotations</a>
OCR-VQA | <a href="https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing">annotations</a>
Filtered Flickr-30k | <a href="https://drive.google.com/drive/folders/19c_ggBI77AvdtYlPbuI0ZpnPz73T5teX?usp=sharing">annotations</a>
Multi-task conversation | <a href="https://drive.google.com/file/d/11HHqB2c29hbSk-WLxdta-nG8UCUrcCN1/view?usp=sharing">annotations</a>
Filtered unnatural instruction | <a href="https://drive.google.com/file/d/1lXNnBcb5WU-sc8Fe2T2N8J0NRw4sBLev/view?usp=sharing">annotations</a>
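
Most of the direct-download entries above can be fetched from the command line; the Google Drive entries are easiest to grab through a browser (or with `gdown`, as sketched in the sections below). A minimal setup sketch, assuming a bash shell; the dataset root path is hypothetical and entirely up to you:

```
# hypothetical location for all finetuning data; adjust to your storage
export MINIGPTv2_DATASET=/path/to/minigptv2_dataset
mkdir -p ${MINIGPTv2_DATASET}
```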
### COCO captions
Download the COCO 2014 images and captions
```
├── ${MINIGPTv2_DATASET}
```
Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the coco_karpathy_train.json path
- [minigpt4/configs/datasets/coco/caption.yaml](../minigpt4/configs/datasets/coco/caption.yaml)
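
As a rough example, the COCO 2014 images and the Karpathy-split captions from the table above could be fetched like this. This is only a sketch assuming `wget` and `unzip` are available; the `coco_captions` subfolder name is an illustration, not a requirement — put the files wherever you point **image_path** and **ann_path**.

```
mkdir -p ${MINIGPTv2_DATASET}/coco_captions
cd ${MINIGPTv2_DATASET}/coco_captions
wget http://images.cocodataset.org/zips/train2014.zip
unzip train2014.zip        # creates train2014/, used as image_path
wget https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json   # used as ann_path
```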
### COCO VQA
Download the vqa v2 train and validation json files
```
├── ${MINIGPTv2_DATASET}
│   ├── vqav2
│       ├── vqa_train.json
│       ├── vqa_val.json
```
Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the vqa_train.json and vqa_val.json path
- [minigpt4/configs/datasets/coco/defaults_vqa.yaml](../minigpt4/configs/datasets/coco/defaults_vqa.yaml)
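
A sketch of placing the VQA v2 json files, assuming `wget`; the paths mirror the layout above:

```
mkdir -p ${MINIGPTv2_DATASET}/vqav2
wget -P ${MINIGPTv2_DATASET}/vqav2 https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/vqav2/vqa_train.json
wget -P ${MINIGPTv2_DATASET}/vqav2 https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/vqav2/vqa_val.json
```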
### Visual genome
Download the Visual Genome images and annotation files
```
├── ${MINIGPTv2_DATASET}
│   ├── visual_genome
│       ├── VG_100K
│       ├── VG_100K_2
│       ├── region_descriptions.json
```
Set **image_path** to the visual_genome folder.
Similarly, set **ann_path** to the visual_genome folder.
- [minigpt4/configs/datasets/vg/ref.yaml](../minigpt4/configs/datasets/vg/ref.yaml)
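
A sketch for the Visual Genome image zips, assuming `wget` and `unzip`; `region_descriptions.json` additionally has to be obtained from the Visual Genome website and placed in the same folder:

```
mkdir -p ${MINIGPTv2_DATASET}/visual_genome
cd ${MINIGPTv2_DATASET}/visual_genome
wget https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip     # should unpack to VG_100K/
wget https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip    # should unpack to VG_100K_2/
unzip images.zip && unzip images2.zip
# place region_descriptions.json here as well
```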
### TextCaps
Download the TextCaps images and annotation files
```
├── ${MINIGPTv2_DATASET}
│   ├── TextCaps
│       ├── train_images
│       ├── TextCaps_0.1_train.json
```
Set **image_path** to the TextCaps train_images folder.
Similarly, set **ann_path** to the TextCaps_0.1_train.json path
- [minigpt4/configs/datasets/textcaps/caption.yaml](../minigpt4/configs/datasets/textcaps/caption.yaml)
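
A sketch for TextCaps, assuming `wget` and `unzip` and using the links from the table above; the images archive should produce a `train_images/` folder:

```
mkdir -p ${MINIGPTv2_DATASET}/TextCaps
cd ${MINIGPTv2_DATASET}/TextCaps
wget https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip
unzip train_val_images.zip        # should produce train_images/
wget https://dl.fbaipublicfiles.com/textvqa/data/textcaps/TextCaps_0.1_train.json
```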
### RefCOCO, RefCOCO+, RefCOCOg
Download the RefCOCO, RefCOCO+ and RefCOCOg annotation files
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── refcoco_annotations
│       ├── refcoco
│       │   ├── instances.json
│       │   ├── refs(google).p
│       │   ├── refs(unc).p
│       ├── refcoco+
│       │   ├── instances.json
│       │   ├── refs(unc).p
│       ├── refcocog
│           ├── instances.json
│           ├── refs(google).p
│           ├── refs(umd).p
```
Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** in all the following configs to the refcoco_annotations folder that contains refcoco, refcoco+, and refcocog.
- [minigpt4/configs/datasets/coco_bbox/refcoco.yaml](../minigpt4/configs/datasets/coco_bbox/refcoco.yaml)
- [minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocog.yaml)
- [minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml](../minigpt4/configs/datasets/coco_bbox/invrefcocop.yaml)
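
The three annotation zips can be fetched and unpacked into `refcoco_annotations` roughly like this. A sketch assuming `wget` and `unzip`; each archive should unpack into its own refcoco* subfolder, matching the tree above:

```
mkdir -p ${MINIGPTv2_DATASET}/refcoco_annotations
cd ${MINIGPTv2_DATASET}/refcoco_annotations
for name in refcoco refcoco+ refcocog; do
    wget https://bvisionweb1.cs.unc.edu/licheng/referit/data/${name}.zip
    unzip ${name}.zip
done
```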
### LLaVA
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── llava
│       ├── conversation_58k.json
│       ├── detail_23k.json
│       ├── complex_reasoning_77k.json
```
Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the location of the previously downloaded conversation_58k.json,
detail_23k.json, and complex_reasoning_77k.json in conversation.yaml, detail.yaml, and reason.yaml, respectively.
- [minigpt4/configs/datasets/llava/reason.yaml](../minigpt4/configs/datasets/llava/reason.yaml)
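
The three LLaVA instruction json files are hosted with the LLaVA-Instruct-150K dataset on Hugging Face; a sketch of downloading them, assuming `wget`:

```
mkdir -p ${MINIGPTv2_DATASET}/llava
cd ${MINIGPTv2_DATASET}/llava
wget https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/conversation_58k.json
wget https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/detail_23k.json
wget https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/complex_reasoning_77k.json
```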
### OKVQA
Download the OKVQA annotation files
- [OK-VQA Input Questions](https://okvqa.allenai.org/static/data/OpenEnded_mscoco_train2014_questions.json.zip)
- [OK-VQA Annotations](https://okvqa.allenai.org/static/data/mscoco_train2014_annotations.json.zip)
- [okvqa_train](https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json)
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── OKVQA
│       ├── okvqa_train.json
```
Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the location of the OKVQA dataset
- [minigpt4/configs/datasets/okvqa/defaults.yaml](../minigpt4/configs/datasets/okvqa/defaults.yaml)
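
A sketch of placing the OKVQA files, assuming `wget`; the tree above only lists okvqa_train.json, and the raw question/annotation zips linked above can be unpacked alongside it if you need them:

```
mkdir -p ${MINIGPTv2_DATASET}/OKVQA
cd ${MINIGPTv2_DATASET}/OKVQA
wget https://storage.googleapis.com/sfr-vision-language-research/LAVIS/datasets/okvqa/okvqa_train.json
```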
### AOK-VQA
Download the AOK-VQA annotation dataset
```
export AOKVQA_DIR=YOUR_DATASET_PATH
mkdir -p ${AOKVQA_DIR}
curl -fsSL https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz | tar xvz -C ${AOKVQA_DIR}
```
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── AOKVQA
│       ├── aokvqa_v1p0_train.json
```
Set **image_path** to the COCO 2014 image folder.
Similarly, set **ann_path** to the location of the AOKVQA dataset
- [minigpt4/configs/datasets/aokvqa/defaults.yaml](../minigpt4/configs/datasets/aokvqa/defaults.yaml)
### OCR-VQA
Download the OCR-VQA annotation files
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── OCR-VQA
│       ├── images
│       ├── dataset.json
```
Set **image_path** as the OCR-VQA image folder.
Similarly, set **ann_path** to the OCR-VQA dataset.json path
- [minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml](../minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml)
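
The OCR-VQA annotations live in a Google Drive folder, so plain `wget` will not work; one option is the `gdown` utility. A sketch, assuming `pip install gdown` and that the shared folder stays publicly readable; otherwise download through a browser and place `images/` and `dataset.json` under `${MINIGPTv2_DATASET}/OCR-VQA`:

```
pip install gdown
mkdir -p ${MINIGPTv2_DATASET}/OCR-VQA
gdown --folder "https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_" -O ${MINIGPTv2_DATASET}/OCR-VQA
```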
### Filtered Flickr-30k
Download filtered Flickr-30k images and annotation files
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── filtered_flickr
│       ├── images
│       ├── captiontobbox.json
│       ├── groundedcaption.json
│       ├── phrasetobbox.json
```
Set **image_path** as the Flickr-30k images folder.
Similarly, set **ann_path** to the groundedcaption.json, captiontobbox.json and phrasetobbox.json for the
grounded image caption, caption to bbox, and phrase to bbox datasets.
- [minigpt4/configs/datasets/flickr/default.yaml](../minigpt4/configs/datasets/flickr/default.yaml)
- [minigpt4/configs/datasets/flickr/caption_to_phrase.yaml](../minigpt4/configs/datasets/flickr/caption_to_phrase.yaml)
- [minigpt4/configs/datasets/flickr/object_to_phrase.yaml](../minigpt4/configs/datasets/flickr/object_to_phrase.yaml)
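
After downloading from the Google Drive folder above, a quick sanity check that the three annotation files ended up in the right place (a sketch; it only assumes the layout in the tree above):

```
for f in captiontobbox.json groundedcaption.json phrasetobbox.json; do
    test -s "${MINIGPTv2_DATASET}/filtered_flickr/${f}" || echo "missing: ${f}"
done
```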
### Multi-task conversation
Download the multi-task conversation dataset
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── multitask_conversation
│       ├── multitask_conversation.json
```
Set **image_path** as the COCO 2014 images folder.
Similarly, set **ann_path** to the multitask_conversation.json file path
- [minigpt4/configs/datasets/multitask_conversation/default.yaml](../minigpt4/configs/datasets/multitask_conversation/default.yaml)
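
Since the file sits on Google Drive, a sketch using `gdown` with the file id taken from the share link in the table above; the filtered unnatural instruction file in the next section can be fetched the same way with its own file id:

```
mkdir -p ${MINIGPTv2_DATASET}/multitask_conversation
gdown 11HHqB2c29hbSk-WLxdta-nG8UCUrcCN1 -O ${MINIGPTv2_DATASET}/multitask_conversation/multitask_conversation.json
```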
### Unnatural instruction
Download the filtered unnatural instruction annotation file (we removed the very long sentences from the original unnatural instructions dataset)
```
Location_you_like
├── ${MINIGPTv2_DATASET}
│   ├── unnatural-instructions
│       ├── filtered_unnatural_instruction.json
```
There is no image path for this dataset.
Set **ann_path** to the filtered_unnatural_instruction.json file path
- [minigpt4/configs/datasets/nlp/unnatural_instruction.yaml](../minigpt4/configs/datasets/nlp/unnatural_instruction.yaml)
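
Once everything is in place, a rough end-to-end check that the expected folders exist under `${MINIGPTv2_DATASET}`. The folder names follow the trees above; adjust the list if you used different locations:

```
for d in vqav2 visual_genome TextCaps refcoco_annotations llava OKVQA AOKVQA OCR-VQA filtered_flickr multitask_conversation unnatural-instructions; do
    test -d "${MINIGPTv2_DATASET}/${d}" || echo "missing folder: ${d}"
done
```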