MiniGPT-4/dataset/README_MINIGPTv2_FINETUNE.md


Download the COCO captions, RefCOCO, RefCOCO+, RefCOCOg, Visual Genome, TextCaps, LLaVA, GQA, AOK-VQA, OK-VQA, OCR-VQA, filtered Flickr-30k, multi-task conversation, and Unnatural Instruction datasets.

### COCO captions

### Visual genome

### TextCaps

### RefCOCO, RefCOCO+, RefCOCOg

Make sure you have the COCO 2014 images first.

Then, download the RefCOCO, RefCOCO+, and RefCOCOg annotation files from the following links.

Unzip these files to a location you like. It should have a structure like the following:

```
Location_you_like
├── refcoco
│   ├── instances.json
│   ├── refs(google).p
│   └── refs(unc).p
├── refcoco+
│   ├── instances.json
│   └── refs(unc).p
└── refcocog
    ├── instances.json
    ├── refs(google).p
    └── refs(umd).p
```
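As a quick sanity check, a small helper (hypothetical, not part of the repo) can confirm the unzipped layout matches the tree above before you point the configs at it:

```python
from pathlib import Path

# Expected files per subfolder, taken from the tree above.
EXPECTED = {
    "refcoco": ["instances.json", "refs(google).p", "refs(unc).p"],
    "refcoco+": ["instances.json", "refs(unc).p"],
    "refcocog": ["instances.json", "refs(google).p", "refs(umd).p"],
}

def check_refcoco_layout(ann_path):
    """Return the list of expected annotation files missing under ann_path."""
    root = Path(ann_path)
    return [
        f"{subdir}/{name}"
        for subdir, names in EXPECTED.items()
        for name in names
        if not (root / subdir / name).is_file()
    ]
```

An empty return value means the folder is ready to be used as the annotation root.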

Set `image_path` in all the following dataset configuration files to the COCO 2014 image folder. Similarly, set `ann_path` in all the following configs to the folder above (`Location_you_like`) that contains `refcoco`, `refcoco+`, and `refcocog`.
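For example, a RefCOCO config would end up with entries roughly like this (a sketch only — the key names `image_path` and `ann_path` are as above, but match the surrounding layout of the actual config file in your checkout):

```yaml
# Sketch only -- adjust to the real config file structure.
image_path: /path/to/coco/images      # COCO 2014 image folder
ann_path: /path/to/Location_you_like  # folder containing refcoco, refcoco+, refcocog
```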

### Visual Genome

### TextCaps

### LLaVA

Make sure you have the COCO 2014 images first.

Download the LLaVA annotation files from the following link to a location you like.

Set `image_path` in all the following dataset configuration files to the COCO 2014 image folder. Similarly, set `ann_path` to the location of the previously downloaded `conversation_58k.json`, `detail_23k.json`, and `complex_reasoning_77k.json` in `conversation.yaml`, `detail.yaml`, and `reason.yaml`, respectively.
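Concretely, each of the three configs pairs the shared COCO image folder with its own annotation file, e.g. for `conversation.yaml` (a sketch — match the real key layout in the repo):

```yaml
# conversation.yaml (sketch); detail.yaml and reason.yaml point at
# detail_23k.json and complex_reasoning_77k.json instead.
image_path: /path/to/coco/images
ann_path: /path/to/conversation_58k.json
```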

### OKVQA

### AOK-VQA

### OCR-VQA

### Filtered Flickr-30k

### Multi-task conversation

### Unnatural instruction

## Pre-training datasets download

We use the filtered synthetic captions prepared by BLIP. For more details about the dataset, please refer to BLIP.

It requires ~2.3 TB to store the LAION and CC3M+CC12M+SBU datasets.

| Image source | Filtered synthetic caption by ViT-L |
| --- | --- |
| CC3M+CC12M+SBU | Download |
| LAION115M | Download |

This will download two JSON files:

- `ccs_synthetic_filtered_large.json`
- `laion_synthetic_filtered_large.json`

### Prepare the data step-by-step

### Set up the dataset folder and move the annotation files to the data storage folder

```
export MINIGPT4_DATASET=/YOUR/PATH/FOR/LARGE/DATASET/
mkdir -p ${MINIGPT4_DATASET}/cc_sbu
mkdir -p ${MINIGPT4_DATASET}/laion
mv ccs_synthetic_filtered_large.json ${MINIGPT4_DATASET}/cc_sbu
mv laion_synthetic_filtered_large.json ${MINIGPT4_DATASET}/laion
```

### Copy the conversion scripts to the data storage folder

```
cp convert_cc_sbu.py ${MINIGPT4_DATASET}/cc_sbu
cp download_cc_sbu.sh ${MINIGPT4_DATASET}/cc_sbu
cp convert_laion.py ${MINIGPT4_DATASET}/laion
cp download_laion.sh ${MINIGPT4_DATASET}/laion
```

### Convert the laion and cc_sbu annotation files to the img2dataset format

```
cd ${MINIGPT4_DATASET}/cc_sbu
python convert_cc_sbu.py

cd ${MINIGPT4_DATASET}/laion
python convert_laion.py
```
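For reference, the conversion amounts to flattening each caption record into the two-column TSV that img2dataset reads. A minimal sketch, assuming the BLIP-style annotation format of a JSON list of records with `url` and `caption` keys (use the shipped `convert_*.py` scripts for the real thing):

```python
import csv
import json

def json_to_img2dataset_tsv(json_path, tsv_path):
    """Rewrite a caption file (assumed to be a JSON list of records with
    "url" and "caption" keys) into a url/caption TSV for img2dataset."""
    with open(json_path) as f:
        records = json.load(f)
    with open(tsv_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["url", "caption"])  # header row img2dataset can use
        for r in records:
            writer.writerow([r["url"], r["caption"]])
```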

### Download the datasets with img2dataset

```
cd ${MINIGPT4_DATASET}/cc_sbu
sh download_cc_sbu.sh

cd ${MINIGPT4_DATASET}/laion
sh download_laion.sh
```

### The final dataset structure

```
.
├── ${MINIGPT4_DATASET}
│   ├── cc_sbu
│       ├── convert_cc_sbu.py
│       ├── download_cc_sbu.sh
│       ├── ccs_synthetic_filtered_large.json
│       ├── ccs_synthetic_filtered_large.tsv
│       └── cc_sbu_dataset
│           ├── 00000.tar
│           ├── 00000.parquet
│           ...
│   ├── laion
│       ├── convert_laion.py
│       ├── download_laion.sh
│       ├── laion_synthetic_filtered_large.json
│       ├── laion_synthetic_filtered_large.tsv
│       └── laion_dataset
│           ├── 00000.tar
│           ├── 00000.parquet
│           ...
...
```

### Set up the dataset configuration files

Then, set the LAION dataset loading path here (Line 5) to `${MINIGPT4_DATASET}/laion/laion_dataset/{00000..10488}.tar`

and the Conceptual Caption and SBU datasets loading path here (Line 5) to `${MINIGPT4_DATASET}/cc_sbu/cc_sbu_dataset/{00000..01255}.tar`
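In each case the edit is a one-line storage path; e.g. for LAION the config ends up roughly like this (a sketch — the `build_info`/`storage` key names follow the default MiniGPT-4 dataset configs, but verify against your checkout, and note that shell variables like `${MINIGPT4_DATASET}` are not expanded inside YAML, so write the absolute path):

```yaml
datasets:
  laion:
    build_info:
      # absolute path to the downloaded webdataset shards
      storage: /path/to/laion/laion_dataset/{00000..10488}.tar
```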