diff --git a/dataset/README_MINIGPTv2_FINETUNE.md b/dataset/README_MINIGPTv2_FINETUNE.md
index a750c26..4056b1b 100644
--- a/dataset/README_MINIGPTv2_FINETUNE.md
+++ b/dataset/README_MINIGPTv2_FINETUNE.md
@@ -15,7 +15,7 @@ RefCOCOg | annotations
AOK-VQA | annotations
OCR-VQA | annotations
-GQA | images annotations
+GQA | images annotations
Filtered Flickr-30k | annotations
Multi-task conversation | annotations
Filtered unnatural instruction | annotations
@@ -180,21 +180,24 @@ Location_you_like
│ ├── dataset.json
```
+Set **image_path** to the ocrvqa/images folder.
+Similarly, set **ann_path** to the ocrvqa/dataset.json file in
+- [minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml](../minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml)
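+For reference, the two fields in that yaml look roughly like the sketch below (the key nesting is assumed to follow the other dataset configs in minigpt4/configs/datasets; verify against the file itself and substitute your own paths):
+```
+datasets:
+  ocrvqa:
+    data_type: images
+    build_info:
+      # assumed layout; point these at your local copies
+      image_path: /path/to/MINIGPTv2_DATASET/ocrvqa/images
+      ann_path: /path/to/MINIGPTv2_DATASET/ocrvqa/dataset.json
+```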
+
### GQA
-Download the GQA annotation files
-download the images with loadDataset.py script
+Download the GQA annotation files and images
```
Location_you_like
├── ${MINIGPTv2_DATASET}
-│ ├── ocrvqa
+│ ├── gqa
│ ├── images
-│ ├── dataset.json
+│ ├── train_balanced_questions.json
```
-Set **image_path** as the OCR-VQA image folder.
-Similarly, set **ann_path** to the lhe OCR-VQA dataset.json
-- [minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml](../minigpt4/configs/datasets/ocrvqa/ocrvqa.yaml)
+Set **image_path** to the gqa/images folder.
+Similarly, set **ann_path** to the gqa/train_balanced_questions.json file in
+- [minigpt4/configs/datasets/gqa/balanced_val.yaml](../minigpt4/configs/datasets/gqa/balanced_val.yaml)
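+The corresponding fields in balanced_val.yaml should point at your local copies, for example (same assumed layout as above; check the file itself):
+```
+datasets:
+  gqa:
+    data_type: images
+    build_info:
+      # assumed layout; adjust to your paths
+      image_path: /path/to/MINIGPTv2_DATASET/gqa/images
+      ann_path: /path/to/MINIGPTv2_DATASET/gqa/train_balanced_questions.json
+```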