## Evaluation Instructions for MiniGPT-v2
### Data preparation
Download the images and annotations for each benchmark:

Image source | Download path
--- | :---:
OKVQA | annotations, images
GQA | annotations, images
Hateful Memes | images and annotations
IconQA | images and annotations
VizWiz | images and annotations
RefCOCO | annotations
RefCOCO+ | annotations
RefCOCOg | annotations
### Evaluation dataset structure
```
${MINIGPTv2_EVALUATION_DATASET}
├── gqa
│   ├── test_balanced_questions.json
│   ├── testdev_balanced_questions.json
│   └── gqa_images
├── hateful_meme
│   ├── hm_images
│   └── dev.jsonl
├── iconvqa
│   ├── iconvqa_images
│   └── choose_text_val.json
├── vizwiz
│   ├── vizwiz_images
│   └── val.json
├── vsr
│   └── vsr_images
├── okvqa
│   ├── okvqa_test_split.json
│   ├── mscoco_val2014_annotations_clean.json
│   └── OpenEnded_mscoco_val2014_questions_clean.json
├── refcoco
│   ├── instances.json
│   ├── refs(google).p
│   └── refs(unc).p
├── refcoco+
│   ├── instances.json
│   └── refs(unc).p
├── refcocog
│   ├── instances.json
│   ├── refs(google).p
│   └── refs(umd).p
...
```
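As a minimal sketch, the layout above can be assembled like this once the downloads finish; `${MINIGPTv2_EVALUATION_DATASET}` and the source file locations are placeholders, and the remaining datasets follow the same pattern:

```
# choose a root folder for all evaluation data
export MINIGPTv2_EVALUATION_DATASET=/path/to/evaluation/dataset

# gqa: images live in a subfolder, question files sit alongside it
mkdir -p ${MINIGPTv2_EVALUATION_DATASET}/gqa/gqa_images
mv test_balanced_questions.json testdev_balanced_questions.json \
   ${MINIGPTv2_EVALUATION_DATASET}/gqa/

# okvqa: annotation files only, directly inside the dataset folder
mkdir -p ${MINIGPTv2_EVALUATION_DATASET}/okvqa
mv okvqa_test_split.json mscoco_val2014_annotations_clean.json \
   OpenEnded_mscoco_val2014_questions_clean.json \
   ${MINIGPTv2_EVALUATION_DATASET}/okvqa/
```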
### Environment setup
```
export PYTHONPATH=$PYTHONPATH:/path/to/directory/of/MiniGPT-4
```
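A quick sanity check that the path is picked up; this assumes the repository root contains the `minigpt4` package (which the config path below suggests):

```
export PYTHONPATH=$PYTHONPATH:/path/to/directory/of/MiniGPT-4
# should print the message; an ImportError means the path above is wrong
python -c "import minigpt4; print('minigpt4 importable')"
```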
### Evaluation config files
In [minigpt4/eval_configs/minigptv2_benchmark_evaluation.yaml](../minigpt4/eval_configs/minigptv2_benchmark_evaluation.yaml), set:

- **llama_model** to the path of the LLaMA model.
- **ckpt** to the path of our pretrained model checkpoint.
- **eval_file_path** to the path of the annotation files for the evaluation data.
- **img_path** to the path of the images.
- **save_path** to the path where the evaluation output will be saved.
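As a rough sketch of the fields involved (only the key names come from the list above; the exact nesting in the shipped file may differ, so edit the actual config rather than copying this):

```
model:
  llama_model: "/path/to/llama_model"        # path of the LLaMA model
  ckpt: "/path/to/pretrained_checkpoint"     # path of our pretrained model

evaluation_datasets:
  refcoco:                                   # one entry per benchmark
    eval_file_path: /path/to/eval/annotation/path
    img_path: /path/to/eval/image/path

run:
  save_path: /path/to/save/path              # where evaluation output is written
```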
### Start evaluating RefCOCO, RefCOCO+, and RefCOCOg

```
port=port_number
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml
```

dataset_name |
--- |
refcoco |
refcoco+ |
refcocog |
```
torchrun --master-port ${port} --nproc_per_node 1 eval_ref.py \
--cfg-path ${cfg_path} --dataset dataset_name
```
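For example, to run the RefCOCO evaluation (29500 is an arbitrary free port):

```
port=29500
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml
torchrun --master-port ${port} --nproc_per_node 1 eval_ref.py \
 --cfg-path ${cfg_path} --dataset refcoco
```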
### Start evaluating visual question answering

```
port=port_number
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml
eval_file_path=/path/to/eval/annotation/path
image_path=/path/to/eval/image/path
save_path=/path/to/save/path
ckpt=/path/to/evaluation/checkpoint
split=evaluation_data_split
dataset=dataset_type
```

dataset |
--- |
okvqa |
vizwiz |
iconvqa |
gqa |
vsr |
hm |
```
torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
--cfg-path ${cfg_path} --img_path ${image_path} --eval_file_path ${eval_file_path} --save_path ${save_path} \
--ckpt ${ckpt} --split ${split} --dataset ${dataset} --lora_r 64 --lora_alpha 16 \
--batch_size 10 --max_new_tokens 20 --resample
```
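For example, an OKVQA run could look like the following; every path and the split name are placeholders to adapt, and the remaining flags mirror the template above:

```
port=29500
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml
torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
 --cfg-path ${cfg_path} --img_path /path/to/okvqa/images \
 --eval_file_path ${MINIGPTv2_EVALUATION_DATASET}/okvqa/okvqa_test_split.json \
 --save_path /path/to/save/okvqa --ckpt /path/to/evaluation/checkpoint \
 --split val --dataset okvqa --lora_r 64 --lora_alpha 16 \
 --batch_size 10 --max_new_tokens 20 --resample
```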