## Evaluation Instructions for MiniGPT-v2
### Data preparation
Images download (a minimal download sketch follows the table):

Image source | Download path
--- | :---:
OKVQA | <a href="https://drive.google.com/drive/folders/1jxIgAhtaLu_YqnZEl8Ym11f7LhX3nptN?usp=sharing">annotations</a> <a href="http://images.cocodataset.org/zips/train2017.zip">images</a>
gqa | <a href="https://drive.google.com/drive/folders/1-dF-cgFwstutS4qq2D9CFQTDS0UTmIft?usp=drive_link">annotations</a> <a href="https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip">images</a>
hateful meme | <a href="https://github.com/faizanahemad/facebook-hateful-memes">images and annotations</a>
iconqa | <a href="https://iconqa.github.io/#download">images and annotations</a>
vizwiz | <a href="https://vizwiz.org/tasks-and-datasets/vqa/">images and annotations</a>
RefCOCO | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip">annotations</a>
RefCOCO+ | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip">annotations</a>
RefCOCOg | <a href="https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip">annotations</a>
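The commands below are a minimal sketch of fetching two of the sources listed above (COCO train2017 images for OKVQA and the RefCOCO annotations). The `eval_data` directory layout is an assumption; adjust it to match your own evaluation dataset structure.

```
# Sketch only: the eval_data/... target directories are placeholders.
mkdir -p eval_data/coco eval_data/refcoco

# COCO train2017 images (used by OKVQA).
wget http://images.cocodataset.org/zips/train2017.zip -P eval_data/coco
unzip eval_data/coco/train2017.zip -d eval_data/coco

# RefCOCO annotations.
wget https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip -P eval_data/refcoco
unzip eval_data/refcoco/refcoco.zip -d eval_data/refcoco
```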
### Evaluation dataset structure
### environment setup
```
export PYTHONPATH=$PYTHONPATH:/path/to/directory/of/MiniGPT-4
```
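A quick way to confirm the path is picked up (a sketch; it assumes the repository root you export contains the `minigpt4` package and that the project dependencies are already installed):

```
export PYTHONPATH=$PYTHONPATH:/path/to/directory/of/MiniGPT-4
# Should print the package location instead of raising ImportError.
python -c "import minigpt4; print(minigpt4.__file__)"
```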
### start evaluating RefCOCO, RefCOCO+, RefCOCOg
```
port=port_number
cfg_path=/path/to/eval_configs/minigptv2_eval.yaml
eval_file_path=/path/to/eval/image/path
save_path=/path/to/save/path
ckpt=/path/to/evaluation/checkpoint
IMG_PATH=/path/to/eval/image/directory  # images directory, used by --img_path in the command below
split=/evaluation/data/split/type  # e.g. val, testA, testB, test
dataset=/data/type  # refcoco, refcoco+, refcocog
```

```
torchrun --master-port ${port} --nproc_per_node 1 eval_ref.py \
 --cfg-path ${cfg_path} --img_path ${IMG_PATH} --eval_file_path ${eval_file_path} --save_path ${save_path} \
 --ckpt ${ckpt} --split ${split} --dataset ${dataset} --lora_r 64 --lora_alpha 16 \
 --batch_size 10 --max_new_tokens 20 --resample
```
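To cover all three referring-expression datasets and their usual splits in one pass, the single command above can be wrapped in a small loop like the sketch below (the per-dataset split lists are assumptions; check them against your downloaded annotation files):

```
# Sketch: run the referring-expression evaluation over every dataset/split pair.
for dataset in refcoco refcoco+ refcocog; do
  if [ "${dataset}" = "refcocog" ]; then splits="val test"; else splits="val testA testB"; fi
  for split in ${splits}; do
    torchrun --master-port ${port} --nproc_per_node 1 eval_ref.py \
     --cfg-path ${cfg_path} --img_path ${IMG_PATH} --eval_file_path ${eval_file_path} --save_path ${save_path} \
     --ckpt ${ckpt} --split ${split} --dataset ${dataset} --lora_r 64 --lora_alpha 16 \
     --batch_size 10 --max_new_tokens 20 --resample
  done
done
```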
### start evaluating visual question answering
```
port=port_number
cfg_path=/path/to/eval_configs/minigptv2_eval.yaml
eval_file_path=/path/to/eval/image/path
save_path=/path/to/save/path
ckpt=/path/to/evaluation/checkpoint
IMG_PATH=/path/to/eval/image/directory  # images directory, used by --img_path in the command below
split=/evaluation/data/split/type  # e.g. val, test
dataset=/data/type  # vqa data types: okvqa, vizwiz, iconvqa, gqa, vsr, hm
```

```
torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
 --cfg-path ${cfg_path} --img_path ${IMG_PATH} --eval_file_path ${eval_file_path} --save_path ${save_path} \
 --ckpt ${ckpt} --split ${split} --dataset ${dataset} --lora_r 64 --lora_alpha 16 \
 --batch_size 10 --max_new_tokens 20 --resample
```
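As a concrete usage example, a single OKVQA run on the val split might look like the sketch below. Every value here is a placeholder (port, config, checkpoint, and output paths are not files shipped with the repo); `IMG_PATH` and `eval_file_path` are taken from the variable block above.

```
# Example values only; substitute your own paths.
port=29500
cfg_path=eval_configs/minigptv2_eval.yaml
ckpt=checkpoints/minigptv2_checkpoint.pth
save_path=results/okvqa_val

torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
 --cfg-path ${cfg_path} --img_path ${IMG_PATH} --eval_file_path ${eval_file_path} --save_path ${save_path} \
 --ckpt ${ckpt} --split val --dataset okvqa --lora_r 64 --lora_alpha 16 \
 --batch_size 10 --max_new_tokens 20 --resample
```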