update readme

This commit is contained in:
junchen14 2023-10-13 10:06:02 +03:00
parent 3c13d1d4b4
commit 57b9d9547a


@ -107,6 +107,14 @@ or for Llama 2 version by
python demo.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0
```
or for MiniGPT-v2 version by
```
python demo_v2.py --cfg-path eval_configs/minigpt4v2_eval.yaml --gpu-id 0
```
To save GPU memory, the LLM loads in 8-bit by default, with a beam search width of 1.
This configuration requires about 23 GB of GPU memory for the 13B LLM and 11.5 GB for the 7B LLM.
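For GPUs with more memory, 8-bit loading can typically be disabled in the eval config so the LLM runs in 16-bit instead. A minimal sketch of the relevant fragment, assuming the config exposes a `low_resource` flag (the exact key name and file layout may differ in your checkout):

```yaml
# eval_configs/minigpt4_eval.yaml (sketch; key name assumed)
model:
  low_resource: False   # False: load the LLM in 16-bit; True: 8-bit to save memory
```

Running in 16-bit roughly doubles the LLM's memory footprint, so check available GPU memory before changing this.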