Mirror of https://github.com/Vision-CAIR/MiniGPT-4.git, synced 2025-04-04 18:10:47 +00:00

update readme

parent 3c13d1d4b4
commit 57b9d9547a
@@ -107,6 +107,14 @@ or for Llama 2 version by
python demo.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0
```

or for MiniGPT-v2 version by

```
python demo_v2.py --cfg-path eval_configs/minigpt4v2_eval.yaml --gpu-id 0
```

To save GPU memory, the LLM is loaded in 8-bit by default, with a beam search width of 1.
This configuration requires about 23G of GPU memory for the 13B LLM and 11.5G for the 7B LLM.
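The 8-bit loading and beam width mentioned above are typically exposed as config/runtime knobs rather than command-line flags. As a rough, hypothetical sketch only (the key names below are assumptions and may not match the actual `eval_configs/*.yaml` files in this repository), the relevant settings could look like:

```yaml
# Hypothetical sketch — key names are assumptions, check the repo's
# eval_configs/*.yaml for the real structure.
model:
  low_resource: True    # load the LLM in 8-bit to reduce GPU memory
run:
  num_beams: 1          # beam search width of 1 (the memory-friendly default)
```

Turning off low-resource mode (full-precision weights) or raising the beam width would increase GPU memory use beyond the figures quoted above.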