From 57b9d9547a8ee8c25dd93250043948b1bb19fc44 Mon Sep 17 00:00:00 2001
From: junchen14
Date: Fri, 13 Oct 2023 10:06:02 +0300
Subject: [PATCH] update readme

---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index 4e5a82f..b052314 100644
--- a/README.md
+++ b/README.md
@@ -107,6 +107,14 @@ or for Llama 2 version by
 
 ```
 python demo.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0
 ```
 
+or for MiniGPT-v2 version by
+
+```
+python demo_v2.py --cfg-path eval_configs/minigpt4v2_eval.yaml --gpu-id 0
+```
+
+
+
 To save GPU memory, LLMs loads as 8 bit by default, with a beam search width of 1. This configuration requires about 23G GPU memory for 13B LLM and 11.5G GPU memory for 7B LLM.
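
As a rough sanity check on the memory figures quoted in the patched README (about 23 GB for a 13B LLM and 11.5 GB for a 7B LLM in 8-bit mode), the sketch below estimates the weight memory alone; `int8_weight_gib` is a hypothetical helper written for this note, not part of the repository. The gap between these estimates and the README's totals would be accounted for by activations, the KV cache, and the vision components.

```python
def int8_weight_gib(n_params: float) -> float:
    # 8-bit quantization stores roughly one byte per parameter,
    # so weight memory in GiB is simply n_params / 2**30.
    return n_params / 2**30

# Weight memory alone for the two model sizes mentioned in the README:
print(round(int8_weight_gib(13e9), 1))  # ~12.1 GiB for a 13B model
print(round(int8_weight_gib(7e9), 1))   # ~6.5 GiB for a 7B model
```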