From 39db6a0bb187d928521cc44b60ee30b9c6c4eea0 Mon Sep 17 00:00:00 2001
From: Jun Chen
Date: Mon, 17 Apr 2023 01:46:45 +0300
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 8851234..9a9912b 100644
--- a/README.md
+++ b/README.md
@@ -26,6 +26,8 @@ More examples can be found in the [project page](https://minigpt-4.github.io).
 - MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
 - We train MiniGPT-4 with two stages. The first pretraining stage is trained using roughly 5 million aligned image-text pairs with around 40 A100 hours. The second finetuning stage is trained using additional 3,500 carefully curated high-quality pairs with around 7 A100 minutes.
 - MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.
+
+![overview](figs/overview.png)
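The README text in this patch says MiniGPT-4 bridges the frozen visual side and the frozen Vicuna LLM with a single trainable projection layer. That idea can be sketched as a plain linear map from visual-token features into the LLM's embedding space; the dimensions and names below are illustrative assumptions, not values taken from the MiniGPT-4 code.

```python
import numpy as np

# Hypothetical dimensions, for illustration only: the real sizes come from
# the BLIP-2 visual side's output and Vicuna's hidden size in the actual repo.
VISUAL_DIM = 768
LLM_DIM = 4096

rng = np.random.default_rng(0)
# The single trainable component: a linear projection W x + b.
# Everything upstream (visual encoder) and downstream (LLM) stays frozen.
W = rng.standard_normal((VISUAL_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

def project(visual_tokens: np.ndarray) -> np.ndarray:
    """Map frozen visual-encoder tokens into the LLM's embedding space."""
    return visual_tokens @ W + b

# A batch of visual tokens becomes "soft prompt" embeddings for the LLM.
tokens = rng.standard_normal((32, VISUAL_DIM))
soft_prompt = project(tokens)
print(soft_prompt.shape)  # (32, 4096)
```

Because only `W` and `b` receive gradients, both training stages described in the patch (pretraining on image-text pairs, then finetuning on the curated set) update just this small map.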