diff --git a/README.md b/README.md
index d06acf4..7b54d24 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
**King Abdullah University of Science and Technology**
-
+[Open in Colab](https://colab.research.google.com/github/camenduru/MiniGPT-4-colab/blob/main/minigpt4_colab.ipynb) [YouTube Demo](https://www.youtube.com/watch?v=__tftoxpBAw&feature=youtu.be)
## Online Demo
@@ -53,7 +53,7 @@ conda activate minigpt4
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instructions [here](PrepareVicuna.md)
to prepare the Vicuna weights.
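As a rough sketch of what that involves (assuming FastChat's `apply_delta` tool, as used in PrepareVicuna.md; all paths below are placeholders):

```bash
# Sketch only -- see PrepareVicuna.md for the authoritative steps.
# Merges the original LLaMA weights with the Vicuna v0 delta weights.
python -m fastchat.model.apply_delta \
    --base /path/to/llama-13b-hf \
    --target /path/to/vicuna_weights \
    --delta lmsys/vicuna-13b-delta-v0
```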
-The final weights would be in a single folder with the following structure:
+The final weights should end up in a single folder with a structure similar to the following:
```
vicuna_weights
@@ -91,10 +91,12 @@ python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
To save GPU memory, Vicuna is loaded in 8 bit by default, with a beam search width of 1.
This configuration requires about 23 GB of GPU memory for Vicuna 13B and 11.5 GB for Vicuna 7B.
-For more powerful GPUs, you can run the model
+For more powerful GPUs, you can run the model
in 16 bit by setting `low_resource` to False in the config file
[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
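For reference, here is a minimal sketch of that setting (only `low_resource` is the option named above; the surrounding keys are illustrative):

```yaml
# eval_configs/minigpt4_eval.yaml -- illustrative excerpt, not the full config
model:
  arch: mini_gpt4
  # True loads Vicuna in 8 bit (the default, to save memory);
  # set to False to load in 16 bit on more powerful GPUs.
  low_resource: False
```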
+Thanks to [@camenduru](https://github.com/camenduru), you can also run our code on [Colab](https://colab.research.google.com/github/camenduru/MiniGPT-4-colab/blob/main/minigpt4_colab.ipynb).
+
### Training
The training of MiniGPT-4 consists of two alignment stages.