# CogVLM & CogAgent
📗 [Chinese README](./README_zh.md)
🌟 **Jump to detailed introduction: [Introduction to CogVLM](#introduction-to-cogvlm),
🆕 [Introduction to CogAgent](#introduction-to-cogagent)**
📔 For more detailed usage information, please refer to: [CogVLM & CogAgent's technical documentation (in Chinese)](https://zhipu-ai.feishu.cn/wiki/LXQIwqo1OiIVTykMh9Lc3w1Fn7g)
<table>
  <tr>
    <td>
        <h2> CogVLM </h2>
        <p> 📖 Paper: <a href="https://arxiv.org/abs/2311.03079">CogVLM: Visual Expert for Pretrained Language Models</a></p>
        <p><b>CogVLM</b> is a powerful open-source visual language model (VLM). CogVLM-17B has 10 billion visual parameters and 7 billion language parameters, <b>supporting image understanding and multi-turn dialogue at a resolution of 490*490</b>.</p>
        <p><b>CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks</b>, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC.</p>
    </td>
    <td>
        <h2> CogAgent </h2>
        <p> 📖 Paper: <a href="https://arxiv.org/abs/2312.08914">CogAgent: A Visual Language Model for GUI Agents</a></p>
        <p><b>CogAgent</b> is an open-source visual language model improved upon CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters, <b>supporting image understanding at a resolution of 1120*1120</b>. <b>On top of the capabilities of CogVLM, it further possesses GUI image Agent capabilities</b>.</p>
        <p><b>CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks</b>, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. <b>It significantly surpasses existing models on GUI operation datasets</b> including AITW and Mind2Web.</p>
    </td>
  </tr>
  <tr>
    <td colspan="2" align="center">
        <p>🌐 Web Demo for CogVLM2: <a href="http://36.103.203.44:7861">this link</a></p>
    </td>
  </tr>
</table>
**Table of Contents**

- [CogVLM \& CogAgent](#cogvlm--cogagent)
  - [Release](#release)
  - [Get Started](#get-started)
    - [Option 1: Inference Using Web Demo](#option-1-inference-using-web-demo)
    - [Option 2: Deploy CogVLM / CogAgent by yourself](#option-2-deploy-cogvlm--cogagent-by-yourself)
      - [Situation 2.1 CLI (SAT version)](#situation-21-cli-sat-version)
      - [Situation 2.2 CLI (Huggingface version)](#situation-22-cli-huggingface-version)
      - [Situation 2.3 Web Demo](#situation-23-web-demo)
    - [Option 3: Finetuning CogAgent / CogVLM](#option-3-finetuning-cogagent--cogvlm)
    - [Option 4: OpenAI Vision format](#option-4-openai-vision-format)
    - [Hardware requirement](#hardware-requirement)
    - [Model checkpoints](#model-checkpoints)
  - [Introduction to CogVLM](#introduction-to-cogvlm)
    - [Examples](#examples)
  - [Introduction to CogAgent](#introduction-to-cogagent)
    - [GUI Agent Examples](#gui-agent-examples)
  - [Cookbook](#cookbook)
    - [Task Prompts](#task-prompts)
    - [Which --version to use](#which---version-to-use)
    - [FAQ](#faq)
  - [License](#license)
  - [Citation \& Acknowledgements](#citation--acknowledgements)
## Release
- 🔥🔥🔥 **News**: ```2024/5/20```: We released the **next-generation model, [CogVLM2](https://github.com/THUDM/CogVLM2)**, which is based on Llama3-8B and is on par with (or better than) GPT-4V in most cases! Download and try it!
- 🔥🔥 **News**: ```2024/4/5```: [CogAgent](https://arxiv.org/abs/2312.08914) was selected as a CVPR 2024 Highlight!
- 🔥 **News**: ```2023/12/26```: We have released the [CogVLM-SFT-311K](dataset.md) dataset,
  which contains over 150,000 samples that we used for training **CogVLM v1.0 only**. Welcome to follow and use it.
- **News**: ```2023/12/18```: **New Web UI Launched!** We have launched a new web UI based on Streamlit, where
  users can painlessly talk to CogVLM and CogAgent for a better user experience.
- **News**: ```2023/12/15```: **CogAgent Officially Launched!** CogAgent is an image understanding model developed
  based on CogVLM. It features **vision-based GUI Agent capabilities** and has further enhancements in image
  understanding. It supports image input at a resolution of 1120*1120, and possesses multiple abilities including
  multi-turn dialogue with images, GUI Agent, Grounding, and more.
- **News**: ```2023/12/8``` We have updated the checkpoint of cogvlm-grounding-generalist to
  cogvlm-grounding-generalist-v1.1, trained with image augmentation and therefore more robust.
  See [details](#introduction-to-cogvlm).
- **News**: ```2023/12/7``` CogVLM supports **4-bit quantization** now! You can run inference with just **11GB** of GPU memory!
- **News**: ```2023/11/20``` We have updated the checkpoint of cogvlm-chat to cogvlm-chat-v1.1, unified the versions of
  chat and VQA, and refreshed the SOTA results on various datasets. See [details](#introduction-to-cogvlm).
- **News**: ```2023/11/20``` We release **[cogvlm-chat](https://huggingface.co/THUDM/cogvlm-chat-hf)**, **[cogvlm-grounding-generalist](https://huggingface.co/THUDM/cogvlm-grounding-generalist-hf)/[base](https://huggingface.co/THUDM/cogvlm-grounding-base-hf)**, **[cogvlm-base-490](https://huggingface.co/THUDM/cogvlm-base-490-hf)/[224](https://huggingface.co/THUDM/cogvlm-base-224-hf)** on 🤗Huggingface. You can run inference with transformers in [a few lines of code](#situation-22-cli-huggingface-version) now!
- ```2023/10/27``` CogVLM bilingual version is available [online](https://chatglm.cn/)! Welcome to try it out!
- ```2023/10/5``` CogVLM-17B released.
## Get Started
### Option 1: Inference Using Web Demo
* Click here to enter the [CogVLM2 Demo](http://36.103.203.44:7861/).
  If you need to use the Agent and Grounding functions, please refer to [Cookbook - Task Prompts](#task-prompts).
### Option 2: Deploy CogVLM / CogAgent by yourself
We support two GUIs for model inference: **CLI** and **web demo**. If you want to use the models in your own Python code,
you can easily adapt the CLI scripts to your use case.
First, we need to install the dependencies.
```bash
# CUDA >= 11.8
pip install -r requirements.txt
python -m spacy download en_core_web_sm
```
**All code for inference is located under the ``basic_demo/`` directory. Please switch to this directory first before
proceeding with further operations.**
#### Situation 2.1 CLI (SAT version)
Run CLI demo via:
```bash
# CogAgent
python cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16 --stream_chat
python cli_demo_sat.py --from_pretrained cogagent-vqa --version chat_old --bf16 --stream_chat
# CogVLM
python cli_demo_sat.py --from_pretrained cogvlm-chat --version chat_old --bf16 --stream_chat
python cli_demo_sat.py --from_pretrained cogvlm-grounding-generalist --version base --bf16 --stream_chat
```
The program will automatically download the SAT model and interact in the command line. You can generate replies by
entering instructions and pressing Enter.
Enter `clear` to clear the conversation history and `stop` to stop the program.
We also support model-parallel inference, which splits the model across multiple (2/4/8) GPUs. `--nproc-per-node=[n]` in the
following command controls the number of GPUs used.
```bash
torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16
```
- If you want to manually download the weights, you can replace the path after ``--from_pretrained`` with the model
path.
- Our model supports SAT's **4-bit quantization** and **8-bit quantization**.
  You can change ``--bf16`` to ``--fp16``, or ``--fp16 --quant 4``, or ``--fp16 --quant 8``.
  For example:
```bash
python cli_demo_sat.py --from_pretrained cogagent-chat --fp16 --quant 8 --stream_chat
python cli_demo_sat.py --from_pretrained cogvlm-chat-v1.1 --fp16 --quant 4 --stream_chat
# In the SAT version, --quant should be used together with --fp16
```
- The program provides the following hyperparameters to control the generation process:
```
usage: cli_demo_sat.py [-h] [--max_length MAX_LENGTH] [--top_p TOP_P] [--top_k TOP_K] [--temperature TEMPERATURE]

optional arguments:
  -h, --help            show this help message and exit
  --max_length MAX_LENGTH
                        max length of the total sequence
  --top_p TOP_P         top p for nucleus sampling
  --top_k TOP_K         top k for top k sampling
  --temperature TEMPERATURE
                        temperature for sampling
```
- Click [here](#which---version-to-use) to view the correspondence between different models and the ``--version``
parameter.
#### Situation 2.2 CLI (Huggingface version)
Run CLI demo via:
```bash
# CogAgent
python cli_demo_hf.py --from_pretrained THUDM/cogagent-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogagent-vqa-hf --bf16
# CogVLM
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-grounding-generalist-hf --bf16
```
- If you want to manually download the weights, you can replace the path after ``--from_pretrained`` with the model
path.
- You can change ``--bf16`` to ``--fp16``, or ``--quant 4``. For example, our model supports Huggingface's **4-bit
quantization**:
```bash
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --quant 4
```
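If you prefer to call the model directly from your own Python code instead of the CLI script, the sketch below follows the usage pattern published on the Hugging Face model cards; treat it as a starting point rather than a drop-in script (the image path `example.jpg` and the generation settings are placeholders, and the helper `build_conversation_input_ids` is provided by the model's remote code, so check the model card of the checkpoint you use):
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

# CogVLM/CogAgent reuse the vicuna-7b-v1.5 tokenizer; trust_remote_code pulls in the model-specific code.
tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda").eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = model.build_conversation_input_ids(
    tokenizer, query="Describe this image.", history=[], images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]  # keep only the newly generated tokens
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```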
#### Situation 2.3 Web Demo
We also offer a local web demo based on Gradio. First, install Gradio by running `pip install gradio`. Then clone this
repository, enter it, and run `web_demo.py` as shown below:
```bash
python web_demo.py --from_pretrained cogagent-chat --version chat --bf16
python web_demo.py --from_pretrained cogagent-vqa --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-chat-v1.1 --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-grounding-generalist --version base --bf16
```
The GUI of the web demo looks like:
<div align="center">
    <img src=assets/web_demo-min.png width=70% />
</div>
### Option 3: Finetuning CogAgent / CogVLM
You may want to use CogVLM for your own task, which requires a **different output style or domain knowledge**. **All code
for finetuning is located under the ``finetune_demo/`` directory.**
Here we provide a finetuning example for **Captcha Recognition** using LoRA.
1. Start by downloading the [Captcha Images dataset](https://www.kaggle.com/datasets/aadhavvignesh/captcha-images). Once
   downloaded, extract the contents of the ZIP file.
2. To create a train/validation/test split in the ratio of 80/5/15, execute the following:
```bash
python utils/split_dataset.py
```
3. Start the fine-tuning process with this command:
```bash
bash finetune_demo/finetune_(cogagent/cogvlm)_lora.sh
```
4. Merge the model to `model_parallel_size=1`: (replace the 4 below with your training `MP_SIZE`)
```bash
torchrun --standalone --nnodes=1 --nproc-per-node=4 utils/merge_model.py --version base --bf16 --from_pretrained ./checkpoints/merged_lora_(cogagent/cogvlm490/cogvlm224)
```
5. Evaluate the performance of your model.
```bash
bash finetune_demo/evaluate_(cogagent/cogvlm).sh
```
### Option 4: OpenAI Vision format
We provide the same API examples as `GPT-4V`, which you can view in `openai_demo`.
1. First, start the API server:
```bash
python openai_demo/openai_api.py
```
2. Next, run the request example, which demonstrates a continuous (multi-turn) dialogue:
```bash
python openai_demo/openai_api_request.py
```
3. You will get output similar to the following:
```
This image showcases a tranquil natural scene with a wooden pathway leading through a field of lush green grass. In the distance, there are trees and some scattered structures, possibly houses or small buildings. The sky is clear with a few scattered clouds, suggesting a bright and sunny day.
```
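As a quick sanity check, you can also call the endpoint with the official `openai` Python client. This is only a sketch under assumptions: the port, model name, and image path below are illustrative, so check `openai_demo/openai_api.py` and `openai_demo/openai_api_request.py` for the exact values this repository uses.
```python
import base64
from openai import OpenAI  # requires openai >= 1.0

# Point the client at the local server started by openai_demo/openai_api.py.
# The port and api_key are assumptions; adjust them to match your setup.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")

with open("example.jpg", "rb") as f:  # placeholder image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="cogvlm-chat-17b",  # hypothetical model name; use whatever the server registers
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=512,
)
print(response.choices[0].message.content)
```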
### Hardware requirement
* Model Inference:
    For INT4 quantization: 1 * RTX 3090 (24G) (CogAgent takes ~12.6GB, CogVLM takes ~11GB)
    For FP16: 1 * A100 (80G) or 2 * RTX 3090 (24G)
* Finetuning:
    For FP16: 4 * A100 (80G) *[Recommend]* or 8 * RTX 3090 (24G).
### Model checkpoints
If you run the `basic_demo/cli_demo*.py` from the code repository, it will automatically download SAT or Hugging Face
weights. Alternatively, you can choose to manually download the necessary weights.
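If you prefer scripting the manual download of the Hugging Face weights, a minimal sketch using `huggingface_hub` is shown below (the local directory is just an example; pass the resulting path to ``--from_pretrained``):
```python
from huggingface_hub import snapshot_download

# Download one of the checkpoints listed below to a local folder.
local_dir = snapshot_download(
    repo_id="THUDM/cogvlm-chat-hf",            # any of the Hugging Face model IDs in the tables below
    local_dir="./checkpoints/cogvlm-chat-hf",  # example location
)
print("Weights downloaded to:", local_dir)
```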
- CogAgent

| Model name | Input resolution | Introduction | Huggingface model | SAT model |
| :-----------: | :----: | :----------------------------------------------------------: | :------: | :-------: |
| cogagent-chat | 1120 | Chat version of CogAgent. Supports GUI Agent, multi-round chat and visual grounding. | [HF link](https://huggingface.co/THUDM/cogagent-chat-hf) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/cogagent-chat-hf) | [HF link](https://huggingface.co/THUDM/CogAgent/tree/main) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/CogAgent) |
| cogagent-vqa | 1120 | VQA version of CogAgent. Has stronger capabilities in single-turn visual dialogue. Recommended for VQA benchmarks. | [HF link](https://huggingface.co/THUDM/cogagent-vqa-hf) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/cogagent-vqa-hf) | [HF link](https://huggingface.co/THUDM/CogAgent/tree/main) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/CogAgent) |
- CogVLM

| Model name | Input resolution | Introduction | Huggingface model | SAT model |
| :-------------------------: | :----: | :-------------------------------------------------------: | :------: | :-------: |
| cogvlm-chat-v1.1 | 490 | Supports multi-round chat and VQA simultaneously, with different prompts. | [HF link](https://huggingface.co/THUDM/cogvlm-chat-hf) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/cogvlm-chat-hf) | [HF link](https://huggingface.co/THUDM/CogVLM/tree/main) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/CogVLM) |
| cogvlm-base-224 | 224 | The original checkpoint after text-image pretraining. | [HF link](https://huggingface.co/THUDM/cogvlm-base-224-hf) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/cogvlm-base-224-hf) | [HF link](https://huggingface.co/THUDM/CogVLM/tree/main) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/CogVLM) |
| cogvlm-base-490 | 490 | Amplifies the resolution to 490 through position encoding interpolation from `cogvlm-base-224`. | [HF link](https://huggingface.co/THUDM/cogvlm-base-490-hf) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/cogvlm-base-490-hf) | [HF link](https://huggingface.co/THUDM/CogVLM/tree/main) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/CogVLM) |
| cogvlm-grounding-generalist | 490 | This checkpoint supports different visual grounding tasks, e.g. REC, Grounding Captioning, etc. | [HF link](https://huggingface.co/THUDM/cogvlm-grounding-generalist-hf) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/cogvlm-grounding-generalist-hf) | [HF link](https://huggingface.co/THUDM/CogVLM/tree/main) <br> [OpenXLab link](https://openxlab.org.cn/models/detail/THUDM/CogVLM) |
## Introduction to CogVLM
- CogVLM is a powerful **open-source visual language model** (**VLM**). CogVLM-17B has 10 billion vision parameters and
  7 billion language parameters.
- CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k
  captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks 2nd on VQAv2,
  OK-VQA, TextVQA, COCO captioning, etc., **surpassing or matching PaLI-X 55B**. CogVLM can
  also [chat with you](http://36.103.203.44:7861) about images.
<div align="center">
    <img src=assets/metrics-min.png width=50% />
</div>

<details>
<summary>Click to view results on MM-VET, POPE, TouchStone. </summary>

<table>
    <tr>
        <td>Method</td>
        <td>LLM</td>
        <td>MM-VET</td>
        <td>POPE(adversarial)</td>
        <td>TouchStone</td>
    </tr>
    <tr>
        <td>BLIP-2</td>
        <td>Vicuna-13B</td>
        <td>22.4</td>
        <td>-</td>
        <td>-</td>
    </tr>
    <tr>
        <td>Otter</td>
        <td>MPT-7B</td>
        <td>24.7</td>
        <td>-</td>
        <td>-</td>
    </tr>
    <tr>
        <td>MiniGPT4</td>
        <td>Vicuna-13B</td>
        <td>24.4</td>
        <td>70.4</td>
        <td>531.7</td>
    </tr>
    <tr>
        <td>InstructBLIP</td>
        <td>Vicuna-13B</td>
        <td>25.6</td>
        <td>77.3</td>
        <td>552.4</td>
    </tr>
    <tr>
        <td>LLaMA-Adapter v2</td>
        <td>LLaMA-7B</td>
        <td>31.4</td>
        <td>-</td>
        <td>590.1</td>
    </tr>
    <tr>
        <td>LLaVA</td>
        <td>LLaMA2-7B</td>
        <td>28.1</td>
        <td>66.3</td>
        <td>602.7</td>
    </tr>
    <tr>
        <td>mPLUG-Owl</td>
        <td>LLaMA-7B</td>
        <td>-</td>
        <td>66.8</td>
        <td>605.4</td>
    </tr>
    <tr>
        <td>LLaVA-1.5</td>
        <td>Vicuna-13B</td>
        <td>36.3</td>
        <td>84.5</td>
        <td>-</td>
    </tr>
    <tr>
        <td>Emu</td>
        <td>LLaMA-13B</td>
        <td>36.3</td>
        <td>-</td>
        <td>-</td>
    </tr>
    <tr>
        <td>Qwen-VL-Chat</td>
        <td>-</td>
        <td>-</td>
        <td>-</td>
        <td>645.2</td>
    </tr>
    <tr>
        <td>DreamLLM</td>
        <td>Vicuna-7B</td>
        <td>35.9</td>
        <td>76.5</td>
        <td>-</td>
    </tr>
    <tr>
        <td>CogVLM</td>
        <td>Vicuna-7B</td>
        <td><b>52.8</b></td>
        <td><b>87.6</b></td>
        <td><b>742.0</b></td>
    </tr>
</table>
</details>
<details>
<summary>Click to view results of cogvlm-grounding-generalist-v1.1. </summary>

<table>
    <tr>
        <td></td>
        <td>RefCOCO</td>
        <td></td>
        <td></td>
        <td>RefCOCO+</td>
        <td></td>
        <td></td>
        <td>RefCOCOg</td>
        <td></td>
        <td>Visual7W</td>
    </tr>
    <tr>
        <td></td>
        <td>val</td>
        <td>testA</td>
        <td>testB</td>
        <td>val</td>
        <td>testA</td>
        <td>testB</td>
        <td>val</td>
        <td>test</td>
        <td>test</td>
    </tr>
    <tr>
        <td>cogvlm-grounding-generalist</td>
        <td>92.51</td>
        <td>93.95</td>
        <td>88.73</td>
        <td>87.52</td>
        <td>91.81</td>
        <td>81.43</td>
        <td>89.46</td>
        <td>90.09</td>
        <td>90.96</td>
    </tr>
    <tr>
        <td>cogvlm-grounding-generalist-v1.1</td>
        <td><b>92.76</b></td>
        <td><b>94.75</b></td>
        <td><b>88.99</b></td>
        <td><b>88.68</b></td>
        <td><b>92.91</b></td>
        <td><b>83.39</b></td>
        <td><b>89.75</b></td>
        <td><b>90.79</b></td>
        <td><b>91.05</b></td>
    </tr>
</table>
</details>
### Examples
<!-- CogVLM is powerful for answering various types of visual questions, including **Detailed Description & Visual Question Answering**, **Complex Counting**, **Visual Math Problem Solving**, **OCR-Free Reasoning**, **OCR-Free Visual Question Answering**, **World Knowledge**, **Referring Expression Comprehension**, **Programming with Visual Input**, **Grounding with Caption**, **Grounding Visual Question Answering**, etc. -->
* CogVLM can accurately describe images in detail with **very few hallucinations**.
<details>
<summary>Click for comparison with LLaVA-1.5 and MiniGPT-4.</summary>
<img src=assets/llava-comparison-min.png width=50% />
</details>
<br>
* CogVLM can understand and answer various types of questions, and has a **visual grounding** version.
<div align="center">
    <img src=assets/pear_grounding.png width=50% />
</div>
<br>
* CogVLM sometimes captures more detailed content than GPT-4V(ision).
<div align="center">
    <img src=assets/compare-min.png width=50% />
</div>
<!--  -->
<br>
<details>
<summary>Click to expand more examples.</summary>

</details>
## Introduction to CogAgent
CogAgent is an open-source visual language model improved upon CogVLM. CogAgent-18B has 11 billion visual parameters
and 7 billion language parameters.

CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks, including VQAv2,
OK-VQA, TextVQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. It significantly surpasses existing models on GUI
operation datasets such as AITW and Mind2Web.

In addition to all the features already present in CogVLM (visual multi-round dialogue, visual grounding), CogAgent:
1. Supports higher-resolution visual input and dialogue question-answering. **It supports ultra-high-resolution image
   inputs of 1120x1120.**
2. **Possesses the capabilities of a visual Agent**, being able to return a plan, next action, and specific operations
   with coordinates for any given task on any GUI screenshot.
3. **Has enhanced GUI-related question-answering capabilities**, allowing it to handle questions about any GUI screenshot,
   such as web pages, PC apps, mobile applications, etc.
4. Has enhanced capabilities in OCR-related tasks through improved pre-training and fine-tuning.
<div align="center">
    <img src=assets/cogagent_function.jpg width=60% />
</div>
### GUI Agent Examples
<div align="center">
    <img src=assets/cogagent_main_demo.jpg width=90% />
</div>
## Cookbook
### Task Prompts
1. **General Multi-Round Dialogue**: Say whatever you want.
2. **GUI Agent Task**: Use the [Agent template](https://github.com/THUDM/CogVLM/blob/main/utils/utils/template.py#L761)
   and replace \<TASK\> with the task instruction enclosed in double quotes. This query lets CogAgent infer a Plan and a
   Next Action. Adding ``(with grounding)`` at the end of the query makes the model return a formalized action
   representation with coordinates.

   For example, to ask the model how to complete the task "Search for CogVLM" on a current GUI screenshot, follow these
   steps:

   1. Randomly select a template from
      the [Agent template](https://github.com/THUDM/CogVLM/blob/main/utils/utils/template.py#L761). Here, we
      choose ``What steps do I need to take to <TASK>?``.
   2. Replace \<TASK\> with the task instruction enclosed in double quotes, for
      example, ``What steps do I need to take to "Search for CogVLM"?``. Inputting this to the model yields:

      > Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant
      > result to read more about CogVLM or access further resources.
      >
      > Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it.
   3. If you add ``(with grounding)`` at the end, i.e. change the input
      to ``What steps do I need to take to "Search for CogVLM"?(with grounding)``, the output of CogAgent would be:

      > Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant
      > result to read more about CogVLM or access further resources.
      >
      > Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it.
      > Grounded Operation:[combobox] Search -> TYPE: CogVLM at the box [[212,498,787,564]]

   Tip: For GUI Agent tasks, it is recommended to conduct only single-round dialogues for each image for better results.
3. **Visual Grounding**. Three modes of grounding are supported:
   - Image description with grounding coordinates (bounding box). Use any template
     from the [caption_with_box template](https://github.com/THUDM/CogVLM/blob/main/utils/utils/template.py#L537) as model
     input. For example:

     > Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?
   - Returning grounding coordinates (bounding box) based on the description of objects. Use any template
     from the [caption2box template](https://github.com/THUDM/CogVLM/blob/main/utils/utils/template.py#L345),
     replacing ``<expr>`` with the object's description. For example:

     > Can you point out *children in blue T-shirts* in the image and provide the bounding boxes of their location?
   - Providing a description based on bounding box coordinates. Use a template
     from the [box2caption template](https://github.com/THUDM/CogVLM/blob/main/utils/utils/template.py#L400),
     replacing ``<objs>`` with the position coordinates. For example:

     > Tell me what you see within the designated area *[[086,540,400,760]]* in the picture.

**Coordinate format:** The bounding box coordinates in the model's input and output use the
format ``[[x1, y1, x2, y2]]``, with the origin at the top-left corner, the x-axis pointing right, and the y-axis pointing
downward. (x1, y1) and (x2, y2) are the top-left and bottom-right corners, respectively, with values given as relative
coordinates multiplied by 1000 (zero-padded to three digits).
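To make this coordinate convention concrete, here is a small helper (a sketch; the 1120x1120 screenshot size is only an example) that converts a returned box back to pixel coordinates:
```python
def box_to_pixels(box, width, height):
    """Convert a [x1, y1, x2, y2] box in 0-1000 relative units to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (
        round(x1 / 1000 * width),
        round(y1 / 1000 * height),
        round(x2 / 1000 * width),
        round(y2 / 1000 * height),
    )

# Example: the grounded operation box [[212, 498, 787, 564]] on a 1120x1120 screenshot.
print(box_to_pixels([212, 498, 787, 564], 1120, 1120))  # -> (237, 558, 881, 632)
```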
### Which --version to use
Due to differences in model functionalities, different model versions may have distinct ``--version`` specifications for
the text processor, meaning the format of the prompts used varies.
| model name | --version |
|:---------------------------:|:---------:|
| cogagent-chat | chat |
| cogagent-vqa | chat_old |
| cogvlm-chat | chat_old |
| cogvlm-chat-v1.1 | chat_old |
| cogvlm-grounding-generalist | base |
| cogvlm-base-224 | base |
| cogvlm-base-490 | base |
### FAQ
* If you have trouble accessing huggingface.co, you can add `--local_tokenizer /path/to/vicuna-7b-v1.5` to load the
  tokenizer locally.
* If you have trouble automatically downloading the models with 🔨[SAT](https://github.com/THUDM/SwissArmyTransformer), try
  downloading them manually from 🤖[modelscope](https://www.modelscope.cn/models/ZhipuAI/CogVLM/summary),
  🤗[huggingface](https://huggingface.co/THUDM/CogVLM), or 💡[wisemodel](https://www.wisemodel.cn/models/ZhipuAI/CogVLM).
* When downloading models with 🔨[SAT](https://github.com/THUDM/SwissArmyTransformer), the weights are saved to the default
  location `~/.sat_models`. You can change the default location by setting the environment variable `SAT_HOME`. For example, if
  you want to save the model to `/path/to/my/models`, you can run `export SAT_HOME=/path/to/my/models` before running
  the Python command.
## License
The code in this repository is open source under the [Apache-2.0 license](./LICENSE), while the use of the CogVLM model
weights must comply with the [Model License](./MODEL_LICENSE).

## Citation & Acknowledgements
If you find our work helpful, please consider citing the following papers:
```
@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models},
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{hong2023cogagent,
      title={CogAgent: A Visual Language Model for GUI Agents},
      author={Wenyi Hong and Weihan Wang and Qingsong Lv and Jiazheng Xu and Wenmeng Yu and Junhui Ji and Yan Wang and Zihan Wang and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2312.08914},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
In the instruction fine-tuning phase of CogVLM, we used some English image-text data from
the [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4), [LLaVA](https://github.com/haotian-liu/LLaVA), [LRV-Instruction](https://github.com/FuxiaoLiu/LRV-Instruction), [LLaVAR](https://github.com/SALT-NLP/LLaVAR)
and [Shikra](https://github.com/shikras/shikra) projects, as well as many classic cross-modal datasets. We
sincerely thank them for their contributions.