Mirror of https://github.com/deepseek-ai/DeepSeek-VL2.git
Synced 2025-02-22 21:59:04 -05:00
Added Table of Contents (ToC) to README.md
This commit is contained in:
parent c0cf24859d
commit 3381e67616
README.md | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
@@ -45,13 +45,6 @@
 </a>
 </div>
 
-## Table of Contents
-1. [Introduction](#1-introduction)
-2. [Release](#2-release)
-3. [Model Download](#3-model-download)
-4. [Quick Start](#4-quick-start)
-5. [License](#5-license)
-6. [Citation](#6-citation)
 
 <p align="center">
 <a href="https://github.com/deepseek-ai/DeepSeek-VL2/tree/main?tab=readme-ov-file#3-model-download"><b>📥 Model Download</b></a> |
@@ -63,6 +56,16 @@
 <a href=""><b>👁️ Demo</b></a>
 </p>
 
+## Table of Contents
+
+1. [Introduction](#1-introduction)
+2. [Release](#2-release)
+3. [Model Download](#3-model-download)
+4. [Quick Start](#4-quick-start)
+5. [License](#5-license)
+6. [Citation](#6-citation)
+
+
 ## 1. Introduction
 
 Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Our model series is composed of three variants: DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2, with 1.0B, 2.8B and 4.5B activated parameters respectively.
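Note on the ToC links in this diff: GitHub derives each heading's anchor by lowercasing its text, dropping punctuation (here, the period after the section number), and replacing spaces with hyphens, which is why `## 3. Model Download` is reachable as `#3-model-download`. A minimal sketch of the convention, using headings taken from this README:

```markdown
## 1. Introduction       <!-- GitHub renders this with anchor #1-introduction -->
## 3. Model Download     <!-- GitHub renders this with anchor #3-model-download -->

Jump to the [Introduction](#1-introduction) or [Model Download](#3-model-download).
```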