From e42723a7b6c6a174346eedeea19fd9dd529dd486 Mon Sep 17 00:00:00 2001
From: Dhieu
Date: Thu, 6 Feb 2025 00:01:47 +0300
Subject: [PATCH] Add table of contents to README for better navigation

---
 README.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/README.md b/README.md
index c7591b0..80e9d28 100644
--- a/README.md
+++ b/README.md
@@ -56,6 +56,16 @@ 👁️ Demo<br />
 
+
+1. [Introduction](#1-introduction)
+2. [Release](#2-release)
+3. [Model Download](#3-model-download)
+4. [Quick Start](#4-quick-start)
+5. [License](#5-license)
+6. [Citation](#6-citation)
+7. [Contact](#7-contact)
+
+
 ## 1. Introduction
 Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Our model series is composed of three variants: DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2, with 1.0B, 2.8B and 4.5B activated parameters respectively.