From 45680ae127e48913d669271fb2e9f289bfba904a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=88=B4=E7=A6=8F=E7=94=9F?=
<37627578+aarsxx@users.noreply.github.com>
Date: Tue, 28 Jan 2025 00:49:28 +0700
Subject: [PATCH] Fix: correct typos and formatting issues in README

- Removed a duplicate blank line in the introduction section.
- Fixed an extra space in the link reference to VLMEvalKit.
- Cleaned up redundant hash symbols in code comments.
---
README.md | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index a77a938..cd590b3 100755
--- a/README.md
+++ b/README.md
@@ -17,7 +17,6 @@
-
@@ -66,7 +65,7 @@
**2024.11.13**: JanusFlow is released, a new unified model with rectified flow for image generation. See [paper](https://arxiv.org/abs/2411.07975), [demo](https://huggingface.co/spaces/deepseek-ai/JanusFlow-1.3B) and [usage](https://github.com/deepseek-ai/Janus?tab=readme-ov-file#janusflow).
-**2024.10.23**: Evaluation code for reproducing the multimodal understanding results from the paper has been added to VLMEvalKit. Please refer to [this link]( https://github.com/open-compass/VLMEvalKit/pull/541).
+**2024.10.23**: Evaluation code for reproducing the multimodal understanding results from the paper has been added to VLMEvalKit. Please refer to [this link](https://github.com/open-compass/VLMEvalKit/pull/541).
**2024.10.20**: (1) Fix a bug in [tokenizer_config.json](https://huggingface.co/deepseek-ai/Janus-1.3B/blob/main/tokenizer_config.json). The previous version caused classifier-free guidance to not function properly, resulting in relatively poor visual generation quality. (2) Release Gradio demo ([online demo](https://huggingface.co/spaces/deepseek-ai/Janus-1.3B) and [local](#gradio-demo)).
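For reviewers unfamiliar with why the 2024.10.20 tokenizer fix matters: classifier-free guidance (CFG) runs the model on both a conditional and an unconditional prompt and blends the two logit streams, so a mis-specified special token in `tokenizer_config.json` can silently weaken or disable the guidance term. Below is a minimal sketch of the blending step only; the function name, tensor names, and default scale are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of the classifier-free guidance (CFG) blending step.
# Illustrative only: the names and the default scale are assumptions,
# not the actual Janus implementation.
import torch

def apply_cfg(logits_cond: torch.Tensor,
              logits_uncond: torch.Tensor,
              guidance_scale: float = 5.0) -> torch.Tensor:
    """Blend conditional and unconditional next-token logits.

    A guidance_scale of 1.0 reduces to the conditional logits alone;
    larger values push sampling harder toward the text condition.
    """
    return logits_uncond + guidance_scale * (logits_cond - logits_uncond)
```

If the unconditional pass degenerates (for example, because the unconditional prompt is tokenized incorrectly), the guidance term `logits_cond - logits_uncond` can shrink toward zero or become noise, which is consistent with the degraded visual generation quality described above.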
@@ -165,10 +164,10 @@ prepare_inputs = vl_chat_processor(
conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)
-# # run image encoder to get the image embeddings
+# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
-# # run the model to get the response
+# run the model to get the response
outputs = vl_gpt.language_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=prepare_inputs.attention_mask,