Fix: correct typos and formatting issues in README
- Removed duplicate closing </a> tag in the introduction section.
- Fixed extra space in the link reference to VLMEvalKit.
- Cleaned up redundant hash symbols in code comments.
- Added missing periods at the end of sentences for consistency.
commit 45680ae127 (parent a42ad6dab3)
@@ -17,7 +17,6 @@
 <a href="https://www.deepseek.com/" target="_blank">
   <img alt="Homepage" src="images/badge.svg" />
 </a>
-</a>
 <a href="https://huggingface.co/deepseek-ai" target="_blank">
   <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" />
 </a>
@@ -66,7 +65,7 @@

 **2024.11.13**: JanusFlow is released, a new unified model with rectified flow for image generation. See [paper](https://arxiv.org/abs/2411.07975), [demo](https://huggingface.co/spaces/deepseek-ai/JanusFlow-1.3B) and [usage](https://github.com/deepseek-ai/Janus?tab=readme-ov-file#janusflow).

-**2024.10.23**: Evaluation code for reproducing the multimodal understanding results from the paper has been added to VLMEvalKit. Please refer to [this link]( https://github.com/open-compass/VLMEvalKit/pull/541).
+**2024.10.23**: Evaluation code for reproducing the multimodal understanding results from the paper has been added to VLMEvalKit. Please refer to [this link](https://github.com/open-compass/VLMEvalKit/pull/541).

 **2024.10.20**: (1) Fix a bug in [tokenizer_config.json](https://huggingface.co/deepseek-ai/Janus-1.3B/blob/main/tokenizer_config.json). The previous version caused classifier-free guidance to not function properly, resulting in relatively poor visual generation quality. (2) Release Gradio demo ([online demo](https://huggingface.co/spaces/deepseek-ai/Janus-1.3B) and [local](#gradio-demo)).
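The 2024.10.20 entry above refers to classifier-free guidance (CFG) in visual generation: the model is run on a conditional and an unconditional prompt, and the two sets of logits are extrapolated at each decoding step, which is why a broken special-token config can quietly degrade generation quality. As background only, a minimal generic sketch of the logit mixing (illustrative, not Janus's actual implementation; the function name, tensor names, and default cfg_weight are assumptions):

```python
import torch

def apply_cfg(cond_logits: torch.Tensor,
              uncond_logits: torch.Tensor,
              cfg_weight: float = 5.0) -> torch.Tensor:
    # Classifier-free guidance (generic sketch): extrapolate from the
    # unconditional logits toward the conditional ones. cfg_weight == 1
    # reduces to the plain conditional logits (no guidance); larger
    # values strengthen the conditioning signal.
    return uncond_logits + cfg_weight * (cond_logits - uncond_logits)
```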
@@ -165,10 +164,10 @@ prepare_inputs = vl_chat_processor(
     conversations=conversation, images=pil_images, force_batchify=True
 ).to(vl_gpt.device)

-# # run image encoder to get the image embeddings
+# run image encoder to get the image embeddings
 inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

-# # run the model to get the response
+# run the model to get the response
 outputs = vl_gpt.language_model.generate(
     inputs_embeds=inputs_embeds,
     attention_mask=prepare_inputs.attention_mask,
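For context, this hunk edits the tail of the README's multimodal understanding example. A sketch of how the snippet fits into the full flow, reconstructed around the lines shown (the conversation contents, image path, and generation arguments such as max_new_tokens are assumptions, not necessarily the README's exact values):

```python
import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor
from janus.utils.io import load_pil_images

# load the processor and the model (assumed checkpoint name)
model_path = "deepseek-ai/Janus-1.3B"
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

# a single-turn conversation with one attached image (illustrative path)
conversation = [
    {
        "role": "User",
        "content": "<image_placeholder>\nDescribe this image.",
        "images": ["images/example.png"],
    },
    {"role": "Assistant", "content": ""},
]

pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)

# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# run the model to get the response (generation kwargs are assumptions)
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
)

print(tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))
```

The two comments fixed in the hunk annotate exactly these steps: prepare_inputs_embeds fuses the text token embeddings with the encoded image features, and generate then decodes the response from the fused sequence.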