diff --git a/README.md b/README.md
index 42ddd00..4167a3e 100644
--- a/README.md
+++ b/README.md
@@ -144,6 +144,8 @@ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
 outputs = model.generate(**inputs, max_length=128)
 print(tokenizer.decode(outputs[0]))
 ```
+![Completion GIF](pictures/completion_demo.gif)
+
 #### Chat Model Inference
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM