update model card usage example
README.md
CHANGED
@@ -72,7 +72,7 @@ prompt = "Di seguito è riportata un'istruzione che descrive un'attività, accom
 input_ids = tokenizer(prompt, return_tensors="pt").input_ids
 outputs = model.generate(input_ids=input_ids)
 
-print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
+print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
 ```
 
 If you are facing issues when loading the model, you can try to load it quantized:
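The slice added in this commit, `[:, input_ids.shape[1]:]`, drops the echoed prompt tokens before decoding: decoder-only models return the prompt followed by the continuation, so indexing from the prompt length keeps only the newly generated ids. A minimal sketch of that slicing, using toy NumPy arrays in place of real tokenizer/model output (the token id values are made up for illustration):

```python
import numpy as np

# Toy stand-ins: a batch of one sequence, 4 "prompt" ids followed by 3 "generated" ids.
input_ids = np.array([[101, 7, 8, 9]])             # shape (1, 4), plays the role of the tokenized prompt
outputs = np.array([[101, 7, 8, 9, 42, 43, 44]])   # generate() output: prompt echoed, then continuation

# Same slice as the fixed README line: skip the first input_ids.shape[1] columns.
generated_only = outputs[:, input_ids.shape[1]:]
print(generated_only.tolist())  # [[42, 43, 44]] — only the continuation remains
```

Decoding `generated_only` instead of `outputs` is what stops the prompt text from being repeated in the printed result.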