ptrdvn committed on
Commit
82b76ed
1 Parent(s): a95c79f

Update README.md

Files changed (1):
  1. README.md +3 -0
README.md CHANGED
@@ -24,6 +24,9 @@ Initial subjective testing has shown that this model can chat reasonably well in
 
 ## How to use
 
+ ※ - This code automatically appends the "<|startoftext|>" special token to any input.
+ Appending this to all inputs is required for inference, as initial testing shows that leaving it out leads to output errors.
+
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
 import torch
 ```
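The diff only shows the opening lines of the README's usage snippet. As a rough sketch of what the note above describes, the example below loads the model in 4-bit via BitsAndBytesConfig and manually prepends the "<|startoftext|>" token to the prompt; the repository ID, prompt text, and generation settings are placeholders and are not taken from the model card, and the manual prepend is only needed if the tokenizer does not already insert the token itself.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Placeholder repo ID for illustration only; substitute the actual model name.
model_id = "your-org/your-jamba-model"

# 4-bit quantization config, mirroring the BitsAndBytesConfig import shown in the diff.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
    trust_remote_code=True,
)

prompt = "What is the most famous train station in Tokyo?"

# Prepend the "<|startoftext|>" special token, per the note in the diff; skip this
# step if the tokenizer already adds it through add_special_tokens.
input_text = "<|startoftext|>" + prompt
input_ids = tokenizer(
    input_text, return_tensors="pt", add_special_tokens=False
).input_ids.to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```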