add text-generation-inference example
```

It can also be used with [text-generation-inference](https://github.com/huggingface/text-generation-inference):
```sh
model=Writer/palmyra-large
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
```
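
Once the container is up, it serves text-generation-inference's `/generate` HTTP endpoint. A minimal query sketch (an assumption-laden example: it presumes the default `localhost:8080` port mapping from the `docker run` command above, and the prompt string is just a placeholder):

```python
import json
import urllib.request

# Build a request for TGI's /generate endpoint.
# Host/port assume the `-p 8080:80` mapping used above.
payload = {
    "inputs": "Albert Einstein was",          # example prompt, not from the model card
    "parameters": {"max_new_tokens": 20},
}
request = urllib.request.Request(
    "http://127.0.0.1:8080/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(request, timeout=10) as response:
        # TGI returns a JSON object with a "generated_text" field.
        print(json.load(response)["generated_text"])
except OSError as exc:  # server not running or unreachable
    print(f"request failed: {exc}")
```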

### Limitations and Biases

Palmyra Large’s core functionality is to take a string of text and predict the next token. While language models are widely used for many other tasks, their behavior on them is not fully understood. When prompting Palmyra Large, keep in mind that the statistically most likely next token is not always the token that produces the most "accurate" text. Never rely on Palmyra Large to produce factually correct results.
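
The point about statistical likelihood can be sketched with a toy decoding step (the probabilities below are made up for illustration, not real model output): greedy decoding simply returns the highest-scoring token, with no notion of factual correctness.

```python
# Toy next-token step with made-up probabilities (not real model output).
# Greedy decoding picks whichever token scores highest -- likelihood,
# not factual accuracy, drives the choice.
next_token_probs = {
    "Paris": 0.46,
    "Lyon": 0.31,
    "London": 0.23,
}
prediction = max(next_token_probs, key=next_token_probs.get)
print(prediction)  # → Paris
```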