Update README.md
The dataset was tokenized and fed to the model as a conversation between two speakers.
- the default inference API examples should work _okay_
- an ideal test is to explicitly add `person beta` to the **end** of the prompt text; the model is then forced to respond to the entered chat prompt instead of first extending the prompt and then responding to that (which may cut off the response text due to the Inference API limits)
### Example prompt:

```
do you like to eat beans?
person beta:
```

### Resulting output

```
do you like to eat beans?

person beta:
no, i don't like
```
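Programmatically, the tip above amounts to appending the `person beta:` speaker tag to the end of the user's text before sending it to the Inference API. A minimal sketch, assuming the standard Hugging Face Inference API endpoint; the `<model-id>` placeholder and the `HF_API_TOKEN` environment variable are assumptions, not values from this repo:

```python
import json
import os
import urllib.request

# Hypothetical Hub id -- substitute this model's actual id.
API_URL = "https://api-inference.huggingface.co/models/<model-id>"


def build_prompt(user_text: str) -> str:
    """Append the `person beta:` speaker tag so the model is forced to
    respond to the prompt rather than extend it first."""
    return f"{user_text}\nperson beta:"


def query(user_text: str) -> dict:
    """POST the tagged prompt to the Inference API and return the JSON reply."""
    payload = json.dumps({"inputs": build_prompt(user_text)}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['HF_API_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


print(build_prompt("do you like to eat beans?"))
```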

## citations

```
@inproceedings{dinan2019wizard,