# T5-Reverse (T5R)

This model can generate prompts (instructions) for any text!

This model is an instruction-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca), but in **reverse format**!
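To make "reverse format" concrete, here is a minimal sketch of how an alpaca `(instruction, output)` pair could be flipped into a training example: the output becomes the input and the instruction becomes the target. The helper name and the exact template are assumptions inferred from the usage example in this README, not the actual training code.

```python
# Hypothetical helper illustrating the reverse format: the template below is
# inferred from the usage example in this README, not the real training script.
def to_reverse_format(instruction: str, output: str) -> dict:
    prompt = (
        "\nInstruction: X\n"
        f"Output: {output}\n"
        "What kind of instruction could this be the answer to?\n"
        "X:\n"
    )
    target = f"Instruction: {instruction}"
    return {"input": prompt, "target": target}

pair = to_reverse_format(
    "Give three tips for staying healthy.",
    "1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep.",
)
print(pair["target"])  # Instruction: Give three tips for staying healthy.
```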
## How to Use the Model

You can use the `transformers` library to load the T5-Reverse (T5R) model and generate a prompt (instruction) for a given text. Here's an example of how to do it:

```python
from transformers import pipeline

# Load the model and tokenizer from the Hugging Face Hub
inference = pipeline("text2text-generation", model="kargaranamir/T5R-base")

# Example input: an output whose instruction we want to recover
sample = '''
Instruction: X
Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
What kind of instruction could this be the answer to?
X:
'''

# Generate and print the model's response
res = inference(sample)
print(res)
# >> [{'generated_text': 'Instruction: Generate three recommendations for a healthy diet.'}]
```
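The pipeline returns a list of dicts like the output above. A small sketch of pulling the plain instruction out of such a result (the result is hard-coded here so the snippet runs without downloading the model; the `"Instruction: "` prefix is assumed from the example output):

```python
# Hard-coded result matching the shape of the pipeline output above,
# so this snippet runs without downloading the model.
res = [{'generated_text': 'Instruction: Generate three recommendations for a healthy diet.'}]

generated = res[0]["generated_text"]
# Strip the leading "Instruction: " tag if present (assumed output format).
instruction = generated.removeprefix("Instruction: ")
print(instruction)  # Generate three recommendations for a healthy diet.
```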