Update README.md

README.md CHANGED
# `sgarbi/gpt-nq-prompt-generator` Model Card
This model was developed as a learning project to enhance prompt engineering capabilities. Its creation stems from a systematic effort to understand the complexities involved in model training, and its primary goal is to serve specialized applications beyond the scope of standard prompt generation.
## Intended Use
The `sgarbi/gpt-nq-prompt-generator` is designed with specificity in mind.

**Licensing**: This model is proudly released under the MIT license, in alignment with GPT-2's licensing provisions. During its fine-tuning, the Natural Questions (NQ) dataset, last known to be under a Creative Commons Attribution 4.0 International License as of January 2022, was utilized. Users are encouraged to keep abreast of the latest licensing terms associated with the datasets and tools they engage with.
## How To Use
1. **Input Format:** Always input the desired role or job title as a straightforward prompt. For example, "Software Engineer" or "Nurse Practitioner".
2. **Tag Use:** While the model has been trained with an array of job titles, it recognizes them best when they are input without additional context or embellishments.
3. **Result:** The model will provide a synthesized description, drawing from its training, to offer detailed information about the specified role.
### Note:
While the model recognizes a diverse range of job titles, some niche or highly specialized roles may receive less detailed or more generic outputs. In such cases, it can help to slightly rephrase the input or to provide a broader category for the job title.
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
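
# A minimal usage sketch. Assumptions not stated elsewhere on this card: the
# tokenizer and model load directly from this repository id, plain GPT-2 text
# generation is used, and the job title and sampling parameters below are
# illustrative rather than prescribed.

tokenizer = GPT2Tokenizer.from_pretrained("sgarbi/gpt-nq-prompt-generator")
model = GPT2LMHeadModel.from_pretrained("sgarbi/gpt-nq-prompt-generator")
model.eval()

# Encode a plain job title, as recommended in "How To Use" above.
input_ids = tokenizer.encode("Software Engineer", return_tensors="pt")

# Generate a synthesized description for the specified role.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=128,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The decoding settings above (`do_sample`, `top_p`, `max_length`) are only a starting point; adjust them to control how detailed or varied the generated descriptions are.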