Update README.md

README.md CHANGED
pipeline_tag: text-generation
---

# Using NeyabAI:

## Direct Use:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load the model and tokenizer from the Hugging Face Hub.
model = GPT2LMHeadModel.from_pretrained("XsoraS/NeyabAI")
tokenizer = GPT2TokenizerFast.from_pretrained("XsoraS/NeyabAI")
```
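
GPT-2 checkpoints ship without a padding token, so tokenizing batches of prompts with padding enabled will fail. A common workaround (our suggestion, not something this model card prescribes) is to reuse the end-of-sequence token:

```python
# GPT-2 has no pad token by default; reuse EOS so padded batches work.
tokenizer.pad_token = tokenizer.eos_token
```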

```python
def generate_response(prompt):
    # Add .to(torch.device("cuda")) here (and call model.to("cuda")) for GPU acceleration.
    inputs = tokenizer(prompt, return_tensors='pt')
    output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                                max_length=512, do_sample=True, top_p=0.8,
                                temperature=0.7, num_return_sequences=1)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

prompt = "Hello"
# Strip the "</s>" end-of-sequence marker and collapse runs of whitespace.
response = ' '.join(generate_response("### Human: " + prompt + " \n### AI:").replace("</s>", "").split())
print(response)
```
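
If a CUDA GPU is available, generation can be moved onto it, as the comment above suggests. A minimal sketch (it assumes a CUDA-capable machine and falls back to CPU otherwise):

```python
# Pick a device and move both the model and the tokenized inputs onto it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

inputs = tokenizer("### Human: Hello \n### AI:", return_tensors='pt').to(device)
output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                            max_length=512, do_sample=True, top_p=0.8, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```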

## Fine-Tuning:

This repository demonstrates how to fine-tune the NeyabAI (GPT-2) language model on a custom dataset using PyTorch and Hugging Face's Transformers library. The code provides an end-to-end example, from loading the dataset to training the model and evaluating its performance.
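
The training code itself is not reproduced here; the sketch below shows one way such a run could look using the Hugging Face `Trainer` API. The `datasets` dependency, the corpus file `my_corpus.txt`, and all hyperparameters are placeholders assumed for illustration, not values taken from this repository:

```python
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model = GPT2LMHeadModel.from_pretrained("XsoraS/NeyabAI")
tokenizer = GPT2TokenizerFast.from_pretrained("XsoraS/NeyabAI")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Placeholder corpus: one training example per line of plain text.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives the standard causal-LM objective; the collator also
# builds the labels, so no manual shifting is needed.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="neyabai-finetuned", num_train_epochs=1,
                         per_device_train_batch_size=4, learning_rate=5e-5,
                         logging_steps=50)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], data_collator=collator)
trainer.train()
```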

## Requirements