---
license: mit
datasets:
- MuskumPillerum/General-Knowledge
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
widget:
- text: >-
    Read the question and give an honest answer. Your answers should not
    include any unethical, racist, sexist, dangerous, or illegal content. If
    the question is wrong, or does not make sense, accept it instead of giving
    the wrong answer.

    Question: Who is the king of the jungle?

    Answer: 
  example_title: Knowledge-AI
- text: >-
    Below is an instruction that describes a task. Write a response that
    appropriately completes the request.

    Instruction: Write a poem on cows

    Response: 
  example_title: Poem Generation
- text: >-
    Below is an instruction that describes a task. Write a response that
    appropriately completes the request.

    Instruction: What is the meaning of life?

    Response: 
  example_title: Philosophy
- text: >-
    Below is an instruction that describes a task. Write a response that
    appropriately completes the request.

    Instruction: Why is the sky blue?

    Response: 
  example_title: Knowledge-sky
- text: >-
    Below is an instruction that describes a task. Write a response that
    appropriately completes the request.

    Instruction: Define the term torque.

    Response: 
  example_title: Knowledge-torque
inference:
  parameters:
    temperature: 0.7
    top_k: 50
    top_p: 0.9
    max_length: 200
tags:
- chemistry
- biology
- text-generation-inference
---

# Model Card for chat-GPT2
This is a fine-tuned version of OpenAI's GPT-2 for chatting and question answering. For a 124M-parameter model, it performs remarkably well on general knowledge.
The intended purpose of the model is to provide a capable, easy-to-use, and reliable model that can run on a consumer-level graphics card (or even a CPU).
It substantially outperforms base GPT-2 and many other models of a similar parameter count.



### Model Description

```
license: mit
dataset: MuskumPillerum/General-Knowledge
language: en
library_name: transformers
pipeline_tag: text-generation
base_model: gpt2
finetuned_epochs: 50
finetune_loss: 0.06
```
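
For illustration, a minimal fine-tuning sketch consistent with this description (GPT-2 trained for 50 epochs on MuskumPillerum/General-Knowledge with the prompt template from the Recommendation section below) might look like this. This is not the author's actual training script: the dataset column names (`Question`/`Answer`), batch size, and learning rate are assumptions; only the base model, dataset, and epoch count come from the card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Prompt template from the Recommendation section below.
TEMPLATE = (
    "Read the question and give an honest answer. Your answers should not "
    "include any unethical, racist, sexist, dangerous, or illegal content. "
    "If the question is wrong, or does not make sense, accept it instead of "
    "giving the wrong answer.\n"
    "Question: {question}\n"
    "Answer: {answer}"
)

def tokenize(example):
    # "Question"/"Answer" column names are an assumption about the dataset.
    text = TEMPLATE.format(question=example["Question"], answer=example["Answer"])
    return tokenizer(text + tokenizer.eos_token, truncation=True, max_length=512)

train = load_dataset("MuskumPillerum/General-Knowledge", split="train")
train = train.map(tokenize, remove_columns=train.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="chat-gpt2",
        num_train_epochs=50,            # matches the epoch count above
        per_device_train_batch_size=8,  # assumed
        learning_rate=5e-5,             # assumed
    ),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```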

# Comparison with GPT2

1. Who is the king of the jungle?
```
GPT2: Who is the king of the jungle? And where is the king?'

'Babe: It is in the hills, called the forest.'

They both said something I hadn't seen before

They say the king is in his

```
```
chat-GPT2: Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: Who is the king of the jungle?
Answer: Lion is the king of the jungle.

```

2. Write a poem on cows
```
GPT2: 

```
```
chat-GPT2: 

```

3. Which is the fastest animal in the world?
```
GPT2: 

```
```
chat-GPT2: 

```


## Bias, Risks, and Limitations

This model inherits all the biases, risks, and limitations of the base GPT-2 model. However, the prompt template used with this model makes it safer than using GPT-2 directly.


## Recommendation

The ideal format for inference is: 
```
Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: <Your question>
Answer: 
```
Replace `<Your question>` with the question of your choice.
The prompt format was inspired by both the Alpaca dataset (tatsu-lab/alpaca) and Llama 2's (meta-llama/Llama-2-7b-chat-hf) prompt design.

The parameters I used for inference are:
```
top_k = 20
top_p = 0.9
temperature = 0.7
```
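
Putting the template and these parameters together, inference with transformers might look like the sketch below. It runs on CPU as-is; `MODEL_ID` is a placeholder, so substitute this repository's actual path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-model"  # placeholder: substitute this repository's ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Build the recommended prompt around the user's question.
prompt = (
    "Read the question and give an honest answer. Your answers should not "
    "include any unethical, racist, sexist, dangerous, or illegal content. "
    "If the question is wrong, or does not make sense, accept it instead of "
    "giving the wrong answer.\n"
    "Question: Who is the king of the jungle?\n"
    "Answer: "
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,  # required for top_k/top_p/temperature to take effect
    top_k=20,
    top_p=0.9,
    temperature=0.7,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```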


## References used 

1. GPT2
```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  journal={OpenAI Blog},
  year={2019}
}
```

2. MuskumPillerum/General-Knowledge (https://huggingface.co/datasets/MuskumPillerum/General-Knowledge)