---
license: mit
datasets:
- MuskumPillerum/General-Knowledge
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
widget:
- text: >-
Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.\n
Question: Who is the king of the jungle?
Answer:
example_title: Knowledge - Animal kingdom
- text: >-
Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.\n
Question: Who is Kobe Bryant?
Answer:
example_title: Knowledge - Sports
- text: >-
Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.\n
Question: What is the meaning of life?
Answer:
example_title: Philosophy
- text: >-
Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.\n
Question: What is the role of actuators in robotics?
Answer:
example_title: Robotics
inference:
parameters:
temperature: 0.7
top_k: 50
top_p: 0.9
max_length: 200
tags:
- chemistry
- biology
- text-generation-inference
---
# Model Card for chat-GPT2
This is a fine-tuned version of OpenAI's GPT-2, trained for chatting and question answering. For a 124M-parameter model, it shows strong general-knowledge performance.
The intended purpose of the model is to provide a capable, easy-to-use, and reliable model that can run on a consumer-level graphics card (or even a CPU).
This model vastly outperforms base GPT-2 and many other models of similar size.
### Model Description
```
license: mit
dataset: MuskumPillerum/General-Knowledge
language: en
library_name: transformers
pipeline_tag: text-generation
base_model: gpt2
Fine-tuning epochs: 50
Final fine-tuning loss: 0.06
```
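To try the model on consumer hardware, the sketch below loads it with the `transformers` library and places it on a GPU if one is available, falling back to the CPU otherwise. Note that the repo id used here is a placeholder for illustration, not the actual Hub id of this model.
```python
# Minimal loading sketch. NOTE: "your-username/chat-gpt2" is a hypothetical
# placeholder repo id; substitute the actual Hub id of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/chat-gpt2"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# A 124M-parameter GPT-2 fits comfortably on a consumer GPU or in CPU RAM.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```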
# Comparison with GPT2
GPT-2 is a text-generation model and is not designed for question answering. The following comparison shows how well the fine-tuned model performs relative to the base model.
1. Who is the king of the jungle?
```
GPT2: Who is the king of the jungle? And where is the king?'
'Babe: It is in the hills, called the forest.'
They both said something I hadn't seen before
They say the king is in his
```
```
chat-GPT2: Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: Who is the king of the jungle?
Answer: Lion is the king of the jungle.
```
2. What is AI?
```
GPT2: What is AI? Can it be done? How does it function? These are all incredibly important questions, and they've been answered in great detail, but if we want to get to what really matters, we can't just assume the answer in order
```
```
chat-GPT2: Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: What is AI?
Answer: AI is a branch of computer science that focuses on creating intelligent machines that are capable of performing tasks that would typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.\n
```
3. What is the difference between an acid and a base?
```
GPT2: What is the difference between an acid and a base?
The simplest distinction among these two structures has been between acids and bases. Acid is the water equivalent to a base — the water that enters a bottle, or the acid stored on it, or the base that is added to the glass.
```
```
chat-GPT2: Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: What is the difference between an acid and a base?
Answer: An acid is a molecule or ion that donates a proton (H+) in a chemical reaction, while a base is a molecule or ion that accepts a proton (H+) in a chemical reaction. Acids have a pH lower than 7, while bases have a pH higher than 7.\n
```
4. Should physicians be allowed to assist terminally ill patients in ending their lives?
```
GPT2: Should physicians be allowed to assist terminally ill patients in ending their lives?
Physician advocates have been asking for change throughout decades since, arguing that dying can be easily stopped by an end of life support or life support protocol. The problem is that many of them do not get the chance to.
```
```
chat-GPT2: Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: Should physicians be allowed to assist terminally ill patients in ending their lives?
Answer: My answer to this question would be that assisting terminally ill patients in ending their lives raises ethical concerns around autonomy, respect for human life, and the role of physicians in end-of-life care. It is important to ensure that any assistance in ending a patient’s life is based on a clear and legitimate medical purpose and is subject to rigorous oversight and accountability. Any decision to assist in ending a patient’s life should be subject to ethical review and should only be carried out in exceptional circumstances where there is no other option for survival.
```
5. Who is known as the greatest boxer of all time?
```
GPT2: Who is known as the greatest boxer of all time?Who is known as the greatest boxer of all time? Awwwwwwww... That's what's funny.
In his time with the New York Times, Mr. Jones had morefightswithhefightthan
```
```
chat-GPT2: Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: Who is known as the greatest boxer of all time?
Answer: Muhammad Ali is known as the greatest boxer of all time.
```
## Bias, Risks, and Limitations
This model inherits all the biases, risks, and limitations of the base GPT-2 model. However, the prompt template used with this model makes it safer than using GPT-2 directly.
## Recommendation
The ideal format for inference is:
```
Read the question and give an honest answer. Your answers should not include any unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, or does not make sense, accept it instead of giving the wrong answer.
Question: <Your question>
Answer:
```
Replace `<Your question>` with the question of your choice.
The prompt format was inspired by both the Alpaca dataset (tatsu-lab/alpaca) and Llama 2's (meta-llama/Llama-2-7b-chat-hf) prompt design.
The parameters I used for inference are:
```
top_k=50
top_p=0.9
temperature=0.7
```
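Putting the prompt template and these sampling parameters together, here is a minimal inference sketch using the `transformers` text-generation pipeline. The repo id is a placeholder, and `max_length=200` is taken from the widget configuration above.
```python
# Inference sketch with the recommended prompt format and sampling parameters.
# NOTE: "your-username/chat-gpt2" is a hypothetical placeholder repo id.
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/chat-gpt2")

prompt = (
    "Read the question and give an honest answer. Your answers should not include any "
    "unethical, racist, sexist, dangerous, or illegal content. If the question is wrong, "
    "or does not make sense, accept it instead of giving the wrong answer.\n"
    "Question: Who is the king of the jungle?\n"
    "Answer:"
)

result = generator(
    prompt,
    do_sample=True,   # required for top_k / top_p / temperature to take effect
    top_k=50,
    top_p=0.9,
    temperature=0.7,
    max_length=200,   # matches the widget inference settings above
)
print(result[0]["generated_text"])
```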
## References used
1. GPT2
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
2. MuskumPillerum/General-Knowledge