|
---
license: mit
datasets:
- tatsu-lab/alpaca
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
widget:
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: What is artificial intelligence?\nResponse: "
  example_title: "Knowledge-AI"
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: Write a haiku poem on cows\nResponse: "
  example_title: "Poem Generation"
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: What is the meaning of life?\nResponse: "
  example_title: "Philosophy"
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: Why is the sky blue?\nResponse: "
  example_title: "Knowledge-sky"
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: Define electrostatic potential\nResponse: "
  example_title: "Knowledge-electricity"
inference:
  parameters:
    temperature: 0.7
    top_k: 50
    top_p: 0.9
    max_length: 200
---
|
|
|
# Model Card for GPT-2 Fine-Tuned on Alpaca

This is a fine-tuned version of OpenAI's GPT-2, trained on the Alpaca instruction dataset.

It was fine-tuned for 20 epochs with a batch size of 14.
|
|
|
|
|
### Model Description |
|
|
|
```
license: mit
dataset: tatsu-lab/alpaca
language: en
library_name: transformers
pipeline_tag: text-generation
base_model: gpt2
```
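
For reference, here is a minimal fine-tuning sketch along the lines described above (base `gpt2`, the `tatsu-lab/alpaca` dataset, 20 epochs, batch size 14). The original training script is not part of this card, so the tokenization choices and remaining hyperparameters below are assumptions, not the exact recipe:

```
# Minimal fine-tuning sketch, assuming the setup stated in this card:
# base gpt2, tatsu-lab/alpaca, 20 epochs, batch size 14.
# Everything else (max_length, default learning rate) is an assumption.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The Alpaca dataset provides a pre-formatted "text" field that combines
# each instruction, optional input, and output into a single training string.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-alpaca",
        num_train_epochs=20,             # as stated above
        per_device_train_batch_size=14,  # as stated above
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```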
|
|
|
## Examples |
|
|
|
1. Give me tips to be healthy |
|
```
Here are some tips to be healthy:
- Eat a balanced diet that includes plenty of fruits, vegetables, whole grains, lean proteins, and healthy fats.
- Exercise regularly to maintain a healthy weight and reduce the risk of developing any health issues.
- Get enough sleep and stay hydrated by drinking plenty of water.
- Get enough fiber and healthy fats to support your overall health and wellbeing.
- Practice mindful breathing and avoid activities that can increase stress.
- Get enough sleep and maintain a healthy weight.
```
|
|
|
2. Write a poem on cows |
|
```
Cows are one of the most beloved animals in the world,
They live in lush green fields and care for the ground.
Their milk is a sweet beverage,
They live in a world of peace and tranquility.

Though cows may seem small in comparison,
They have a special bond with their owners.
They care for their cows, and they do it right.
The milk they produce is vital,
They have a special place in our hearts.
They are loyal friends, and they will never be defeated.
Cows are a source of joy and joy,
They will never be out of place again.
```
|
|
|
3. Which is the fastest animal in the world?
|
```
The cheetah is the fastest animal in the world.
```
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
This model inherits all the biases, risks, and limitations of the base GPT-2 model.
|
|
|
|
|
## Recommendation |
|
|
|
The recommended prompt format for inference is:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
Instruction: {question}
Response: 
```

Replace {question} with the question of your choice.
|
|
|
The parameters I used for inference are:

```
top_k = 20
top_p = 0.9
temperature = 0.7
```
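
Putting the prompt format and these parameters together, here is a minimal generation sketch with the `transformers` pipeline. The model ID below is a placeholder (substitute this repository's actual name), and `max_length=200` is taken from the widget settings in the metadata:

```
# Minimal inference sketch; "user/gpt2-alpaca" is a placeholder model ID.
from transformers import pipeline

generator = pipeline("text-generation", model="user/gpt2-alpaca")

question = "Why is the sky blue?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    f"Instruction: {question}\n"
    "Response: "
)

output = generator(
    prompt,
    do_sample=True,   # required for top_k/top_p/temperature to take effect
    top_k=20,
    top_p=0.9,
    temperature=0.7,
    max_length=200,
)
print(output[0]["generated_text"])
```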
|
|
|
|
|
## References

1. GPT-2

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

2. tatsu-lab/alpaca

```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}
}
```
|
|
|
|
|
|