|
--- |
|
license: mit |
|
datasets: |
|
- tatsu-lab/alpaca |
|
language: |
|
- en |
|
metrics: |
|
- accuracy |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
widget: |
|
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: What is artificial intelligence?\nResponse: " |
|
example_title: "Knowledge-AI" |
|
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: Write a poem on cows\nResponse: " |
|
example_title: "Poem Generation" |
|
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: What is the meaning of life?\nResponse: " |
|
example_title: "Philosophy" |
|
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: Why is the sky blue?\nResponse: " |
|
example_title: "Knowledge-sky" |
|
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\nInstruction: Define the term torque.\nResponse: " |
|
example_title: "Knowledge-torque" |
|
inference: |
|
parameters: |
|
temperature: 0.7 |
|
top_k: 50 |
|
top_p: 0.9
|
max_length: 200 |
|
--- |
|
|
|
# Model Card for GPT-2 Fine-Tuned on Alpaca
|
This is a fine-tuned version of OpenAI's GPT-2, trained on the Alpaca dataset.

It was fine-tuned for 20 epochs with a batch size of 14.
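A minimal sketch of how such a fine-tune might be reproduced with the `transformers` Trainer is shown below. This is not the author's exact training script: the sequence length, truncation, and the decision to ignore the dataset's optional `input` column are assumptions.

```
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Alpaca-style template from this card (the dataset's optional `input`
# column is ignored here, since the card's template has no Input field).
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "Instruction:{instruction}\n"
    "Response: {output}"
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("tatsu-lab/alpaca", split="train")

def tokenize(example):
    text = TEMPLATE.format(
        instruction=example["instruction"], output=example["output"]
    )
    return tokenizer(text, truncation=True, max_length=512)  # length is an assumption

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-alpaca",          # hypothetical output path
        num_train_epochs=20,               # 20 epochs, as stated above
        per_device_train_batch_size=14,    # batch size 14, as stated above
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```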
|
|
|
|
|
### Model Description |
|
|
|
``` |
|
license: mit |
|
dataset: tatsu-lab/alpaca |
|
language: en |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
base_model: gpt2 |
|
``` |
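Once on the Hub, the model can be loaded with the standard `transformers` API. A minimal sketch; the repository id `gpt2-alpaca` below is a placeholder for this model's actual Hub id:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2-alpaca"  # placeholder -- replace with this model's actual Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```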
|
|
|
## Examples |
|
|
|
1. Give me tips to be healthy |
|
``` |
|
Here are some tips to be healthy: |
|
- Eat a balanced diet that includes plenty of fruits, vegetables, whole grains, lean proteins, and healthy fats. |
|
- Exercise regularly to maintain a healthy weight and reduce the risk of developing any health issues. |
|
- Get enough sleep and stay hydrated by drinking plenty of water. |
|
- Get enough fiber and healthy fats to support your overall health and wellbeing. |
|
- Practice mindful breathing and avoid activities that can increase stress. |
|
- Get enough sleep and maintain a healthy weight. |
|
``` |
|
|
|
2. Write a poem on cows |
|
``` |
|
Cows are beloved creatures, |
|
Families who love to cuddle, |
|
In their humble homes they rest. |
|
|
|
Cows are the bread and butter of life, |
|
They can be found in the fields and gardens, |
|
Birds and insects to feed us. |
|
|
|
Cows are the sweet smell of life, |
|
A source of joy and happiness, |
|
Providing shelter and a home to rest. |
|
|
|
Cows are the faithful guardians of the land, |
|
They will protect our spirits from strife, |
|
In their care they will ensure that our food and shelter will be nourished. |
|
|
|
Cows are the beloved petals of life, |
|
They are loyal companions and faithful stewards, |
|
Their presence will always be seen. |
|
``` |
|
|
|
3. Which is the fastest animal in the world?
|
``` |
|
The cheetah is the fastest animal in the world. |
|
``` |
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
This model inherits all the biases, risks, and limitations of the base GPT-2 model.
|
|
|
|
|
## Recommendation |
|
|
|
The recommended prompt format for inference is:
|
``` |
|
Below is an instruction that describes a task. Write a response that appropriately completes the request. |
|
Instruction:{question} |
|
Response: |
|
``` |
|
Replace `{question}` with the question of your choice.
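For example, a small helper that fills in this template (a sketch; the `build_prompt` name is illustrative):

```
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "Instruction:{question}\n"
    "Response: "
)

def build_prompt(question: str) -> str:
    # Drop the user's question into the Alpaca-style template above.
    return PROMPT_TEMPLATE.format(question=question)
```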
|
|
|
The parameters I used for inference are: |
|
``` |
|
top_k=20

top_p=0.9

temperature=0.7
|
``` |
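Putting it together with `model.generate` (a sketch; `model`, `tokenizer`, and `build_prompt` come from the snippets above, and `max_length=200` follows the widget settings):

```
import torch

prompt = build_prompt("Why is the sky blue?")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        do_sample=True,       # sampling must be enabled for top_k/top_p/temperature
        top_k=20,
        top_p=0.9,
        temperature=0.7,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```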
|
|
|
|
|
## References
|
|
|
1. GPT-2
|
@article{radford2019language,

  title={Language Models are Unsupervised Multitask Learners},

  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},

  year={2019}

}
|
|
|
2. tatsu-lab/alpaca |
|
@misc{alpaca,

  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},

  title = {Stanford Alpaca: An Instruction-following LLaMA model},

  year = {2023},

  publisher = {GitHub},

  journal = {GitHub repository},

  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},

}
|
|
|
|
|
|