|
--- |
|
license: bigscience-bloom-rail-1.0 |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: Bloom-Alpaca-560m |
|
results: [] |
|
widget: |
|
- text: |-
    ### Instruction:
    Write a poem about pancakes.
|
datasets: |
|
- tatsu-lab/alpaca |
|
--- |
|
|
|
|
|
|
# Bloom-Alpaca-560m |
|
|
|
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co./bigscience/bloom-560m) on the [tatsu-lab/alpaca](https://huggingface.co./datasets/tatsu-lab/alpaca) dataset. It is an experiment to see whether the approach behind Databricks' [databricks/dolly-v1-6b](https://huggingface.co./databricks/dolly-v1-6b) can be applied to smaller models.
|
|
|
## Model description |
|
|
|
This is an LLM fine-tuned to follow instructions, similar to ChatGPT, except much, much smaller.
|
|
|
## Sample use
|
|
|
```python
from transformers import pipeline

# Load the fine-tuned model as a text-generation pipeline
generator = pipeline('text-generation', model='linkanjarad/Bloom-Alpaca-560m')

# Prompts should follow the Instruction/Response format the model was trained on
prompt = '''### Instruction:
Who was the first person on the moon?

### Response:'''

# Sample a short completion; temperature controls randomness
output = generator(prompt, do_sample=True, min_length=10, max_length=50, temperature=0.5)
print(output[0]['generated_text'])
```
|
|
|
## Intended uses & limitations |
|
|
|
The model works best when prompts follow the format of the dataset it was trained on. Here is an example of the format used in the [tatsu-lab/alpaca](https://huggingface.co./datasets/tatsu-lab/alpaca) dataset:
|
|
|
```
### Instruction:
What are the three primary colors?

### Response:
The three primary colors are red, blue, and yellow.
```
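For programmatic use, a small helper can assemble prompts in this format. The sketch below is illustrative only and is not part of the model's code; the optional `### Input:` section mirrors the Alpaca dataset's `input` field, and whether this model expects that header is an assumption.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble a prompt in the Instruction/Response format shown above."""
    prompt = f"### Instruction:\n{instruction}\n\n"
    if context:
        # Assumption: an optional "### Input:" section, following the
        # Alpaca dataset's `input` field
        prompt += f"### Input:\n{context}\n\n"
    prompt += "### Response:"
    return prompt

print(build_prompt("What are the three primary colors?"))
```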
|
|
|
Here are a few sample model outputs: |
|
|
|
```
### Instruction:
Write a poem about pancakes in the style of Dr. Suess.

### Response:
Gently bake the delicious cup of pancake goodness away
In the sweet light of a sunken window
Stay away from those who take it for granted
The magic of those who will bake and preserve
The sweetness of baked pancakes.
Once more I'm gonna bake my dreams for ever
No matter what my journey might bring
No matter how hard I come
```
|
|
|
```
### Instruction:
Generate a program in Python to generate prime numbers.

### Response:
"""
Generate a program in Python to generate prime numbers.
"""

import random
import math
import math.exp

def generate_primes():
    m = random.randint(1, 10)
    result = random.randint(1, 10)

    for i in range(m):
        result = generate_primes()
        if result == result:
            break
    return result
```
|
|
|
```
### Instruction:
How was the steam engine invented?

### Response:
The steam engine was invented by William George Taylor, a French engineer and inventor who used the technology of steam turbines to create engines. Taylor first designed a turbine that used a rotating disc with a spiral gear that accelerated the air through an annular passage. By expanding the annular passage, Taylor's turbine gained power and became a powerful engine. Taylor then used the spiral gear to create a turbine that drove a rail with a
```
|
|
|
```
### Instruction:
Generate a list of marketing strategies to promote a new mobile app.

### Response:
1. Create a campaign to promote the app's launch.
2. Provide a promotional video and posters.
3. Create a mobile app store campaign.
4. Publish a newsletter to attract potential customers.
5. Make a list of key features of the app.
6. Establish a referral program.
7. Attend marketing conferences and events.
8. Participate in customer surveys.
9. Offer loyalty rewards and discounts.
10. Utilize social media to promote the app.
11. Create a user-friendly advertising campaign.
12. Utilize social media to promote the app.
```
|
|
|
## Training procedure
|
|
|
The model was trained with Hugging Face's 🤗 `Trainer` for about 1 epoch.
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a rough sketch of how they might map onto `TrainingArguments` follows the list):
|
- learning_rate: 2e-05 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- gradient_accumulation_steps: 128 |
|
- total_train_batch_size: 128 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_ratio: 0.15 |
|
- num_epochs: 1 |
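
As a rough illustration only, since the actual training script is not included in this repository, these values would map onto 🤗 `TrainingArguments` roughly as follows; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters onto TrainingArguments;
# this is not the original training script.
training_args = TrainingArguments(
    output_dir="bloom-alpaca-560m",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=128,  # effective train batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    num_train_epochs=1,
    seed=42,
)
```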
|
|
|
### Framework versions |
|
|
|
- Transformers 4.27.4 |
|
- Pytorch 1.13.0 |
|
- Datasets 2.1.0 |
|
- Tokenizers 0.13.2 |
|
|