---
base_model: segestic/Tinystories-gpt-0.1-3m
datasets:
- roneneldan/TinyStories
inference: true
language:
- en
library_name: transformers
model_creator: segestic
model_name: Tinystories-gpt-0.1-3m
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
---

# Tinystories-gpt-0.1-3m-GGUF

Quantized GGUF model files for [Tinystories-gpt-0.1-3m](https://huggingface.co./segestic/Tinystories-gpt-0.1-3m) from [segestic](https://huggingface.co./segestic).
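
These GGUF files are meant to be run with llama.cpp or one of its bindings rather than with `transformers`. Below is a minimal sketch using the `llama-cpp-python` bindings; the GGUF filename is an assumption here, so substitute whichever quantized file you actually download from this repo.

```python
from llama_cpp import Llama

# Load a downloaded GGUF file. The filename below is an assumption;
# use the actual quantized file you fetched from this repo.
llm = Llama(model_path="tinystories-gpt-0.1-3m.q8_0.gguf")

# Generate a short story continuation
output = llm("Once upon a time there was", max_tokens=200)
print(output["choices"][0]["text"])
```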
## Original Model Card:

We used the Hugging Face transformers library to recreate the TinyStories models on a consumer GPU, using the GPT-2 architecture instead of the GPT-Neo architecture originally used in the paper (https://arxiv.org/abs/2305.07759). The output model is about 15 MB and has 3 million parameters.
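
The exact training configuration is not listed here, but a GPT-2 model in this size range can be built with a shrunken `GPT2Config`. The hyperparameter values below (context length, hidden size, layer count, head count) are illustrative assumptions, not the values used for this model:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Assumed hyperparameters for a GPT-2 in the few-million-parameter range;
# the real Tinystories-gpt-0.1-3m values may differ.
config = GPT2Config(
    vocab_size=50257,  # standard GPT-2 BPE vocabulary
    n_positions=512,   # assumed context length
    n_embd=64,         # assumed hidden size
    n_layer=8,         # assumed number of transformer blocks
    n_head=8,          # assumed number of attention heads
)

model = GPT2LMHeadModel(config)
print(f"{model.num_parameters():,} parameters")
```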
# ------ EXAMPLE USAGE 1 ------

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("segestic/Tinystories-gpt-0.1-3m")
model = AutoModelForCausalLM.from_pretrained("segestic/Tinystories-gpt-0.1-3m")

prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate a completion
output = model.generate(input_ids, max_length=1000, num_beams=1)

# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the generated text
print(output_text)
```
# ------ EXAMPLE USAGE 2 ------

Use a pipeline as a high-level helper:

```python
from transformers import pipeline

# Build a text-generation pipeline for the model
pipe = pipeline("text-generation", model="segestic/Tinystories-gpt-0.1-3m")

prompt = "where is the little girl"

# Generate a completion
output = pipe(prompt, max_length=1000, num_beams=1)

# Extract and print the generated text
generated_text = output[0]["generated_text"]
print(generated_text)
```