---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- example_title: Fibonacci (Python)
messages:
- role: system
content: You are a chatbot who can help code!
- role: user
content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.
tags:
- transformers
- safetensors
- finetuned
- 4-bit
- AWQ
- text-generation
- text-generation-inference
- autotrain_compatible
- endpoints_compatible
- chatml
model_creator: TinyLlama
model_name: TinyLlama-1.1B-Chat-v1.0
inference: false
pipeline_tag: text-generation
quantized_by: Suparious
---
# TinyLlama/TinyLlama-1.1B-Chat-v1.0 AWQ
- Model creator: [TinyLlama](https://huggingface.co./TinyLlama)
- Original model: [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## Model Summary
This is the chat model fine-tuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co./TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co./HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co./datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co./datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
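For reference, the DPO alignment step described above can be reproduced along these lines. This is a minimal sketch rather than the original training script: it assumes a recent `trl` release (one that provides `DPOConfig`), uses the binarized UltraFeedback split listed in this card's metadata, and picks illustrative hyperparameters such as `beta=0.1`; the published chat model id stands in for the intermediate SFT checkpoint.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Stand-in starting point: in the recipe above this would be the UltraChat-SFT checkpoint.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs (prompt / chosen / rejected) from the binarized UltraFeedback dataset.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

# beta controls how far the DPO policy may drift from the frozen reference model.
args = DPOConfig(output_dir="tinyllama-1.1b-dpo", beta=0.1, per_device_train_batch_size=2)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older trl releases name this argument `tokenizer`
)
trainer.train()
```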
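Because this repository ships a 4-bit AWQ quantization of the chat model, the sketch below shows one way to run it with the `autoawq` loader and the tokenizer's built-in chat template. The repo id is a placeholder (substitute this model's actual path on the Hub), and a CUDA GPU is assumed.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id for this AWQ quant; replace with the actual model path.
quant_path = "path/to/TinyLlama-1.1B-Chat-v1.0-AWQ"

model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# Same prompt as the widget example in this card's metadata.
messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {
        "role": "user",
        "content": "Write me a function to calculate the first 10 digits of "
                   "the fibonacci sequence in Python and print it out to the CLI.",
    },
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same checkpoint can also be served with AWQ-aware runtimes such as vLLM or text-generation-inference; the loader above is simply the most direct way to try the quantized weights locally.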