---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
  - cerebras/SlimPajama-627B
  - bigcode/starcoderdata
  - HuggingFaceH4/ultrachat_200k
  - HuggingFaceH4/ultrafeedback_binarized
language:
  - en
widget:
  - example_title: Fibonacci (Python)
    messages:
      - role: system
        content: You are a chatbot who can help code!
      - role: user
        content: >-
          Write me a function to calculate the first 10 digits of the fibonacci
          sequence in Python and print it out to the CLI.
tags:
  - transformers
  - safetensors
  - finetuned
  - 4-bit
  - AWQ
  - text-generation
  - text-generation-inference
  - autotrain_compatible
  - endpoints_compatible
  - chatml
model_creator: TinyLlama
model_name: TinyLlama-1.1B-Chat-v1.0
inference: false
pipeline_tag: text-generation
quantized_by: Suparious
---

# TinyLlama/TinyLlama-1.1B-Chat-v1.0 AWQ

## Model Summary

This repo contains an AWQ 4-bit quantization of TinyLlama/TinyLlama-1.1B-Chat-v1.0, a chat model fine-tuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T following HF's Zephyr training recipe. Per the upstream model card, the model was "initially fine-tuned on a variant of the UltraChat dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with 🤗 TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
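
Since this is a 4-bit AWQ model, it can be loaded with the AutoAWQ library. The snippet below is a minimal sketch, not an official usage guide: it assumes the `autoawq` and `transformers` packages plus a CUDA device, the repo id is a placeholder for this model's actual Hub path, and the chat messages simply reuse the widget example from the metadata above.

```python
# Minimal AWQ inference sketch (pip install autoawq transformers).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder: substitute the actual Hub id of this quantized repo.
quant_path = "<this-repo-id>"

model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# The widget example from the metadata, expressed as chat messages.
messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {
        "role": "user",
        "content": "Write me a function to calculate the first 10 digits "
        "of the fibonacci sequence in Python and print it out to the CLI.",
    },
]

# apply_chat_template renders the messages with whatever chat template the
# tokenizer ships (the tags above indicate ChatML-style formatting).
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```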