---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - tinyllamacoder-py
  - coder-py
  - coder
base_model: unsloth/tinyllama-bnb-4bit

---

# Training log

Unsloth (2x faster free finetuning) run summary, reconstructed from the console banner:

- Num GPUs: 1
- Num examples: 967
- Num epochs: 1
- Batch size per device: 2
- Gradient accumulation steps: 16
- Total batch size: 32 (2 per device x 16 accumulation steps)
- Total steps: 30
- Trainable parameters: 100,925,440
- Runtime: 26:26 for 30/30 steps (Epoch 0/1)

| Step | Training Loss |
|-----:|--------------:|
| 1 | 1.737000 |
| 2 | 1.738000 |
| 3 | 1.384700 |
| 4 | 1.086400 |
| 5 | 1.009600 |
| 6 | 0.921000 |
| 7 | 0.830400 |
| 8 | 0.808900 |
| 9 | 0.774500 |
| 10 | 0.759900 |
| 11 | 0.736100 |
| 12 | 0.721200 |
| 13 | 0.733200 |
| 14 | 0.701000 |
| 15 | 0.711700 |
| 16 | 0.701400 |
| 17 | 0.689500 |
| 18 | 0.678800 |
| 19 | 0.675200 |
| 20 | 0.680500 |
| 21 | 0.685800 |
| 22 | 0.681200 |
| 23 | 0.672000 |
| 24 | 0.679900 |
| 25 | 0.675500 |
| 26 | 0.666600 |
| 27 | 0.687900 |
| 28 | 0.653600 |
| 29 | 0.672500 |
| 30 | 0.660900 |
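For reference, the setup below is a minimal sketch of an Unsloth + TRL fine-tuning run using the hyperparameters from the log above. The dataset file, LoRA settings, sequence length, and learning rate are illustrative assumptions and are not taken from the actual training script.

```python
# Minimal sketch only: dataset path, LoRA settings, max_seq_length and
# learning_rate are assumptions; batch size, accumulation and steps match the log.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",  # base model from the metadata above
    max_seq_length=2048,                      # assumed context length
    load_in_4bit=True,
)

# LoRA adapter; rank and target modules are assumptions, not the card's values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset: any text dataset with a "text" field works here.
dataset = load_dataset("json", data_files="python_code.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # matches "Batch size per device = 2"
        gradient_accumulation_steps=16,  # matches the log; 2 x 16 = 32 total
        max_steps=30,                    # matches "Total steps = 30"
        learning_rate=2e-4,              # assumed
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```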



# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
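
To try the model for Python code completion, a minimal Transformers loading example follows. The repo id is inferred from this card's tags and is an assumption; replace it with the actual model location, and adjust the generation settings to taste.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this card's tags; replace with the actual model path.
model_id = "Ramikan-BR/tinyllamacoder-py"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "# Python function that returns the n-th Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```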