---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- tinyllamacoder-py
- coder-py
- coder
base_model: unsloth/tinyllama-bnb-4bit
---

**Training run (Unsloth, 2x faster free finetuning):** 1 GPU, 967 examples, 1 epoch, batch size per device 2, gradient accumulation steps 16 (total batch size 32), 30 total steps, 100,925,440 trainable parameters. Training took roughly 26 minutes (30/30 steps, epoch 0/1).

| Step | Training Loss |
|-----:|--------------:|
| 1 | 1.737000 |
| 2 | 1.738000 |
| 3 | 1.384700 |
| 4 | 1.086400 |
| 5 | 1.009600 |
| 6 | 0.921000 |
| 7 | 0.830400 |
| 8 | 0.808900 |
| 9 | 0.774500 |
| 10 | 0.759900 |
| 11 | 0.736100 |
| 12 | 0.721200 |
| 13 | 0.733200 |
| 14 | 0.701000 |
| 15 | 0.711700 |
| 16 | 0.701400 |
| 17 | 0.689500 |
| 18 | 0.678800 |
| 19 | 0.675200 |
| 20 | 0.680500 |
| 21 | 0.685800 |
| 22 | 0.681200 |
| 23 | 0.672000 |
| 24 | 0.679900 |
| 25 | 0.675500 |
| 26 | 0.666600 |
| 27 | 0.687900 |
| 28 | 0.653600 |
| 29 | 0.672500 |
| 30 | 0.660900 |

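The log above corresponds to a fairly standard Unsloth + TRL supervised finetuning setup. The sketch below is a hedged reconstruction, not the exact script used for this checkpoint: only the per-device batch size, gradient accumulation steps, and step count come from the log, while the dataset file, LoRA settings, sequence length, and learning rate are placeholders.

```python
# Hedged sketch of a TinyLlama + Unsloth + TRL finetuning setup.
# Only per_device_train_batch_size, gradient_accumulation_steps and max_steps
# come from the training log above; everything else is a placeholder assumption.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",  # base model listed in the card metadata
    max_seq_length=2048,                      # assumption, not stated in the card
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a pre-formatted "text" column (the log shows 967 examples).
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",            # newer TRL releases move this into SFTConfig
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,    # from the log
        gradient_accumulation_steps=16,   # from the log (total batch size 32)
        max_steps=30,                     # from the log
        learning_rate=2e-4,               # assumption
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```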


# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
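To try the uploaded checkpoint, a minimal inference sketch with `transformers` might look like the following. The repository id is a guess based on the card's author and tags, not something the card states; replace it with the actual repo path.

```python
# Minimal inference sketch; the repo id below is hypothetical, not taken from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Ramikan-BR/tinyllamacoder-py"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```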