---
base_model: inceptionai/jais-adapted-7b-chat
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model

- **Developed by:** Solshine (Caleb DeLeeuw)
- **License:** apache-2.0
- **Finetuned from model:** inceptionai/jais-adapted-7b-chat (after quantization into Solshine/jais-adapted-7b-chat-Q4_K_M-GGUF)
- **Dataset:** CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update (real-world Natural Farming advice from over 12 countries and a multitude of real-world farm operations, curated into semi-synthetic data by domain experts)

V4 of the LoRA adapter (the best training loss curve among the Unsloth configs tested) was trained and then merged into this quantized GGUF.

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
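
To run the merged GGUF locally, a minimal sketch with llama-cpp-python follows. The `repo_id` shown is hypothetical and the `filename` glob is an assumption; substitute this repository's actual id and the exact quant file name from its file listing.

```python
# Minimal local-inference sketch for this GGUF using llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Solshine/jais-adapted-7b-chat-natural-farming-Q4_K_M-GGUF",  # hypothetical; use this repo's id
    filename="*Q4_K_M.gguf",  # glob matching the quantized file (assumed name)
    n_ctx=2048,               # context window; adjust to taste
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "How do I prepare a fermented plant input for my tomato crop?"}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
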
Training run summary (Unsloth, 2x faster free finetuning, 1 GPU):

- Num examples = 169
- Num epochs = 2
- Batch size per device = 2
- Gradient accumulation steps = 4
- Total batch size = 8
- Total steps = 38
- Trainable parameters = 39,976,960
Training log (38/38 steps, 03:29):

| Step | Training Loss |
|-----:|--------------:|
| 1 | 2.286800 |
| 2 | 2.205600 |
| 3 | 2.201700 |
| 4 | 2.158100 |
| 5 | 2.021100 |
| 6 | 1.820200 |
| 7 | 1.822500 |
| 8 | 1.565700 |
| 9 | 1.335700 |
| 10 | 1.225900 |
| 11 | 1.081000 |
| 12 | 0.947700 |
| 13 | 0.828600 |
| 14 | 0.830200 |
| 15 | 0.796300 |
| 16 | 0.781200 |
| 17 | 0.781600 |
| 18 | 0.815000 |
| 19 | 0.741400 |
| 20 | 0.847600 |
| 21 | 0.736600 |
| 22 | 0.714300 |
| 23 | 0.706400 |
| 24 | 0.752800 |
| 25 | 0.684600 |
| 26 | 0.647800 |
| 27 | 0.775300 |
| 28 | 0.613800 |
| 29 | 0.679500 |
| 30 | 0.752900 |
| 31 | 0.589800 |
| 32 | 0.729400 |
| 33 | 0.549500 |
| 34 | 0.638500 |
| 35 | 0.609500 |
| 36 | 0.632200 |
| 37 | 0.686400 |
| 38 | 0.724200 |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)