---
language:
- en
- hi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
datasets:
- yahma/alpaca-cleaned
- ravithejads/samvaad-hi-filtered
- HydraIndicLM/hindi_alpaca_dolly_67k
---
# Gemma-2B-Hinglish-LoRA-v1.0 model
- **Developed by:** [Kiran Kunapuli](https://www.linkedin.com/in/kirankunapuli/)
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit
- **Model config:**
```python
from unsloth import FastLanguageModel

# `model` is the 4-bit Gemma-2B base model loaded beforehand via
# FastLanguageModel.from_pretrained("unsloth/gemma-2b-bnb-4bit", ...)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 32,
    lora_dropout = 0,  # no dropout on the adapter layers
    bias = "none",  # bias terms are not trained
    use_gradient_checkpointing = True,  # trade compute for memory
    random_state = 42,
    use_rslora = True,  # rank-stabilized LoRA scaling
    loftq_config = None,
)
```
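  Note that `use_rslora = True` switches the adapter scaling from the standard LoRA factor `lora_alpha / r` to the rank-stabilized `lora_alpha / sqrt(r)`. A quick sanity check of the two factors for this config (plain arithmetic, not Unsloth internals):
```python
import math

r, lora_alpha = 16, 32
print(lora_alpha / r)             # 2.0 -> standard LoRA scaling
print(lora_alpha / math.sqrt(r))  # 8.0 -> rank-stabilized (rsLoRA) scaling
```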
- **Training parameters:**
```python
import torch
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,  # instruction data formatted into a "text" field
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = True,  # pack short sequences together for throughput
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 120,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),  # fp16 on pre-Ampere GPUs (e.g. T4)
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",  # 8-bit AdamW to cut optimizer memory
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 42,
        output_dir = "outputs",
        report_to = "wandb",
    ),
)
```
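  For reference, the effective batch size implied by these arguments (and confirmed by the Unsloth log below) works out as:
```python
per_device_train_batch_size = 2
gradient_accumulation_steps = 4
num_gpus = 1

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)       # 8, matching "Total batch size = 8" in the log
print(effective_batch_size * 120) # 960 packed sequences processed over 120 steps
```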
- **Training details:**
```
==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
\\ /| Num examples = 14,343 | Num Epochs = 1
O^O/ \_/ \ Batch size per device = 2 | Gradient Accumulation steps = 4
\ / Total batch size = 8 | Total steps = 120
"-____-" Number of trainable parameters = 19,611,648
GPU = Tesla T4. Max memory = 14.748 GB.
2118.7553 seconds used for training.
35.31 minutes used for training.
Peak reserved memory = 9.172 GB.
Peak reserved memory for training = 6.758 GB.
Peak reserved memory % of max memory = 62.191 %.
Peak reserved memory for training % of max memory = 45.823 %.
```
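- **Inference example:** a minimal sketch with Unsloth; the repo id and the Alpaca-style prompt template below are assumptions (the card does not specify them), so adjust them to the actual published checkpoint:
```python
from unsloth import FastLanguageModel

# NOTE: repo id is an assumed placeholder; replace with the actual model id
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "kirankunapuli/Gemma-2B-Hinglish-LoRA-v1.0",
    max_seq_length = 2048,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# Alpaca-style prompt (assumed, since the training datasets are Alpaca-format)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate to Hinglish: How are you doing today?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors = "pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens = 64)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```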
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)