Update README.md
## Model Details

The base model, unsloth/gemma-2-9b, supports RoPE scaling, 4-bit quantization for memory efficiency, and fine-tuning with LoRA (Low-Rank Adaptation). Flash Attention 2 is used to enable softcapping and improve training efficiency.
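As a rough sketch of how these pieces fit together, loading the base model in 4-bit and attaching LoRA adapters with Unsloth might look like the following. All hyperparameter values here (sequence length, LoRA rank, alpha, target modules) are illustrative assumptions, not this repository's recorded training configuration.

```python
# Illustrative settings for 4-bit loading + LoRA fine-tuning with Unsloth.
# Values are assumptions for demonstration, not the repo's actual config.
load_kwargs = dict(
    model_name="unsloth/gemma-2-9b",
    max_seq_length=2048,   # assumed; RoPE scaling allows longer contexts
    load_in_4bit=True,     # 4-bit quantization for memory efficiency
)
lora_kwargs = dict(
    r=16,                  # LoRA rank (assumed)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[       # typical attention + MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# On a GPU machine with `unsloth` installed, these settings would be used as:
# from unsloth import FastLanguageModel
# model, tokenizer = FastLanguageModel.from_pretrained(**load_kwargs)
# model = FastLanguageModel.get_peft_model(model, **lora_kwargs)
```

Keeping the configuration in plain dicts like this makes it easy to log or swap hyperparameters without touching the loading code.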
## Prompt Design

A custom Hindi Alpaca-style prompt template formats instructions, inputs, and expected outputs in a conversational structure.
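A minimal sketch of such a template is shown below. The Hindi preamble and section headers here are a hypothetical rendering of the standard Alpaca format; the exact wording used in this repository may differ.

```python
# Hypothetical Hindi Alpaca-style template (section wording is assumed,
# mirroring the standard Alpaca instruction/input/response layout).
hindi_alpaca_template = (
    "नीचे एक निर्देश है जो एक कार्य का वर्णन करता है, साथ में एक इनपुट भी दिया गया है। "
    "ऐसा उत्तर लिखें जो अनुरोध को सही ढंग से पूरा करे।\n\n"
    "### निर्देश:\n{instruction}\n\n"
    "### इनपुट:\n{input}\n\n"
    "### उत्तर:\n{output}"
)

def format_example(instruction: str, input_text: str = "", output: str = "") -> str:
    """Fill the template; leave `output` empty at inference time so the
    model completes the answer section itself."""
    return hindi_alpaca_template.format(
        instruction=instruction, input=input_text, output=output
    )

prompt = format_example("भारत की राजधानी क्या है?")
```

At training time the same function is called with `output` filled in, so the model learns to continue the `### उत्तर:` section.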