student-abdullah committed on
Commit
a2b148e
1 Parent(s): af3026a

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
 - **Developed by:** student-abdullah
 - **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B
 - **Created on:** 27th September, 2024
-- **Full model:** student-abdullah/llama3.1_medicine_hinglish_fine-tuned_27-09_8bits_gguf
+- **Full model:** student-abdullah/llama3.1_medicine_hinglish_fine-tuned_26-09_8bits_gguf
 
 ---
 # Acknowledgement
@@ -24,7 +24,7 @@ This LoRA adapter layer model is fine-tuned from the meta-llama/Meta-Llama-3.1-8
 
 - Fine Tuning Template: Llama 3.1 Q&A
 - Max Tokens: 512
-- LoRA Alpha: 16
+- LoRA Alpha: 32
 - LoRA Rank (r): 128
 - Learning rate: 2e-4
 - Gradient Accumulation Steps: 2
@@ -32,7 +32,7 @@ This LoRA adapter layer model is fine-tuned from the meta-llama/Meta-Llama-3.1-8
 
 ---
 # Model Quantitative Performace
-- Training Quantitative Loss: 0.141 (at final 300th epoch)
+- Training Quantitative Loss: 0.1368 (at final 300th epoch)
 
 ---
 # Limitations
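The hyperparameters this commit settles on (LoRA Alpha 32, the corrected value) can be gathered into a small config sketch. This is illustrative only: the dict names below are hypothetical, and the card does not publish its actual training script.

```python
# Minimal sketch, assuming the hyperparameters listed on this model card.
# All variable names here are illustrative, not from the card's training code.
lora_hyperparams = {
    "r": 128,          # LoRA Rank (r), from the card
    "lora_alpha": 32,  # LoRA Alpha, as corrected by this commit (was 16)
}

training_hyperparams = {
    "max_tokens": 512,                 # Max Tokens
    "learning_rate": 2e-4,             # Learning rate
    "gradient_accumulation_steps": 2,  # Gradient Accumulation Steps
}

# The effective LoRA scaling factor is alpha / r, so this commit's change
# from alpha = 16 to alpha = 32 doubles the adapter's contribution.
scaling = lora_hyperparams["lora_alpha"] / lora_hyperparams["r"]
print(scaling)  # 0.25
```

With a library such as Hugging Face `peft`, these values would map onto `LoraConfig(r=128, lora_alpha=32)`; that mapping is an assumption here, since the card does not name its training framework.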