---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- en
- hi
datasets:
- student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Hinglish_Dataset
---

# LoRA Adapter Layers!

# Uploaded model

- **Developed by:** student-abdullah
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B
- **Created on:** 27th September, 2024
- **Full model:** student-abdullah/llama3.1_medicine_hinglish_fine-tuned_26-09_8bits_gguf

---

# Acknowledgement

[Unsloth](https://github.com/unslothai/unsloth)

---

# Model Description

This LoRA adapter model was fine-tuned from the meta-llama/Meta-Llama-3.1-8B base model to specialise in generic medications under the PMBJP scheme. The fine-tuning process used the following hyperparameters (a hedged configuration sketch appears at the end of this card):

- Fine-Tuning Template: Llama 3.1 Q&A
- Max Tokens: 512
- LoRA Alpha: 32
- LoRA Rank (r): 128
- Learning Rate: 2e-4
- Gradient Accumulation Steps: 2
- Batch Size: 12

---

# Model Quantitative Performance

- Training Loss: 0.1368 (at the 300th and final epoch)

---

# Limitations

- This repository contains only the LoRA adapter layers, not a fully merged model; the adapters must be applied on top of the base model (see the loading sketch below).
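---

# Example: LoRA Configuration Sketch

A minimal sketch of a PEFT `LoraConfig` mirroring the hyperparameters listed above. The `target_modules` and `lora_dropout` values are assumptions (typical Llama projection layers), not details confirmed by this card.

```python
from peft import LoraConfig

# Mirrors the hyperparameters listed in the Model Description section.
# target_modules and lora_dropout are assumptions, not stated in the card.
lora_config = LoraConfig(
    r=128,                # LoRA rank (r), as listed above
    lora_alpha=32,        # LoRA alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.0,     # assumption
    bias="none",
    task_type="CAUSAL_LM",
)
```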
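---

# Example: Loading the Adapter

A minimal sketch, assuming the Transformers and PEFT libraries, of applying these adapter layers on top of the base model. The adapter repository id below is a placeholder; substitute this repository's actual id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "student-abdullah/<this-adapter-repo>"  # placeholder id

# Load the base model, then attach the LoRA adapter layers on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative Hinglish prompt in the dataset's Q&A style (hypothetical).
prompt = "Paracetamol ka generic alternative PMBJP scheme ke under kya hai?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```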