---
license: cc-by-nc-sa-4.0
datasets:
- HumanF-MarkrAI/Korean-RAG-ver2
language:
- ko
tags:
- Retrieval Augmented Generation
- RAG
- Multi-domain
---

# MarkrAI/RAG-KO-Mixtral-7Bx2-v1.15

# Model Details

## Model Developers
MarkrAI - AI Researchers

## Base Model
[DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2](https://huggingface.co./DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2).

## Instruction Tuning Method
Using QLoRA.
```
4-bit quantization
Lora_r: 64
Lora_alpha: 64
Lora_dropout: 0.05
Lora_target_modules: [embed_tokens, q_proj, k_proj, v_proj, o_proj, gate, w1, w2, w3, lm_head]
```

## Hyperparameters
```
Epoch: 3
Batch size: 64
Learning_rate: 1e-5
Learning scheduler: linear
Warmup_ratio: 0.06
```

## Datasets
Private datasets: [HumanF-MarkrAI/Korean-RAG-ver2](https://huggingface.co./datasets/HumanF-MarkrAI/Korean-RAG-ver2)
```
Built using Aihub datasets.
```

## Implementation Code
```
# Load MarkrAI/RAG-KO-Mixtral-7Bx2-v1.15 and its tokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "MarkrAI/RAG-KO-Mixtral-7Bx2-v1.15"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```

# Model Benchmark
- Coming soon...
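
# Usage Example

A minimal inference sketch that continues from the Implementation Code section above, reusing the `OpenOrca` model and `OpenOrca_tokenizer` objects defined there. The document-plus-question prompt layout shown here is only an illustrative assumption, not an official prompt template for this model.

```
# Minimal inference sketch (continues from the Implementation Code section above).
# NOTE: the prompt layout below is an assumption for illustration, not an official template.
context = "..."   # retrieved passage(s) the answer should be grounded in
question = "..."  # user question

# Korean prompt: "Answer the question by referring to the following document."
prompt = f"다음 문서를 참고하여 질문에 답하세요.\n\n문서:\n{context}\n\n질문: {question}\n\n답변:"

inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
with torch.no_grad():
    output_ids = OpenOrca.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,
    )
print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```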