---
license: llama3
language:
- ko
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

## Model Details

### Model Description

Fine-tuned with axolotl on publicly available Korean and English datasets.

GPUs used: 2× L40S

### Score

```
"scores": {
    "AVG_llm_kr_eval": "0.3988",
    "EL": "0.1773",
    "FA": "0.2019",
    "NLI": "0.6033",
    "QA": "0.3700",
    "RC": "0.6417",
    "klue_ner_set_f1": "0.1746",
    "klue_re_exact_match": "0.1800",
    "kmmlu_preview_exact_match": "0.2800",
    "kobest_copa_exact_match": "0.8800",
    "kobest_hs_exact_match": "0.3200",
    "kobest_sn_exact_match": "0.8600",
    "kobest_wic_exact_match": "0.4600",
    "korea_cg_bleu": "0.2019",
    "kornli_exact_match": "0.6100",
    "korsts_pearson": "0.5292",
    "korsts_spearman": "0.5358"
}
```

### Built with Meta Llama 3 License

Llama 3 License: https://llama.meta.com/llama3/license

### Applications

This fine-tuned model is particularly suited for [mention applications, e.g., chatbots, question-answering systems, etc.]. Its enhanced capabilities enable more accurate and contextually appropriate responses in these domains.

### Limitations and Considerations

While our fine-tuning process has optimized the model for specific tasks, it is important to acknowledge potential limitations. The model's performance can still vary with the complexity of the task and the specifics of the input data. Users are encouraged to evaluate the model thoroughly in their own context to ensure it meets their requirements.
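As a usage sketch, the model can be loaded with Hugging Face `transformers` like any other Llama 3 fine-tune. The repo id below is a placeholder (the actual id is not stated in this card), and the prompt-assembly helper assumes Meta's published Llama 3 chat format; verify both against the actual repository and tokenizer chat template before relying on them.

```python
MODEL_ID = "your-org/your-model-name"  # placeholder: substitute the actual repo id


def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn chat prompt in Meta's Llama 3 format."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def generate(user_message: str,
             system_message: str = "You are a helpful assistant.",
             max_new_tokens: int = 256) -> str:
    """Load the model lazily and generate a completion for one user turn."""
    # Imports kept inside the function so the prompt helper above
    # can be used without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = build_prompt(system_message, user_message)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens after the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("안녕하세요! 자기소개를 해주세요."))
```

Since the tokenizer ships with a chat template, `tokenizer.apply_chat_template(...)` is generally preferable to hand-built prompt strings; the explicit format above is shown only to make the Llama 3 turn structure visible.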