---
datasets:
- sean0042/KorMedMCQA
language:
- ko
- en
pipeline_tag: text-generation
---

### Model Card for MDDDDR/gemma-2b-it-v0.1

base_model : [google/gemma-2b-it](https://huggingface.co./google/gemma-2b-it)

### Basic usage

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/gemma-2b-it-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/gemma-2b-it-v0.1",
    device_map="auto",
    torch_dtype=torch.bfloat16  # torch has no bfloat32; bfloat16 matches the training compute dtype
)

input_text = "사과가 뭐야?"  # "What is an apple?"
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Training dataset

dataset : [sean0042/KorMedMCQA](https://huggingface.co./datasets/sean0042/KorMedMCQA)

### lora_config and bnb_config in Training

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit = True,
    bnb_4bit_use_double_quant = True,
    bnb_4bit_quant_type = 'nf4',
    bnb_4bit_compute_dtype = torch.bfloat16
)

lora_config = LoraConfig(
    r = 32,
    lora_alpha = 32,
    lora_dropout = 0.05,
    target_modules = ['q_proj', 'k_proj', 'v_proj', 'o_proj',
                      'gate_proj', 'up_proj', 'down_proj']
)
```

### Hardware

A100 40GB x 1
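### Fine-tuning sketch (not the exact training script)

This card does not include the full training script, so the sketch below only shows how the two configs above could be wired together for QLoRA fine-tuning. The `doctor` subset, the `format_example` prompt template, and every hyperparameter passed to `TrainingArguments` are assumptions for illustration, not confirmed details of the actual run.

```python
# pip install transformers datasets peft trl bitsandbytes accelerate
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

# The same quantization and LoRA settings listed in the section above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)
lora_config = LoraConfig(
    r=32, lora_alpha=32, lora_dropout=0.05,
    target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj',
                    'gate_proj', 'up_proj', 'down_proj'],
    task_type='CAUSAL_LM',
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it",
    quantization_config=bnb_config,  # load the base model in 4-bit
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads for k-bit training
model = get_peft_model(model, lora_config)      # attach the LoRA adapters

# Assumption: the 'doctor' subset; KorMedMCQA ships several exam subsets.
dataset = load_dataset("sean0042/KorMedMCQA", "doctor", split="train")

def format_example(example):
    # Hypothetical prompt template: question, the five options, then the
    # correct option index ('정답' = 'answer').
    options = "\n".join(f"{i}. {example[c]}" for i, c in enumerate("ABCDE", start=1))
    return {"text": f"{example['question']}\n{options}\n정답: {example['answer']}"}

trainer = SFTTrainer(  # argument names vary across trl versions
    model=model,
    train_dataset=dataset.map(format_example),
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="gemma-2b-it-kormedmcqa",  # illustrative hyperparameters
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```

With `load_in_4bit` plus LoRA adapters only on the listed projection modules, this setup fits comfortably on the single A100 40GB noted above.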
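### Chat-template inference (optional)

Because the base model is instruction-tuned, its tokenizer ships a chat template. Below is a hedged variant of the basic-usage snippet that wraps the user turn with `apply_chat_template`, assuming this fine-tune kept the base tokenizer's template.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/gemma-2b-it-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/gemma-2b-it-v0.1", device_map="auto", torch_dtype=torch.bfloat16
)

# Wrap the user turn in the tokenizer's chat template instead of raw text.
chat = [{"role": "user", "content": "사과가 뭐야?"}]  # "What is an apple?"
prompt_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(prompt_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```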