---
datasets:
- sean0042/KorMedMCQA
language:
- ko
- en
pipeline_tag: text-generation
---
### Model Card for MDDDDR/gemma-2b-it-v0.1
Base model: [google/gemma-2b-it](https://huggingface.co./google/gemma-2b-it)
### Basic usage
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/gemma-2b-it-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/gemma-2b-it-v0.1",
    device_map="auto",
    torch_dtype=torch.bfloat16
)

input_text = "사과가 뭐야?"  # "What is an apple?"
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
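Since the base model is instruction-tuned, prompts may work better when wrapped in Gemma's chat template. A minimal sketch, assuming the tokenizer ships the standard Gemma chat template; the `max_new_tokens` value is illustrative:

```python
chat = [{"role": "user", "content": "사과가 뭐야?"}]  # "What is an apple?"
prompt_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(prompt_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```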
### Training dataset
Dataset: [sean0042/KorMedMCQA](https://huggingface.co./datasets/sean0042/KorMedMCQA)
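To inspect the training data, the dataset can be loaded with the `datasets` library. A minimal sketch; the subset name `"doctor"` here is an assumption, so check the dataset card for the available configurations:

```python
# pip install datasets
from datasets import load_dataset

# Subset name "doctor" is an assumed config; see the dataset card for the full list.
dataset = load_dataset("sean0042/KorMedMCQA", "doctor")
print(dataset["train"][0])  # one multiple-choice QA example
```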
### lora_config and bnb_config used in training
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with double quantization; compute in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# LoRA adapters on all attention and MLP projection layers
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
)
```
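For context, a minimal sketch of how these two configs might be wired together with `peft`; the loading step and `get_peft_model` call below are an assumed setup, not the exact training script:

```python
from transformers import AutoModelForCausalLM
from peft import get_peft_model

# Load the base model in 4-bit using the quantization config above (assumed setup).
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # shows the small fraction of weights LoRA trains
```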
### Hardware
A100 40GB x 1