---
language:
- ko
- en
pipeline_tag: text-generation
datasets:
- DILAB-HYU/KoQuality
---
### Model Card for MDDDDR/Meta-Llama-3.1-8B-it-v0.1
base_model : [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3.1-8B-Instruct)
### Basic usage
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/Meta-Llama-3.1-8B-it-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/Meta-Llama-3.1-8B-it-v0.1",
    device_map="auto",           # place the model on the available GPU(s)
    torch_dtype=torch.bfloat16,  # load weights in bfloat16
)

input_text = "사과가 뭐야?"  # "What is an apple?"
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
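Because the base model is an Instruct checkpoint, the prompt can also be built with the tokenizer's chat template. The sketch below assumes this fine-tune keeps the base Llama-3.1-Instruct chat template; the message content and `max_new_tokens` value are only illustrative.

```python
# Optional: build the prompt with the chat template (assumes this checkpoint
# keeps the base Llama-3.1-Instruct chat template).
messages = [{"role": "user", "content": "사과가 뭐야?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model replies
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```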
### Training dataset
- dataset_1 : [DILAB-HYU/KoQuality](https://huggingface.co./datasets/DILAB-HYU/KoQuality)
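
The training data can be inspected with the `datasets` library. A minimal sketch, assuming the dataset exposes a standard `train` split:

```python
from datasets import load_dataset

# Load the KoQuality instruction dataset used for fine-tuning
# (the "train" split name is an assumption).
koquality = load_dataset("DILAB-HYU/KoQuality", split="train")
print(koquality[0])
```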
### lora_config and bnb_config used in training
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for QLoRA-style training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapters applied to the MLP projection layers
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj"],
)
```
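For reference, the two configs above are typically wired together roughly as follows. This is a sketch under assumptions (4-bit loading of the base model, PEFT's `get_peft_model`), not the exact training script used for this checkpoint.

```python
# Rough sketch of applying the configs above (assumed wiring, not the
# exact training script for this model).
from transformers import AutoModelForCausalLM
from peft import get_peft_model, prepare_model_for_kbit_training

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,  # 4-bit NF4 weights
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)  # common QLoRA prep step
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the LoRA adapters are trainable
```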
### Model evaluation
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|----------------|------:|------|-----:|--------|---|-----:|---|------|
|kobest_boolq | 1|none | 0|acc |↑ |0.5150|± |0.0133|
| | |none | 0|f1 |↑ |0.3634|± | N/A|
|kobest_copa | 1|none | 0|acc |↑ |0.6280|± |0.0153|
| | |none | 0|f1 |↑ |0.6279|± | N/A|
|kobest_hellaswag| 1|none | 0|acc |↑ |0.4280|± |0.0221|
| | |none | 0|acc_norm|↑ |0.5540|± |0.0223|
| | |none | 0|f1 |↑ |0.4250|± | N/A|
|kobest_sentineg | 1|none | 0|acc |↑ |0.7406|± |0.0220|
| | |none | 0|f1 |↑ |0.7317|± | N/A|
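
The table above follows the lm-evaluation-harness output format; assuming that harness was used, the zero-shot KoBEST scores could be re-run roughly as below (the harness version and launch options are assumptions).

```python
# Sketch of reproducing the zero-shot KoBEST results with lm-evaluation-harness
# (assumed to be the tool behind the table above).
# pip install lm_eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MDDDDR/Meta-Llama-3.1-8B-it-v0.1,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
)
print(results["results"])
```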
### Hardware
- RTX 3090 Ti 24GB x 1
- Training time: 1 hour