---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llama-2-7b-nsmc2
  results: []
---
# llama-2-7b-nsmc2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the nsmc dataset.
## Model description
A version of llama-2 fine-tuned on the nsmc dataset. Given a movie review written by a user, it predicts whether the sentiment of the review is positive or negative.
## Intended uses & limitations

### Intended uses

- Provides positive/negative sentiment analysis of user-written movie reviews

### Limitations

- Specialized for movie reviews, so it may be limited on other types of text
- Tested on a Colab T4 GPU
## Training and evaluation data

- Training data: the first 2,000 samples of the nsmc `train` split
- Evaluation data: the first 1,000 samples of the nsmc `test` split
## Training procedure

- `trainer.train()` took 2:02:05 to run
- Inference used 5.7 GB of GPU memory
- A checkpoint was saved every 300 steps
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
`trainable params: 19988480 || all params: 3520401408 || trainable%: 0.5677897967708119`
#### Accuracy

Llama2: accuracy 0.913 on the 1,000-sample evaluation set.

|                 | Positive Prediction (PP) | Negative Prediction (NP) |
|-----------------|--------------------------|--------------------------|
| Actual Positive | 441 (TP)                 | 67 (FN)                  |
| Actual Negative | 20 (FP)                  | 472 (TN)                 |

Accuracy = (TP + TN) / total = (441 + 472) / 1000 = 0.913.
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0