---
license: mit
datasets:
- yahma/alpaca-cleaned
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# phi-2-alpaca-cleaned
This model is an instruction-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), fine-tuned on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset. Training updated all model parameters (full-parameter fine-tuning); LoRA was not used.
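Below is a minimal usage sketch with `transformers`. The repo id is a placeholder (substitute the actual Hub path), and the prompt follows the standard Alpaca template, which is assumed to match the format used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual Hub path of this model.
model_id = "phi-2-alpaca-cleaned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Standard Alpaca prompt template (assumed to match the training format).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction tuning is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```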
## Training
- GPUs: 8 × A6000 48GB
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 8
- per_device_eval_batch_size: 8
- num_train_epochs: 3
- learning_rate: 2e-5
- warmup_ratio: 0.03
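As a sketch, the hyperparameters above map onto `transformers.TrainingArguments` as follows. Only the listed values come from this card; everything else (optimizer, scheduler, output directory) is a transformers default or a placeholder, not a confirmed detail of the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-alpaca-cleaned",  # placeholder path
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.03,
)

# Effective global batch size: 8 GPUs x 8 per device x 8 accumulation steps = 512.
```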