
Update @ 2023.06.01

  • Add Safetensors sharded model weights (max shard = 1GB)

KoAlpaca-Polyglot-12.8B (v1.1b)

This model is a fine-tuned version of EleutherAI/polyglot-ko-12.8b on the KoAlpaca Dataset v1.1b.

Detailed code is available at the KoAlpaca GitHub repository.
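A minimal usage sketch with Hugging Face `transformers`. The prompt template (`### 질문:` / `### 답변:`, i.e. question/answer) and the generation settings below are assumptions based on common KoAlpaca usage, not specified by this card:

```python
# Hedged usage sketch: load the model and format a KoAlpaca-style prompt.
# Prompt format and sampling parameters are assumptions, not from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "beomi/KoAlpaca-Polyglot-12.8B"

def build_prompt(question: str) -> str:
    # Assumed KoAlpaca instruction format: "### 질문:" (question) / "### 답변:" (answer).
    return f"### 질문: {question}\n\n### 답변:"

def generate(question: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # weights are shipped in FP16
        device_map="auto",          # requires the `accelerate` package
    )
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

At roughly 13B parameters in FP16, the full model needs on the order of 26GB of GPU memory, so multi-GPU or quantized loading may be necessary on smaller cards.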

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU (A100 80G)
  • num_devices: 4
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 256
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2.0
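The total train batch size follows from the per-device batch size, gradient accumulation, and device count listed above:

```python
# Effective (total) train batch size = per-device batch × grad accumulation × GPUs.
train_batch_size = 1             # per-device train batch size
gradient_accumulation_steps = 64
num_devices = 4                  # 4x A100 80G

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 1 * 64 * 4 = 256
```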

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu117
  • Datasets 2.11.0
  • Tokenizers 0.13.3
Model size: 13.1B params (Safetensors; tensor types: FP16, F32, BOOL)
