ITT-AF/ITT-AF-PLM-1.4B_v0.2

This is a 1.44B-parameter language model (FP32 weights) pretrained on a custom 110 GB dataset.
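A minimal loading sketch, assuming the standard Transformers auto classes work for this checkpoint (the card does not document the architecture or an official loading recipe):

```python
# Hedged loading sketch: the repo id comes from the card header;
# everything else is generic Transformers usage, not an official recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ITT-AF/ITT-AF-PLM-1.4B_v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```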

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The data used to train the model was collected from various sources, mostly from the Web. As such, it contains offensive, harmful, and biased content, and we expect the model to exhibit the biases present in its training data.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 24
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 96
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1.0
  • mixed_precision_training: Native AMP
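These settings map onto transformers.TrainingArguments roughly as follows. This is a reconstruction from the list above under the Transformers 4.36 API, not the authors' published training script, and the output path is hypothetical:

```python
# Sketch of the hyperparameters above expressed as TrainingArguments.
# With a single device, the effective batch size is
# per_device_train_batch_size * gradient_accumulation_steps = 24 * 4 = 96,
# matching total_train_batch_size above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./itt-af-plm-1.4b",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                       # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```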

Training results

text = "ν•œκ΅­μ˜ μˆ˜λ„λŠ”" 
gen_text = "ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμ΄λ‹€. κ·ΈλŸ¬λ‚˜ μ„œμšΈμ΄λΌλŠ” λ„μ‹œλŠ” κ·Έ μžμ²΄κ°€ ν•˜λ‚˜μ˜ κ±°λŒ€ν•œ λ„μ‹œλ‹€. μ„œμšΈμ˜ 쀑심은 κ΄‘ν™”λ¬Έκ΄‘μž₯이닀."
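The sample can presumably be reproduced along the following lines; the sampling settings are assumptions, since the card does not state how gen_text was produced:

```python
# Hedged generation sketch using the text-generation pipeline.
# Sampling parameters (do_sample, max_new_tokens) are illustrative guesses.
from transformers import pipeline

generator = pipeline("text-generation", model="ITT-AF/ITT-AF-PLM-1.4B_v0.2")
out = generator("ν•œκ΅­μ˜ μˆ˜λ„λŠ”", max_new_tokens=64, do_sample=True)
print(out[0]["generated_text"])
```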

Framework versions

  • Transformers 4.36.2
  • PyTorch 2.1.2+cu121
  • Datasets 2.0.0
  • Tokenizers 0.15.0