---
base_model: TannerGladson/chess-roberta-whole-move-pretrained
tags:
  - generated_from_trainer
datasets:
  - TannerGladson/chess-roberta-pretraining
metrics:
  - accuracy
model-index:
  - name: 2024.09.24-01.27
    results:
      - task:
          name: Masked Language Modeling
          type: fill-mask
        dataset:
          name: TannerGladson/chess-roberta-pretraining
          type: TannerGladson/chess-roberta-pretraining
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.5445640378503339
---


# 2024.09.24-01.27

This model is a fine-tuned version of `/src/tanner/chess-roberta-code/model_configs/chess_roberta.json` (per the card metadata, the base model is [TannerGladson/chess-roberta-whole-move-pretrained](https://huggingface.co/TannerGladson/chess-roberta-whole-move-pretrained)) on the [TannerGladson/chess-roberta-pretraining](https://huggingface.co/datasets/TannerGladson/chess-roberta-pretraining) dataset. It achieves the following results on the evaluation set:

- Loss: 2.1202
- Accuracy: 0.5446
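
Since the task is fill-mask, the checkpoint can in principle be queried through the standard Transformers pipeline. A minimal sketch follows; the repo id is a placeholder (the Hub id of this checkpoint is not stated above), and the space-separated move sequence is an assumption about the tokenizer's expected input, not something documented here:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id of this checkpoint.
fill_mask = pipeline("fill-mask", model="your-username/your-chess-roberta-checkpoint")

# Assumption: the model expects a space-separated move sequence with one
# move replaced by the tokenizer's mask token.
mask = fill_mask.tokenizer.mask_token
predictions = fill_mask(f"e4 e5 Nf3 {mask} Bb5 a6")

for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```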

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 9000
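
For reference, a minimal sketch of how these values map onto Hugging Face `TrainingArguments` (the effective batch of 128 follows from 8 per device × 16 accumulation steps). The `output_dir` is a placeholder, and the multi-GPU launch command is not shown; this reconstructs the listed values, not the verbatim training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="chess-roberta-mlm",   # placeholder path, not from the original run
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,   # 8 x 16 = effective batch size of 128
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=9000,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
)
```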

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.1201        | 0.0120 | 1000 | 2.1195          | 0.5445   |
| 2.1222        | 0.0239 | 2000 | 2.1199          | 0.5446   |
| 2.1179        | 0.0359 | 3000 | 2.1199          | 0.5444   |
| 2.1215        | 0.0479 | 4000 | 2.1186          | 0.5446   |
| 2.1176        | 0.0598 | 5000 | 2.1194          | 0.5445   |
| 2.1189        | 0.0718 | 6000 | 2.1193          | 0.5445   |
| 2.1184        | 0.0838 | 7000 | 2.1194          | 0.5446   |
| 2.119         | 0.0957 | 8000 | 2.1190          | 0.5446   |
| 2.1237        | 0.1077 | 9000 | 2.1201          | 0.5446   |
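
The accuracy column is presumably token-level accuracy over masked positions only. A minimal sketch of how such a metric is typically computed with the `Trainer`, assuming logits have already been reduced to predicted token ids via `preprocess_logits_for_metrics` as in the stock `run_mlm.py` example script (an assumption, not the verbatim training code):

```python
def compute_metrics(eval_pred):
    # Sketch: labels are -100 at unmasked positions, so score only the
    # positions the masked-LM objective actually predicts.
    predictions, labels = eval_pred
    mask = labels != -100
    return {"accuracy": float((predictions[mask] == labels[mask]).mean())}
```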

### Framework versions

- Transformers 4.42.4
- PyTorch 2.0.1+cu117
- Datasets 2.17.1
- Tokenizers 0.19.1