
falcon-7b-ft-alpaca-cleaned-dutch

Model description

This model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the BramVanroy/alpaca-cleaned-dutch dataset. See the original Falcon 7B model for more information on the architecture, its intended use, and its biases.
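
The merged checkpoint can be loaded like any other causal language model with transformers. A minimal sketch (the prompt, generation settings, and dtype/device handling below are illustrative assumptions, not part of the training setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/falcon-7b-ft-alpaca-cleaned-dutch"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the uploaded weights are stored in bf16
    device_map="auto",
    # trust_remote_code=True,    # may be needed on older transformers releases without native Falcon support
)

# Example Dutch instruction ("Explain in one sentence what a language model is.");
# the plain-prompt format here is illustrative only.
prompt = "Leg in één zin uit wat een taalmodel is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```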

Intended uses & limitations

This model is intended as a (poor) baseline for Dutch generative LLMs. It by no means aims to provide SOTA performance; it is intended specifically for research purposes and as an opportunity for me to test hyperparameters and training stability.

Importantly, the original Falcon 7B model was trained only on English and French. Dutch generations should therefore be taken with a massive grain of salt.

Training and evaluation data

The model was trained on the synthetic BramVanroy/alpaca-cleaned-dutch instruction dataset. Because that data was generated with OpenAI models, commercial use of this model is forbidden; it is intended for research purposes only.
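
For reference, the dataset can be loaded and inspected with the datasets library (a small sketch; the `train` split name is an assumption about the dataset's default layout):

```python
from datasets import load_dataset

# Load the instruction data used for fine-tuning and look at one example.
ds = load_dataset("BramVanroy/alpaca-cleaned-dutch", split="train")
print(ds)
print(ds[0])
```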

Training procedure

The model was trained with LoRA, and the adapter was merged back into the base model before upload, so this repository contains a plain (non-adapter) checkpoint. A rough sketch of that workflow is shown below.
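
A minimal sketch of that LoRA-then-merge workflow with peft (illustrative only: the LoRA rank, alpha, target modules, and adapter path are placeholders, not the values used for this model):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

base_id = "ybelkada/falcon-7b-sharded-bf16"

# 1) Wrap the base model with LoRA adapters for training.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, trust_remote_code=True)
lora_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)
# ... train `model` with the hyperparameters listed below ...

# 2) After training, merge the saved adapter into a freshly loaded base model
#    so the uploaded checkpoint is a plain, full-weight model.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, trust_remote_code=True)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("falcon-7b-ft-alpaca-cleaned-dutch")
```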

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 128
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 3
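
Assuming a standard transformers Trainer setup, these settings roughly correspond to the TrainingArguments below (a sketch, not the actual training script; bf16 and the multi-GPU launch are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-7b-ft-alpaca-cleaned-dutch",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # 4 devices x 4 per device x 8 accumulation = 128 total train batch size
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    bf16=True,  # assumption, matching the bf16 base checkpoint
)
```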

Training results

| Training Loss | Epoch | Step | Validation Loss |
|--------------:|------:|-----:|----------------:|
| 1.9832 | 0.03 | 10 | 1.8889 |
| 1.9355 | 0.05 | 20 | 1.8834 |
| 1.9694 | 0.08 | 30 | 1.8671 |
| 1.9048 | 0.1 | 40 | 1.8328 |
| 1.8443 | 0.13 | 50 | 1.7970 |
| 1.7448 | 0.16 | 60 | 1.7711 |
| 1.8004 | 0.18 | 70 | 1.7522 |
| 1.7767 | 0.21 | 80 | 1.7370 |
| 1.7733 | 0.23 | 90 | 1.7248 |
| 1.7926 | 0.26 | 100 | 1.7149 |
| 1.8258 | 0.29 | 110 | 1.7066 |
| 1.6709 | 0.31 | 120 | 1.6993 |
| 1.6612 | 0.34 | 130 | 1.6926 |
| 1.8463 | 0.36 | 140 | 1.6867 |
| 1.8413 | 0.39 | 150 | 1.6814 |
| 1.7659 | 0.42 | 160 | 1.6765 |
| 1.69 | 0.44 | 170 | 1.6715 |
| 1.7219 | 0.47 | 180 | 1.6673 |
| 1.6755 | 0.49 | 190 | 1.6627 |
| 1.7823 | 0.52 | 200 | 1.6584 |
| 1.7635 | 0.55 | 210 | 1.6545 |
| 1.7335 | 0.57 | 220 | 1.6506 |
| 1.7272 | 0.6 | 230 | 1.6471 |
| 1.718 | 0.63 | 240 | 1.6436 |
| 1.6899 | 0.65 | 250 | 1.6403 |
| 1.622 | 0.68 | 260 | 1.6370 |
| 1.6556 | 0.7 | 270 | 1.6337 |
| 1.7912 | 0.73 | 280 | 1.6304 |
| 1.6025 | 0.76 | 290 | 1.6274 |
| 1.7181 | 0.78 | 300 | 1.6246 |
| 1.7452 | 0.81 | 310 | 1.6217 |
| 1.5975 | 0.83 | 320 | 1.6189 |
| 1.5754 | 0.86 | 330 | 1.6162 |
| 1.7077 | 0.89 | 340 | 1.6136 |
| 1.5848 | 0.91 | 350 | 1.6112 |
| 1.7011 | 0.94 | 360 | 1.6087 |
| 1.6697 | 0.96 | 370 | 1.6065 |
| 1.6633 | 0.99 | 380 | 1.6042 |
| 1.6722 | 1.02 | 390 | 1.6015 |
| 1.7181 | 1.04 | 400 | 1.5993 |
| 1.6414 | 1.07 | 410 | 1.5972 |
| 1.6856 | 1.09 | 420 | 1.5952 |
| 1.6491 | 1.12 | 430 | 1.5930 |
| 1.6736 | 1.15 | 440 | 1.5912 |
| 1.619 | 1.17 | 450 | 1.5893 |
| 1.6452 | 1.2 | 460 | 1.5870 |
| 1.6498 | 1.22 | 470 | 1.5854 |
| 1.675 | 1.25 | 480 | 1.5839 |
| 1.684 | 1.28 | 490 | 1.5823 |
| 1.6379 | 1.3 | 500 | 1.5802 |
| 1.5173 | 1.33 | 510 | 1.5786 |
| 1.6443 | 1.35 | 520 | 1.5773 |
| 1.5628 | 1.38 | 530 | 1.5755 |
| 1.7287 | 1.41 | 540 | 1.5738 |
| 1.5615 | 1.43 | 550 | 1.5725 |
| 1.6129 | 1.46 | 560 | 1.5712 |
| 1.6709 | 1.48 | 570 | 1.5700 |
| 1.5818 | 1.51 | 580 | 1.5683 |
| 1.6358 | 1.54 | 590 | 1.5672 |
| 1.6513 | 1.56 | 600 | 1.5662 |
| 1.5637 | 1.59 | 610 | 1.5654 |
| 1.612 | 1.62 | 620 | 1.5643 |
| 1.6396 | 1.64 | 630 | 1.5630 |
| 1.6414 | 1.67 | 640 | 1.5620 |
| 1.6096 | 1.69 | 650 | 1.5611 |
| 1.6149 | 1.72 | 660 | 1.5603 |
| 1.5886 | 1.75 | 670 | 1.5593 |
| 1.537 | 1.77 | 680 | 1.5582 |
| 1.5883 | 1.8 | 690 | 1.5574 |
| 1.6512 | 1.82 | 700 | 1.5566 |
| 1.683 | 1.85 | 710 | 1.5559 |
| 1.7059 | 1.88 | 720 | 1.5549 |
| 1.5453 | 1.9 | 730 | 1.5542 |
| 1.5738 | 1.93 | 740 | 1.5536 |
| 1.6004 | 1.95 | 750 | 1.5530 |
| 1.6753 | 1.98 | 760 | 1.5523 |
| 1.6362 | 2.01 | 770 | 1.5517 |
| 1.5805 | 2.03 | 780 | 1.5511 |
| 1.6416 | 2.06 | 790 | 1.5508 |
| 1.5755 | 2.08 | 800 | 1.5506 |
| 1.5763 | 2.11 | 810 | 1.5501 |
| 1.7112 | 2.14 | 820 | 1.5497 |
| 1.6533 | 2.16 | 830 | 1.5493 |
| 1.6008 | 2.19 | 840 | 1.5489 |
| 1.5731 | 2.21 | 850 | 1.5485 |
| 1.4975 | 2.24 | 860 | 1.5480 |
| 1.6158 | 2.27 | 870 | 1.5478 |
| 1.6063 | 2.29 | 880 | 1.5474 |
| 1.628 | 2.32 | 890 | 1.5470 |
| 1.6177 | 2.34 | 900 | 1.5468 |
| 1.5646 | 2.37 | 910 | 1.5467 |
| 1.5272 | 2.4 | 920 | 1.5466 |
| 1.5402 | 2.42 | 930 | 1.5464 |
| 1.5815 | 2.45 | 940 | 1.5461 |
| 1.4857 | 2.47 | 950 | 1.5459 |
| 1.5923 | 2.5 | 960 | 1.5458 |
| 1.6167 | 2.53 | 970 | 1.5456 |
| 1.7214 | 2.55 | 980 | 1.5456 |
| 1.5467 | 2.58 | 990 | 1.5455 |
| 1.6455 | 2.61 | 1000 | 1.5453 |
| 1.6137 | 2.63 | 1010 | 1.5453 |
| 1.6104 | 2.66 | 1020 | 1.5453 |
| 1.6756 | 2.68 | 1030 | 1.5451 |
| 1.5818 | 2.71 | 1040 | 1.5450 |
| 1.5829 | 2.74 | 1050 | 1.5450 |
| 1.5753 | 2.76 | 1060 | 1.5450 |
| 1.6484 | 2.79 | 1070 | 1.5450 |
| 1.6765 | 2.81 | 1080 | 1.5450 |
| 1.623 | 2.84 | 1090 | 1.5449 |
| 1.6901 | 2.87 | 1100 | 1.5449 |
| 1.6601 | 2.89 | 1110 | 1.5449 |
| 1.6763 | 2.92 | 1120 | 1.5449 |
| 1.6203 | 2.94 | 1130 | 1.5449 |
| 1.5113 | 2.97 | 1140 | 1.5448 |

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3