---
license: cc-by-4.0
dataset_info:
  features:
    - name: text
      dtype: string
    - name: source
      dtype: string
  splits:
    - name: train
      num_bytes: 107402461357
      num_examples: 431867387
  download_size: 63321627068
  dataset_size: 107402461357
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - da
size_categories:
  - 100M<n<1B
---

Details

SnakModel is a 7B-parameter, autoregressive language model specifically designed for Danish. It is available both as an instruction-tuned variant and as a base version for further fine-tuning. Our models build upon Llama 2, which we continuously pre-train on a diverse collection of Danish corpora.
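The metadata above declares two string columns, `text` and `source`. As a minimal sketch, the data can be streamed with the 🤗 Datasets library; the repository id below is a placeholder (substitute this dataset's actual Hub id), and streaming avoids materializing the full ~63 GB download:

```python
# Minimal sketch: stream the train split with 🤗 Datasets.
# NOTE: "<org>/<this-dataset>" is a placeholder for this repository's Hub id.
from datasets import load_dataset

ds = load_dataset("<org>/<this-dataset>", split="train", streaming=True)
for example in ds:
    # Each record has the two string fields declared in the metadata.
    print(example["source"], example["text"][:100])
    break
```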

Developers

🧭 NLPnorth research unit at the IT University of Copenhagen, Denmark.
🌊 AAU-NLP research unit at Aalborg University Copenhagen, Denmark.

Mike Zhang*, Max Müller-Eberstein*, Elisa Bassignana, Rob van der Goot.
*equal contribution.

Deduplication

Important Note: We removed two sources from this data, namely DaNewsroom and FTSpeech, because their licensing terms are unclear.

The deduplication hyperparameters for this particular pretraining set:

```
--seed 42 \
--batch_size 1024 \
--num_perm 64 \
--threshold 0.85 \
--hash_bits 32 \
--num_proc 16 \
```

The deduplication hyperparameters of the original pretraining set:

```
--seed 42 \
--batch_size 4096 \
--num_perm 128 \
--threshold 0.85 \
```
Note that, compared to the original setup, we use fewer permutations (128 -> 64), a smaller batch size (4096 -> 1024), and fewer hash bits (64 -> 32). We encountered several out-of-memory (OOM) errors that we could not fully explain, and decided to lower the memory footprint in this way. The hardware we used was a machine with 128 cores and 1 TB of RAM. The packaged data takes less than 100 GB of disk space to download (about 63 GB of Parquet files, roughly 107 GB uncompressed).
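To make these settings concrete, below is a small, self-contained sketch of MinHash-LSH near-duplicate detection using the `datasketch` library. This is an illustration of the technique under stated assumptions, not the actual script behind this dataset: the real pipeline additionally shards documents into batches (`--batch_size`) across worker processes (`--num_proc`), while `datasketch`'s default 32-bit hash function only happens to correspond to `--hash_bits 32`.

```python
# Illustrative MinHash-LSH deduplication; NOT the exact script used for this
# dataset. num_perm=64, threshold=0.85, and seed=42 mirror the flags above.
from datasketch import MinHash, MinHashLSH

NUM_PERM, THRESHOLD, SEED = 64, 0.85, 42

def signature(text: str) -> MinHash:
    """Build a MinHash signature over whitespace-separated tokens."""
    m = MinHash(num_perm=NUM_PERM, seed=SEED)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
docs = {
    "d0": "snakmodel er en dansk sprogmodel bygget paa llama 2",
    "d1": "snakmodel er en dansk sprogmodel bygget paa llama 2",  # exact duplicate
    "d2": "vejret i koebenhavn er graat i dag",
}
kept, dropped = [], []
for key, text in docs.items():
    sig = signature(text)
    if lsh.query(sig):        # an already-kept doc with est. Jaccard >= threshold
        dropped.append(key)
    else:
        lsh.insert(key, sig)
        kept.append(key)
print("kept:", kept, "dropped:", dropped)  # kept: ['d0', 'd2'] dropped: ['d1']
```

Near-duplicates above the threshold are caught probabilistically; halving `num_perm` (as we did, 128 -> 64) makes the Jaccard estimate coarser but roughly halves the per-document signature memory, which is what the reduced settings above trade off.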

Citation

If you find the work in this repository useful, please don't forget to cite:

@inproceedings{snakmodel,
  title={{S}nak{M}odel: Lessons Learned from Training an Open Danish Large Language Model},
  author={Mike Zhang and Max M{\"u}ller-Eberstein and Elisa Bassignana and Rob van der Goot},
  booktitle={The Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies},
  year={2024},
  url={https://openreview.net/forum?id=YxzfgQGpRQ}
}