---
license: llama3
datasets:
- silk-road/alpaca-data-gpt4-chinese
- TigerResearch/sft_zh
- LooksJuicy/ruozhiba
- leo009/alpaca-cleaned-zh-cn
- REILX/extracted_tagengo_gpt4
language:
- en
- zh
pipeline_tag: text-generation
tags:
- text-generation-inference
- llama
- chat
- sft
- llama-factory
---
### Model:
- https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct
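The base model above was fine-tuned to produce this chat model. Below is a minimal inference sketch using `transformers`; the repo id is a hypothetical placeholder for this fine-tuned model, and the generation settings are illustrative only.

```python
# Minimal chat inference sketch. The model id below is a hypothetical
# placeholder for this fine-tuned model; substitute the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/llama3-8b-chinese-chat"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "你好，请介绍一下你自己。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```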
### Datasets:
- https://huggingface.co./datasets/TigerResearch/sft_zh
- https://huggingface.co./datasets/silk-road/alpaca-data-gpt4-chinese
- https://huggingface.co./datasets/REILX/extracted_tagengo_gpt4
- https://huggingface.co./datasets/LooksJuicy/ruozhiba
- https://huggingface.co./datasets/leo009/alpaca-cleaned-zh-cn
(The datasets above were cleaned with langid, removing all non-Chinese entries.)
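A minimal sketch of that cleaning step follows, assuming an Alpaca-style schema with `instruction`/`output` fields (an assumption; the actual field names vary per dataset).

```python
# Sketch of the langid-based filter described above: keep only examples
# whose combined text is classified as Chinese. The field names
# "instruction" and "output" are assumptions about the dataset schema.
import langid

def is_chinese(example) -> bool:
    text = example.get("instruction", "") + example.get("output", "")
    lang, _score = langid.classify(text)  # returns (language code, score)
    return lang == "zh"

# Usage with Hugging Face datasets, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("LooksJuicy/ruozhiba", split="train").filter(is_chinese)
```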
### Training tool
https://github.com/hiyouga/LLaMA-Factory
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
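
The totals are consistent with the per-device settings: 4 per-device train batch × 8 GPUs × 4 gradient-accumulation steps = 128, and 8 per-device eval batch × 8 GPUs = 64. For illustration, these hyperparameters map roughly onto the Hugging Face `TrainingArguments` sketched below; the actual run used LLaMA-Factory, and `output_dir` and `bf16` are assumptions not stated above.

```python
# Illustrative only: the run used LLaMA-Factory, but the hyperparameters
# above correspond roughly to these Hugging Face TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3-8b-zh-sft",   # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # x 8 GPUs x 4 accum steps = 128 total
    per_device_eval_batch_size=8,    # x 8 GPUs = 64 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                       # assumption; precision not stated above
)
```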