---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: original
results: []
language:
- en
---
## Model description
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co./Qwen/Qwen2.5-7B-Instruct) on the [Stratos-R1 dataset](https://huggingface.co./datasets/bespokelabs/stratos-r1).
The dataset was created by distilling DeepSeek-R1 using the data pipeline of Berkeley NovaSky's Sky-T1, with some modifications. More details are available in the [Stratos-R1 dataset card](https://huggingface.co./datasets/bespokelabs/stratos-r1).
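A minimal sketch for loading the data with the `datasets` library; the dataset id is taken from the URL above, and the `train` split name is an assumption:

```python
from datasets import load_dataset

# Dataset id from the card above; split name assumed to be "train"
ds = load_dataset("bespokelabs/stratos-r1", split="train")
print(ds[0])  # inspect one distilled reasoning example
```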
The model outperforms Qwen2.5-7B-Instruct on reasoning benchmarks:

|Benchmark|Bespoke-Stratos-7B|DeepSeek-R1-Distill-Qwen-7B|Qwen2.5-7B-Instruct|
|---|---|---|---|
|AIME2024|20.0|55.5|10.0|
|MATH500|82.0|83.3|74.2|
|GPQA-Diamond|37.8|49.1|33.3|
|LiveCodeBench|-|37.6|32.9|
Note that the authors of Sky-T1 [noted](https://github.com/NovaSky-AI/SkyThought/issues/4#issuecomment-2585860004) that they saw little or no improvement when training 7B or 14B models on their data.
We do see an improvement, though not at the scale of DeepSeek's distilled model. A likely reason is that we trained on 17k examples, while DeepSeek appears to have used around 800k.
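A minimal inference sketch using the standard `transformers` chat-template API. The Hub id below is an assumption based on the model name in the table; substitute this repository's actual id if it differs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/Bespoke-Stratos-7B"  # assumed Hub id for this card; adjust if different

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Long generation budget, since the model produces extended reasoning traces
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```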
## Intended uses & limitations
Non-commercial use.
## Training procedure
The model was trained on 8x NVIDIA H100 GPUs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
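For reference, a hedged sketch of how the settings above map onto `transformers.TrainingArguments`. Training was actually run with LLaMA-Factory, so this is only an illustrative mapping; the output directory and bf16 flag are assumptions not stated in the card:

```python
from transformers import TrainingArguments

# 1 sample per device x 8 GPUs x 12 accumulation steps = total train batch size of 96
training_args = TrainingArguments(
    output_dir="bespoke-stratos-7b-sft",  # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=12,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,  # assumption: bf16 mixed precision on H100s; not stated in the card
)
```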
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3