---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: original
  results: []
language:
- en
datasets:
- bespokelabs/Bespoke-Stratos-17k
---

<p align="center">
    <img src="https://huggingface.co./bespokelabs/Bespoke-MiniCheck-7B/resolve/main/Bespoke-Labs-Logo.png" width="550">
</p>

## Model description
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co./Qwen/Qwen2.5-7B-Instruct) on the [Bespoke-Stratos-17k dataset](https://huggingface.co./datasets/bespokelabs/Bespoke-Stratos-17k).
The dataset was created by distilling DeepSeek-R1 using the data pipeline of Berkeley NovaSky’s Sky-T1, with some modifications; more details are in the dataset card for [Bespoke-Stratos-17k](https://huggingface.co./datasets/bespokelabs/Bespoke-Stratos-17k).
The resulting model outperforms Qwen2.5-7B-Instruct on math, science, and code reasoning benchmarks:

|Benchmark|Bespoke-Stratos-7B|Qwen2.5-7B-Instruct|DeepSeek-R1-Distill-Qwen-7B (Ours)|DeepSeek-R1-Distill-Qwen-7B (Reported)|
|---|---|---|---|---|
|AIME2024|20.0|10.0|43.3|55.5|
|MATH500|82.0|74.2|89.4|92.8|
|GPQA-Diamond|37.8|33.3|44.9|49.1|
|LiveCodeBench v2 Easy|71.4|65.9|81.3|-|
|LiveCodeBench v2 Medium|25.5|18.9|42.2|-|
|LiveCodeBench v2 Hard|1.6|3.3|2.4|-|
|LiveCodeBench v2 All|36.1|31.9|46.6|-|


Note that the authors of Sky-T1 [noted](https://github.com/NovaSky-AI/SkyThought/issues/4#issuecomment-2585860004) that they saw little or no improvement when training 7B or 14B models with their data.
However, we do see an improvement, though not at the scale of DeepSeek's distilled model. The reason could be that we used 17k examples, while DeepSeek seems to have used 800k.
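
The distillation data referenced above can be inspected directly with the `datasets` library; a minimal sketch is shown below. The `train` split name is an assumption here, and the exact column layout is documented in the dataset card rather than asserted in this snippet.

```python
# Peek at the distillation data used for fine-tuning.
from datasets import load_dataset

# Split name "train" is an assumption; check the dataset card for the exact splits.
ds = load_dataset("bespokelabs/Bespoke-Stratos-17k", split="train")
print(ds)      # features and row count
print(ds[0])   # one distilled reasoning example
```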

## Intended uses & limitations

This model is released under the Apache 2.0 license.
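
For inference, a minimal sketch using the `transformers` library is shown below. The repository id `bespokelabs/Bespoke-Stratos-7B` and the generation settings are assumptions for illustration, and the snippet assumes the model keeps the chat template of the Qwen2.5-7B-Instruct base model.

```python
# Minimal inference sketch (repo id is an assumption; adjust to the actual model path).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/Bespoke-Stratos-7B"  # assumption: published repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "What is the sum of the first 10 positive odd integers?"},
]
# Assumes the chat template inherited from Qwen2.5-7B-Instruct.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generous max_new_tokens because the model produces long reasoning traces.
outputs = model.generate(inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```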

## Training procedure
We trained the model on 8xH100 GPUs for 7 hours.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
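
For orientation, the list above maps onto roughly the following Hugging Face `TrainingArguments`. This is a sketch for readers reproducing the setup with the `transformers` Trainer rather than LLaMA-Factory; the output directory and the bf16 flag are assumptions, not values taken from the original run.

```python
# Sketch of roughly equivalent TrainingArguments (output_dir and bf16 are assumptions).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bespoke-stratos-7b",   # assumption: not specified in the card
    learning_rate=1e-5,
    per_device_train_batch_size=1,     # train_batch_size above
    per_device_eval_batch_size=8,      # eval_batch_size above
    gradient_accumulation_steps=12,    # 1 per device x 8 GPUs x 12 = 96 total train batch
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",               # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    seed=42,
    bf16=True,                         # assumption: typical precision for H100 training
)
```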

### Training results



### Framework versions

- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3