ryanmarten committed (verified)
Commit daea860 · 1 Parent(s): 72f4b89

Update README.md

Files changed (1): README.md (+24, -15)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 library_name: transformers
-license: other
+license: apache-2.0
 base_model: Qwen/Qwen2.5-32B-Instruct
 tags:
 - llama-factory
@@ -9,28 +9,37 @@ tags:
 model-index:
 - name: original
   results: []
+language:
+- en
+datasets:
+- bespokelabs/Bespoke-Stratos-17k
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# original
-
-This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the Stratos-R1 dataset.
+<p align="center">
+  <img src="https://huggingface.co/bespokelabs/Bespoke-MiniCheck-7B/resolve/main/Bespoke-Labs-Logo.png" width="550">
+</p>
 
 ## Model description
-
-More information needed
+This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the [Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k).
+The dataset is derived by distilling DeepSeek-R1 using the data pipeline of Berkeley NovaSky’s Sky-T1, with some modifications. More info is in the dataset card at [bespokelabs/Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k).
+It outperforms Qwen-2.5-32B-Instruct on reasoning benchmarks:
+
+| Metric               | Bespoke-Stratos-32B | Sky-T1-32B | O1-preview | DeepSeek-R1 | DeepSeek-R1-Distill-Qwen-32B |
+|----------------------|---------------------|------------|------------|-------------|------------------------------|
+| AIME2024             | 56.7                | 43.3       | 40.0       | 79.8        | 72.6                         |
+| MATH500              | 92.4                | 82.4       | 81.4       | 97.3        | 94.3                         |
+| GPQA-Diamond         | 55.6                | 56.8       | 75.2       | 71.5        | 62.1                         |
+| LiveCodeBench Easy   | 93.4                | 86.3       | 92.9       | -           | -                            |
+| LiveCodeBench Medium | 60.7                | 56.8       | 54.9       | -           | -                            |
+| LiveCodeBench Hard   | 24.4                | 17.9       | 16.3       | -           | -                            |
+| LiveCodeBench All    | 63.60               | 57.93      | 59.13      | 65.9        | 57.2                         |
 
 ## Intended uses & limitations
 
-More information needed
-
-## Training and evaluation data
-
-More information needed
+Non-commercial use.
 
 ## Training procedure
+We used 8x H100 GPUs to train the model for 27 hours.
 
 ### Training hyperparameters
 
@@ -58,4 +67,4 @@ The following hyperparameters were used during training:
 - Transformers 4.46.1
 - Pytorch 2.5.1+cu124
 - Datasets 3.1.0
-- Tokenizers 0.20.3
+- Tokenizers 0.20.3
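
Not part of the commit above, but useful alongside the updated card: a minimal inference sketch against the Transformers version pinned in the diff (4.46.1). Assumptions are labeled: the repo id `bespokelabs/Bespoke-Stratos-32B` is inferred from the card's branding and does not appear in this diff, and the prompt and generation settings are purely illustrative.

```python
# Minimal inference sketch for the fine-tuned model described in this card.
# ASSUMPTION: the repo id below is inferred, not stated in this diff;
# substitute the actual Hugging Face model id before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/Bespoke-Stratos-32B"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; a 32B model needs ~64 GB in bf16
    device_map="auto",           # shard across available GPUs / offload as needed
)

# Qwen2.5-based models ship a chat template, so build the prompt through it.
messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning-distilled models emit long chains of thought, so leave headroom.
output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The training set referenced in the card can be inspected the same way, e.g. `datasets.load_dataset("bespokelabs/Bespoke-Stratos-17k", split="train")` with the Datasets version pinned above.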