---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: DCFT-Stratos-Verified-114k-7B-4gpus
results: []
datasets:
- open-thoughts/open-thoughts-114k
---
<p align="center">
<img src="https://huggingface.co./datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>
# OpenThinker-7B
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co./Qwen/Qwen2.5-7B-Instruct) on the [OpenThoughts-114k](https://huggingface.co./datasets/open-thoughts/OpenThoughts-114k) dataset.
The dataset was derived by distilling DeepSeek-R1 using the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts).
More information about the dataset can be found on the [OpenThoughts-114k dataset card](https://huggingface.co./datasets/open-thoughts/open-thoughts-114k).
This model improves upon [Bespoke-Stratos-7B](https://huggingface.co./bespokelabs/Bespoke-Stratos-7B), which was trained on only 17k examples ([Bespoke-Stratos-17k dataset](https://huggingface.co./datasets/bespokelabs/Bespoke-Stratos-17k)).
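For quick experimentation, the model can be run with the standard `transformers` chat API. A minimal inference sketch (the model ID matches the link in the Links section below; the generation budget is illustrative, and reasoning models typically need a generous one):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Long chains of thought are expected, so allow plenty of new tokens.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```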
The numbers reported in the table below were computed with our open-source evaluation tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).
| | AIME24 | MATH500 | GPQA-Diamond | LCBv2 Easy | LCBv2 Medium | LCBv2 Hard | LCBv2 All |
| --------------------------- | -------- | ------- | ------------ | ----------- | ------------- | ----------- | ---------- |
| OpenThinker-7B | 43.3 | 83.0 | 42.4 | 75.3 | 28.6 | 6.5 | 39.9 |
| Bespoke-Stratos-7B | 16.6 | 79.6 | 38.9 | 71.4 | 25.2 | 0.8 | 35.8 |
| DeepSeek-R1-Distill-Qwen-7B | 60 | 88.2 | 46.9 | 79.7 | 45.1 | 14.6 | 50.1 |
| gpt-4o-0513 | 10 | 75.8 | 46.5 | 87.4 | 42.7 | 8.9 | 50.5 |
| o1-mini | 63 | 85.6 | 60 | 92.8 | 74.7 | 39.8 | 72.8 |
We are fully open-source. Our [model weights](https://huggingface.co./open-thoughts), [datasets](https://huggingface.co./open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.
|                             | Open Weights | Open Data | Open Code |
|-----------------------------|--------------|-----------|-----------|
| OpenThinker-7B              | ✅           | [✅](https://huggingface.co./datasets/open-thoughts/OpenThoughts-114k) | [✅](https://github.com/open-thoughts/open-thoughts) |
| Bespoke-Stratos-7B          | ✅           | [✅](https://huggingface.co./datasets/bespokelabs/Bespoke-Stratos-17k) | [✅](https://github.com/bespokelabsai/curator/tree/main/examples/bespoke-stratos-data-generation) |
| DeepSeek-R1-Distill-Qwen-7B | ✅           | ❌        | ❌        |
| gpt-4o-0513                 | ❌           | ❌        | ❌        |
| o1-mini                     | ❌           | ❌        | ❌        |
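As a small illustration of this openness, the training data can be loaded directly with the Hugging Face `datasets` library. A minimal sketch (streaming to avoid a full download; the `train` split name is assumed):

```python
from datasets import load_dataset

# Stream the OpenThoughts-114k training split and inspect a single example.
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train", streaming=True)
print(next(iter(ds)))
```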
## Intended uses & limitations
This model is released under the Apache 2.0 license.
## Training procedure
We used four 8xH100 nodes to train the model for 20 hours.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96 (see the sanity check after this list)
- total_eval_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
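The total train batch size above follows directly from the per-device settings; a quick sanity check in Python:

```python
# Effective batch size = per-device batch × number of GPUs × gradient accumulation steps.
train_batch_size = 1              # per device
num_devices = 32                  # four 8xH100 nodes
gradient_accumulation_steps = 3

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 96
```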
### Framework versions
- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).
# Links
- 📊 [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co./datasets/open-thoughts/OpenThoughts-114k)
- 🤖 [OpenThinker-7B model](https://huggingface.co./open-thoughts/OpenThinker-7B) - this model.
- 📊 [Bespoke-Stratos Blog Post](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation)
- 🧠 [Bespoke-Stratos-17k dataset](https://huggingface.co./datasets/bespokelabs/Bespoke-Stratos-17k)
- 🤖 [Bespoke-Stratos-32B model](https://huggingface.co./bespokelabs/Bespoke-Stratos-32B)
- 🤖 [Bespoke-Stratos-7B model](https://huggingface.co./bespokelabs/Bespoke-Stratos-7B)