Thank you very much for this model; I have some questions.

#1
by NickyNicky - opened

I would like to know how you fine-tuned it.

Did you use the Hugging Face TRL GRPO library?

Could you share the libraries you used for training?

Thank you so much.

Open Thoughts org

We used LLaMA-Factory. Code is coming soon at https://github.com/open-thoughts/open-thoughts
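
For anyone wanting to reproduce a similar run before the official code lands: LLaMA-Factory is driven by a YAML config. The fragment below is a hedged sketch of what an SFT config generally looks like in that tool; the model name, dataset name, and hyperparameter values are placeholders, not the actual Open Thoughts settings.

```yaml
# Illustrative LLaMA-Factory SFT config sketch (values are assumptions,
# not the real Open Thoughts configuration).
model_name_or_path: Qwen/Qwen2.5-7B-Instruct  # placeholder base model
stage: sft                                    # supervised fine-tuning, no RL stage
finetuning_type: full
dataset: my_reasoning_traces                  # placeholder dataset name
template: qwen
cutoff_len: 16384
learning_rate: 1.0e-5
num_train_epochs: 3
output_dir: saves/sft-run
```

You would then launch training with the LLaMA-Factory CLI, pointing it at this file.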

Thank you for your model! Did you use only SFT, or other methods as well (like DPO, KTO, or PPO)?

Open Thoughts org

We used only SFT for this model.

Open Thoughts org

It is important to emphasize: we used ONLY SFT for training on the data (114k samples of reasoning traces from R1). There is no RL loss involved in training.
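
For readers less familiar with the distinction: SFT here means plain next-token cross-entropy on the target reasoning traces, with no reward model or policy-gradient term anywhere in the objective. A minimal toy sketch (illustrative numbers only, not the actual training code):

```python
import math

def sft_loss(target_token_probs):
    # Supervised fine-tuning objective: mean negative log-likelihood of
    # the target tokens (here, the tokens of a reasoning trace).
    # Note there is no reward or RL term in this loss.
    return -sum(math.log(p) for p in target_token_probs) / len(target_token_probs)

# Toy example: probabilities the model assigns to each target token.
loss = sft_loss([0.9, 0.8, 0.95])
print(round(loss, 4))
```

GRPO/PPO-style training would instead weight updates by a reward signal; none of that appears in the SFT objective above.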
