This model is the result of a second stage of pre-training Google's Gemma 2B (https://huggingface.co./google/gemma-2b) for roughly 150B tokens on a combination of the English and Russian subsets of the OSCAR and Wikipedia datasets.

This is a raw pre-trained model, created with further fine-tuning in mind. The goal of this project is to research the cross-lingual capabilities of open-source LLMs and to build a strong open-source foundational LLM that is fluent in Russian. More details will follow in an upcoming blog post and/or research paper.
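
Since this is a raw pre-trained checkpoint rather than a chat model, a minimal usage sketch with the Hugging Face `transformers` library (assuming the standard Gemma 2B architecture and the BF16 weights) might look like the following; the prompt and generation settings are illustrative only:

```python
# Minimal sketch: load the checkpoint and generate a plain-text continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Defetya/gemma-2b-ru"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Raw base model: give it plain text to continue, not a chat-style prompt.
prompt = "Москва — столица"  # "Moscow is the capital of"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```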

This model was pre-trained using a fork of EasyLM (JAX) as the framework, on a Google v4-32 TPU generously provided under the TRC program. The model reached a training loss of ~1.5, with a learning rate of roughly 5e-5.

I'm planning to release a chat model that will undergo full-parameter SFT and DPO on Ilya Gusev's datasets.
