Model Details

Model Architecture

urLLM-KO-7B is a 6.9B-parameter auto-regressive language model built on an optimized transformer architecture derived from Llama-2-7b.
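
A minimal usage sketch, assuming the checkpoint is available under the URP/urllm-ko-7b ID shown on this page and that `transformers` (plus `accelerate` for `device_map="auto"`) is installed; the Korean prompt is only an illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "URP/urllm-ko-7b"

# Load the expanded tokenizer and the model weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short Korean completion ("The capital of South Korea is").
prompt = "대한민국의 수도는"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```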

Training Corpus

The model was trained on selected subsets of the Modu Corpus and Korean Wikipedia (approximately 28 GB in total).

Vocab Expansion

To improve Korean coverage, the tokenizer vocabulary was expanded from Llama-2's original 32,000 tokens to 51,385 tokens.
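
For context, vocabulary expansion of this kind is commonly done by adding new tokens to the base tokenizer and resizing the model's embedding matrix so the new rows can be trained during continued pre-training. The sketch below illustrates that general technique with placeholder tokens; it is not the authors' actual procedure or token list:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical illustration: expand a Llama-2 tokenizer with extra Korean
# tokens and resize the embeddings to match. Access to the base checkpoint
# may require accepting its license on the Hugging Face Hub.
base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder examples, not the actual tokens used for urLLM-KO-7B.
new_korean_tokens = ["안녕하세요", "대한민국"]
num_added = tokenizer.add_tokens(new_korean_tokens)

# Appends rows to the input/output embedding matrices; the new rows are
# then learned during continued pre-training on the Korean corpus.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```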

Model Card Contact

For errors or additional questions about details in this model card, contact [email protected].
