---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
---
|
|
|
|
|
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
|
|
|
|
|
# Model Card for OLMo 7B
|
|
|
OLMo 7B November 2024 is an updated version of the original [OLMo 7B](https://huggingface.co./allenai/OLMo-7B) model, with a ____-point increase in ____, among other evaluation improvements, resulting from an improved version of the Dolma dataset and staged training.
|
**This version is for direct use with HuggingFace Transformers** from v4.40 on. |
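As a rough illustration of that direct-use path, the sketch below loads an OLMo checkpoint with the standard `transformers` auto classes. The repo id is a placeholder taken from the link above (this card's own repository id is not shown in this excerpt), and the generation settings are arbitrary.

```python
# Minimal sketch: loading an OLMo checkpoint with HuggingFace Transformers (v4.40+).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B-hf"  # assumption: substitute this card's own repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language modeling is ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```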
|
|
|
|
|
|
|
|
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.

The OLMo models are trained on the [Dolma](https://huggingface.co./datasets/allenai/dolma) dataset.
|
We release all code, checkpoints, logs (coming soon), and details involved in training these models. |
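For readers who want to browse the released checkpoints programmatically, the following hedged sketch lists the revision branches of an OLMo repository with `huggingface_hub`. The repo id, and the assumption that intermediate checkpoints are published as branches, are illustrative rather than confirmed by this excerpt.

```python
# Hedged sketch: enumerate revision branches of an OLMo repository on the Hub.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("allenai/OLMo-7B")  # assumption: substitute this card's repo id
for branch in refs.branches:
    print(branch.name)
```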
|
|
|
<!-- *A new version of this model with a 24 point improvement on MMLU is available [here](https://huggingface.co./allenai/OLMo-1.7-7B)*. --> |