amanrangapur
committed on
Update README.md
README.md CHANGED
@@ -13,8 +13,7 @@ language:
 
 
 # Model Card for OLMo 2 13B
-We introduce OLMo 2, a new family of 7B and 13B models
-
+We introduce OLMo 2, a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently-sized fully-open models, and competitive with open-weight models from Meta and Mistral on English academic benchmarks.
 
 OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
 These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.