Update README.md
README.md (changed)
@@ -10,9 +10,9 @@ base_model:
 - mlabonne/NeuralOmniBeagle-7B
 ---
 
-# Monarch-7B
-
-Monarch-7B
+# Monarch-7B
+
+Monarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
 * [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B)
 * [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)

@@ -52,7 +52,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "mlabonne/Monarch-7B
+model = "mlabonne/Monarch-7B"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
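For reference, the second hunk only touches the `model = ...` line of the README's usage snippet (adding the missing closing quote); the rest of the snippet is not part of this diff. Below is a minimal runnable sketch of how that snippet typically continues, assuming the standard transformers text-generation pipeline pattern; the chat-template call and the sampling settings are illustrative assumptions, not lines from this commit.

```python
from transformers import AutoTokenizer
import transformers
import torch

# Lines shown in the diff, with the closing quote fixed by this commit.
model = "mlabonne/Monarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)

# Continuation below is an assumption: format the chat messages with the
# model's chat template and run a standard text-generation pipeline.
# Generation settings are illustrative only.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```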