Tremontaine committed
Commit ddc910e
Parent: 1a479d5

Update README.md

Files changed (1): README.md (+0 −39)
README.md CHANGED
@@ -2,17 +2,8 @@
 base_model:
 - bluuwhale/L3-SthenoMaidBlackroot-8B-V1
 - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
-- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
-- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
-- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
-- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
-- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
 tags:
 - merge
-- mergekit
-- lazymergekit
-- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
-- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
 ---
 
 # Llama3-Omphalos-12B
@@ -20,11 +11,6 @@ tags:
 Llama3-Omphalos-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
 * [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
-* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
-* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
-* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
-* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
-* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
 
 ## 🧩 Configuration
 
@@ -53,29 +39,4 @@ slices:
 - sources:
   - layer_range: [25, 32]
     model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
-```
-
-## 💻 Usage
-
-```python
-!pip install -qU transformers accelerate
-
-from transformers import AutoTokenizer
-import transformers
-import torch
-
-model = "Tremontaine/Llama3-Omphalos-12B"
-messages = [{"role": "user", "content": "What is a large language model?"}]
-
-tokenizer = AutoTokenizer.from_pretrained(model)
-prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-
-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-print(outputs[0]["generated_text"])
 ```
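The configuration excerpt above shows only the final slice of the merge recipe. For orientation, a mergekit slice config of this shape looks like the following sketch; every value except the final `[25, 32]` slice is an illustrative assumption, not this model's actual recipe:

```yaml
# Illustrative sketch only — layer ranges, merge_method, and dtype are
# assumptions; just the last slice is taken from the excerpt above.
slices:
- sources:
  - layer_range: [0, 16]    # illustrative
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- sources:
  - layer_range: [8, 25]    # illustrative
    model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- sources:
  - layer_range: [25, 32]   # from the excerpt above
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
merge_method: passthrough   # assumed; the usual method when the merge changes parameter count
dtype: float16              # illustrative
```

Passthrough-style slicing stacks layer ranges from the source models rather than averaging weights, which is how two 8B bases can yield a deeper ~12B model.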
 
README.md (after this commit):

---
base_model:
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
tags:
- merge
---

# Llama3-Omphalos-12B

Llama3-Omphalos-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)

## 🧩 Configuration

```yaml
# … (earlier slices elided in this view)
- sources:
  - layer_range: [25, 32]
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
```
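Since the card advertises a 12B model built from two 8B bases, a back-of-envelope check shows why a passthrough stack of roughly 50 layers lands near 12B parameters. The dimensions below are the standard published Llama-3-8B values (vocab 128256, hidden 4096, MLP 14336, 32 query heads, 8 KV heads), which are assumptions not stated in this card:

```python
# Back-of-envelope parameter count for a Llama-3-8B-style transformer.
# Dimensions are assumed standard Llama-3-8B values, not read from this repo.
VOCAB = 128256
HIDDEN = 4096
INTERMEDIATE = 14336
HEADS, KV_HEADS = 32, 8
HEAD_DIM = HIDDEN // HEADS       # 128
KV_DIM = KV_HEADS * HEAD_DIM     # 1024 (grouped-query attention)

def params(n_layers: int) -> int:
    embed = VOCAB * HIDDEN                            # input embeddings
    lm_head = VOCAB * HIDDEN                          # output head (untied in Llama 3)
    attn = 2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_DIM  # q/o plus smaller k/v projections
    mlp = 3 * HIDDEN * INTERMEDIATE                   # gate, up, down projections
    return embed + lm_head + n_layers * (attn + mlp)  # norms omitted (negligible)

print(f"32 layers ≈ {params(32) / 1e9:.2f}B")  # → 8.03B, matching the 8B bases
print(f"50 layers ≈ {params(50) / 1e9:.2f}B")  # → 11.96B, a ~12B stack
```

The actual layer count of this merge is not stated in the card; the 50-layer figure is only the arithmetic implied by the "12B" name.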