wwe180 committed
Commit ad8e171
1 Parent(s): 010f412

Update README.md

Files changed (1)
  1. README.md +30 -21
README.md CHANGED
@@ -1,38 +1,47 @@
  ---
  base_model:
- - Sao10K/L3-8B-Stheno-v3.2
- - hfl/llama-3-chinese-8b-instruct-v2-lora
- - gradientai/Llama-3-8B-Instruct-Gradient-1048k
  library_name: transformers
  tags:
  - mergekit
  - merge
-
  ---

  # merge

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method
-
- This model was merged using the passthrough merge method, with [gradientai/Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) + [hfl/llama-3-chinese-8b-instruct-v2-lora](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
-   - model: "Sao10K/L3-8B-Stheno-v3.2+hfl/llama-3-chinese-8b-instruct-v2-lora"
-     layer_range: [0,32]
- merge_method: passthrough
- base_model: "gradientai/Llama-3-8B-Instruct-Gradient-1048k"
- dtype: bfloat16
  ```
 
 
 
 
 
 
 
 
  ---
  base_model:
+ - wwe180/L3-8B-LingYang-v2
  library_name: transformers
  tags:
  - mergekit
  - merge
+ - Llama3
+ license:
+ - other
  ---
+
+ *This model is experimental, so the results cannot be guaranteed.*
+
  # merge

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

+ ## 💻 Usage
+
+ ```python
+ # pip install -qU transformers accelerate
+
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "wwe180/L3-8B-LingYang-v2"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
+ print(outputs[0]["generated_text"])
  ```
+ ## Statement
+
+ L3-8B-LingYang-v2 does not represent the views or positions of its developers. We will not be liable for any problems arising from the use of the L3-8B-LingYang-v2 open-source model, including but not limited to data security issues, risks to public opinion, or any risks and problems arising from the model being misled, misused, or improperly disseminated.
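
A merge like the one described by the YAML configuration in this diff is applied with mergekit's command-line tool. A minimal sketch, assuming mergekit is installed and the configuration is saved locally; the file and output-directory names here are illustrative, not from the original:

```shell
# Sketch: run a mergekit merge from a YAML config.
# Assumes mergekit is installed (pip install mergekit) and that
# config.yaml contains the merge configuration shown in the diff.
mergekit-yaml config.yaml ./output-model --cuda
```

The `--cuda` flag runs the merge on GPU; omitting it falls back to CPU, which works for an 8B passthrough merge but is slower.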