Commit 063fa54 by Liangmingxin (parent: ab3b156): Update README.md
---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
pipeline_tag: text-generation
---

This model was obtained by SFT fine-tuning freecs/ThetaWave-7B.

The Open-Orca/SlimOrca dataset was used for training.

This model does not currently support a system prompt because it follows the Mistral chat_template. The next version is in training and will switch to the ChatML template to support system prompts.

More model details will be released...
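Because the model follows the Mistral chat_template, conversations are rendered as `[INST] ... [/INST]` user turns with no system role. A minimal sketch of that rendering (the `render_mistral_prompt` helper below is hypothetical and the exact whitespace is an assumption; the tokenizer's own `apply_chat_template` is authoritative):

```python
# Hypothetical helper illustrating the Mistral-style chat format this model
# expects: user turns wrapped in [INST] ... [/INST], no system role.
# Exact whitespace may differ from tokenizer.apply_chat_template.
def render_mistral_prompt(messages):
    """messages: list of {"role": "user" | "assistant", "content": str}."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f" {msg['content']}</s>"
        else:
            # The Mistral template has no system role, hence no system_prompt
            # support in this model version.
            raise ValueError("system role not supported by this template")
    return prompt

print(render_mistral_prompt([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "Tell me a joke."},
]))
```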

vLLM deployment command:

```
# Single graphics card
python /path/to/vllm/vllm/entrypoints/openai/api_server.py \
    --model '/path/to/ThetaWave-7B-sft' \
    --tokenizer '/path/to/ThetaWave-7B-sft' \
    --tokenizer-mode auto \
    --dtype float16 \
    --enforce-eager \
    --host 0.0.0.0 \
    --port 6000 \
    --disable-log-stats \
    --disable-log-requests

# Dual graphics cards
python /path/to/vllm/vllm/entrypoints/openai/api_server.py \
    --model '/path/to/ThetaWave-7B-sft' \
    --tokenizer '/path/to/ThetaWave-7B-sft' \
    --tokenizer-mode auto \
    --dtype float16 \
    --enforce-eager \
    --tensor-parallel-size 2 \
    --worker-use-ray \
    --engine-use-ray \
    --host 0.0.0.0 \
    --port 6000 \
    --disable-log-stats \
    --disable-log-requests
```
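The command above starts an OpenAI-compatible server, so any OpenAI-style HTTP client can query it. A minimal standard-library sketch (the endpoint and model name mirror the flags above; `build_chat_payload` and `chat` are hypothetical helpers, and the payload deliberately contains no system message, since this model version does not support one):

```python
import json
import urllib.request

def build_chat_payload(model, user_message, temperature=0.7, max_tokens=256):
    """Assemble a request body for vLLM's OpenAI-compatible /v1/chat/completions.

    Only a user turn is included: this model version has no system_prompt support.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def chat(base_url, payload):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # The model name is the same path passed to --model when starting the server.
    payload = build_chat_payload("/path/to/ThetaWave-7B-sft", "Hello!")
    print(chat("http://127.0.0.1:6000", payload))
```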

Try it directly:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
