macadeliccc committed • Commit 6dbbb82 • Parent: b472fd8

Update README.md

README.md CHANGED
@@ -14,7 +14,11 @@ Merge of four Solar-10.7B instruct finetunes.
 ![solar](solar.png)
 
 ## Usage
+This SOLAR model _loves_ to code. In my experience, if you ask it a coding question it will use almost all of the available token limit to complete the code.
 
+However, this can also work to its detriment: if the request is complex, it may not finish the code within that limit. This behavior is not caused by an EOS token, as it finishes sentences quite normally when the question is not about code.
+
+Your mileage may vary.
 
 ## Code Example
 
@@ -59,7 +63,7 @@ print(generate_response(prompt), "\n")
 
 ## Llama.cpp
 
-GGUF Quants available [here]()
+GGUF Quants available [here](https://huggingface.co/macadeliccc/Orca-SOLAR-4x10.7b-GGUF)
 
 ![llama.cpp-screenshot](orca-llama-cpp-1.png)
 