Text Generation
GGUF
English
MOE
Mixture of Experts
2X8B
deepseek
reasoning
thinking
creative
creative writing
128k context
general usage
problem solving
brainstorming
solve riddles
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
story
writing
fiction
roleplaying
llama 3.1
mergekit
Inference Endpoints
conversational
Update README.md
README.md
CHANGED
@@ -37,6 +37,8 @@ pipeline_tag: text-generation
 
 <H2>L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-gguf</H2>
 
+<img src="two-gods2.jpg" style="float:right; width:300px; height:300px; padding:5px;">
+
 This model is DeepSeek and DeepHermes reasoning/thinking (Llama 3.1 - 8B each) in a MOE (Mixture of Experts) configuration
 equal to 16B parameters, compressed to 13.7 B.
 
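For readers who want to try the GGUF quants this README describes, the sketch below shows one way to load and prompt the model locally with llama-cpp-python. The quant filename, context size, and sampling settings are assumptions for illustration, not values taken from this repo.

```python
# Minimal sketch (assumption): loading a GGUF quant of this 2x8B MOE model
# with llama-cpp-python and running a single chat-style generation.
from llama_cpp import Llama

llm = Llama(
    # Hypothetical filename -- substitute the quant file you actually downloaded.
    model_path="L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-Q4_K_M.gguf",
    n_ctx=8192,        # the model card advertises 128k context; a smaller window saves RAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short opening scene for a mystery story."}
    ],
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```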