OmnicromsBrain committed
Commit 1a30d8f
Parent: 1d92242

Update README.md


![OmnicromsBrain.png](https://cdn-uploads.huggingface.co/production/uploads/65c70c9e21d80a923d664563/Z8ahvWwVtmxOH3kEve7Xv.png)

Files changed (1)
  1. README.md +5 -0
README.md CHANGED
@@ -19,6 +19,11 @@ base_model:
 
 # NeuralStar_AlphaWriter_4x7b
 
+I was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in NovelCrafter.
+Inspired by his LLM Course and fueled by his LazyMergekit, I couldn't help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing.
+I present NeuralStar_AlphaWriter_4x7b.
+
+
 NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
 * [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B)
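
For reference, a LazyMergekit MoE merge like this one is driven by a small YAML config handed to the `mergekit-moe` CLI. The sketch below is a hypothetical reconstruction, not the author's committed recipe: it lists only the two experts visible in this diff (the full model has four), and the `base_model`, `gate_mode`, `dtype`, and `positive_prompts` values are illustrative assumptions.

```yaml
# Hypothetical mergekit-moe config: a minimal sketch, not the actual merge recipe.
# Only the two experts visible in this diff are listed; NeuralStar_AlphaWriter_4x7b has four.
base_model: mlabonne/AlphaMonarch-7B   # assumed base; any of the experts could serve here
gate_mode: hidden                      # route tokens via hidden-state embeddings of the prompts
dtype: bfloat16
experts:
  - source_model: mlabonne/AlphaMonarch-7B
    positive_prompts:                  # placeholder routing prompts, not the author's
      - "write a scene"
      - "continue the story"
  - source_model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
    positive_prompts:
      - "edit this passage"
      - "fix the grammar and style"
```

LazyMergekit's Colab notebook generates a config along these lines, runs something like `mergekit-moe config.yaml merge` on it, and uploads the merged model to the Hub.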