# NeuralStar_AlphaWriter_4x7b

I was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in NovelCrafter.

Inspired by his LLM Course and fueled by his LazyMergeKit, I couldn't help but wonder what a writing model would be like if all four “experts” excelled in creative writing.

I present NeuralStar-AlphaWriter-4x7b:

NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):