Kquant03 committed
Commit
aa624ec
1 Parent(s): 3531d2a

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -16,11 +16,11 @@ tags:
 
  This is an update to the original Cognitive Fusion. We intend to perform a fine-tune on it in order to increase its performance.
 
- - [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) - base
- - [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) - expert #1
- - [macadeliccc/MonarchLake-7B](https://huggingface.co/macadeliccc/MonarchLake-7B) - expert #2
- - [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2) - expert #3
- - [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp) - expert #4
+ - [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B) - base
+ - [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B) - expert #1
+ - [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) - expert #2
+ - [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B) - expert #3
+ - [nlpguy/T3QM7](https://huggingface.co/nlpguy/T3QM7) - expert #4
 
  # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
  ### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
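For reference, a clown-car MoE of this shape (one base model plus four expert sources) is typically declared with a mergekit MoE config. The sketch below is hypothetical — this commit does not include the actual config used for the model, and the `gate_mode`, `dtype`, and `positive_prompts` placeholder values are assumptions, not values taken from this repository:

```yaml
# Hypothetical mergekit-moe config sketch -- NOT the config used for this model.
# Base and expert model names are taken from the updated README list above.
base_model: automerger/YamshadowExperiment28-7B
gate_mode: hidden          # assumption: hidden-state routing from positive prompts
dtype: bfloat16            # assumption
experts:
  - source_model: automerger/YamshadowExperiment28-7B   # expert #1
    positive_prompts: ["<routing prompts for expert #1>"]  # placeholder
  - source_model: liminerity/M7-7b                       # expert #2
    positive_prompts: ["<routing prompts for expert #2>"]  # placeholder
  - source_model: automerger/YamshadowExperiment28-7B   # expert #3
    positive_prompts: ["<routing prompts for expert #3>"]  # placeholder
  - source_model: nlpguy/T3QM7                           # expert #4
    positive_prompts: ["<routing prompts for expert #4>"]  # placeholder
```

With a config of this form, `mergekit-moe` builds a sparse MoE checkpoint whose router is initialized from the experts' hidden-state responses to the positive prompts, which is why the choice of prompts per expert matters as much as the choice of source models.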