InferenceIllusionist committed on
Commit b51a828 · 1 Parent(s): b3f4ffc

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -32,7 +32,7 @@ Trained with a whole lot of love on 1 epoch of cleaned and deduped c2 logs. This
 
 As hyperparameters and dataset intentionally mirror ones used in the original Sorcerer 8x22b tune, this is considered its 'lite' counterpart aiming to provide the same bespoke conversational experience relative to its size and reduced hardware requirements.
 
- While all three share the same Mistral-Small-Instruct base, in contrast to its sisters [Mistral-Small-NovusKyver](https://huggingface.co/Envoid/Mistral-Small-NovusKyver) and [Acoylte-22B](https://huggingface.co/rAIfle/Acolyte-22B) this release did not SLERP the resulting model with the original in a 50/50 ratio post-training. Instead, alpha was dropped when the lora was merged with full precision weights in the final step.
+ While all three share the same Mistral-Small-Instruct base, in contrast to its sisters [Mistral-Small-NovusKyver](https://huggingface.co/Envoid/Mistral-Small-NovusKyver) and [Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B) this release did not SLERP the resulting model with the original in a 50/50 ratio post-training. Instead, alpha was dropped when the lora was merged with full precision weights in the final step.
 
 ## Acknowledgments
 
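
For readers unfamiliar with the two merge strategies mentioned in the changed line, here is a minimal, hypothetical PyTorch sketch contrasting them: folding a LoRA delta into full-precision weights with a lowered alpha (the approach described for this release) versus merging at full alpha and then SLERP-ing 50/50 with the original weights (the approach described for the sister models). All tensor names, shapes, and the example alpha values are illustrative and are not taken from the actual Sorcerer training or merge scripts.

```python
import torch

def merge_lora(W_base: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, r: int) -> torch.Tensor:
    """Fold a LoRA delta into full-precision weights: W = W_base + (alpha / r) * B @ A.
    Lowering alpha here shrinks the adapter's contribution directly."""
    return W_base + (alpha / r) * (B @ A)

def slerp(W0: torch.Tensor, W1: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Spherical interpolation between two weight tensors (t=0.5 gives a 50/50 blend)."""
    v0, v1 = W0.flatten(), W1.flatten()
    cos = torch.clamp(torch.dot(v0, v1) / (v0.norm() * v1.norm()), -1.0, 1.0)
    theta = torch.acos(cos)
    if theta.abs() < 1e-6:  # nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * W0 + t * W1
    blended = (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
    return blended.view_as(W0)

# Toy example on a single weight matrix (out_features=64, in_features=32, rank r=8).
W_base = torch.randn(64, 32)
A, B = torch.randn(8, 32), torch.randn(64, 8)

# Sister models (as described): merge at full alpha, then SLERP 50/50 with the original.
sister_style = slerp(W_base, merge_lora(W_base, A, B, alpha=16, r=8), t=0.5)

# This release (as described): skip the post-training SLERP and drop alpha at merge time.
lite_style = merge_lora(W_base, A, B, alpha=8, r=8)
```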