Nitral-AI's measurement.json was used for quantization.
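As a rough illustration (not the uploader's exact command), exllamav2's `convert.py` can reuse a previously computed measurement.json via `-m`, which skips the calibration measurement pass. A minimal sketch, assuming a local clone of the exllamav2 repository; all paths below are placeholders:

```python
# Sketch: producing an 8.0bpw / h8 EXL2 quant with exllamav2's convert.py,
# reusing an existing measurement.json instead of re-running calibration.
# Paths are hypothetical placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",                        # run from a clone of the exllamav2 repo
        "-i", "models/Sekhmet_Bet-L3.1-8B-v0.2",       # unquantized source model (fp16)
        "-o", "work/",                                 # scratch / working directory
        "-cf", "models/Sekhmet_Bet-8.0bpw-h8-exl2",    # compiled output directory
        "-b", "8.0",                                   # target bits per weight
        "-hb", "8",                                    # head bits (the "h8" in the repo name)
        "-m", "measurement.json",                      # reuse Nitral-AI's measurement file
    ],
    check=True,
)
```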
Sekhmet_Bet-L3.1-8B-v0.2
exllamav2 quant for Nitral-AI/Sekhmet_Bet-L3.1-8B-v0.2
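For reference, a minimal sketch of loading this 8.0bpw EXL2 quant with the exllamav2 Python API (class names are the commonly used ones and may vary by version; the local path is a placeholder and assumes the repo has already been downloaded):

```python
# Sketch: loading the EXL2 quant and generating a short completion with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/Sekhmet_Bet-L3.1-8B-v0.2-8.0bpw-h8-exl2"  # local copy of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the cache as layers are loaded
model.load_autosplit(cache)                # split across available GPUs automatically
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Hello, how can I help you today?", settings, 128))
```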
Original model information:
Sekhmet_Bet [v-0.2] - Designed to provide robust solutions to complex problems while offering support and insightful guidance.
GGUF quants available thanks to: Reiterate3680 <3 GGUF Here
EXL2 Quant: 5bpw Exl2 Here
Recommended ST Presets: Sekhmet Presets (same as Hathor's)
Training Note: Sekhmet_Bet [v0.2] was trained for 1 epoch on Private - Hathor_0.85 Instructions, a small subset of creative writing data, and roleplaying chat pairs, on top of Sekhmet_Aleph-L3.1-8B-v0.1.
Additional Notes: This model was quickly assembled to provide users with a relatively uncensored alternative to L3.1 Instruct, featuring extended context capabilities. (As I will soon be on a short hiatus.) The learning rate for this model was set rather low, so I do not expect it to match the performance levels demonstrated by Hathor versions 0.5, 0.85, or 1.0.
Model tree for Slvcxc/Sekhmet_Bet-L3.1-8B-v0.2-8.0bpw-h8-exl2
Base model: ChaoticNeutrals/Sekhmet_Bet-L3.1-8B-v0.2
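To fetch this quant locally (for exllamav2 itself or a frontend such as TabbyAPI or text-generation-webui), a minimal sketch using huggingface_hub; the target directory is a placeholder:

```python
# Sketch: downloading the full EXL2 quant repo to a local directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Slvcxc/Sekhmet_Bet-L3.1-8B-v0.2-8.0bpw-h8-exl2",
    local_dir="models/Sekhmet_Bet-L3.1-8B-v0.2-8.0bpw-h8-exl2",  # placeholder path
)
print("Model downloaded to:", local_dir)
```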