---
license: other
base_model:
- TheDrummer/Hubble-4B-v1
library_name: transformers
quantized_by: Ex_y
base_model_relation: quantized
---
EXL2 quants of [TheDrummer/Hubble-4B-v1](https://huggingface.co./TheDrummer/Hubble-4B-v1)
Quantized with default parameters. The 6.5bpw and 8.0bpw quants use an 8-bit lm_head layer, while the 4.25bpw and 5.0bpw quants use a 6-bit lm_head layer.
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
### Works on [Kobold 1.74](https://github.com/LostRuins/koboldcpp/releases/tag/v1.74)!
*([Layla (iOS / Android)](https://www.layla-network.ai/) support is in progress)*
---
[BeaverAI](https://huggingface.co./BeaverAI) proudly presents...
# Hubble 4B v1
*Equipped with his five senses, man explores the universe around him and calls the adventure 'Science'.*
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/R8_o3CCpTgKv5Wnnry7E_.png)
## Description
This is a finetune of Nvidia's Llama 3.1 4B Minitron, a shrunk-down version of Llama 3.1 8B 128K.
### Usage
- ChatML or Text Completion
- Add `<|im_end|>` as a stop token
### Links
- Original: https://huggingface.co./TheDrummer/Hubble-4B-v1
- GGUF: https://huggingface.co./TheDrummer/Hubble-4B-v1-GGUF
- Chadquants: https://huggingface.co./bartowski/Hubble-4B-v1-GGUF
### Technical Note
Hubble was trained on ChatML with `<|end_of_text|>` as the EOS token. If you encounter any issues with the model, please let me know!
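Because the EOS token differs from the ChatML turn delimiter, it is safest to stop on both. Below is a rough loading/generation sketch for these EXL2 quants, assuming the exllamav2 dynamic generator API; the local model path is hypothetical and argument names (e.g. `stop_conditions`, `encode_special_tokens`) may differ between exllamav2 versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Hypothetical local path to one of the EXL2 quants (e.g. the 6.5bpw branch).
model_dir = "./Hubble-4B-v1-exl2-6.5bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# ChatML-formatted prompt (see Usage above).
prompt = (
    "<|im_start|>user\nDescribe the Hubble telescope in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on both the ChatML turn delimiter and the EOS token used during training.
output = generator.generate(
    prompt=prompt,
    max_new_tokens=256,
    stop_conditions=[tokenizer.eos_token_id, "<|im_end|>"],
    encode_special_tokens=True,  # tokenize <|im_start|>/<|im_end|> as special tokens
)
print(output)
```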