---
base_model:
- unsloth/Meta-Llama-3.1-8B
- Replete-AI/Replete-LLM-V2-Llama-3.1-8b
- bunnycore/Synesthesia-3.1-task_arithmetic
- bunnycore/HyperLlama-3.1-8B
- Dampfinchen/Llama-3.1-8B-Ultra-Instruct
- bunnycore/MegaHyperLlama3.1
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- bunnycore/HyperLexi-8B-breadcrumbs
library_name: transformers
tags:
- mergekit
- merge
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/LLama-3.1-Hyper-Stock-GGUF

This is a quantized version of [bunnycore/LLama-3.1-Hyper-Stock](https://huggingface.co./bunnycore/LLama-3.1-Hyper-Stock), created using llama.cpp.
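A typical way to fetch and run one of the GGUF files is shown below. The exact filename and quant level are assumptions; check the repo's file list for the names actually published.

```shell
# Download a single quant from the repo (filename is a guess -- see the
# repo's "Files" tab for the exact name and the quant level you want).
huggingface-cli download QuantFactory/LLama-3.1-Hyper-Stock-GGUF \
  LLama-3.1-Hyper-Stock.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (named llama-cli in recent builds).
./llama-cli -m LLama-3.1-Hyper-Stock.Q4_K_M.gguf -p "Hello" -n 128
```

Lower quant levels (e.g. Q4) trade some quality for a smaller memory footprint; higher ones (Q6, Q8) stay closer to the bfloat16 original.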
# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [unsloth/Meta-Llama-3.1-8B](https://huggingface.co./unsloth/Meta-Llama-3.1-8B) as the base.
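Conceptually, Model Stock interpolates each layer between the base weights and the centroid of the fine-tuned weights, with a ratio derived from the angle between the fine-tunes' task vectors. A minimal NumPy sketch of that rule follows; the function name and single-layer framing are illustrative, not mergekit's actual API.

```python
import numpy as np

def model_stock_layer(w_base, w_finetuned):
    """Merge one layer's weights with the Model Stock rule (Jang et al., 2024).

    w_base: base-model weights, shape (d,)
    w_finetuned: list of k fine-tuned weight vectors, each shape (d,)
    """
    k = len(w_finetuned)
    # Task vectors: displacement of each fine-tune from the base.
    deltas = [w - w_base for w in w_finetuned]
    # Estimate the angle as the mean pairwise cosine between task vectors.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            cos_vals.append(
                np.dot(deltas[i], deltas[j])
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cos_vals))
    # Interpolation ratio toward the fine-tuned centroid: the more aligned
    # the fine-tunes are, the further the merge moves from the base.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_center = np.mean(w_finetuned, axis=0)
    return t * w_center + (1 - t) * w_base
```

With seven fine-tuned models (as here), mergekit applies this per-layer; the averaging tends to cancel out the models' individual quirks while keeping their shared improvements over the base.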
### Models Merged

The following models were included in the merge:

* [Replete-AI/Replete-LLM-V2-Llama-3.1-8b](https://huggingface.co./Replete-AI/Replete-LLM-V2-Llama-3.1-8b)
* [bunnycore/Synesthesia-3.1-task_arithmetic](https://huggingface.co./bunnycore/Synesthesia-3.1-task_arithmetic)
* [bunnycore/HyperLlama-3.1-8B](https://huggingface.co./bunnycore/HyperLlama-3.1-8B)
* [Dampfinchen/Llama-3.1-8B-Ultra-Instruct](https://huggingface.co./Dampfinchen/Llama-3.1-8B-Ultra-Instruct)
* [bunnycore/MegaHyperLlama3.1](https://huggingface.co./bunnycore/MegaHyperLlama3.1)
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co./Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
* [bunnycore/HyperLexi-8B-breadcrumbs](https://huggingface.co./bunnycore/HyperLexi-8B-breadcrumbs)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Replete-AI/Replete-LLM-V2-Llama-3.1-8b
  - model: Dampfinchen/Llama-3.1-8B-Ultra-Instruct
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
  - model: bunnycore/HyperLlama-3.1-8B
  - model: bunnycore/HyperLexi-8B-breadcrumbs
  - model: bunnycore/MegaHyperLlama3.1
  - model: bunnycore/Synesthesia-3.1-task_arithmetic
merge_method: model_stock
base_model: unsloth/Meta-Llama-3.1-8B
dtype: bfloat16
```
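To reproduce a merge like this one, the configuration above can be saved to a file (here assumed to be `config.yaml`) and passed to mergekit's CLI:

```shell
pip install mergekit

# Runs the merge described in config.yaml and writes the merged model
# to ./merged-model; all source models are pulled from the Hub.
mergekit-yaml config.yaml ./merged-model
```

Note that merging eight 8B checkpoints requires enough disk and RAM to hold the weights being combined.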