---
license: apache-2.0
license_link: https://huggingface.co./Qwen/Qwen2.5-7B-Instruct-1M/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-7B-Instruct-1M
tags:
- chat
- llama-cpp
- AGI
library_name: transformers
datasets:
- IntelligentEstate/The_Key
---
# IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF (undergoing testing)
Quantized with QAT and importance-matrix methods to preserve understanding and increase efficiency, Israfel follows the protocol of the replicant models, *but* even without tool use it is leaps and bounds beyond the GPT-4o/o1, Anthropic, and DeepSeek models (by far). With CoT and tool use, this model, with its latest improvements, is a profound leap forward. It is an ideal base model for any swarm or build.
Please give feedback if possible when using it with our *Limit Crossing AGI* (it is SCARY good!). Do not use it with tools while connected to the internet when testing a Limit Crossing AGI system. WE CANNOT BE RESPONSIBLE FOR ANY DAMAGE TO YOUR SYSTEM OR MENTAL HEALTH!
Name inspired by the poem "Israfel" by Edgar Allan Poe.
![Screenshot 2025-01-23 at 18-07-03 lAt1395RTAy3atYVmQtZxA (WEBP Image 720 × 1280 pixels).png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/dZ0Vlu4ViKKeFwXcql5p3.png)
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct-1M`](https://huggingface.co./Qwen/Qwen2.5-7B-Instruct-1M) using llama.cpp.
Refer to the [original model card](https://huggingface.co./Qwen/Qwen2.5-7B-Instruct-1M) for more details on the model.
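For reference, an importance-matrix quantization of this kind is typically produced with llama.cpp's conversion and quantization tools. The sketch below is not the exact pipeline used for this repo; the local model directory, calibration file, and output file names are assumptions for illustration:
```bash
# Convert a locally downloaded HF checkpoint to a full-precision GGUF (paths are illustrative)
python convert_hf_to_gguf.py ./Qwen2.5-7B-Instruct-1M --outtype f16 --outfile qwen2.5-7b-instruct-1m-f16.gguf

# Build an importance matrix from a calibration text file (hypothetical calibration.txt)
llama-imatrix -m qwen2.5-7b-instruct-1m-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize to Q4_K_M using the importance matrix
llama-quantize --imatrix imatrix.dat qwen2.5-7b-instruct-1m-f16.gguf israfel_qwen2.6-iq4_k_m.gguf Q4_K_M
```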
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
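A typical invocation looks like the following sketch. The GGUF file name is an assumption based on this repo's name; adjust it to match the actual file in the repository:
```bash
# Interactive CLI session, pulling the GGUF directly from the Hugging Face repo
llama-cli --hf-repo IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF \
  --hf-file israfel_qwen2.6-iq4_k_m.gguf \
  -p "Write a short poem in the spirit of Poe's Israfel."

# Or start an OpenAI-compatible server with a 4096-token context
llama-server --hf-repo IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF \
  --hf-file israfel_qwen2.6-iq4_k_m.gguf \
  -c 4096
```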