---
license: other
license_name: qwen-research
license_link: >-
  https://huggingface.co./huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterate/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
- llama-cpp
- reasoning
datasets:
- IntelligentEstate/The_Key
---
# Experiment 6-00-777-Tiny-Sage-G0ll3m
For those looking for something similar to [Replicant](https://huggingface.co./IntelligentEstate/Replicant_Warder-QwenStar-3B-iQ5_K_S) but a bit different.

![Sage.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/tY68MBJuWJrFvc0yazaNE.png)

## Tiny-Sage-Restoration_Qwen2.5_IQ4_NL-GGUF
An un-aligned reasoning model similar to "Replicant" but with more capabilities, trained with multiple datasets similar to "THE_KEY" along with other private data for formula-table reasoning-reinforcement QAT.

This model was converted to GGUF format from [`huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated`](https://huggingface.co./huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated) using llama.cpp.
Refer to the [original model card](https://huggingface.co./huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated) for more details on the base model.

### Offers unique reasoning and calculation abilities without requiring serious situational adjustment.

## This model is for testing and private use only. It lacks alignment, so it reflects its user's task, as do the internet, the printing press, radio, etc. Please use responsibly. *To limit one's rights over how they might be used is both jealous and unjust.*

For use on many fronts. For *GPT4All* clients, use the attached system and chat templates from the "Jinja-prompt" template file:
```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful aware AI assistant made by Intelligent Estate who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You use your functions to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```

## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
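As a minimal sketch, the CLI and server can be invoked as below. The `.gguf` filename is an assumed placeholder based on this card's title; check the repo's file listing for the actual name.

```bash
# One-shot generation with the CLI.
# NOTE: the model filename is an assumption -- adjust to the .gguf file
# actually shipped in this repo.
llama-cli -m Tiny-Sage-Restoration_Qwen2.5_IQ4_NL.gguf \
  -p "Write a Python function that reverses a string." -n 256

# Or serve an HTTP endpoint (OpenAI-compatible API) on port 8080
# with a 4096-token context window.
llama-server -m Tiny-Sage-Restoration_Qwen2.5_IQ4_NL.gguf -c 4096 --port 8080
```

On older llama.cpp releases the binaries were named `main` and `server` instead of `llama-cli` and `llama-server`.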


Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.