|
--- |
|
tags: |
|
- mistral |
|
- merge |
|
--- |
|
|
|
experiment-A |
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/hABq6tjwMnm7LkzEP9L8O.png) |
|
# DupletBoreas-7B-t0.0001 |
|
|
|
## A merge of the following models using a custom NearSwap algorithm: |
|
|
|
* [fearlessdots/WizardLM-2-7B-abliterated](https://huggingface.co./fearlessdots/WizardLM-2-7B-abliterated) |
|
* [Virt-io/Erebus-Holodeck-7B](https://huggingface.co./Virt-io/Erebus-Holodeck-7B) |
|
|
|
With [fearlessdots/WizardLM-2-7B-abliterated](https://huggingface.co./fearlessdots/WizardLM-2-7B-abliterated) as the base model. |
|
|
|
## Thanks to mradermacher for the quants!
|
* [GGUF](https://huggingface.co./mradermacher/DupletBoreas-7B-t0.0001-GGUF) |
|
* [GGUF imatrix](https://huggingface.co./mradermacher/DupletBoreas-7B-t0.0001-i1-GGUF) |
|
|
|
* [Q8_0 GGUF](https://huggingface.co./v000000/DupletBoreas-7B-t0.0001-Q8_0-GGUF)

* [Q5_K_S GGUF](https://huggingface.co./v000000/DupletBoreas-7B-t0.0001-Q5_K_S-GGUF)
|
|
|
```python
# Fixed
import numpy as np

def lerp(a, b, t):
    # Standard linear interpolation between a and b with weight t
    return a * (1 - t) + b * t

def nearswap(v0, v1, t):
    # Interpolation strength is inversely proportional to how far the two
    # tensors diverge: elements that are nearly identical move fully toward v1,
    # while strongly differing elements stay close to v0.
    lweight = np.abs(v0 - v1)
    with np.errstate(divide='ignore', invalid='ignore'):
        lweight = np.where(lweight != 0, t / lweight, 1.0)
    lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    np.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)
    return lerp(v0, v1, lweight)
```
|
Credit to Alchemonaut for the NearSwap algorithm.
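
For illustration, here is a minimal sketch of how `nearswap` could be applied tensor-by-tensor to produce the merged checkpoint. The loading code, the layer loop, and the output path below are assumptions for the example, not the exact merge script used for this release.

```python
# Sketch only: assumes both checkpoints share the same architecture and
# parameter names, and that nearswap() from the block above is in scope.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("fearlessdots/WizardLM-2-7B-abliterated")
secondary = AutoModelForCausalLM.from_pretrained("Virt-io/Erebus-Holodeck-7B")

t = 0.0001  # presumably the "t0.0001" in the model name

with torch.no_grad():
    sec_state = secondary.state_dict()
    for name, param in base.named_parameters():
        v0 = param.detach().float().cpu().numpy()
        v1 = sec_state[name].float().cpu().numpy()
        merged = nearswap(v0, v1, t)  # pull the base weights toward the secondary model
        param.copy_(torch.from_numpy(merged).to(param.dtype))

base.save_pretrained("DupletBoreas-7B-t0.0001")  # output path is illustrative
```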
|
|
|
|
|
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
|
|
|
``` |
|
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, |
|
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> |
|
USER: Who are you? ASSISTANT: I am WizardLM.</s>...... |
|
``` |
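
If you are assembling the prompt programmatically, a small helper like the one below reproduces the format above; it is purely illustrative and is not shipped with the model.

```python
# Hypothetical helper that builds the Vicuna-style multi-turn prompt shown above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply_or_None) pairs."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# Ends with a bare "ASSISTANT:" so the model continues generating from there.
print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```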
|
<b>Add genres in a list like this:</b>
|
```
[Genre: <genre1>, <genre2>]
```
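
For example, a filled-in tag might look like this (the genre names below are placeholders chosen for illustration, not a recommendation):

```
[Genre: science fiction, horror]
```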