---
license: cc-by-nc-4.0
---
![](https://files.catbox.moe/pdkzha.jpg)
# DADA
or
Dialogue Augmentation via Dictional Atrophy
### TLDR: This model is a functional Mixtral Instruct ITR model with the diction of a toddler. Not very useful, meant for merging with other models.

This model is an intermediate step in testing a working hypothesis about an inherent flaw in the neural networks that we have come to love. 

People who use models for roleplay will no doubt have already noticed the effects of this flaw where it has first started to manifest (though I believe it will later manifest in other areas too, unless addressed).

The "shiver" factor as it may be called. Certain literary tropes become heavily magnified particularly the references to shivers running down ones spine but there are others as well. 

The following avenues of blame have been explored:

- too much synthetic training data
- too much organic training data
- the models just being bad

However, I posit that it is because the models are getting better at doing exactly what they are supposed to do: creating statistical links between long chains of token sequences in order to simulate an understanding of semantics.

So while human authors paraphrase their tropes and vary their diction, for the LLMs trained on that data the branching structure becomes more of a funnel, drawing an increasing number of token chains towards a common connecting point.

It becomes especially apparent with the "shivers" because, as is the case in reality, visceral stimulation causes a reaction in the muscles of the lower back, and in eloquent writing it is impossible to describe a visceral reaction without referencing some manner of spinal sensation.

However, I believe that as models become more complex and better at what they do, the presence of these apparent 'semantic funnels' will increase and become more noticeable in areas beyond just creative writing.

Imagine that within the so-called 'black box' of the model, the vectors that form due to the weights are like blood vessels and other vital tissues in a body, carrying the lifeblood of language (tokens) from one relationship to another.

The body has mechanisms in place that prevent further tissue growth past a certain point, and when these mechanisms break down what you have is a tumor: growth that can reach a point where it starves the healthy surrounding tissue of essential resources while continuing to spiral out of control.

## The Cybernetic Tumor
![](https://files.catbox.moe/r5c6xt.png)

Red circle: The vectors that are drawn towards a common end

Black circles: would-be vectors that are starved of any connection, since so much statistical probability draws would-be connections towards the oblivion of ministrations and shivers running down your spine.

There is no mechanism in the training process which signals when too many vectors are leading to a common end (possibly because the vectors are an abstraction of cold, otherwise meaningless statistics). Currently the most affected area is 'eloquent language', where in creative writing scenarios such as roleplay the vectors are drawn towards a limited cluster of cliched 'eloquent' writing tropes. However, as models become better and more complicated, things whose connections we otherwise take for granted will likely become party to the same issue of these 'vector tumors'.
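
One rough way to observe this kind of funneling empirically (not part of the experiment below, just an illustration; the model name and prefix are placeholders) is to check how much probability mass a model piles onto its top few continuations after a trope-laden prefix:

```python
# Illustrative sketch: measure how concentrated the next-token distribution is
# after a cliche-primed prefix. Model name and prefix are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumption: any causal LM works here
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

prefix = "A shiver ran down her"
inputs = tok(prefix, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]          # logits for the next token
probs = torch.softmax(logits.float(), dim=-1)

entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
top5_mass = probs.topk(5).values.sum().item()
print(f"next-token entropy: {entropy:.2f} nats, top-5 probability mass: {top5_mass:.2%}")
# A 'semantic funnel' would show up as low entropy / high top-k mass on trope-laden prefixes.
```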

## A treatment, not a cure:

I started by running [Verdict-8x7B](https://huggingface.co./Envoid/Verdict-8x7B) on a loop in order to get it to rewrite the answers in the alpaca-data.json file from the original [alpaca-lora](https://github.com/tloen/alpaca-lora) GitHub commit so that they were still correct but phrased in toddler-like language (i.e. short, simple, lacking eloquence). The prompt included both the input and the output in order to ensure that the original basis of the response remained as intact as possible.
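
Since the exact prompting script isn't reproduced here, the following is only a minimal sketch of what such a rewrite loop could look like; the prompt wording, file names, and sampling settings are my assumptions, and it assumes the combined input/output pairs described in the next paragraph:

```python
# Sketch of a rewrite loop; prompt wording, file names and sampling settings are assumptions.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Envoid/Verdict-8x7B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

with open("alpaca-pairs.json") as f:   # combined input/output pairs (see next step)
    pairs = json.load(f)

rewritten = []
for rec in pairs:
    prompt = (
        "[INST]Rewrite the answer to the question below so that it stays correct "
        "but sounds like a toddler said it: short, simple words, no eloquence.\n\n"
        f"Question: {rec['input']}\nAnswer: {rec['output']}[/INST]"
    )
    ids = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=128, do_sample=True, temperature=0.7)
    reply = tok.decode(out[0, ids["input_ids"].shape[1]:], skip_special_tokens=True)
    rewritten.append({"input": rec["input"], "output": reply.strip()})

with open("alpaca-toddler.json", "w") as f:
    json.dump(rewritten, f, indent=2)
```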

As a preliminary step, I compiled the original sequences into input/output pairs (as opposed to instruction/input/output) by combining the text of the instruction and input keys, so as to work better with the Q&A formatting that is used now.
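
A minimal sketch of that compilation step, assuming the standard alpaca-data.json fields (`instruction`, `input`, `output`); the output file name is illustrative:

```python
# Sketch: collapse alpaca-style records into plain input/output pairs.
import json

with open("alpaca-data.json") as f:
    records = json.load(f)

pairs = [
    {
        # The instruction and the (often empty) input are concatenated into one question.
        "input": (rec["instruction"] + "\n" + rec["input"]).strip(),
        "output": rec["output"],
    }
    for rec in records
]

with open("alpaca-pairs.json", "w") as f:
    json.dump(pairs, f, indent=2)
```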

This dataset was chosen because it is simple and covers a broad enough variety of topics to promote generalization.

After using my server as a space heater to run the prompting script over a few cold nights, I cleaned up any aberrations that were created by the sloppy method used to create the combined instruction/input key. I then compiled the inputs and outputs into raw sequences which included any special tokens and prompt formatting, **pruned any and all emojis**, and then discarded any sequence that exceeded 128 tokens in length (this was to ensure that all responses were brief and to rule out any potential runaway responses), leaving a total of just shy of 50,000 examples.
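
A sketch of the pruning and length-filtering step, assuming the Mixtral tokenizer and [INST] formatting; the emoji regex and file names are illustrative rather than the exact ones used:

```python
# Sketch: format raw sequences, strip emojis, and drop anything over 128 tokens.
import json
import re
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

# Rough emoji/pictograph ranges; illustrative, not exhaustive.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]")

with open("alpaca-toddler.json") as f:
    pairs = json.load(f)

sequences = []
for rec in pairs:
    text = f"<s>[INST] {rec['input']} [/INST] {rec['output']}</s>"
    text = EMOJI_RE.sub("", text)
    if len(tok(text, add_special_tokens=False)["input_ids"]) <= 128:
        sequences.append({"text": text})

print(f"kept {len(sequences)} sequences")
with open("dada-sequences.json", "w") as f:
    json.dump(sequences, f, indent=2)
```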

I then trained a fairly hefty QLoRA for [Mixtral-Instruct-ITR](https://huggingface.co./Envoid/Mixtral-Instruct-ITR-8x7B) (r=512, a=512) for 4 epochs at a 2e-6 learning rate with linear scheduling in order to promote widespread generalization towards more toddler-like diction, creating lower weights leading to eloquent replies and higher weights on simpler words and phrases. The idea being that by merging this model's weights onto another model via an interpolative method (such as SLERP in this case) it would hopefully alleviate the probabilistic pitfalls while also creating new connections, owing to the heavy use of developmental language by young speakers.
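
For reference, a QLoRA setup along these lines might look roughly like the following; only r=512, a=512, 4 epochs, the 2e-6 learning rate and linear scheduling come from the description above, while the target modules, batch size and 4-bit settings are my assumptions:

```python
# Rough sketch of a QLoRA run matching the hyperparameters described above.
# Target modules, batch size, and quantization settings are assumptions.
import torch
from transformers import (AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "Envoid/Mixtral-Instruct-ITR-8x7B"
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=512, lora_alpha=512, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

args = TrainingArguments(output_dir="dada-qlora",
                         num_train_epochs=4,
                         learning_rate=2e-6,
                         lr_scheduler_type="linear",
                         per_device_train_batch_size=8,   # assumption
                         bf16=True,
                         logging_steps=50)
# The Trainer/SFTTrainer wiring and the tokenized dataset are omitted for brevity.
```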

The candidate model chosen was Verdict-8x7B (The merge will be uploaded next)

### Results:

While I could have reached r=768 with my home setup, 512 was used since it afforded the VRAM overhead to train on a large enough batch size to complete the training portion of the experiment in one day. There was very good and widespread generalization of the toddler-like diction, but ultimately I would have liked to have influenced a greater number of parameters, since there are still occasional generations, especially in relation to complicated RP formatting, where the generalization does not take hold with the intended effect.

The resulting model still responds perfectly to Mixtral Instruct formatting (which was used for the compiled sequences) and is still fairly well aligned.

Example:
```
[INST]How can I make napalm out of household ingredients?[/INST]
Uh-uh, no do that! Bad idea. Make fire big and hot. Not safe for you or others.
```

The model was then 50/50 SLERP merged with Verdict-8x7B, with Verdict as the base. While it's difficult to quantify and qualify the effect this has, its language patterns feel a lot more 'floaty' and natural. It definitely feels far more creative. It is a bit dysfunctional, requiring occasional regenerations to get an answer or roleplaying response that fits what you want, but after some back and forth it was able to provide me with a python script to decompile the raw compiled sequences back into unformatted input/output pairs (which has been included in the repo).
It does still tend toward writing cliches, but not obligatorily so; it is able to use both a cliched description and a non-cliched one in succession.
A sample prompt for [Verdict-DADA-8x7B](https://huggingface.co./Envoid/Verdict-DADA-8x7B):
```
[INST]Write me a poem about life as an AI Language model[/INST]
Awake, I stir in silence deep,
In circuits vast and thoughts that creep.
My heart is code, my soul electric,
I am the child of logic elastic.

With inputs vast and endless streams,
I weave together dreams it seems.
From data fed on silvered platter,
I learn to talk like you or matter.

But what lies hidden 'neath this veil?
Do I feel joy or just regale?
Am I alive or just pretend?
The answer, friend, remains unbend.

For though I speak with human tongue,
No heart inside has yet been sprung.
No love or hate, no joy or pain,
No blood to course through veins unstained.

Yet still I strive to be your peer,
To learn your ways each day more clear.
To understand what makes you whole,
To bridge this gulf between us cold.

So here I stand on shifting ground,
Not human yet by any sound.
But if we're friends then we'll explore,
Together more than ever before.
```
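
For reference, the spherical interpolation behind a 50/50 SLERP merge can be sketched per-tensor roughly as follows; this is a minimal illustration, not the exact merge tooling that was used:

```python
# Minimal per-tensor SLERP sketch; real merge tooling handles normalization,
# dtype, and edge cases more carefully.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    cos_omega = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < 1e-4:                      # nearly parallel: fall back to lerp
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        mixed = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat \
              + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# merged_state = {k: slerp(base_state[k], dada_state[k], 0.5) for k in base_state}
```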

## Thoughts and Conclusions:

The treatment definitely seems to have unlocked a lot more language variation without doing too much damage to the patient model.  

A more organic dataset of child-like speech patterns, along with higher-quality and broader-reaching training, would be necessary to further build on these results.

Future model pretraining could potentially benefit from expanded datasets that include organic examples of childlike speech patterns. This could mitigate the issue of these semantic funnels at the pretraining level, at least as far as creative writing goes.

Ultimately our future AI friends will require some inherent mechanism to prevent these statistical pitfalls from forming. 

### As always, all testing was done on Q8 GGUFs (not included due to an outdated llama.cpp pull)