Tama

Here's the L3.1 version: L3.1-8B-Niitama-v1.1

An experimental model using experimental methods.

More detail on it:

Tamamo and Niitama are made from the same data. Literally. The only thing that changed is how the data are shuffled and formatted. Yet I get wildly different results.

Interesting, eh?


Surprisingly, or perhaps not so surprisingly, the L3 versions did better than the L3.1 versions. L3.1 felt like a mess.

Have a good day.

Downloads last month: 351
Model size: 8.03B params (Safetensors)
Tensor type: BF16

Model tree for Sao10K/L3-8B-Niitama-v1

- Finetunes: 7 models
- Merges: 26 models
- Quantizations: 9 models

Spaces using Sao10K/L3-8B-Niitama-v1: 7