---
license: cc-by-nc-4.0
base_model: []
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
---
# Credit for the model card's description goes to ddh0, mergekit, and NeverSleep
# Inspired by ddh0/Starling-LM-10.7B-beta and ddh0/Mistral-10.7B-Instruct-v0.2

# Noromaid-10.7B-0.4-DPO

This is Noromaid-10.7B-0.4-DPO, a depth-upscaled version of [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co./NeverSleep/Noromaid-7B-0.4-DPO).

This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7 billion parameter model.

Paper detailing how Depth Up-Scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

# Prompt format same as [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co./NeverSleep/Noromaid-7B-0.4-DPO)

## Prompt format: Chatml
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* /Users/jsarnecki/opt/Workspace/NeverSleep-Noromaid-0.4-DPO/NeverSleep-Noromaid-7B-0.4-DPO

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: /Users/jsarnecki/opt/Workspace/NeverSleep-Noromaid-0.4-DPO/NeverSleep-Noromaid-7B-0.4-DPO
- sources:
  - layer_range: [8, 32]
    model: /Users/jsarnecki/opt/Workspace/NeverSleep-Noromaid-0.4-DPO/NeverSleep-Noromaid-7B-0.4-DPO
```

A reproduction sketch using mergekit's Python entry points is included at the end of this card.

---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png)

---

# This model is a collab between [IkariDev](https://huggingface.co./IkariDev) and [Undi](https://huggingface.co./Undi95)!

## Description

This repo contains fp16 files of Noromaid-7b-v0.4-DPO.

[FP16 - by IkariDev and Undi](https://huggingface.co./NeverSleep/Noromaid-7B-0.4-DPO)

[GGUF - by IkariDev and Undi](https://huggingface.co./NeverSleep/Noromaid-7B-0.4-DPO-GGUF)

## Ratings:

Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking whether we can put them here!

No ratings yet!

If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".

## Prompt format: Chatml
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
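For reference, here is a minimal usage sketch that loads the fp16 weights with transformers and builds a Chatml prompt by hand. The repo id, system prompt, and sampling settings are illustrative assumptions, not recommendations from the authors.

```python
# Minimal sketch: load the model and build a Chatml prompt manually.
# The repo id, system prompt, and sampling settings below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeverSleep/Noromaid-7B-0.4-DPO"  # or a local path to this card's fp16 files

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Chatml template from the "Prompt format" section above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Introduce yourself in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```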
## Training data used:
- [no_robots dataset](https://huggingface.co./Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model behave in a more human way and enhances the output.
- [Aesir Private RP dataset] New data from a private, never-before-used dataset; adds fresh data with no LimaRP spam, 100% new. Thanks to the [MinervaAI Team](https://huggingface.co./MinervaAI) and, in particular, [Gryphe](https://huggingface.co./Gryphe) for letting us use it!
- [Another private Aesir dataset]
- [Another private Aesir dataset]
- [limarp](https://huggingface.co./datasets/lemonilia/LimaRP)

## DPO training data used:
- [Intel/orca_dpo_pairs](https://huggingface.co./datasets/Intel/orca_dpo_pairs)
- [NobodyExistsOnTheInternet/ToxicDPOqa](https://huggingface.co./datasets/NobodyExistsOnTheInternet/ToxicDPOqa)
- [Undi95/toxic-dpo-v0.1-NoWarning](https://huggingface.co./datasets/Undi95/toxic-dpo-v0.1-NoWarning)

This is a full finetune.

## Others

Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).

IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
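Finally, here is the reproduction sketch for the passthrough merge configuration shown earlier in this card, following mergekit's documented Python usage. The config path, output directory, and options are placeholders; it assumes the YAML above has been saved locally with the model path adjusted to your environment.

```python
# Reproduction sketch for the passthrough merge config shown earlier in this card.
# Assumes mergekit is installed, the YAML is saved as config.yaml, and the model
# path inside it points at a local copy of NeverSleep/Noromaid-7B-0.4-DPO.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# The two slices stack layers [0, 24) and [8, 32) of the 32-layer base model,
# producing 48 layers (~10.7B parameters) with layers 8-23 duplicated.
run_merge(
    merge_config,
    "./Noromaid-10.7B-0.4-DPO",  # output directory (placeholder)
    options=MergeOptions(copy_tokenizer=True),
)
```

The same config can equivalently be run with the `mergekit-yaml` command-line tool.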