---
language:
  - multilingual
thumbnail: url to a thumbnail used in social sharing
tags:
  - coding
  - moe
license: mit
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
pipeline_tag: text-generation
---

## Usage

NebulaNet-v2 is a MoE (mixture-of-experts) merge of four 7B expert models. It is strong at coding and multilingual translation, and should also be fluent in chat and math.

In my observation, the merged 4x7B model performs much better than the original Contextual_KTO_Mistral_PairRM on both coding and multilingual text generation.
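A minimal sketch of running the model with `transformers`. The repo id below is an assumption for illustration; substitute the actual Hugging Face path of this model.

```python
# Minimal text-generation sketch. NOTE: the repo id is a hypothetical
# placeholder -- replace it with this model's actual Hugging Face path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davideuler/NebulaNet-v2"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that loading a 4x7B MoE in full precision needs substantial memory; `device_map="auto"` lets `accelerate` spread the weights across available devices.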

## Mergekit config

```yaml
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
experts:
  - source_model: ContextualAI/Contextual_KTO_Mistral_PairRM
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "I want"
  - source_model: Nexusflow/Starling-LM-7B-beta
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: snorkelai/Snorkel-Mistral-PairRM-DPO
    positive_prompts:
      - ""
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
```
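Assuming the config above is saved as `moe-config.yaml`, the merge can be reproduced with mergekit's MoE entry point (a sketch; exact flags depend on your mergekit version, and each expert's `positive_prompts` steer the router toward that expert for matching inputs):

```shell
# Install mergekit, then build the 4x7B MoE from the config above.
# The output directory name is arbitrary.
pip install mergekit
mergekit-moe moe-config.yaml ./NebulaNet-v2
```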