---
license: apache-2.0
pipeline_tag: text-generation
---

# Lumosia-v2-MoE-4x10.7

The Lumosia series has been upgraded with Lumosia V2.

## What's New in Lumosia V2?

Lumosia V2 takes the original vision of being an "all-rounder" and refines it with more nuanced capabilities.

**Topic/Prompt-Based Approach:**

Lumosia V2 selects experts by topic and prompt, diverging from the keyword-based approach of its counterpart, Umbra.

**Context and Coherence:**

Lumosia V2 has a base context of 8k (scrolling window) and can maintain coherence up to 16k.

**Balanced and Versatile:**

The core ethos of Lumosia V2 is balance. It's designed to be your go-to assistant.

**Experimentation and User-Centric Development:**

Lumosia V2 remains an experimental model: a mosaic of the best-performing Solar models, selected based on user experience. This version is a testament to the idea that innovation is a journey, not a destination.

Come join the Discord: ConvexAI

## Template:

```
### System:

### USER:{prompt}

### Assistant:
```
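
For reference, a minimal sketch of filling this template by hand (`build_prompt` is a hypothetical helper, not something shipped with the model):

```python
# Minimal sketch: fill the template above manually.
# build_prompt is a hypothetical helper, not part of this repo.
def build_prompt(system: str, prompt: str) -> str:
    return f"### System:\n{system}\n\n### USER:{prompt}\n\n### Assistant:\n"

print(build_prompt("You are Lumosia, a helpful assistant.", "Hello!"))
```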

## Settings:

- Temp: 1.0
- min-p: 0.02-0.1
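
These map onto `transformers` generation parameters roughly as follows (a sketch; `min_p` support requires a recent `transformers` release, and 0.05 is just one value from the recommended range):

```python
from transformers import GenerationConfig

# Sketch of the recommended sampling settings.
# Note: min_p requires a recent transformers version.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.0,  # Temp: 1.0
    min_p=0.05,       # one value from the recommended 0.02-0.1 range
)
```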

## Evals:

- Avg:
- ARC:
- HellaSwag:
- MMLU:
- T-QA:
- Winogrande:
- GSM8K:

## Examples:

Example 1:

User:

Lumosia:

Example 2:

User:

Lumosia:

## 🧩 Configuration

```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts: [""]
  - source_model: maywell/PiVoT-10.7B-Mistral-v0.2-RP
    positive_prompts: [""]
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts: [""]
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v1
    positive_prompts: [""]
```
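
To reproduce the merge, a config like the one above can be passed to mergekit's MoE script (a sketch, assuming a current `mergekit` install; the exact CLI and flags may differ by version):

```python
# Sketch: build the MoE with mergekit's mergekit-moe CLI (assumed invocation).
!pip install -qU mergekit
!mergekit-moe config.yaml ./Lumosia-v2-MoE-4x10.7
```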

## 💻 Usage

```python
# Install dependencies (bitsandbytes enables 4-bit loading, accelerate handles device placement)
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-MoE-4x10.7"

# Load the tokenizer and build a text-generation pipeline with the model quantized to 4-bit
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample a completion
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```