Model Card

Model Name: BathSalt-1/daedalus-phi-3

Model Type: Large Language Model

Description: This model is a merge of Or4cl3-1/Daedalus_1 and microsoft/Phi-3-mini-4k-instruct, produced with the LazyMergekit library. It is intended for general-purpose natural language processing tasks.

Metadata:

  • License: MIT License
  • Language: English
  • Library: Transformers
  • Base Model: microsoft/Phi-3-mini-4k-instruct
  • Merge Method: slerp
  • Layer Range: [0, 32]
  • Parameters:
    • t:
      • filter: self_attn, value: [0, 0.5, 0.3, 0.7, 1]
      • filter: mlp, value: [1, 0.5, 0.7, 0.3, 0]
      • value (all other tensors): 0.5
    • dtype: bfloat16
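
For reference, the settings above correspond to a mergekit slerp configuration along these lines (a reconstruction from the metadata; the exact file used for the merge may differ):

slices:
  - sources:
      - model: Or4cl3-1/Daedalus_1
        layer_range: [0, 32]
      - model: microsoft/Phi-3-mini-4k-instruct
        layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3-mini-4k-instruct
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

Here t controls the interpolation between the two source models across the layer range; tensors matched by neither the self_attn nor the mlp filter use the flat value 0.5.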

Usage:

  • Tokenizer: AutoTokenizer
  • Model: AutoModelForCausalLM (Phi-3 is a decoder-only model, so AutoModelForSeq2SeqLM does not apply; see the loading sketch below)
  • Pipeline: text-generation
  • Device: auto
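
To load the model directly rather than through a pipeline, a minimal sketch (assuming the merged checkpoint follows the standard Phi-3 causal-LM layout):

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "BathSalt-1/daedalus-phi-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype; float16 also works
    device_map="auto",
)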

Example Code:

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "BathSalt-1/daedalus-phi-3"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# device_map="auto" places the model on available GPU(s) via accelerate.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate with sampled decoding.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
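
The call above uses sampled decoding (temperature 0.7, top-k 50, nucleus top-p 0.95), so outputs will vary between runs; set do_sample=False for deterministic, greedy generation.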