---
license: apache-2.0
library_name: peft
tags:
  - axolotl
  - generated_from_trainer
base_model: mistralai/Mistral-7B-v0.3
model-index:
  - name: hc-mistral-7B-v0.3-alpaca-first-100
    results: []
---

## Usage

You can use this model with the following code.

First, download the model:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "lawrencewu/hc-mistral-7B-v0.3-alpaca-first-100"
model = AutoPeftModelForCausalLM.from_pretrained(model_id).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token by default
```
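
If the full-precision weights do not fit on your GPU, you can instead load the base model in 4-bit, matching the QLoRA setup this adapter was trained with. A minimal sketch, assuming a recent `bitsandbytes` install (the quantization settings here are illustrative, not part of the original card):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import BitsAndBytesConfig

model_id = "lawrencewu/hc-mistral-7B-v0.3-alpaca-first-100"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,  # illustrative; match your hardware
)
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # replaces the manual .cuda() call
)
```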

Then, construct the prompt template like so:

```python
def prompt(nlq, cols):
    return f"""Honeycomb is an observability platform that allows you to write queries to inspect trace data. You are an assistant that takes a natural language query (NLQ) and a list of valid columns and produce a Honeycomb query.

    ### Instruction:
    
    NLQ: "{nlq}"
    
    Columns: {cols}
    
    ### Response:
"""

def prompt_tok(nlq, cols):
    _p = prompt(nlq, cols)
    input_ids = tokenizer(_p, return_tensors="pt", truncation=True).input_ids.cuda()
    # Greedy decoding; max_new_tokens is generous for a single Honeycomb query.
    out_ids = model.generate(input_ids=input_ids, max_new_tokens=5000,
                             do_sample=False)
    # Decode, then slice off the echoed prompt so only the generated query remains.
    return tokenizer.batch_decode(out_ids.detach().cpu().numpy(),
                                  skip_special_tokens=True)[0][len(_p):]
```
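
To sanity-check the template before running generation, you can render it for a toy query (the inputs here are hypothetical):

```python
# Prints the exact string the model will be conditioned on.
print(prompt("Slow requests by endpoint", ["duration_ms", "http.route"]))
```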

Finally, you can get predictions like this:

```python
# model inputs
nlq = "Exception count by exception and caller"
cols = ['error', 'exception.message', 'exception.type', 'exception.stacktrace', 'SampleRate', 'name', 'db.user', 'type', 'duration_ms', 'db.name', 'service.name', 'http.method', 'db.system', 'status_code', 'db.operation', 'library.name', 'process.pid', 'net.transport', 'messaging.system', 'rpc.system', 'http.target', 'db.statement', 'library.version', 'status_message', 'parent_name', 'aws.region', 'process.command', 'rpc.method', 'span.kind', 'serializer.name', 'net.peer.name', 'rpc.service', 'http.scheme', 'process.runtime.name', 'serializer.format', 'serializer.renderer', 'net.peer.port', 'process.runtime.version', 'http.status_code', 'telemetry.sdk.language', 'trace.parent_id', 'process.runtime.description', 'span.num_events', 'messaging.destination', 'net.peer.ip', 'trace.trace_id', 'telemetry.instrumentation_library', 'trace.span_id', 'span.num_links', 'meta.signal_type', 'http.route']

# print prediction
out = prompt_tok(nlq, cols)
print(nlq, '\n', out)
```

This will give you a prediction that looks like this:

"{'breakdowns': ['exception.message', 'exception.type'], 'calculations': [{'op': 'COUNT'}], 'filters': [{'column': 'exception.message', 'op': 'exists'}, {'column': 'exception.type', 'op': 'exists'}], 'orders': [{'op': 'COUNT', 'order': 'descending'}], 'time_range': 7200}"

Built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), version 0.4.0.

The full axolotl config used for training:

```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

lora_fan_in_fan_out: false
data_seed: 49
seed: 49

datasets:
  - path: data/output_first_100.jsonl
    type: sharegpt
    conversation: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-alpaca-out
hub_model_id: lawrencewu/hc-mistral-7B-v0.3-alpaca-first-100

adapter: qlora
lora_model_dir:

sequence_len: 896
sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: hc-axolotl-mistral
wandb_entity: law

gradient_accumulation_steps: 4
micro_batch_size: 16
eval_batch_size: 16
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
max_grad_norm: 1.0
adam_beta2: 0.95
adam_epsilon: 0.00001
save_total_limit: 12

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 20
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 6
debug:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
save_safetensors: true
```
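
With axolotl 0.4.0 installed, a config like this is typically launched with `accelerate launch -m axolotl.cli.train config.yml`, where `config.yml` holds the YAML above (the filename is illustrative).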

# hc-mistral-7B-v0.3-alpaca-first-100

This model is a QLoRA fine-tune of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on `data/output_first_100.jsonl` (see the axolotl config above). It achieves the following results on the evaluation set:

- Loss: 1.0883

## Model description

A QLoRA (4-bit LoRA) adapter for Mistral-7B-v0.3 that maps a natural language query (NLQ) plus a list of valid columns to a Honeycomb query, using the prompt template shown in the Usage section above.

## Intended uses & limitations

Intended for generating Honeycomb queries from natural language with the exact prompt template above; it is not meant for general-purpose chat. The adapter was trained on a very small dataset (the first 100 examples of the source data, per the dataset name), so generated queries should be validated before use.

## Training and evaluation data

Trained on `data/output_first_100.jsonl` (sharegpt format with the alpaca conversation template), with 10% of the examples held out for evaluation (`val_set_size: 0.1` in the config above).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 49
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
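
(The total train batch size follows from the per-device micro batch size times the gradient accumulation steps: 16 × 4 = 64.)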

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2291        | 0.6667 | 1    | 1.1299          |
| 1.2111        | 1.3333 | 2    | 1.1216          |
| 1.2203        | 2.0    | 3    | 1.0883          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
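
If you would rather serve the model without a runtime PEFT dependency, you can fold the adapter into the base weights using PEFT's `merge_and_unload`. A minimal sketch (the output directory name is illustrative; merging requires loading the unquantized base weights, so budget memory accordingly):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "lawrencewu/hc-mistral-7B-v0.3-alpaca-first-100"
model = AutoPeftModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights

merged.save_pretrained("hc-mistral-7B-v0.3-merged")
AutoTokenizer.from_pretrained(model_id).save_pretrained("hc-mistral-7B-v0.3-merged")
```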