---
license: apache-2.0
library_name: peft
tags:
- finetuned
- multimodal
base_model: mistralai/Mistral-7B-Instruct-v0.1
dataset: sshh12/whisper-gpt-common_voice_15_0-finetune
inference: false
---
|
|
|
These are weights for a version of `mistralai/Mistral-7B-Instruct-v0.1` finetuned for multimodal applications.
|
|
|
### Modalities
|
|
|
* WhisperAudioModality (use `<speech>` in the text and provide `speech_audios`; each audio clip is encoded as 10 tokens)
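
The 768-wide projector input shown under **Model** matches a Whisper encoder's hidden size. Below is a minimal sketch of producing such a feature with `transformers`; the specific checkpoint (`openai/whisper-small`) and the mean-pooling step are assumptions for illustration, not taken from this card — the actual preprocessing is defined by `WhisperAudioModality` in the multi_token repo.

```python
import torch
from transformers import WhisperFeatureExtractor, WhisperModel

# Checkpoint choice and mean pooling are illustrative assumptions; the card
# only tells us the projector expects a 768-dim feature per audio clip.
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
encoder = WhisperModel.from_pretrained("openai/whisper-small").encoder

def encode_audio(waveform, sampling_rate=16_000):
    """Return a (1, 768) feature of the kind the audio projector consumes."""
    inputs = feature_extractor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(inputs.input_features).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1)
```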
|
|
|
### Usage
|
|
|
GitHub: https://github.com/sshh12/multi_token (includes training scripts and a basic inference server)
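
For reference, the LoRA weights themselves can be attached to the base model with the standard `peft` API. This is only a sketch (the repository id below is a placeholder), and it does not by itself enable audio inputs — the Whisper encoding, `<speech>` expansion, and projector are wired up by the multi_token code linked above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder id: substitute the id of this adapter repository.
ADAPTER_REPO = "<this-adapter-repo>"

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Attaches the LoRA adapters to the attention and MLP projections; the audio
# projector and preprocessing still require the multi_token code.
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
```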
|
|
|
### Dataset
|
|
|
sshh12/whisper-gpt-common_voice_15_0-finetune (300,000 examples)
|
|
|
```
{'speech_audios': [{'dataset_args': {'name': 'en', 'path': 'mozilla-foundation/common_voice_15_0', 'split': 'train'}, 'idx': 68335}],
 'messages': [{'content': '<speech> What is said in the audio?', 'role': 'user'},
              {'content': 'The screen fades to black.', 'role': 'assistant'}]}
```
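
Each record references its audio clip by dataset arguments and row index rather than embedding the waveform. A minimal sketch of resolving such a reference with the `datasets` library, assuming the column layout shown above (Common Voice 15 is gated and requires accepting its terms on the Hub):

```python
from datasets import load_dataset

records = load_dataset("sshh12/whisper-gpt-common_voice_15_0-finetune", split="train")
example = records[0]

# Resolve the audio pointer against the referenced Common Voice split.
ref = example["speech_audios"][0]
common_voice = load_dataset(
    ref["dataset_args"]["path"],   # mozilla-foundation/common_voice_15_0
    ref["dataset_args"]["name"],   # language config, e.g. "en"
    split=ref["dataset_args"]["split"],
)
clip = common_voice[ref["idx"]]["audio"]  # {"array": ..., "sampling_rate": ...}
print(example["messages"])
```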
|
|
|
### Training Device(s)
|
|
|
```
name, pci.bus_id, vbios_version
NVIDIA RTX A6000, 00000000:82:00.0, 94.02.5C.00.02
```
|
|
|
|
|
### Model
|
|
|
```
MistralLMMForCausalLM.model =

PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralLMMForCausalLM(
      (model): MistralLMMModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (k_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (v_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=14336, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (up_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=14336, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (down_proj): lora.Linear(
                (base_layer): Linear(in_features=14336, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=14336, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (act_fn): SiLU()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
        (audio_whisper_lmm_projector): _MLPVectorProjector(
          (mlps): ModuleList(
            (0-9): 10 x Sequential(
              (0): Linear(in_features=768, out_features=4096, bias=True)
              (1): GELU(approximate='none')
              (2): Linear(in_features=4096, out_features=4096, bias=True)
            )
          )
        )
      )
      (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
    )
  )
)
```
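
The `audio_whisper_lmm_projector` printed above consists of ten independent two-layer MLPs, each mapping a 768-dim Whisper feature to one 4096-dim token embedding, which is how a single `<speech>` clip becomes 10 soft tokens in the prompt. A sketch mirroring that structure (the authoritative `_MLPVectorProjector` lives in the multi_token repo):

```python
import torch
import torch.nn as nn

class MLPVectorProjector(nn.Module):
    """Ten 768 -> 4096 MLPs, one per audio token, mirroring the dump above."""

    def __init__(self, in_dim: int = 768, out_dim: int = 4096, num_tokens: int = 10):
        super().__init__()
        self.mlps = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_dim, out_dim),
                nn.GELU(),
                nn.Linear(out_dim, out_dim),
            )
            for _ in range(num_tokens)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, 768) audio feature -> (batch, 10, 4096) soft tokens
        return torch.stack([mlp(features) for mlp in self.mlps], dim=1)
```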
|
|
|
### Framework versions
|
|
|
- PEFT 0.7.0