
PruneSLU-30M: Enhanced Model for On-Device Spoken Language Understanding

PruneSLU-30M is a pruned and fine-tuned version of the openai/whisper-tiny.en model, designed for robust Spoken Language Understanding (SLU) tasks. It strikes a balance between performance and efficiency, making it suitable for more demanding on-device applications.

Model Overview

  • Base Model: openai/whisper-tiny.en
  • Task: Spoken Language Understanding (SLU)
  • Dataset: Fine-tuned on the STOP dataset
  • Pruning Techniques: Employs vocabulary pruning and layer-wise structural pruning, followed by retraining, to produce a model that is both efficient and high-performing (a minimal sketch of the layer-wise step is shown after this list).
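
A minimal sketch of the layer-wise structural pruning step, starting from the base checkpoint, is shown below. The layer indices are illustrative placeholders; the actual layers removed, the vocabulary-pruning procedure, and the retraining recipe used for PruneSLU-30M are not documented in this card.

import torch.nn as nn
from transformers import WhisperForConditionalGeneration

# Start from the base checkpoint that PruneSLU-30M is derived from.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

# Layer-wise structural pruning (illustrative): keep only a subset of decoder layers.
# The indices below are placeholders, not the actual PruneSLU-30M configuration.
keep = [0, 2, 3]
base.model.decoder.layers = nn.ModuleList(base.model.decoder.layers[i] for i in keep)
base.config.decoder_layers = len(keep)

# Parameter count after structural pruning; vocabulary pruning and retraining would follow.
print(sum(p.numel() for p in base.parameters()))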

Key Features

  • Optimized Size: PruneSLU-30M contains roughly 30 million parameters (31.2M, stored in F32 safetensors), offering a higher capacity for SLU tasks while remaining suitable for on-device deployment.
  • Improved Performance: This model is designed to handle more complex SLU tasks, providing enhanced accuracy and robustness compared to lighter models.
  • Seamless Integration: The model can be easily accessed and utilized through the Hugging Face Transformers library.

Usage

To load the PruneSLU-30M model with the Hugging Face Transformers library, use the following code:

from transformers import WhisperForConditionalGeneration

# Load the pruned checkpoint from the Hugging Face Hub.
model = WhisperForConditionalGeneration.from_pretrained("kodiak619/PruneSLU-30M")
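
For end-to-end inference, a hypothetical usage sketch follows. It assumes the repository also ships the (pruned) tokenizer and feature extractor so that WhisperProcessor can be loaded from the same ID; replace the silent dummy audio with real 16 kHz speech.

import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Assumption: processor (tokenizer + feature extractor) files are available in the same repo.
processor = WhisperProcessor.from_pretrained("kodiak619/PruneSLU-30M")
model = WhisperForConditionalGeneration.from_pretrained("kodiak619/PruneSLU-30M")

# Stand-in input: one second of silence at 16 kHz; use real speech audio in practice.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])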

Applications

PruneSLU-30M is ideal for applications requiring a balance between computational efficiency and performance, such as voice-enabled AI systems, smart assistants, and SLU tasks in moderately resource-constrained environments.
