# PruneSLU-15M: Efficient Model for On-Device Spoken Language Understanding
PruneSLU-15M is an optimized version of the openai/whisper-tiny.en model, specifically tailored for Spoken Language Understanding (SLU) tasks. This model has been pruned and further fine-tuned to achieve a lightweight yet powerful solution, ideal for deployment in resource-constrained environments.
## Model Overview
- Base Model: openai/whisper-tiny.en
- Task: Spoken Language Understanding (SLU)
- Dataset: Fine-tuned on the STOP dataset
- Pruning Techniques: Vocabulary pruning and layer-wise structural pruning were applied to reduce the model size while preserving performance. After pruning, the model was retrained to recover accuracy and ensure robustness.
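Vocabulary pruning can be illustrated with a small sketch: keep only the token ids that occur in the target-domain (STOP) transcripts, rebuild the embedding table from the kept rows, and record an old-id to new-id mapping for the tokenizer. This is a minimal toy example, not the actual PruneSLU code; all names here are illustrative.

```python
# Illustrative sketch of vocabulary pruning (not the exact PruneSLU procedure).
def prune_vocab(embedding, kept_token_ids):
    """embedding: list of row vectors, one per token id.
    kept_token_ids: sorted token ids to keep (e.g., tokens seen in STOP)."""
    # Map each surviving old id to its new, compacted id.
    old_to_new = {old: new for new, old in enumerate(kept_token_ids)}
    # Keep only the embedding rows for surviving tokens.
    pruned_embedding = [embedding[old] for old in kept_token_ids]
    return pruned_embedding, old_to_new

# Toy 6-token vocabulary with 2-dim embeddings; keep ids 0, 2, and 5.
emb = [[float(i), float(i)] for i in range(6)]
pruned, mapping = prune_vocab(emb, [0, 2, 5])
```

The same idea scales to a real Whisper checkpoint: slicing the input embedding and output projection along the vocabulary dimension is where most of the parameter savings in a tiny model come from, since the vocabulary matrices dominate its size.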
## Key Features
- Compact Size: With only 15 million parameters, PruneSLU-15M is highly efficient, making it suitable for on-device applications.
- High Performance: Despite significant pruning, the model retains strong performance on SLU tasks, as demonstrated by evaluations on the STOP dataset.
- Easy Integration: The model can be effortlessly loaded and used via the Hugging Face Transformers library.
## Usage
To load PruneSLU-15M with the Hugging Face Transformers library:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model = WhisperForConditionalGeneration.from_pretrained("kodiak619/PruneSLU-15M")
# Load the matching processor (feature extractor + tokenizer), assuming the
# processor files are published in the same repository:
processor = WhisperProcessor.from_pretrained("kodiak619/PruneSLU-15M")
```
## Applications
PruneSLU-15M is ideal for scenarios where computational resources are limited, such as mobile devices or embedded systems. It is particularly well-suited for tasks like voice command recognition, intent detection, and other SLU-related applications in low-resource settings.
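For intent detection on STOP-style data, the decoded output is a bracketed semantic parse rather than a plain transcript. Below is a minimal sketch of turning one such string into an intent label plus a slot dictionary; the helper name and the example parse are illustrative, and the bracket convention follows the STOP dataset's format.

```python
import re

def extract_intent_and_slots(parse):
    """Pull the top-level intent and flat slots from a STOP-style parse string."""
    intent = re.search(r"\[IN:(\w+)", parse)           # first [IN:...] tag
    slots = re.findall(r"\[SL:(\w+)\s+([^\[\]]+?)\s*\]", parse)  # flat [SL:...] spans
    return (intent.group(1) if intent else None), dict(slots)

# Hypothetical decoded output for a weather query:
decoded = "[IN:GET_WEATHER [SL:LOCATION boston ] [SL:DATE_TIME tomorrow ] ]"
intent, slots = extract_intent_and_slots(decoded)
```

Note that real STOP parses can nest slots and intents; a production system would use a proper bracket parser, but the flat regex above is enough to show the shape of the output.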