OWLS: Scaling Laws for Speech Recognition and Translation
OWLS is a suite of Whisper-style models, designed to help researchers understand the scaling properties of speech models. OWLS models range from 0.25B to 18B parameters, and are trained on up to 360K hours of data.
OWLS models are developed using ESPnet, and support multilingual Speech Recognition and Translation.
It is part of the OWSM project, which aims to develop fully open speech foundation models using publicly available data and open-source toolkits.
The model in this repo has 4.66B parameters in total and is trained on 180K hours of public speech data. Specifically, it supports multilingual speech recognition and speech-to-text translation.
You can use this model in your projects with the following code:
# make sure espnet is installed: pip install espnet
import soundfile
from espnet2.bin.s2t_inference import Speech2Text

# download the checkpoint from the Hugging Face Hub and build the model
model = Speech2Text.from_pretrained(
    "espnet/owls_4B_180K"
)

# read an audio file and decode it; the best hypothesis starts with the recognized text
speech, rate = soundfile.read("speech.wav")
text, *_ = model(speech)[0]
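The same interface can also be used for speech translation and for controlling the decoding language via the lang_sym and task_sym arguments of Speech2Text. The snippet below is a minimal sketch assuming OWSM-style language and task tokens (for example "<eng>" for English input, "<asr>" for transcription, "<st_deu>" for translation into German) and 16 kHz input audio; the exact token set supported by this checkpoint should be confirmed against its vocabulary.

# a hedged sketch of task/language selection; the token names below are assumptions
import soundfile
import librosa
from espnet2.bin.s2t_inference import Speech2Text

model = Speech2Text.from_pretrained(
    "espnet/owls_4B_180K",
    device="cpu",          # set to "cuda" for GPU decoding
    beam_size=5,
    lang_sym="<eng>",      # language of the input speech (assumed token)
    task_sym="<st_deu>",   # assumed token for translation into German; use "<asr>" for transcription
)

# the model expects 16 kHz mono audio; resample if necessary
speech, rate = soundfile.read("speech.wav")
if rate != 16000:
    speech = librosa.resample(speech, orig_sr=rate, target_sr=16000)

translation, *_ = model(speech)[0]
print(translation)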
TBA