---
datasets:
- lewtun/music_genres_small
base_model:
- facebook/wav2vec2-large
---
# My Music Genre Classification Model 🎶

This model classifies music genres from raw audio. It was fine-tuned from `facebook/wav2vec2-large` on the `lewtun/music_genres_small` dataset using the Wav2Vec2 architecture.
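
The exact training script is not part of this repository; the sketch below shows one way a comparable fine-tune could be set up with the Hugging Face `Trainer`. The column names (`audio`, `genre_id`), the label count, and the hyperparameters are assumptions, not the settings used for this checkpoint — check the dataset card before running.

```python
from datasets import Audio, load_dataset
from transformers import (
    Trainer,
    TrainingArguments,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForSequenceClassification,
)

# NOTE: column names ("audio", "genre_id"), num_labels, and all hyperparameters
# below are assumptions, not the values used to produce this checkpoint.
dataset = load_dataset("lewtun/music_genres_small", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large")

def preprocess(example):
    # Turn the raw waveform into fixed-length input_values (pad/truncate to 10 s)
    audio = example["audio"]
    inputs = feature_extractor(
        audio["array"],
        sampling_rate=audio["sampling_rate"],
        max_length=16_000 * 10,
        padding="max_length",
        truncation=True,
    )
    return {"input_values": inputs.input_values[0], "label": example["genre_id"]}

dataset = dataset.map(preprocess, remove_columns=dataset.column_names)
splits = dataset.train_test_split(test_size=0.1, seed=42)

model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-large",
    num_labels=10,  # assumed; set to the number of genres in the dataset
)

args = TrainingArguments(
    output_dir="wav2vec2-music-genres",
    per_device_train_batch_size=4,
    num_train_epochs=5,
    learning_rate=3e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
)
trainer.train()
```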
## Metrics
- Validation Accuracy: 69%
- F1 Score: 68%
- Validation Loss: 1.03
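
As a rough sketch, comparable numbers could be recomputed on a held-out split with the `evaluate` package by passing a `compute_metrics` function to the `Trainer` above; the macro averaging for F1 is an assumption, not necessarily how the reported score was computed.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # Convert logits to class predictions and score them against the references
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="macro")["f1"],
    }

# e.g. Trainer(..., compute_metrics=compute_metrics), then trainer.evaluate()
```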
## Usage
```python
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
import torch

# Load the fine-tuned model and feature extractor
model = Wav2Vec2ForSequenceClassification.from_pretrained("username/repo-name")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("username/repo-name")
model.eval()

# Prepare input: a 1-D float array sampled at 16 kHz
audio = ...  # Your audio array
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")

# Make predictions
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = torch.argmax(logits, dim=-1).item()
print(predicted_class, model.config.id2label[predicted_class])
```
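
The `audio = ...` placeholder expects a 1-D float array sampled at 16 kHz. One way to produce it from a file on disk, as a sketch assuming `librosa` is installed (`song.wav` is a stand-in path, not a file shipped with this repo):

```python
import librosa

# librosa resamples the clip to the 16 kHz rate the feature extractor expects
audio, sampling_rate = librosa.load("song.wav", sr=16000)
```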