---
datasets:
- lewtun/music_genres_small
base_model:
- facebook/wav2vec2-large
metrics:
- accuracy
- f1
tags:
- audio
- music
- classification
- Wav2Vec2
pipeline_tag: audio-classification
---

# Music Genre Classification Model 🎶

This model classifies music genres from raw audio signals. It was fine-tuned on the `lewtun/music_genres_small` dataset using the Wav2Vec2 architecture.

A GitHub repository with a web interface served by a Flask API for testing the model is available here: **[music-classifier repository](https://github.com/gastonduault/Music-Classifier)**

## Metrics
- **Validation Accuracy**: 75%
- **F1 Score**: 74%
- **Validation Loss**: 0.77

## Example Usage
```python
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
import librosa
import torch

# Load model and feature extractor
model = Wav2Vec2ForSequenceClassification.from_pretrained("gastonduault/music-classifier")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large")

# Load the audio file and resample it to the 16 kHz rate the model expects
audio_path = "path/to/audio.wav"
audio_array, _ = librosa.load(audio_path, sr=16000)
inputs = feature_extractor(audio_array, sampling_rate=16000, return_tensors="pt", padding=True)

# Predict
with torch.no_grad():
    logits = model(inputs["input_values"]).logits
predicted_class = torch.argmax(logits, dim=-1).item()
print(predicted_class)  # class index; map to a genre name via model.config.id2label
```
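Wav2Vec2 expects mono audio at 16 kHz, so audio at another sample rate must be resampled before feature extraction. A minimal, self-contained sketch using `scipy` (the 440 Hz sine wave stands in for a real recording; `librosa.load(..., sr=16000)` does the same job when loading from a file):

```python
import numpy as np
from scipy.signal import resample_poly

# Illustrative input: one second of 44.1 kHz audio (a 440 Hz sine wave)
orig_sr, target_sr = 44100, 16000
t = np.linspace(0, 1, orig_sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t).astype(np.float32)

# Polyphase resampling 44.1 kHz -> 16 kHz (ratio 160/441 after dividing by gcd 100)
audio_16k = resample_poly(audio, target_sr // 100, orig_sr // 100)

print(audio_16k.shape)  # one second at 16 kHz -> (16000,)
```

The resampled array can then be passed directly to the feature extractor in place of `audio_array`.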