DrishtiSharma committed on
Commit
44c2f92
1 Parent(s): e3ad1ca

Update info.txt

Files changed (1): info.txt (+1 -1)
info.txt CHANGED
@@ -1,6 +1,6 @@
  Gradio demo for sentiment classification of Spanish audios using Wav2Vec2
  🔊 Audio Sentiment Classifier
- This is a Gradio demo for classifying the sentiment of speech/audio using Wav2Vec2 fine-tuned on the Mexican Emotional Speech Database (MESD). The MESD dataset contains single-word utterances for the emotive prosodies of anger, disgust, fear, happiness, neutrality, and sadness, shaped by Mexican culture. In addition, the utterances in the MESD dataset were contributed by both adult and child non-professional actors: 3 female, 2 male, and 6 child voices are available.
+ This is a Gradio demo for classifying the sentiment of speech/audio using Wav2Vec2-base fine-tuned on the Mexican Emotional Speech Database (MESD). The MESD dataset contains single-word utterances for the emotive prosodies of anger, disgust, fear, happiness, neutrality, and sadness, shaped by Mexican culture. In addition, the utterances in the MESD dataset were contributed by both adult and child non-professional actors: 3 female, 2 male, and 6 child voices are available.
  Targeted SDGs: [Goal 3] Good Health and Well-Being & [Goal 16] Peace, Justice and Strong Institutions
  Potential Applications: [SDG Goal 3] Presenting, recommending, and categorizing audio libraries or other media based on the mood/preferences detected from a user's speech or aural environment. A mood lighting system could be built on top of these features to make the user's environment more comfortable, and so contribute a little to maintaining the user's mental health and overall welfare. [SDG Goal 16] Additionally, the model can be trained on data with more class labels so that it becomes useful for detecting brawls and other unsettling scenarios. Such a trained audio classifier can be integrated into a surveillance system to detect brawls and other distressing events that are audible, and subsequently be used by peace, justice, and strong institutions.
  To begin with, we didn't have enough Spanish audio data suitable for sentiment classification tasks. We had to make do with the data we could find in the MESD database, because much of the material we came across was not open-source. Furthermore, augmented versions of the audio samples were already present in the MESD dataset, accounting for up to 25% of the total data, so we decided not to perform any further data augmentation to avoid overwhelming the original audio samples.
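The description above says the demo classifies speech into MESD's six emotion classes using a fine-tuned Wav2Vec2 model. A minimal sketch of the final classification step, mapping a classification head's raw logits to those labels, might look like this; the label order and the example logits are illustrative assumptions, not taken from the actual demo:

```python
import math

# Assumed label order for the six MESD emotion classes.
MESD_LABELS = ["anger", "disgust", "fear", "happiness", "neutrality", "sadness"]

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits, labels=MESD_LABELS):
    """Return the highest-probability label and its confidence."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical logits, as a Wav2Vec2 classification head might emit for one clip.
label, confidence = predict_label([0.1, -0.4, 2.3, 0.5, 0.2, -1.0])
print(label)  # → fear
```

In the real demo, the logits would come from running the fine-tuned model on the recorded audio; this sketch only shows the post-processing that turns them into one of the six emotion labels.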