vitouphy committed
Commit b9192df
1 Parent(s): 5ec17aa

Update README.md

Files changed (1)
  1. README.md +41 -7
README.md CHANGED
@@ -16,21 +16,55 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # model

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Timit dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

+ ## Model

+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the TIMIT dataset. Check [this notebook](https://www.kaggle.com/code/vitouphy/phoneme-recognition-with-wav2vec2) for the training details.

+ ## Usage

+ **Approach 1:** Use Hugging Face's `pipeline`, which handles everything end to end, from raw audio input to text output.

+ ```python
+ from transformers import pipeline
+ 
+ # Load the model
+ pipe = pipeline(model="vitouphy/wav2vec2-xls-r-300m-phoneme")
+ # Process raw audio
+ output = pipe("audio_file.wav", chunk_length_s=10, stride_length_s=(4, 2))
+ ```
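+ 
+ The pipeline returns a dictionary whose `text` field contains the predicted phoneme sequence. `chunk_length_s=10` splits long recordings into 10-second chunks, and `stride_length_s=(4, 2)` adds 4 s of left and 2 s of right context around each chunk so that predictions near the chunk boundaries remain reliable.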
+ 
+ **Approach 2:** A more hands-on way to predict phonemes, calling the model and processor directly.
+ ```python
+ import torch
+ import soundfile as sf
+ from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
+ 
+ # Load model and processor
+ processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-phoneme")
+ model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-phoneme")
+ 
+ # Read and process the input (the model expects 16 kHz mono audio)
+ audio_input, sample_rate = sf.read("audio_file.wav")
+ inputs = processor(audio_input, sampling_rate=16_000, return_tensors="pt", padding=True)
+ 
+ with torch.no_grad():
+     logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
+ 
+ # Take the most likely token at each frame and decode the ids into phoneme strings
+ predicted_ids = torch.argmax(logits, dim=-1)
+ predicted_sentences = processor.batch_decode(predicted_ids)
+ print(predicted_sentences)
+ ```
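+ 
+ This snippet assumes the file is already sampled at 16 kHz, which is what the model's feature extractor expects. If your recordings use a different sampling rate, resample them before calling the processor; the sketch below uses `torchaudio` as one possible option (any resampling library works):
+ 
+ ```python
+ import torchaudio
+ 
+ # Load the waveform at its native sampling rate
+ waveform, orig_sr = torchaudio.load("audio_file.wav")
+ 
+ # Resample to the 16 kHz rate the model was trained on
+ waveform = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=16_000)
+ 
+ # Average channels to mono and convert to a 1-D array for the processor
+ audio_input = waveform.mean(dim=0).numpy()
+ ```
+ 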
  ## Training and evaluation data
+ We use the [DARPA TIMIT dataset](https://www.kaggle.com/datasets/mfekadu/darpa-timit-acousticphonetic-continuous-speech) for this model.
+ - We split it **80/10/10** into training, validation, and test sets, respectively; a sketch of such a split follows this list.
+ - That corresponds to roughly **137/17/17** minutes of audio.
+ - The model obtained an error rate of **7.996%** on this test set.
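+ 
+ The split can be reproduced with Hugging Face Datasets' `train_test_split`. The sketch below assumes the TIMIT utterances have already been gathered into a single `Dataset`; the CSV file name and seed are illustrative, not the exact values used for this model:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # One row per utterance; the file name is illustrative
+ dataset = load_dataset("csv", data_files="timit_utterances.csv", split="train")
+ 
+ # Hold out 20%, then split it half-and-half into validation and test (80/10/10 overall)
+ split = dataset.train_test_split(test_size=0.2, seed=42)
+ heldout = split["test"].train_test_split(test_size=0.5, seed=42)
+ 
+ train_ds, valid_ds, test_ds = split["train"], heldout["train"], heldout["test"]
+ print(len(train_ds), len(valid_ds), len(test_ds))
+ ```
+ 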
  ## Training procedure