japerez committed commit 26a2f72 · verified · 1 parent: 6401f48

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -19,7 +19,7 @@ grammatical errors intact was essential.
 Existing speech-to-text (STT) models like Whisper tend to correct grammatical errors due to their strong internal language models, making them unsuitable for this task.
 Therefore, SESGE was created to train a custom STT model that could accurately transcribe spoken English with grammatical errors preserved.
 
-## Dataset Creation
+## Dataset description
 
 Given the absence of a suitable dataset for training an error-preserving STT system, DeMINT fine-tuned a Whisper model with data from two primary sources:
 
@@ -40,7 +40,7 @@ SESGE dataset. This means that while COREFL data was used during our training, o
 
 Training samples comprise 28,592 utterances from C4_200M. Validation and test sets contain 700 samples each.
 
-## Models
+## Derived models
 
 Two models were trained on the SESGE dataset by fine-tuning Whisper, enabling error-preserving STT. These models are available on the Hugging Face Hub:
 
@@ -50,7 +50,7 @@ Two models were trained on the SESGE dataset by fine-tuning Whisper, enabling er
 
 Both models have been optimized to transcribe spoken English while retaining grammatical errors, making them suitable for language-learning applications
 where fidelity to spoken errors is essential.
 
-## How to Cite
+## How to cite this work
 
 If you use the SESGE dataset, please cite the following paper: