---
license: mit
datasets:
- ami
language:
- en
library_name: pyannote-audio
pipeline_tag: voice-activity-detection
tags:
- chime7_task1
---

## Pyannote Segmentation model fine-tuned on CHiME-7 DASR data

This repo contains the [Pyannote Segmentation](https://huggingface.co./pyannote/segmentation/tree/main) model fine-tuned on data from the CHiME-7 DASR Challenge.
Only CHiME-6 (train set) data was used for training, while Mixer 6 (dev set) data was used for validation in order to avoid overfitting to the CHiME-6 scenario
(Mixer 6 is arguably the most different of the three CHiME-7 DASR scenarios, so I used it for validation here, as the final score is a macro-average across all scenarios).
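For reference, here is a minimal sketch of loading this checkpoint with pyannote.audio (2.x) and running voice activity detection with it. The repo id is a placeholder for this model's actual Hugging Face id, and the hyperparameters are illustrative defaults, not values tuned on CHiME-7 data:

```python
# Minimal sketch (pyannote.audio 2.x). "<this-repo-id>" is a placeholder for this
# model's Hugging Face id; hyperparameters below are illustrative, not tuned values.
from pyannote.audio import Model
from pyannote.audio.pipelines import VoiceActivityDetection

model = Model.from_pretrained("<this-repo-id>")

vad_pipeline = VoiceActivityDetection(segmentation=model)
vad_pipeline.instantiate({
    "onset": 0.5,             # speech activation threshold
    "offset": 0.5,            # speech deactivation threshold
    "min_duration_on": 0.0,   # drop speech regions shorter than this (seconds)
    "min_duration_off": 0.0,  # fill non-speech gaps shorter than this (seconds)
})

speech_regions = vad_pipeline("audio.wav")  # pyannote.core.Annotation of speech regions
```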
It is used to perform diarization in the CHiME-7 DASR diarization baseline. <br>
**For more information, see the [CHiME-7 DASR baseline recipe in ESPnet2](https://github.com/espnet/espnet/egs2/chime7_task1/diar_asr1).**
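The actual baseline diarization system lives in the ESPnet2 recipe linked above. As an illustration only, the sketch below shows how a fine-tuned segmentation checkpoint like this one could be plugged into pyannote.audio's `SpeakerDiarization` pipeline; the repo id, embedding model, and hyperparameter values are assumptions, not the tuned CHiME-7 DASR baseline configuration.

```python
# Illustrative sketch (pyannote.audio 2.x), NOT the CHiME-7 DASR baseline setup:
# repo id, embedding model and hyperparameter values below are placeholders.
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization

segmentation = Model.from_pretrained("<this-repo-id>")  # hypothetical placeholder id

pipeline = SpeakerDiarization(
    segmentation=segmentation,
    embedding="speechbrain/spkrec-ecapa-voxceleb",  # any supported speaker embedding model
    clustering="AgglomerativeClustering",
)

# These values would normally be tuned on the CHiME-7 DASR dev sets;
# the numbers here are generic defaults, not the baseline's tuned values.
pipeline.instantiate({
    "segmentation": {"threshold": 0.5, "min_duration_off": 0.0},
    "clustering": {"method": "centroid", "min_cluster_size": 15, "threshold": 0.7},
})

diarization = pipeline("audio.wav")  # pyannote.core.Annotation with speaker labels
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```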